Documentation for version v0.49.0 is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
kubectl apply to kapp ¶
kapp deploy will allow kapp to adopt resources mentioned in a given config. However, kapp will try to insert a few of its labels into the bodies of some resources, like Deployments, which may fail because those resources have immutable fields that kapp would then be updating (spec.selector on Deployments). To prevent this failure, add the kapp.k14s.io/disable-default-label-scoping-rules: "" annotation as part of your kapp configuration, so that kapp does not touch the immutable fields when adopting a resource.
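A minimal sketch of what this looks like on the adopted resource (the name and image below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: existing-app   # hypothetical name of the resource being adopted
  annotations:
    # prevents kapp from applying its default label scoping rules,
    # leaving the immutable spec.selector untouched during adoption
    kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
  replicas: 1
  selector:
    matchLabels: {app: existing-app}
  template:
    metadata:
      labels: {app: existing-app}
    spec:
      containers:
      - name: app
        image: nginx   # placeholder image
```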
Error: Asking for confirmation: EOF ¶
This probably means you have piped configuration into kapp and did not specify the -y flag to continue. The flag is necessary because, with stdin taken by the pipe, kapp can no longer ask for confirmation interactively. Feel free to re-run the command with the -c flag to make sure pending changes are correct. Instead of using a pipe you can also use an anonymous fifo, keeping stdin free for the confirmation prompt, e.g.
kapp deploy -a app1 -f <(ytt -f config/)
Where to store app resources (i.e. in which namespace)? ¶
See state namespace doc page.
... Field is immutable error ¶
After changing the labels/selectors in one of my templates, I’m getting
MatchExpressions:v1.LabelSelectorRequirement(nil)}: field is immutable (reason: Invalid) errors on the Deployment resource. Is there a way to tell kapp to force the change?
Some fields on a resource are immutable. kapp provides the kapp.k14s.io/update-strategy annotation that controls how kapp updates a resource. One of the strategies is fallback-on-replace, which will have kapp recreate the object (delete, wait, then create) if the initial update results in an Invalid error. See Controlling apply via resource annotations for details.
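As a sketch, the annotation sits in the metadata of the resource whose selector needs to change:

```yaml
metadata:
  annotations:
    # if the update is rejected as Invalid, delete the resource,
    # wait for deletion, then create it again
    kapp.k14s.io/update-strategy: fallback-on-replace
```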
Job.batch is invalid: ... spec.selector: Required value error ¶
The batch.Job resource is augmented by the Job controller with unique labels upon its creation. When kapp is used to subsequently update an existing Job resource, the API server will return an Invalid error, since the given configuration does not include the controller-uid labels. kapp’s rebase rules can be used to copy the necessary configuration over from the server-side copy; however, since the Job resource is mostly immutable, we recommend setting the kapp.k14s.io/update-strategy annotation to fallback-on-replace to recreate the Job resource with any updates.
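A sketch of a Job carrying this annotation (the name and image below are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate   # hypothetical Job name
  annotations:
    # recreate the Job instead of updating it in place
    kapp.k14s.io/update-strategy: fallback-on-replace
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: migrate:latest   # placeholder image
```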
Updating Deployments when ConfigMap changes ¶
Can kapp force update on ConfigMaps in Deployments/DaemonSets? Just noticed that it didn’t do that and I somehow expected it to.
kapp has a feature called versioned resources that allows kapp to create uniquely named resources instead of updating resources in place. Resources that reference versioned resources are forced to be updated with the new names, and are therefore changed as well, solving the problem of propagating changes safely.
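A minimal sketch of a versioned ConfigMap (name and data are illustrative); kapp appends a version suffix to the name on each change and updates references to it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config   # kapp creates app-config-ver-1, app-config-ver-2, ...
  annotations:
    kapp.k14s.io/versioned: ""
data:
  config.yml: |
    log_level: info   # placeholder content
```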
Quick way to find common kapp command variations ¶
Limit number of ReplicaSets for Deployments ¶
Every time I do a new deploy with kapp I see a new ReplicaSet, along with all of the previous ones.
The Deployment resource has a field .spec.revisionHistoryLimit that controls how many previous ReplicaSets to keep. See Deployment’s clean up policy for more details.
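For example (names below are illustrative), limiting a Deployment to two previous ReplicaSets:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical name
spec:
  revisionHistoryLimit: 2   # keep at most 2 old ReplicaSets
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
      - name: app
        image: nginx   # placeholder image
```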
Changes detected immediately after successful deploy ¶
Sometimes the Kubernetes API server will convert submitted field values into their canonical form server-side. kapp will detect this as a change during the next deploy. To avoid such changes in the future, change your provided field values to what the API server considers canonical.
...
186  -       cpu: "2"
187  -       memory: 1Gi
170  +       cpu: 2000m
171  +       memory: 1024Mi
...
Changes detected after resource is modified server-side ¶
There might be cases where other system actors (various controllers) modify a resource outside of kapp. A common example is a Deployment’s spec.replicas field being modified by the Horizontal Pod Autoscaler controller. To let kapp know of such external behaviour, use custom rebaseRules configuration (see HPA and Deployment rebase for details).
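A sketch of such a rebase rule, along the lines of the linked doc (the matcher below targets apps/v1 Deployments):

```yaml
apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
# keep spec.replicas from the given config when set, otherwise
# carry over the cluster's current value (e.g. one set by the HPA)
- path: [spec, replicas]
  type: copy
  sources: [resource, existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}
```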
Colors are not showing up in my CI build, in my terminal, etc. ¶
Use the FORCE_COLOR=1 environment variable to force color output, e.g. FORCE_COLOR=1 kapp deploy ... Available in v0.23.0+.
How can I version apps deployed by kapp? ¶
kapp itself does not provide any notion of versioning, since it’s just a tool to reconcile config. We recommend including a ConfigMap in your deployment with application metadata, e.g. git commit, release notes, etc.
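One hedged sketch of such a ConfigMap, with data filled in by your release pipeline (all names and values below are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-info   # hypothetical name
data:
  git_commit: abc1234              # injected by CI
  version: "1.4.2"                 # your own release version
  release_notes: "Fixed login bug" # placeholder text
```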
Resource ... is associated with a different label value ¶
Resource ownership is tracked by app labels. kapp expects that each resource is owned by exactly one app.
If you are receiving this error and are using the correct app name, you might be targeting the wrong namespace, i.e. not the one where the app is located. Use the --namespace flag to set the correct namespace.
Why does kapp hang when trying to delete a resource? ¶
By default, kapp won’t delete resources it didn’t create. You can see which resources are owned by kapp in the Owner column of the kapp inspect -a app-name output. You can force kapp to apply this ignored change using the --apply-ignored flag. Alternatively, if you are able to set the kapp.k14s.io/owned-for-deletion annotation on the resource that will be created, kapp will take that as a request to “own it” for deletion purposes. This comes in handy, for example, with PVCs created by a StatefulSet.
How does kapp handle merging? ¶
kapp explicitly decided against a basic 3-way merge, instead allowing the user to specify how to resolve conflicts via rebase rules.
Can I force an update for a change that does not produce a diff? ¶
If kapp does not detect changes, it won’t perform an update. To force changes on every deploy, you can set the kapp.k14s.io/nonce annotation. That way, the resource will appear to have changes every time you deploy.
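As a sketch, the annotation is set with an empty value; kapp substitutes a unique value on each deploy:

```yaml
metadata:
  annotations:
    # replaced with a unique value on every deploy, guaranteeing a diff
    kapp.k14s.io/nonce: ""
```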
How can I remove decorative headings from kapp inspect output? ¶
Use the --tty=false flag, which will disable decorative output. Example:
kapp inspect --raw --tty=false
Additional resources: tty flag in kapp code
How can I get kapp to skip waiting on some resources? ¶
kapp allows waiting behavior to be controlled per resource via resource annotations. When used, the resource will be applied to the cluster, but kapp will not wait for it to reconcile.
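For instance (a sketch using the disable-wait annotation from the linked annotations doc):

```yaml
metadata:
  annotations:
    # apply this resource, but do not wait for it to reconcile
    kapp.k14s.io/disable-wait: ""
```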