Migrate existing resources to a new kapp app
by Praveen Rewar — Mar 3, 2022
The kapp CLI encourages Kubernetes users to manage resources in bulk by working with “Kubernetes applications” (a set of resources with the same label). But how do we manage resources already present on the cluster (created by kubectl apply, or already part of another kapp app)?
In this blog, we learn how to migrate from kubectl apply to a kapp app and how to move existing resources across kapp apps.
Migrating from kubectl apply to kapp ¶
Let’s take a look at a simple manifest which consists of a Namespace and a ConfigMap, and deploy it with kubectl.
apiVersion: v1
kind: Namespace
metadata:
  name: ns-sample
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-sample
  namespace: ns-sample
data:
  player_initial_lives: "3"
$ kubectl apply -f config-1.yaml
namespace/ns-sample created
configmap/configmap-sample created
Great! Our Namespace and ConfigMap are created. Now if we want to deploy these two resources using kapp as an application, we can do so by deploying the same manifest using kapp and providing an app name. Let’s also see what changes kapp makes to these resources by using the --diff-changes option.
$ kapp deploy -a my-first-app -f config-1.yaml --diff-changes
Target cluster 'https://192.168.64.80:8443' (nodes: minikube)
@@ update namespace/ns-sample (v1) cluster @@
...
2, 2 metadata:
3 - annotations:
4 - kubectl.kubernetes.io/last-applied-configuration: |
5 - {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"name":"ns-sample"}}
6, 3 creationTimestamp: "2022-03-02T14:44:55Z"
7, 4 labels:
8 - kubernetes.io/metadata.name: ns-sample
5 + kapp.k14s.io/app: "1646232299450186000"
6 + kapp.k14s.io/association: v1.39ba69aab0e3b496f83369308d08f5b9
9, 7 managedFields:
10, 8 - apiVersion: v1
@@ update configmap/configmap-sample (v1) namespace: ns-sample @@
...
4, 4 metadata:
5 - annotations:
6 - kubectl.kubernetes.io/last-applied-configuration: |
7 - {"apiVersion":"v1","data":{"player_initial_lives":"3"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"configmap-sample","namespace":"ns-sample"}}
8, 5 creationTimestamp: "2022-03-02T14:44:55Z"
6 + labels:
7 + kapp.k14s.io/app: "1646232299450186000"
8 + kapp.k14s.io/association: v1.216aa44a56073ff0ff55bfae60148595
9, 9 managedFields:
10, 10 - apiVersion: v1
Changes
Namespace Name Kind Conds. Age Op Op st. Wait to Rs Ri
(cluster) ns-sample Namespace - 4s update - reconcile ok -
ns-sample configmap-sample ConfigMap - 4s update - reconcile ok -
Op: 0 create, 0 delete, 2 update, 0 noop, 0 exists
Wait to: 2 reconcile, 0 delete, 0 noop
Continue? [yN]: y
8:15:35PM: ---- applying 1 changes [0/2 done] ----
8:15:35PM: update namespace/ns-sample (v1) cluster
8:15:35PM: ---- waiting on 1 changes [0/2 done] ----
8:15:35PM: ok: reconcile namespace/ns-sample (v1) cluster
8:15:35PM: ---- applying 1 changes [1/2 done] ----
8:15:35PM: update configmap/configmap-sample (v1) namespace: ns-sample
8:15:35PM: ---- waiting on 1 changes [1/2 done] ----
8:15:35PM: ok: reconcile configmap/configmap-sample (v1) namespace: ns-sample
8:15:35PM: ---- applying complete [2/2 done] ----
8:15:35PM: ---- waiting complete [2/2 done] ----
Succeeded
As we can see in the diff generated by kapp, it merely removes the annotations created by kubectl and adds a few annotations and labels of its own. These labels are used by kapp to group resources together as an application.
Switching from kubectl apply to kapp deploy will allow kapp to adopt the resources mentioned in a given config. However, kapp will try to insert a few of its labels into the bodies of some resources, like Deployments and DaemonSets, which may fail because those resources have immutable fields that kapp tries to update (spec.selector on Deployments).
For example, suppose we created this Deployment using kubectl apply and now want to deploy the same manifest using kapp; we would then get the following error:
kapp: Error: Applying update deployment/nginx-deployment (apps/v1) namespace: default:
Updating resource deployment/nginx-deployment (apps/v1) namespace: default:
API server says:
Deployment.apps "nginx-deployment" is invalid: spec.selector:
Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx", "kapp.k14s.io/app":"1646214670391021000"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable (reason: Invalid)
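For reference, here is a minimal sketch of the commands that lead to this error (the file name deployment.yaml and the app name my-dep-app are illustrative):
# Deployment originally created with kubectl
$ kubectl apply -f deployment.yaml
# Adopting the same manifest with kapp fails while updating the immutable spec.selector
$ kapp deploy -a my-dep-app -f deployment.yaml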
Option 1 ¶
To prevent this failure, add the kapp.k14s.io/disable-default-label-scoping-rules: "" annotation to individual resources to prevent kapp from touching the immutable fields when adopting them. Adding the annotation will exclude the resources from label scoping (used to scope resources within the current application).
The example deployment would then look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
The annotation doesn’t affect the app’s ability to track such resources; the only consequence of disabling label scoping is that resources in other apps cannot have the same label selector. For example, with the above change in place, no other app can have a Deployment with the same label selector, i.e. app: nginx.
Option 2 ¶
Another way to overcome this issue is to use fallback-on-replace as the update strategy, which asks kapp to delete and re-create the resource on encountering an error while updating it. The update strategy can be set by adding the annotation kapp.k14s.io/update-strategy: "fallback-on-replace".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    kapp.k14s.io/update-strategy: "fallback-on-replace"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
The consequence of using this update strategy is downtime, as the Deployment will be deleted and re-created.
Moving resources across kapp apps ¶
When we try to create an app with resources that are already part of another app, kapp will throw an ownership error saying that the resource is already associated with a different app.
kapp: Error: Ownership errors:
- Resource 'deployment/nginx-deployment (apps/v1) namespace: default' is already associated with a different app 'foo' namespace: default (label 'kapp.k14s.io/app=1646214636675344000')
To overcome this, we can use the flag --dangerous-override-ownership-of-existing-resources to override the ownership label on those resources. This would remove these resources from the other app and make them part of the current app.
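As a sketch, the flag is passed alongside the usual deploy arguments (the app name bar and file name config.yaml are illustrative):
$ kapp deploy -a bar -f config.yaml --dangerous-override-ownership-of-existing-resources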
For some resources like Deployment, if label scoping was turned on (the default), then its spec.selector would also include the app label, which would have to be rewritten by kapp when ownership changes. This causes an immutable field error during the update, since Kubernetes Deployments do not support changing selector labels.
Option 1 ¶
Even if we add the annotation to disable label scoping for resources like Deployment, kapp would still try to remove the label selector added by kapp in the previous app. For example, if Deployment dep was present in an app called foo and we now want to move dep to a new app bar, kapp would try to remove the existing label of app foo as it’s not present in the manifest. To prevent this from happening, we can copy the existing label from the Deployment and use it in our manifest. We can use kubectl get deployment dep -o yaml to get the label.
...
spec:
  selector:
    matchLabels:
      app: nginx
      kapp.k14s.io/app: "1646196132347292000"
...
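Alternatively, a quick sketch to pull out just the selector labels (assuming the Deployment is named dep; the exact output format varies by kubectl version):
$ kubectl get deployment dep -o jsonpath='{.spec.selector.matchLabels}'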
We can copy the label generated by kapp and add it to our new manifest along with the annotation to disable label scoping and then deploy our new app bar.
The spec.selector field should be a subset of the spec.template.metadata.labels field, and hence we also need to copy the label into spec.template.metadata.labels. To make sure that kapp doesn’t update it, we need to use the kapp.k14s.io/disable-default-ownership-label-rules: "" annotation.
The deployment would now look something like this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    kapp.k14s.io/disable-default-label-scoping-rules: ""
    kapp.k14s.io/disable-default-ownership-label-rules: ""
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      kapp.k14s.io/app: "1646196132347292000"
  template:
    metadata:
      labels:
        app: nginx
        kapp.k14s.io/app: "1646196132347292000"
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
The disadvantage of using this annotation is that kapp won’t show the ReplicaSets and Pods created by the Deployment as part of the app; however, it doesn’t affect kapp’s ability to modify the Deployment in any way.
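To verify which resources kapp now associates with the app, kapp inspect can be used; a minimal sketch, assuming the new app is named bar as in the example above:
$ kapp inspect -a bar --tree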
Option 2 ¶
The other option is to use the fallback-on-replace update strategy mentioned above, which would delete and re-create the Deployment, but with downtime.
Join the Carvel Community ¶
We are excited to hear from you and learn with you! Here are several ways you can get involved:
- Join Carvel’s Slack channel, #carvel in the Kubernetes workspace, and connect with over 1,000 Carvel users.
- Find us on GitHub. Suggest how we can improve the project, the docs, or share any other feedback.
- Attend our Community Meetings! Check out the Community page for full details on how to attend.
We look forward to hearing from you and hope you join us in building a strong packaging and distribution story for applications on Kubernetes!