Coding Guidelines for Carvel

Audience: any contributor, especially new ones, whether internal or external, full-time or occasional

Purpose:

  • Describe some current practices and aspirational directions
  • OSS focus
  • How to make changes
  • How our development flow / process works
  • How to understand and evolve our codebases

General Mindset and Sensibilities

Naming

  • Consistent and descriptive, without abandoning golang’s terse style.
  • Flag names should be nouns, not verbs (--warnings, not --warn)

Modularity

  • Each Carvel tool is modular and composable, with aggressively limited scope
  • Within a codebase each file / package / “class” should be modular - each package is almost its own program that exposes an API, and can be a unit of documentation.
    • At each layer, consider the API you’re exposing and the subset of responsibilities you’re abstracting.
    • Layered abstractions are often combined via dependency injection (see the sketch after this list).
  • Prefer to apply modularity at each level of the codebase so that each layer is sufficiently compact to fit in a developer’s head. (What fits in a developer’s head? Seven plus or minus two items.)
    • Extract code to functions when it’s not helpful to see those details at the current level, and when that will help pull the number of “code chunks” in the current function back down below 7 ± 2.
    • Don’t extract to functions when the detail doesn’t add complexity at the current level, especially if there are fewer than 7 “code chunks” at the current level.
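
For example, a minimal, hypothetical sketch of combining layers via dependency injection (Fetcher and Installer are illustrative names, not from any Carvel codebase):

  // Fetcher is the narrow API one layer exposes to the layer above.
  type Fetcher interface {
      Fetch(url string) ([]byte, error)
  }

  // Installer depends on the Fetcher abstraction, not a concrete implementation;
  // callers inject an HTTP fetcher in production or a fake in tests.
  type Installer struct {
      fetcher Fetcher
  }

  func NewInstaller(f Fetcher) *Installer {
      return &Installer{fetcher: f}
  }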

Conformity

  • Prefer to use tools or patterns that match prior art in the codebase.
  • However, value clarity and readability of the logic you’re working on over awkwardly forcing that code to conform to a pattern elsewhere in the codebase. (“Be different when you have a reason to be”)

DRY within domains

  • We often strive for a “single point of truth” or single codepath to avoid paying the cost of the same complexity twice, and maximize maintainability.
  • In different packages or domains, it’s fine for code to “rhyme”: some implementations are superficially similar but have fine-grained divergence, because they solve similar problems for distinct domains.
  • We’re especially tolerant of duplication in new/young code where we expect to learn and iterate.

Structure

As complexity grows, developers should feel free to add more structure by making new files and subdirectories, in both test and application code.

Planning

  • Prefer to balance speed, quality, and delivery in a way that considers the value and complexity of the feature, its edge cases, and the cost of failure: aim for the 80% of the 80/20 split to deliver iterative, incremental value quickly and often.
  • Estimates are often noisy and probably low by a small factor.

Golang specific concerns

  • Named Returns: Prefer to avoid; use rarely and judiciously
    • Often these add burden to the reader
    • When returning multiple items of the same type, especially in a short function, named returns can remove burden from the reader
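    • For example (a hypothetical signature), names disambiguate two results of the same type:
      func bounds(xs []int) (min, max int)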
  • Expose crisp abstractions and intentional APIs by keeping scoping minimal: prefer the most restricted scope that works, such as unexported (“private”) identifiers or state held on a receiver.
  • Godocs:
    • Should impart more context or information than the name alone.
    • Note: older code may not have godocs yet; it’s great to add docs as you learn what something does
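    • For example (hypothetical), a godoc that adds context beyond the name:
      // Reconcile fetches the App’s sources, templates them, and deploys the result,
      // recording any failure in status. It is safe to call repeatedly.
      func (r *Reconciler) Reconcile() error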
  • Prefer a multi-line over a single-line err check:
    • Prefer:
      err := foo()
      if err != nil {
          //…
      }

    • Occasionally appropriate (example: a highly indented guard clause):
      if err := foo(); err != nil {
          //…
      }
      
  • Dependencies
    • are all vendored locally and version controlled
    • go mod vendor and go mod tidy are your friends; hack/build.sh runs them.
    • Prefer not to bump dependencies until necessary
      • We rarely want to hop onto the latest version with all its latest bugs :), but we do want to patch CVEs
      • Value stability
    • Kubernetes versions
      • Kubernetes isn’t one thing from a go modules / deps perspective, but the many libraries of k8s do need to be kept on compatible versions
      • If you need to upgrade k8s deps,
        • You can use specific directives, one per library, of the form: go get -u k8s.io/thing@v1.2.3
        • You can edit the go.mod file directly and then re-vendor.
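        • For example (module names and versions here are hypothetical):
          go get -u k8s.io/api@v0.25.4
          go get -u k8s.io/apimachinery@v0.25.4
          go mod vendor && go mod tidy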
    • Debugging
      • Because we vendor all our libraries, println-style debug statements can also be added to 3rd-party source code. However, you must comment out the go mod vendor and go mod tidy commands in hack/build.sh so your edits aren’t overwritten.
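      • For example, a temporary print dropped directly into a vendored file (path and variable are hypothetical):
        // in vendor/github.com/some/dep/client.go
        fmt.Printf("DEBUG: request: %#v\n", req)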

K8s Controller-Specific Concerns: API Changes

  • Adding fields to CRDs is not a breaking change; removing fields is; see the sketch after this list and the guidance on breaking changes below.
  • Kubernetes auto-generates code for APIs and Custom Resource objects
    • Generators can be run via hack/gen.sh
    • Kapp-controller’s aggregated API server has a separate generator: hack/gen-apiserver.sh
    • The CRD yaml is generated via a separate script that is run by hack/build.sh
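
For example, a hypothetical CRD spec struct (AppSpec is an illustrative name): adding the optional NewField is backwards compatible, while removing or renaming ExistingField would break existing manifests:

  type AppSpec struct {
      // ExistingField has already shipped; removing or renaming it is a breaking change.
      ExistingField string `json:"existingField"`
      // NewField was added later as optional (omitempty), so older manifests still apply cleanly.
      NewField string `json:"newField,omitempty"`
  }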

Development Process

Controller specific workflows

  • To test changes, deploy them to a local or dev cluster via hack/deploy-test.sh
    • We use minikube; while other solutions may work, minikube definitely works. Remember to run eval $(minikube docker-env)
    • You can also use hack/deploy.sh to deploy
    • See dev.md for more details.
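
Putting those together, a typical inner development loop looks something like this (minikube assumed):

  eval $(minikube docker-env)   # point your docker CLI at minikube’s daemon so built images are visible in-cluster
  ./hack/deploy-test.sh         # build and deploy your changes to the dev cluster, then exercise them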

Automated Testing

We write mainly e2e and unit tests; some tools also have performance tests.

  • e2e
  • Unit
    • Can be found mixed in with the code, per golang custom
    • Can be run via hack/test.sh (or test-all.sh)
    • Code that is ‘functional’ (input/output, as opposed to relying on side effects) should be unit tested
      • E.g. we don’t mock the Kubernetes API for the sake of a unit test; we just rely on e2e tests to cover that part of the code.
    • A meaningful ‘unit’ for a test may include multiple structs or files; particularly for middle layers in dependency-injection patterns, there is no need to test them in isolation (for instance, the controller reconciliation tests)
    • If you remove one line or behavior from the code, only one test should fail. In other words, test only one unit per test.
    • While not all old tests use it, we prefer testify for assertions in new tests (see the sketch at the end of this section)
    • Targeted Unit tests for specific external integrations with detailed error and edge-case handling may mock those external dependencies
  • Coverage
    • We aren’t concerned with any fixed coverage percentage
    • New large features should generally add an e2e test (targeting at least the happy path, but consider which failure cases are important).
      • A single test doesn’t have to fully cover a feature if the test suite covers the feature: consider whether another test already covers this codepath before adding a new test.
    • New code that “fits” a unit test (per the above) should add a unit test
    • Updates to code with existing unit tests should update tests.
    • While ideally every bugfix includes a new test to prevent regression, this may not always be practical.
    • Example of our 80/20 approach: our test suite tends to cover only the majority case of integrations and tools (e.g. Harbor, not Docker Hub; Linux, not Windows). OSS contributors are welcome to improve our testing integration with third parties.
  • Test Assets:
    • If a test requires additional artifacts or assets they should live in a separate /assets subfolder.
    • Test assets should include comments describing how to generate or modify those assets.
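
To illustrate the testify preference noted above, a minimal, hypothetical unit test (splitKV is an invented helper under test, defined here so the sketch is self-contained):

  package config

  import (
      "fmt"
      "strings"
      "testing"

      "github.com/stretchr/testify/require"
  )

  // splitKV is a tiny, hypothetical unit under test.
  func splitKV(s string) (string, string, error) {
      parts := strings.SplitN(s, "=", 2)
      if len(parts) != 2 {
          return "", "", fmt.Errorf("expected key=value, got %q", s)
      }
      return parts[0], parts[1], nil
  }

  func TestSplitKV(t *testing.T) {
      k, v, err := splitKV("key=value")
      require.NoError(t, err) // fail immediately on an unexpected error
      require.Equal(t, "key", k)
      require.Equal(t, "value", v)
  }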

Issues, Branching, Pull Requests, Approval

  • Issues (see also the issue triaging docs for more info!)
    • Proposal Process
    • Prefer to leave issues open until documentation is complete
    • Docs typically live in a separate repo which renders to https://carvel.dev
    • When closing an issue manually, comment which release includes the fix so that others can easily find it.
  • Branching
    • Our default / primary branch is named develop
    • We do not have a convention for branch names
    • Prefer to delete branches from GitHub after they’ve been merged.
  • Pull Requests
    • Commits
      • Currently open-ended: commits can be intentionally staged, or messy with the intention of squashing them, etc.
      • We may revisit automated release tooling and commit squashing.
    • Generally, the author should ping in Slack after a PR is filed and ready for review
    • See our issues/triage doc for more info!
  • Refactors
    • If a new feature needs a large refactor, we prefer that the refactor land in a separate PR. At a minimum, developers should put the refactor in a separate commit. This helps scope reviews and minimize changesets.
  • Approvals
    • Maintainers will offer feedback on PRs but we have very little formalization around approval
    • Reviewers leave an “LGTM” or approval comment, which serves as a timestamped approval
    • Maintainers merge PRs when no outstanding comments are left other than LGTM / approval.

Versioning and Backwards Compatibility

Carvel uses semver (an x.y.z version structure), with all tools currently at major version x=0.

  • Bug fixes to existing releases increment the patch version (z), especially for “small” fixes (e.g. 0.30.1 → 0.30.2)
  • Otherwise, even just for important bug fixes, new releases increment the minor version (y), whether or not there is a breaking change (e.g. 0.30.2 → 0.31.0)
  • API changes within each tool are at different stages of maturity (e.g. kapp-controller’s “app” CR is much more mature than “packageInstall”)
  • Prefer to avoid breaking changes
    • We consider our “pre-1.0” phase to give us more flexibility, while still prioritizing user delight and non-breaking changes.
      • 1.0 should indicate 3-5+ years of hardened production use
      • 1.0 should offer a stronger guarantee against breaking changes
    • If we do make a breaking change, we want to fully automate upgrading across that break.
    • Sometimes a breaking change is unavoidable, e.g. when a bugfix closes a behavior loophole.

Github Actions: what gets run on PRs?

  • golangci-lint (can be run locally; note not all projects have identical linter configs)
  • hack/verify-no-dirty-files.sh - verifies that CI can build without any changes to git working directory
  • Test suite (hack/test-all.sh)
  • Aspirational: a single script that runs linter, no-dirty-files, and unit tests locally

Release Process

  • Open Source Software Releases: We’re mostly trying to use goreleaser, which relies on git tags, but this varies by repo; see the relevant dev.md for details
  • VMware releases: have their own process; VMware employees should see internal docs.
