
Proposal: Support copying or aliasing contexts #102

Open
joemiller opened this issue Dec 14, 2018 · 18 comments

Comments

@joemiller

joemiller commented Dec 14, 2018

Example:

An engineering team using multiple GKE clusters may reasonably have these desires:

  1. Common tooling, used across the team, that relies on predictable cluster naming conventions, e.g. gke_project_zone_cluster-01
  2. The ability to reference these clusters by friendlier short names such as cluster-01, cluster-02, and so on.

Currently kubectx supports renaming contexts, which would address desire 2. However, renaming would break use case 1.

Providing an easy mechanism for copying contexts would give the team a straightforward way to address both desires. By keeping the full GKE name instead of renaming it, the team can distribute common tooling that relies on the full context name, and copying to a friendlier name makes it easier to operate against multiple clusters, e.g. kubectl get pods --context cluster-01 or kubectx cluster-01.

Implementing "copy" functionality would satisfy both desires.

@ahmetb
Owner

ahmetb commented Dec 14, 2018

As I said in #101, I don't think it's a common case to both rely on GKE context names on dev machines AND provide user-friendly names for them.

As a GKE advocate, I would strongly discourage anyone from relying on the GKE context name pattern (i.e. gke_PROJECT_ZONE_CLUSTERNAME, as you said) in any programming or scripting environment. For scripting/tooling, please use multiple kubeconfig files like the following:

KUBECONFIG=cluster-01.yaml gcloud container clusters get-credentials cluster-01

However, this issue may have some merit, so I'll keep it open for a while to see if there's enough demand for it and what the use cases are.

We would probably support this with a syntax similar to the current renaming syntax (A=B). So maybe A==B or something like that might work.

As you noted in your PR #101, this complicates the context deletion since they would be sharing user/cluster entries.

@cruschke

I can confirm @joemiller's example; we are running into similar issues where people want to use personalized aliases for clusters. People from different teams work with different sets of clusters: some work only with "their" clusters ("prod", "dev"), while SREs support multiple teams, so they need "team1.prod", "team1.dev", "team2.prod", etc.

We have implemented developer-workstation configuration management that rolls out team-specific sets of kubeconfigs, and the script itself is idempotent. If developers are supposed to edit kubeconfigs to give clusters nicknames, that would break any kind of kubeconfig configuration management.
Having an alias for cluster names would still allow the tooling to regenerate kubeconfigs while keeping the cluster nickname.

@ahmetb
Owner

ahmetb commented Dec 20, 2018

@cruschke this is a great use case report, thank you.

@stefanotorresi

I'm also in need of this feature.

This was actually implemented in the past, judging from 9ed6690#diff-10bafbbcaaa2bf26d2f237f58c279417

What was the rationale behind the decision to replace aliasing with renaming, instead of keeping both functions?

@ahmetb
Owner

ahmetb commented Feb 5, 2019

Every time you alias, you end up with more contexts. And the context name is meant to be human-friendly. This is also the reason why kubectl has a rename-context command.

Can you explain your use case? Why do you need to clone your contexts?

@stefanotorresi

Every time you alias, you end up with more contexts.

I am aware of that :)

the context name is meant to be human-friendly

Yet prominent cloud providers generate kubeconfigs via their CLI SDKs with names that are not exactly user-friendly. GKE has been cited above; DOKS is another example.

Why do you need to clone your contexts?

Because the original ones are generated programmatically by third-party tools, and I want to keep those untouched because I may need to re-run those tools, e.g. for credential rotation or other configuration updates.

@ahmetb
Owner

ahmetb commented Feb 6, 2019

As a workaround, I'd recommend doing what was deleted in 9ed6690#diff-10bafbbcaaa2bf26d2f237f58c279417, as you said. I'm hesitant about increasing the interaction surface for a feature that only a few people would use.

The algorithm was:

  • for $CONTEXT, read its $USER and $CLUSTER:
    • $USER: kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.user}"
    • $CLUSTER: kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.cluster}"
  • create a $NEW context with kubectl config set-context $NEW --cluster="$CLUSTER" --user="$USER"

In the meantime, please wrap this in a bash function as a workaround.
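The steps above can be sketched as a small bash function. The name kubectx-copy is illustrative, not an actual kubectx command:

```shell
# Sketch of the workaround above as a bash function.
# Usage: kubectx-copy SOURCE_CONTEXT NEW_CONTEXT
kubectx-copy() {
  local src="$1" new="$2"
  local user cluster
  # Read the user and cluster entries referenced by the source context.
  user="$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${src}\")].context.user}")"
  cluster="$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${src}\")].context.cluster}")"
  # Create a new context pointing at the same user/cluster entries.
  kubectl config set-context "$new" --cluster="$cluster" --user="$user"
}
```

For the use case above you would run something like `kubectx-copy gke_project_zone_cluster-01 cluster-01`. Note that, as discussed in #101, the copy shares user/cluster entries with the original, which complicates deletion.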

If more people report this I might be open to adding it, so I'm keeping the issue open. 👍

@lucianf

lucianf commented Jan 23, 2020

+1 for aliases. Oracle's OKE also programmatically generates kubeconfigs with randomised context names, and I'd rather not touch the default configuration, for reasons similar to those mentioned by @stefanotorresi. Renaming and aliasing should co-exist, since they cater to different use cases.

@unixorn

unixorn commented Jan 23, 2020

+1 for aliases and the use case @cruschke mentioned where different teams have different aliases for the same clusters, and may have helper scripts that depend on those aliases.

Yes, they should use a config file to store the cluster names, but they don't, and my group doesn't have the ability to force them to.

@ahmetb ahmetb pinned this issue Mar 19, 2020
@dharamsk

dharamsk commented Jul 2, 2020

+1, we ran into the same issue with internal tooling expecting real cluster names instead of aliases. I'll do the workaround and share it with others, but I can't imagine a downside to making this the default behavior for aliasing contexts.

@dharamsk

dharamsk commented Jul 2, 2020

Strike that. I was able to get this working by recreating the original context. I now have both the aliased context and the original context listed, so I can reference either the original name or the alias.

@aisrael

aisrael commented Sep 8, 2020

Came here to file a similar request and saw this, so I'm just adding my vote here.

Use case: we have standardized, long-form context names that our (internal) tooling generates and expects (e.g. cluster.prod.us.domain.com).

I just want to be able to reference it when doing ad hoc CLI work by running: kubectx us

@zhangsean

Usually, when I create a new k8s cluster, it provides a new kubeconfig for that cluster. I have to manually merge the new context into ~/.kube/config before kubectx can switch to it.
Support for merging or copying a new context into the current kubeconfig would be very helpful.

# Assume default k8s config contains 5 contexts
kubectx | wc -l
5
# There is a new k8s config file contains 1 context
grep "\- context" ./config | wc -l
1
# Merge the new context into default config file
kubectx -m ./config
# The default k8s config will contain 6 contexts
kubectx | wc -l
6

@ahmetb
Owner

ahmetb commented Dec 14, 2020

@zhangsean kubectl already lets you merge kubeconfigs. You could easily search and find it. https://ahmet.im/blog/mastering-kubeconfig/
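For reference, the usual way to merge kubeconfig files with stock kubectl (as described in the linked post) is a colon-separated KUBECONFIG list plus `config view --flatten`. A minimal sketch, wrapped as a function; the name merge_kubeconfig is illustrative, not a real command:

```shell
# Merge a new kubeconfig file into an existing one using only kubectl.
# Usage: merge_kubeconfig NEW_FILE [DEST_FILE]  (DEST_FILE defaults to ~/.kube/config)
merge_kubeconfig() {
  local new="$1" dest="${2:-$HOME/.kube/config}"
  # KUBECONFIG accepts a colon-separated list of files;
  # --flatten writes the merged result as a single self-contained config.
  KUBECONFIG="$dest:$new" kubectl config view --flatten > "$dest.merged" &&
    mv "$dest.merged" "$dest"
}
```

For example, `merge_kubeconfig ./config` would fold zhangsean's new single-context file into the default kubeconfig (consider backing up ~/.kube/config first).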

Furthermore, this issue is clearly about duplicating/aliasing contexts, which is NOT related to what you want. Please do not conflate the two. Your request is a separate issue, and won't be implemented, since it's more suitable for kubectl itself or for kubectl plugins (which I believe already exist).

@zhangsean

@ahmetb Thanks!

@danielfoehrKn

In case anyone is still interested in this functionality, the tool kubeswitch might solve the alias problem for you by transparently defining the alias without changing the underlying kubeconfig file. Full disclosure: I am its author.

@morningspace

Came across this while looking for a similar feature to manage my contexts. Thanks @danielfoehrKn for sharing; I like the idea of creating aliases separately rather than modifying the kubeconfig directly. It also allows a one-to-many mapping between aliases and contexts when needed. I was actually using a similar approach when creating a homegrown tool for this. The only thing that makes it a bit tricky is that, to maintain the mapping, we need to be careful when a context is deleted.

OTOH, I still think having this feature inside kubectx would be a good idea, as it doesn't take thousands of contexts to run into this issue. For example, when working with OpenShift, any time I use oc login to log into a cluster, it auto-creates a context for me whose name is not user-friendly. If I rename it to something else, then the next time I re-login for some reason (e.g. an expired token), oc login generates a new, duplicate context. This is annoying.

@ahmetb If that makes sense, I'd be glad to spend some time digging into it and submit a PR. Or, if it's not appropriate to have such a thing in kubectx, would it make sense to have a separate kubectl plugin, such as ctxalias, on top of ctx?

@niclan

niclan commented Apr 11, 2024

Put this somewhere in your $PATH as kubectx:

#!/bin/bash
#
# my wrapper script for kubectl ctx

# No argument: just do the normal thing
case $1 in
    '') kubectl ctx
        exit $?;;
esac

CONTEXTS="$(kubectl ctx)"

# Case-insensitive substring match against the context list
SELECT="$(echo "$CONTEXTS" | grep -i "$1")"
# Count non-empty matching lines; plain `wc -l` would report 1 for no matches,
# because echo emits a trailing newline even for an empty string
COUNT=$(echo "$SELECT" | grep -c .)

case $COUNT in
    0) echo "No context found for '$1'"
       exit 1;;
    1) kubectl ctx "$SELECT"
       exit 0;;
    *) echo "Multiple contexts found for '$1':"
       echo "$SELECT"
       exit 1;;
esac
