Ingress only accessible from first server node #6925

Open · fatihmete opened this issue Oct 3, 2024 · 3 comments

fatihmete commented Oct 3, 2024

Environmental Info:
RKE2 Version:
rke2 version v1.30.5+rke2r1

Node(s) CPU architecture, OS, and Version:
Mixed nodes (RHEL 7 and RHEL 8)

Cluster Configuration:
3 server, 3 worker

Describe the bug:

After installing RKE2, Ingress is only reachable through the first server node. The nginx ingress controller is deployed as a DaemonSet and all of its pods are in the Running state.

Steps To Reproduce:

  • Fresh install of RKE2 (see the install sketch below).
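
For reference, a minimal sketch of the install following the RKE2 quick start; the config.yaml values are placeholders and the file is only needed on joining nodes:

```sh
# Sketch of a fresh RKE2 server-node install (quick-start style).
curl -sfL https://get.rke2.io | sh -
# Workers: set INSTALL_RKE2_TYPE="agent" on the install line above.

# Joining nodes point at the first server via /etc/rancher/rke2/config.yaml
# (values below are placeholders):
#   server: https://<first-server-ip>:9345
#   token: <cluster-token>

systemctl enable --now rke2-server.service   # rke2-agent.service on worker nodes
```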

Expected behavior:
Ingress ports 443 and 80 should be accessible on every node where the ingress controller is running.

Actual behavior:
Ingress ports 443 and 80 are only accessible on the first server node.
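
A quick way to show the symptom (node IPs below are placeholders) is to probe ports 80 and 443 on each node:

```sh
# Probe the ingress ports on every node; only the first server answers.
# Replace the IPs with the actual server/worker node addresses.
for ip in 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.21 10.0.0.22 10.0.0.23; do
  echo "--- $ip"
  curl -sk --connect-timeout 3 -o /dev/null -w '443: HTTP %{http_code}\n' "https://$ip/" || echo '443: no connection'
  curl -s  --connect-timeout 3 -o /dev/null -w ' 80: HTTP %{http_code}\n' "http://$ip/"  || echo ' 80: no connection'
done
```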

Additional context / logs:
When I applied the network policy from https://docs.rke2.io/known_issues#ingress-in-cis-mode, I lost access from the first server node too.
nginx-ingress version: 4.10.402
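
For reference, the policy on that page is roughly of this shape (a sketch only, not the exact manifest; the namespace and label selector here are assumptions, use the manifest from the linked known-issues page as the authoritative version):

```sh
# Sketch: apply a NetworkPolicy that admits traffic to the ingress controller pods.
# Namespace and label selector are assumptions for RKE2's packaged chart.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-nginx
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: rke2-ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
EOF
```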

brandond (Member) commented Oct 3, 2024

Are you sure that you don't have a host firewall blocking access to those ports? There's nothing in the default configuration that would do what you're describing.
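
Something along these lines on each non-answering node would rule that out (RHEL 7/8; exact services and rules vary, so treat this as a sketch):

```sh
# Host firewall / listener checks on a node that is not answering on 80/443.
systemctl status firewalld nftables 2>/dev/null   # host firewall services, if present
sudo iptables -S INPUT | grep -E 'DROP|REJECT'    # explicit drop/reject rules
ss -tlnp | grep -E ':(80|443) '                   # note: hostPort-based ingress may show
                                                  # no host listener even when it works
```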

fatihmete (Author) commented

> Are you sure that you don't have a host firewall blocking access to those ports? There's nothing in the default configuration that would do what you're describing.

The firewall service is disabled.
Interestingly, Ingress started working on one of the nodes after an unexpected reboot. Is there any service that could affect the functioning of Ingress?

brandond (Member) commented Oct 4, 2024

Not that I'm aware of. Have you checked the pod status to make sure that everything is running where it should be? What's the output of `kubectl get pod -o wide`?
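
A sketch of that check, narrowed to the ingress pods (namespace and label assume RKE2's packaged ingress-nginx chart):

```sh
# Which node is each ingress controller pod scheduled on, and is it Ready?
kubectl -n kube-system get pod -o wide -l app.kubernetes.io/name=rke2-ingress-nginx

# DaemonSet rollout: DESIRED/READY/AVAILABLE should match the eligible node count.
kubectl -n kube-system get daemonset | grep -i ingress
```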
