Amazon Web Services – Resolving Server Authorization Error in Amazon EKS API Server

Last Updated : 28 Mar, 2023

In this article, we will look into how to resolve the error "You must be logged in to the server (Unauthorized)" that users get when connecting to an Amazon Elastic Kubernetes Service (Amazon EKS) API server.

Here we have an Amazon EKS cluster that was created by an IAM user. Initially, only the creator of an Amazon EKS cluster has system:masters permissions to access and communicate with the cluster through the kubectl command line.

Let us first verify our AWS Identity and Access Management (IAM) user. In the AWS Command Line Interface (AWS CLI), we will run the below command to show the identity currently configured on our local machine:

$ aws sts get-caller-identity
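If the call succeeds, it returns the user ID, account, and ARN of the caller. The output looks roughly like the following; the account ID and user name here are placeholders:

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/eks-cluster-creator"
}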

As this user is the cluster creator, we'll update the kubeconfig file by using the below command (replace eks-cluster-name and aws-region with your cluster's name and AWS Region):

$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region
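On success, the command confirms the context it wrote to the kubeconfig file, with output along these lines (the Region, account ID, and path are illustrative):

Added new context arn:aws:eks:us-east-1:111122223333:cluster/eks-cluster-name to /home/user/.kube/config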

The resulting configuration file is created at the default kubeconfig path (~/.kube/config) in your home directory. A kubeconfig file is a way to organize information about clusters, users, namespaces, and authentication mechanisms.

The kubectl command-line tool uses kubeconfig files to find the information it needs about a cluster and to understand how to communicate with that cluster's API server.

Now that the kubeconfig is updated, and because we are the cluster creator, we are able to run kubectl commands, such as listing services as below:

$ kubectl get svc
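On a fresh cluster this typically returns just the default kubernetes service; the exact IP and age will differ:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   25m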

However, if we switch to any other IAM user or assume an IAM role, then we can't communicate with the cluster using kubectl.

To demonstrate, we are now logged into an Amazon Elastic Compute Cloud (Amazon EC2) instance. We'll run aws sts get-caller-identity again. This time it shows the IAM role attached to the instance.
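For an instance profile, the caller identity is an assumed-role ARN along these lines; the role name and instance ID below are placeholders:

{
    "UserId": "AROASAMPLEROLEID:i-0123456789abcdef0",
    "Account": "111122223333",
    "Arn": "arn:aws:sts::111122223333:assumed-role/eks-ec2-role/i-0123456789abcdef0"
}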

We will run the same update-kubeconfig command as we did before and see if we can communicate with the cluster by running a kubectl command. This time the command generates an unauthorized error, because the IAM role that's attached to the EC2 instance has no permissions inside the cluster.
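The failure looks like this:

$ kubectl get svc
error: You must be logged in to the server (Unauthorized)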

So, from the cluster creator's session, we will add permissions for the IAM role. This enables the EC2 instance to communicate with the cluster using kubectl commands.

First, we will add the IAM role to the aws-auth ConfigMap for the cluster using the below command:

$ kubectl edit configmap aws-auth -n kube-system

Under mapRoles, we will add the role and give it system:masters permissions. We will save the changes and try again from the EC2 instance; the kubectl command should now succeed.
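For reference, a mapRoles entry that grants the role system:masters access looks roughly like the sketch below. The account ID and role name are placeholders; use the ARN of the role attached to your instance, and add the entry alongside any existing mappings (such as the node instance role):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-ec2-role
      username: eks-ec2-role
      groups:
        - system:masters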

Note that the system:masters group allows superuser access to perform any action on any resource. If you want to restrict the access for this user or role, map it to a different group instead, and create a Kubernetes Role and RoleBinding for that group using Kubernetes role-based access control (RBAC).
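As a minimal sketch of that restricted setup, suppose the role were mapped to a hypothetical group named eks-read-only in mapRoles. The following Role and RoleBinding would then allow it only to view pods and services in the default namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
  namespace: default
subjects:
  - kind: Group
    name: eks-read-only
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io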

