Mastering Microservices from Zero to EKS: A Hands-On Guide to Building Your Microservice Project
Key Takeaways:
Microservices are Complex: Embrace the complexity and be prepared to invest significant time and effort in learning the intricacies of microservice architecture, containerization, orchestration, and automation.
Documentation is Key: Document every aspect of your project, from infrastructure setup to pipeline configurations. This will save you countless hours of debugging and troubleshooting.
Automate Relentlessly: Automate everything that can be automated, from testing and deployment to infrastructure provisioning. This will reduce human error, speed up development cycles, and improve reliability.
Monitor Vigilantly: Implement comprehensive monitoring and alerting systems to detect and address issues proactively. Don't underestimate the importance of logging and centralized log management.
Security First: Security should be a top priority from the outset. Regularly scan for vulnerabilities, apply patches promptly, and enforce strict access controls.
Learn from Mistakes: Don't be discouraged by failures. Every error is an opportunity to learn and improve.
Toolset:
AWS CLI: For interacting with AWS resources.
KUBECTL: For managing Kubernetes objects.
EKSCTL: For creating and managing EKS clusters.
Launch EC2 instances with these specifications:
Ubuntu (preferably 22.04 or higher)
t2.large instance type
25 GiB of EBS storage (the 25 GiB is disk space, not memory; a t2.large comes with 8 GiB of RAM)
# AWS CLI (just copy and paste :))
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws configure
# KUBECTL CLI
# Note: this URL pins kubectl 1.19.6, which is dated; check the AWS EKS docs for a build matching your cluster version
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
# EKSCTL
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
With our EC2 instance now active, it's time to establish its communication rules. Head to the instance's security settings and locate the "Security Groups" tab. Click "Edit inbound rules" to adjust which types of traffic the instance can receive. From here, open the ports your applications and services need (for example, 22 for SSH, 8080 for Jenkins, and 80/443 for web traffic).
Now that the security groups are set, let's connect to the instance. Use your terminal and enter the following command, replacing 'keypair.pem' with the name of your SSH key file and 'publicip' with the instance's public IP address:
ssh -i keypair.pem ubuntu@publicip
Let's establish a designated workspace for our scripts. Create a new folder named "scripts" on your instance. This folder will serve as the central repository for all the scripts required to manage Jenkins, AWS CLI, KUBECTL, and EKSCTL commands throughout our project. Before running anything else on a fresh Linux EC2 instance, though, update it first.
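A minimal sketch of those two steps, run over SSH on the instance:

```shell
# Bring the package index and installed packages up to date first
sudo apt update && sudo apt upgrade -y

# Create a central workspace for the project's scripts
mkdir -p ~/scripts
cd ~/scripts
```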
With our workspace ready, it's time to empower our EC2 instance with the right permissions to orchestrate our Kubernetes cluster. This is where AWS Identity and Access Management (IAM) comes into play.
IAM Role Creation:
Navigate to the AWS Management Console and search for "IAM."
In the left pane, select "Roles" and then click on "Create role."
Choose the "EC2" use case, then click "Next: Permissions."
Policy Attachment:
Opt for "Attach policies directly."
Search for and add the following AWS managed policies:
AmazonEC2FullAccess
AmazonEKSClusterPolicy
AmazonEKSWorkerNodePolicy
AWSCloudFormationFullAccess
IAMFullAccess
Creating a Custom Policy:
Click "Create policy" in the top right corner.
Select "JSON" as the policy editor and remove any existing content.
Paste the following JSON code into the editor:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
With an IAM user in place (create one now if you haven't, attaching the policies above), it's time to equip it with the tools to interact with AWS resources programmatically. This is where access keys come in.
- Access Key Creation:
Within the IAM user's page, click on the "Security credentials" tab.
Scroll down to the "Access keys" section and click "Create access key."
Choose "Command Line Interface (CLI)" as the access key type and click "Create access key."
- Secure Download:
A pop-up will appear displaying your new access key ID and secret access key. This is the ONLY time you will be able to view or download the secret access key, so be sure to:
Download the .csv file containing the keys. Store it in a secure location (e.g., a password manager or encrypted storage).
NEVER share your secret access key with anyone.
Now we can build our EKS cluster. This is where your microservices will reside and interact.
Step 1: Cluster Creation
The first step is to create the EKS cluster itself. Execute the following command:
eksctl create cluster --name=EKS-1 \
  --region=ap-south-1 \
  --zones=ap-south-1a,ap-south-1b \
  --without-nodegroup
Let's break this down:
--name: Specifies the name of your cluster (EKS-1 in this case).
--region: Indicates the AWS region to create the cluster in (ap-south-1).
--zones: Defines the availability zones for high availability (ap-south-1a and ap-south-1b).
--without-nodegroup: This means we'll create the control plane (the brains of the cluster) first, and the worker nodes (where your apps run) later.
This step might take a while (10-12 minutes) as AWS provisions the necessary resources.
Step 2: IAM OIDC Provider Association
Before we can create worker nodes, we need to associate an IAM OpenID Connect (OIDC) provider with our cluster. This allows Kubernetes to securely authenticate with AWS services. Run the following command:
eksctl utils associate-iam-oidc-provider \
  --region ap-south-1 \
  --cluster EKS-1 \
  --approve
Step 3: Node Group Provisioning
Now, let's create a group of worker nodes (EC2 instances) that will run your microservices. This command does the heavy lifting:
eksctl create nodegroup --cluster=EKS-1 \
  --region=ap-south-1 \
  --name=node2 \
  --node-type=t3.medium \
  --nodes=3 \
  --nodes-min=2 \
  --nodes-max=4 \
  --node-volume-size=20 \
  --ssh-access \
  --ssh-public-key=DevOps \
  --managed \
  --asg-access \
  --external-dns-access \
  --full-ecr-access \
  --appmesh-access \
  --alb-ingress-access
Key points:
--name: Name of the node group (node2).
--node-type: EC2 instance type for the nodes (t3.medium).
--nodes: Initial number of nodes (3).
--nodes-min and --nodes-max: Autoscaling range for the node group (2-4).
--node-volume-size: Storage size for the nodes (20 GB).
--ssh-access: Enables SSH access to the nodes (replace "DevOps" in --ssh-public-key with your own key pair name).
--managed: Tells EKS to manage the node group for you.
The remaining flags grant specific permissions to AWS services for the node group.
Additional Configuration:
Open Inbound Traffic: In your EC2 security group, ensure that inbound traffic from the Kubernetes control plane is allowed.
Service Account and Roles: Create a Kubernetes service account, an IAM role, and bind them together. This allows your applications running in pods to interact with AWS resources.
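eksctl can do this in one step, creating both the Kubernetes service account and an IAM role wired to the cluster's OIDC provider. A sketch only; the namespace, account name, and policy ARN below are illustrative placeholders, not values from this guide:

```shell
# Create a Kubernetes service account bound to an IAM role via the
# cluster's OIDC provider (names and policy ARN are illustrative)
eksctl create iamserviceaccount \
  --cluster=EKS-1 \
  --region=ap-south-1 \
  --namespace=webapps \
  --name=app-service-account \
  --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```

This requires the OIDC provider association from Step 2 to be in place.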
Note: end to end, the cluster and node group creation can easily take 10-12 minutes.
Installing Java
First, we need Java as the foundation for Jenkins. Use the following command to install the OpenJDK 17 headless runtime (a JRE, which is all Jenkins needs to run):
sudo apt install openjdk-17-jre-headless -y
Installing Jenkins
Next, let's get Jenkins up and running. Remember, it's always wise to refer to the official documentation when installing new technologies. Jenkins' official documentation provides the most accurate and up-to-date instructions:
Head to the Source: Go to the official Jenkins Debian package repository: https://pkg.jenkins.io/debian-stable/
Follow the Instructions: The instructions on the page will guide you through adding the Jenkins repository to your system and installing the latest stable version using apt.
By following the official guide, you ensure you get the most reliable installation process and access to any specific configuration advice for your setup.
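For reference, at the time of writing the documented steps look roughly like this; verify the key URL and repository line against the page above, as they change over time:

```shell
# Add the Jenkins signing key and apt repository (per the official docs;
# confirm the current key filename and URL at pkg.jenkins.io)
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  "https://pkg.jenkins.io/debian-stable binary/" | \
  sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt-get update
sudo apt-get install -y jenkins

# Jenkins listens on port 8080 by default; confirm it is running
sudo systemctl status jenkins --no-pager
```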
With Jenkins ready to serve as our automation hub, we need to enhance its capabilities to work with Docker and GitHub seamlessly.
Installing Docker
First, let's enable Jenkins to manage containers. Execute the following command to install Docker:
sudo apt install docker.io
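Jenkins runs as its own system user, which by default cannot talk to the Docker daemon. A common (if permissive) fix is to add the jenkins user to the docker group; a sketch, assuming the standard docker.io package layout:

```shell
# Allow the jenkins user to access the Docker socket
sudo usermod -aG docker jenkins

# Restart both services so the group change takes effect
sudo systemctl restart docker
sudo systemctl restart jenkins

# Sanity check: list containers as the jenkins user
sudo -u jenkins docker ps
```

Note that docker group membership is effectively root-equivalent on the host, so treat this Jenkins instance accordingly.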
Configuring Docker in Jenkins
Plugin Installation:
Go to the Jenkins dashboard and click on "Manage Jenkins."
Select "Manage Plugins" and navigate to the "Available" tab.
Search for "Docker" and select the relevant plugins (e.g., "Docker Pipeline," "Docker").
Click "Install without restart."
Docker Cloud Configuration:
After installation, return to "Manage Jenkins" and then "Configure System."
Scroll to the "Cloud" section and add a new "Docker" cloud.
Enter the ID of your Docker Hub credentials (make sure it matches the credentials ID referenced in your Jenkinsfile).
GitHub Integration: Multibranch Scan Webhook
To automate the process of fetching code from GitHub and triggering builds in Jenkins, we'll utilize the "Multibranch Scan Webhook" plugin:
Plugin Installation:
Go to "Manage Jenkins" > "Manage Plugins" > "Available."
Search for "Multibranch Scan Webhook" and install it.
Refresh the Jenkins configuration page.
Webhook Setup:
In your Jenkins job configuration, under "Branch Sources," you'll now see a "Scan by webhook" option. Check this box.
Click the "?" icon next to it. Copy the provided trigger URL template and modify it by replacing "jenkins_url" with your Jenkins URL and "token_name" with a token of your choosing.
GitHub Configuration:
Go to your GitHub repository's settings and click on "Webhooks."
Click "Add webhook" and paste the modified URL from Jenkins into the "Payload URL" field.
Set the content type to "application/json" and ensure the webhook is active.
With these configurations, Jenkins will now automatically scan your GitHub repository for changes and trigger builds whenever new code is pushed. This streamlines your development workflow and ensures your Dockerized microservices are always up-to-date.
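To make this concrete, here is a minimal declarative Jenkinsfile sketch of the kind such a multibranch job might run. Every name in it (the image, the 'docker-cred' credentials ID, the manifest path, the namespace) is a hypothetical placeholder, not a value from this guide:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical image name -- replace with your Docker Hub repository
        IMAGE = "yourdockerhubuser/frontend:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build Image') {
            steps {
                sh 'docker build -t $IMAGE .'
            }
        }
        stage('Push Image') {
            steps {
                // 'docker-cred' is a placeholder Jenkins credentials ID
                withDockerRegistry(credentialsId: 'docker-cred', url: 'https://index.docker.io/v1/') {
                    sh 'docker push $IMAGE'
                }
            }
        }
        stage('Deploy to EKS') {
            steps {
                sh 'kubectl apply -f deployment-service.yml -n webapps'
            }
        }
    }
}
```

The webhook configured above triggers a branch scan, and any branch containing a Jenkinsfile like this gets built automatically.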
With our EKS cluster up and running, it's time to grant Jenkins the authority it needs to interact with Kubernetes resources securely. This is where Kubernetes' robust Role-Based Access Control (RBAC) mechanism comes in. By defining Service Accounts, Roles, and RoleBindings, we can meticulously control what Jenkins can and cannot do within our cluster.
Why RBAC is Crucial
Remember the security stumbles I mentioned in my previous microservice project? Misconfigured access controls were a major pain point. RBAC helps prevent unauthorized access, minimizing the risk of accidental or malicious actions that could compromise our applications.
Kubernetes Manifests: The Blueprint for RBAC
Let's break down the Kubernetes manifests we'll be using to establish Jenkins' permissions:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
This manifest creates a ServiceAccount named "jenkins" within the "webapps" namespace. Think of it as an identity for Jenkins within Kubernetes.
Role (app-role):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
This Role defines the permissions we want to grant to the "jenkins" ServiceAccount. It outlines which Kubernetes resources (pods, deployments, services, etc.) Jenkins can access and what actions (get, list, create, update, delete) it can perform on them.
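The rule entries themselves are not shown above. As an illustration only, a broad rule set of the kind described might look like this; tighten the verbs, resources, and apiGroups for production:

```yaml
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["pods", "services", "deployments", "replicasets", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```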
- RoleBinding (app-rolebinding):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - namespace: webapps
    kind: ServiceAccount
    name: jenkins
This RoleBinding acts as the glue that binds the "app-role" to the "jenkins" ServiceAccount. It essentially says, "The 'jenkins' ServiceAccount is allowed to do everything specified in the 'app-role'."
Applying the Manifests
To make these permissions a reality, we'll use kubectl apply:
kubectl apply -f serviceaccount.yaml
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
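Jenkins still needs a credential to authenticate as this ServiceAccount. On Kubernetes 1.24+ a token can be minted with kubectl create token; a sketch, assuming the names used above:

```shell
# Mint a token for the jenkins ServiceAccount (Kubernetes 1.24+)
kubectl create token jenkins -n webapps

# Verify the binding actually grants what you expect
kubectl auth can-i create deployments -n webapps \
  --as=system:serviceaccount:webapps:jenkins
```

Store the printed token in Jenkins as a "Secret text" credential and reference it from your pipeline's kubectl steps.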
Fine-Tuning Permissions
The provided app-role is quite broad, granting Jenkins extensive permissions. In a production environment, it's crucial to tailor the Role to match the specific needs of your Jenkins pipelines. For instance, you might want to restrict Jenkins from deleting certain resources or limit its access to specific namespaces.
By carefully crafting these RBAC configurations, you ensure that Jenkins operates safely within your Kubernetes cluster, reducing the risk of security vulnerabilities and ensuring the smooth operation of your microservices.
Obtaining the External IP
The frontend-external service (Kubernetes service names are lowercase) is likely configured as a LoadBalancer service type. This means it will be assigned an external IP address or hostname that you can use to access your application from the internet. To obtain it, use the following command:
kubectl get svc frontend-external -n <your-namespace>
Replace <your-namespace> with the actual namespace where the frontend-external service is deployed. The command output will show the service's external address in the EXTERNAL-IP column.
Navigating to Your Application
Once you have the external IP, enter it into your web browser's address bar. This should direct you to your e-commerce website's frontend, allowing you to browse products, add items to your cart, and complete purchases.
A Word of Caution: Patience and Persistence
It's important to note that achieving a successful build and deployment can often take multiple tries. The complexity of microservices and the intricacies of Kubernetes can sometimes lead to unexpected errors or configuration issues.
Don't be discouraged if your first few attempts don't go as planned. Leverage resources like Stack Overflow and online communities to troubleshoot problems and seek guidance. Remember, every error is a learning opportunity that brings you closer to mastering microservice deployment on EKS.
Thank you for reading. If I can do it, then so can you.