Getting started with Runtime Fabric on AWS Elastic Kubernetes Service (EKS)

Developer Relations
30 min read

In this section, we'll walk you through the steps to manually provision an Elastic Kubernetes Service (EKS) cluster in AWS and use it to install Runtime Fabric (RTF) on BYO EKS. This guide is intended to give you the basic knowledge to get started quickly and to understand how everything fits together end to end, from preparation through installation, deployment, and testing an application. It is not intended as a deep dive into specific use cases.

There are multiple ways to create an EKS cluster. You can do it manually through the GUI in the AWS console, which is the simplest way and requires the least preparation. You can use an automation tool like Terraform to set up and manage your infrastructure, which requires installing Terraform. You can also create and manage an EKS cluster with eksctl, which is the approach this guide uses.


Step 1: Setting up EKS Prerequisites

Depending on your needs and requirements, you will have certain prerequisites. For example: Do you need a dedicated VPC for your worker nodes? What are the egress/ingress requirements? Is the cluster public or private? Answering these questions will help determine what you need to prepare before building your EKS cluster. A couple of references are available on the AWS docs site: Create a VPC for EKS and EKS VPC Considerations.

There are other internal resources to help you with creating a VPC. Creating a VPC sounds simple, but it can be a daunting task given all the planning for inbound and outbound traffic, whether instances are private or public, and redundancy requirements (availability zones). Sometimes the region where you want to create a VPC has hit limits (VPC, NAT Gateway, and so on) that prevent you from creating one. If you plan on creating a VPC, review the AWS documents above; you can also use this doc as an example (see the AWS Setup section on p.7). Once the VPC is in place, you can create an EKS cluster using it.

To get you started quickly with RTF on EKS, this guide uses the command below to create a public (meaning your pods can be exposed to the world with a load balancer) EKS cluster in the default VPC of the specified region.

You will need to install eksctl. Below is an example of creating an EKS cluster with eksctl.

Once eksctl is installed, run the command below after tailoring it with your name, region, node counts, SSH key, and so on:

eksctl create cluster \
--name={yourname}-eks \
--version=1.18 \
--region=us-east-2 \
--nodegroup-name=standard-workers \
--node-type=m4.large \
--nodes=2 \
--nodes-min=1 \
--nodes-max=4 \
--ssh-access=true


To create an EKS cluster in your own VPC, follow the example below, where you specify your own network for the workers. Remove the { } and replace subnet-id with your actual subnet IDs.

eksctl create cluster \
--name={yourname}-eks \
--version=1.18 \
--region=us-east-2 \
--nodegroup-name=standard-workers \
--node-type=m4.large \
--nodes=2 \
--nodes-min=1 \
--nodes-max=4 \
--ssh-access=true \
--ssh-public-key=~/.ssh/ \
--managed \
--vpc-private-subnets={subnet-id,subnet-id}


Step 2: EKS Validation

Next, with the AWS CLI and kubectl installed, update your kubeconfig so kubectl points at the new cluster:

aws eks --region region update-kubeconfig --name cluster_name


Then run the command:

kubectl get nodes


If you see the nodes listed, each with a status of Ready, you are good to move on to the next step.
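If your cluster has more than a couple of nodes, counting the Ready entries is quicker than eyeballing the list. Here is a sketch that uses sample output in place of a live cluster; on a real cluster you would pipe kubectl get nodes --no-headers into the same awk filter (the node names below are made up):

```shell
# Sample output standing in for: kubectl get nodes --no-headers
NODES='ip-192-168-1-10.us-east-2.compute.internal   Ready   <none>   5m   v1.18.9
ip-192-168-2-20.us-east-2.compute.internal   Ready   <none>   5m   v1.18.9'

# Count nodes whose STATUS column (field 2) reads Ready
READY=$(printf '%s\n' "$NODES" | awk '$2 == "Ready"' | wc -l)
echo "$READY"
```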

Step 3: Install Runtime Fabric

rtfctl is supported on Windows, MacOS (Darwin), and Linux. Download this utility using the URLs below:


Windows:

curl -L -o rtfctl.exe


MacOS (Darwin):

curl -L -o rtfctl



Linux:

curl -L -o rtfctl


Change file permissions for the rtfctl command:

sudo chmod +x rtfctl


Step 4: Create RTF in Runtime Manager

Log in to Anypoint Platform and navigate to Runtime Manager, click Runtime Fabric in the left navigation bar, and hit the Create Runtime Fabric button. Give your cluster a name:


Hit Continue to acknowledge that you have read the Support responsibility.

runtime fabric

Copy the validation code when the fabric is created.

runtime fabric

Next, validate that your Kubernetes environment is ready for installation. Run this command to validate, where activation data is the string you copied in the previous step:

rtfctl validate {activation_data}


The validate option verifies that:

- The Kubernetes environment is running.
- All required components exist.
- All required services are available.

The rtfctl command outputs any incompatibilities with the Kubernetes environment. Next, install Runtime Fabric by running the following command:

rtfctl install {activation_data}


{activation_data} is the activation data obtained after creating the Runtime Fabric using Runtime Manager. During installation, the rtfctl utility displays any errors encountered.

Step 5: Validate the installation

If everything went well, you will see Runtime Fabric is ready and the installation exits without errors. At this point, validate the installation with these steps:

Run the command below to check whether any pods are in an error state:

kubectl get pods --all-namespaces
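To surface only the problem pods, you can filter on the STATUS column. A sketch using sample output (the namespace and pod names are made up; on a live cluster, pipe kubectl get pods --all-namespaces --no-headers into the same filter):

```shell
# Sample output standing in for: kubectl get pods --all-namespaces --no-headers
PODS='rtf   agent-0          1/1   Running            0   2m
rtf   mule-agent-xyz   0/1   CrashLoopBackOff   3   2m'

# Print only pods whose STATUS column (field 4) is not a healthy state
BAD=$(printf '%s\n' "$PODS" | awk '$4 != "Running" && $4 != "Completed"')
echo "$BAD"
```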


Check the Runtime Manager UI and make sure the status is green and the health checks pass.

cluster status

Next, assign RTF to your environment.

rtf environments
Step 6: Update your Mule license key

After the installation has completed successfully, update the Mule license key. Base64 encode the new Mule .lic license file provided by MuleSoft. On macOS, run the following command:

base64 -b 0 license.lic


On Linux, run the following command:

base64 -w0 license.lic


On Windows, a shell terminal emulator (such as Cygwin) or access to a Unix-based computer is required; transfer the file to your Unix environment if necessary. Then run the following command to Base64 encode the license key:

base64 -w0 license.lic
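Because the no-wrap flag differs between the GNU (-w0) and BSD/macOS (-b 0) variants of base64, here is a sketch that picks the right flag automatically. The license.lic below is a dummy file created only for illustration; use the real .lic file from MuleSoft:

```shell
# Dummy license file for illustration only
printf 'dummy-license-content' > license.lic

# GNU base64 advertises --wrap in its help text; BSD/macOS uses -b instead
if base64 --help 2>&1 | grep -q -- '--wrap'; then
  ENCODED=$(base64 -w0 license.lic)   # GNU/Linux
else
  ENCODED=$(base64 -b 0 license.lic)  # BSD/macOS
fi
echo "$ENCODED"
```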


Update the Mule license key:

rtfctl apply mule-license BASE64_ENCODED_LICENSE


To verify the Mule license key has been applied correctly, run:

rtfctl get mule-license


Step 7: Create Ingress Controller (Using Nginx)

One of the most popular Ingress controllers is Nginx; however, customers might be using different solutions. In this example, we are using an AWS Network Load Balancer with the Nginx ingress controller. More info can be found here.

The high level architecture diagram looks like this:

ingress arch

The following command will create all necessary resources for the ingress controller:

kubectl apply -f


After all the Kubernetes resources are created, log in to the AWS Console to validate that a new Network Load Balancer has been created, and note down the hostname of the NLB. You can also run the command below to get the URL/IP address of your load balancer:

kubectl get services -n ingress-nginx


Next, enable inbound traffic and configure DNS by updating Runtime Fabric with a domain that you own and want your applications/APIs to use. Remember to include the protocol when adding the domain(s). By default, applications will accept all domains if no domain is defined.


If you would like a more user-friendly way to access your application/EKS cluster, you can create a public hosted zone in Route53 and create a new record set that points to the NLB (refer to the Route53 doc). If you don't have access to Route53, edit /etc/hosts (on your *nix system) to add the IP <-> hostname mapping.
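For example, a hosts-file entry mapping a hypothetical hostname to the NLB's resolved IP might look like this (both values are placeholders):

```
203.0.113.10    hello.example.com
```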

Important note: you are sharing one hostname across multiple applications, using a different path for each app. You need to follow the steps in the Update application Ingress section to update your ingress after application deployment.

If you are using a wildcard approach (like the screenshot above), where the app's URL is prepended to the domain name (for example, if you use http://* as your domain), then the Update application Ingress section is not needed.

Step 8: Deploy a test application

A hello world app is provided as an example to get you started quickly. The app has an HTTP listener at /hello on port 8081. Once deployed, make sure your app's URL is reflected accordingly if you added a domain in the inbound traffic tab (in the previous step). If no domain is defined, the URL is not available, but you can still access your app via the FQDN of the NLB.

deploy app

Next, update your application Ingress. This is only applicable if you are NOT using the wildcard approach for Ingress.

Update the rewrite annotation: change the default / to /$2

annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$2


Update the path specs: update /{app-name} to /{app-name}(/|$)(.*)

ingressClassName: nginx
rules:
- host:
  http:
    paths:
    - path: /hello-byok8s-2(/|$)(.*)
      pathType: ImplementationSpecific
      backend:
        serviceName: hello-byok8s-2
        servicePort: 8081


To do this, you can run this command to edit and update the ingress definition for your app/api.

kubectl edit ingress {app-name} -n {namespace}


You will get a vi console with the contents of the ingress YAML. Edit and update the definition as described above.
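To see what the rewrite annotation actually does, here is a quick local sketch that mimics Nginx's behavior with sed: the path spec captures everything after the app prefix in group 2, and rewrite-target /$2 strips the prefix before the request reaches your app (the request path /hello-byok8s-2/hello is an assumed example):

```shell
# Mimic: path /hello-byok8s-2(/|$)(.*) with rewrite-target /$2
REWRITTEN=$(echo '/hello-byok8s-2/hello' | sed -E 's#^/hello-byok8s-2(/|$)(.*)#/\2#')
echo "$REWRITTEN"   # the app's HTTP listener sees /hello
```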

To access your application, open your web browser or Postman and hit your app with an HTTP request; you should get a response like the following:

 "greeting": "Hello World!",
 "from": ""


If you see Hello World! returned, congratulations! You have successfully installed RTF on EKS!
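If you would rather script this check than use a browser, you can parse the greeting out of the response. A sketch using a canned response string in place of a live curl call (the URL in the comment is hypothetical):

```shell
# Stand-in for: RESPONSE=$(curl -s http://<your-nlb-hostname>/hello-byok8s-2/hello)
RESPONSE='{ "greeting": "Hello World!", "from": "" }'

# Extract just the greeting field from the JSON body
GREETING=$(printf '%s' "$RESPONSE" | grep -o '"greeting": "[^"]*"')
echo "$GREETING"
```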
