
Deploying microservices to Azure Kubernetes Service

Note
This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.

Explore how to deploy microservices to Azure Kubernetes Service (AKS) on Microsoft Azure.

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on Azure Kubernetes Service (AKS).

Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.

There are different cloud-based solutions for running your Kubernetes workloads. A cloud-based infrastructure enables you to focus on developing your microservices without worrying about low-level infrastructure details for deployment. Using a cloud helps you to easily scale and manage your microservices in a high-availability setup.

Azure offers a managed Kubernetes service called Azure Kubernetes Service (AKS). Using AKS simplifies the process of running Kubernetes on Azure without needing to install or maintain your own Kubernetes control plane. It provides a hosted Kubernetes cluster that you can deploy your microservices to. You will use AKS with an Azure Container Registry (ACR). ACR is a private registry that is used to store and distribute your container images. Note that because AKS is not free, a small cost is associated with running this guide. See the official AKS pricing documentation for more details.

The two microservices you will deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the name of the pod in the HTTP header, making replicas easy to distinguish from each other. The inventory microservice adds the properties from the system microservice to the inventory. This demonstrates how communication can be established between pods inside a cluster.

Additional prerequisites

Before you begin, the following additional tools need to be installed:

  • Docker: You need containerization software for building containers. Kubernetes supports various container types, but you will use Docker in this guide. For installation instructions, refer to the official Docker documentation. If you already have Docker installed, make sure that it is running.

  • Azure Subscription: To run this guide, you will need an Azure subscription. Navigate to the Microsoft Azure Purchase Options to create an account with your email and start a Pay-As-You-Go subscription.

  • Azure CLI: You will need to use the Azure Command Line Interface (CLI). See the official Install the Azure CLI documentation for information about setting up the Azure CLI for your platform. To verify that the Azure CLI is installed correctly, run the following command:

    az --version
  • kubectl: You need the Kubernetes command-line tool kubectl to interact with your Kubernetes cluster. If you do not have kubectl installed already, use the Azure CLI to download and install kubectl with the following command:

    az aks install-cli

To begin this guide, make sure that you are logged in to Azure to get access to your subscription:

az login
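
If you have access to more than one Azure subscription, confirm that the subscription you want to use is the active one before you create any resources. The following check is optional, and [subscription-id] is a placeholder for one of your own subscription IDs:

# Show the subscription that the Azure CLI is currently using
az account show -o table

# Optionally switch to a different subscription
az account set -s [subscription-id]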

Managing an Azure Container Registry

To deploy your microservices, you need to create an Azure Container Registry in the same location where your services are deployed, and link the registry to a resource group. Your registry will manage container instances that will be deployed to a Kubernetes cluster.

Creating a resource group

A resource group is an Azure construct to manage a logical collection of resources for your cloud deployments on Azure. You must create a new resource group to manage the resources you need for your Kubernetes deployment.

To create a resource group, an Azure location must be specified. The metadata for your resources is stored at this specified Azure location. If you create resources later without specifying a location, those new resources run in the same region that you specified when you created the resource group.

See the list of available Azure regions for your Azure subscription:

az account list-locations -o table

You will see an output similar to the following:

DisplayName          Latitude    Longitude    Name
-------------------  ----------  -----------  ------------------
Central US           41.5908     -93.6208     centralus
East US              37.3719     -79.8164     eastus
East US 2            36.6681     -78.3889     eastus2
West US              37.783      -122.417     westus
North Central US     41.8819     -87.6278     northcentralus
South Central US     29.4167     -98.5        southcentralus
Canada Central       43.653      -79.383      canadacentral
Canada East          46.817      -71.217      canadaeast
UK South             50.941      -0.799       uksouth
UK West              53.427      -3.084       ukwest
West Central US      40.890      -110.234     westcentralus
West US 2            47.233      -119.852     westus2

The Name column specifies the region name that you use to create your resource group.

However, AKS is not available in all regions. Make sure that the region you select supports AKS.
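
One way to confirm that a region supports AKS is to list the Kubernetes versions that AKS offers there; if the region is not supported, the command typically returns an error. This check is optional:

# List the Kubernetes versions that AKS supports in the chosen region
az aks get-versions -l [location] -o table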

Create a resource group using the az group create command. Remember to replace [location] with a region that is available for your subscription and compatible with AKS.

az group create -l [location] -n guideGroup

You will see an output similar to the following:

{
  "id": "/subscriptions/[subscription-id]/resourceGroups/guideGroup",
  "location": "[location]",
  "managedBy": null,
  "name": "guideGroup",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": null
}

Creating a container registry

Your private container registry manages the Docker images that you build in later steps. Create an Azure Container Registry by using the az acr create command. Replace [registry-name] with a container registry name that is unique within Azure and contains 5-50 alphanumeric characters. You can check whether a registry name is already in use by running the az acr check-name -n [registry-name] command.

az acr create -g guideGroup -n [registry-name] --sku Basic --admin-enabled

In the az acr create command, the -g option specifies the resource group for the container registry, which you created earlier as guideGroup. The -n option specifies the name of the container registry to create, which is defined as [registry-name]. The --admin-enabled flag enables the admin user for the registry.

The possible Stock Keeping Unit (SKU) values that can be passed into the --sku option are Basic, Standard, and Premium. These different SKU options provide pricing for various levels of capacity and usage. You use a Basic SKU because it is cheaper and the services you deploy have low storage and throughput requirements.

You will see an output similar to the following:

{
  "adminUserEnabled": true,
  "creationDate": "2019-06-05T20:28:09.637994+00:00",
  "id": "/subscriptions/[subscription-id]/resourceGroups/guideGroup/providers/Microsoft.ContainerRegistry/registries/[registry-name]",
  "location": "[location]",
  "loginServer": "[registry-name].azurecr.io",
  "name": "[registry-name]",
  "networkRuleSet": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "guideGroup",
  "sku": {
    "name": "Basic",
    "tier": "Basic"
  },
  "status": null,
  "storageAccount": null,
  "tags": {},
  "type": "Microsoft.ContainerRegistry/registries"
}

In the output, the value for loginServer is the server name for your container registry, which is [registry-name].azurecr.io, with all lowercase letters.
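
You can also print the server name directly, which is useful later when you tag and push your images. The -o tsv flag simply strips the surrounding quotation marks from the output:

# Print the login server name of your container registry
az acr show -n [registry-name] --query loginServer -o tsv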

Logging into the container registry

To push Docker images to your registry, you must log in to your Azure Container Registry by using the Azure CLI.

az acr login -n [registry-name]

Once you log in, you will see the following message:

Login Succeeded

Uploading images to a container registry

Building your Docker images

The starting Java project, which you can find in the start directory, is a multi-module Maven project. It is made up of the system and inventory microservices. Each microservice exists in its own directory, start/system and start/inventory. Both of these directories contain a Dockerfile, which is necessary for building the Docker images. If you’re unfamiliar with Dockerfiles, check out the Containerizing microservices guide.

Navigate to the start directory and run the following command:

mvn package

Now that your microservices are packaged, build your Docker images using the docker build command. To build your image, you need to have Docker installed and your Docker daemon started.

Run the following commands to build and containerize the application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

To verify that the images are built, run the docker images command to list all local Docker images:

docker images

Your two images system and inventory should appear in the list of all Docker images:

REPOSITORY    TAG             IMAGE ID        CREATED          SIZE
inventory     1.0-SNAPSHOT    08fef024e986    4 minutes ago    471MB
system        1.0-SNAPSHOT    1dff6d0b4f31    5 minutes ago    470MB

Pushing the images to a container registry

Pushing the images to a registry allows the cluster to create pods using your container images.

First, tag your container images with your registry. Replace [registry-server] with the server name of your container registry. To get the server for your registry, run the az acr show -n [registry-name] --query loginServer command. The [registry-server] looks like [registry-name].azurecr.io.

docker tag system:1.0-SNAPSHOT [registry-server]/system:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT [registry-server]/inventory:1.0-SNAPSHOT

Finally, push your images to the registry:

docker push [registry-server]/system:1.0-SNAPSHOT
docker push [registry-server]/inventory:1.0-SNAPSHOT
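
To confirm that the push succeeded, you can list the repositories and tags that are now stored in your registry. These verification commands are optional:

# List the repositories in your container registry
az acr repository list -n [registry-name] -o table

# List the tags that were pushed for the system repository
az acr repository show-tags -n [registry-name] --repository system -o table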

Creating a Kubernetes cluster on AKS

Provisioning a cluster

To create your AKS cluster, use the az aks create command. When the cluster is created, the command outputs information about the cluster. You might need to wait a few minutes while your cluster is being created.

az aks create -g guideGroup -n guideCluster

Running this command creates an AKS cluster that is called guideCluster with the resource group guideGroup.

The --node-count option, or its short form -c, can also be added to the az aks create command to create a cluster with a certain number of nodes in the Kubernetes node pool. If this option is excluded, three nodes are assigned to the node pool by default.
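
For example, the following variation would request a two-node cluster instead of the default three nodes. It is shown for illustration only; do not run it again if you already created the cluster:

# Create the cluster with two worker nodes in the default node pool
az aks create -g guideGroup -n guideCluster --node-count 2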

An AKS cluster requires a service principal, which is an identity that is used to represent a resource in Azure and that can be assigned roles and permissions to interact with other resources and the Azure API. The az aks create command automatically generates a service principal to use with your newly created cluster. Optionally, you can manually create a service principal yourself and create a cluster with it. However, to do this, your Azure account must have permission to create service principals.
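
If you choose to create the service principal yourself, the flow looks roughly like the following sketch. This step is optional and not required for this guide; the exact flags can differ between Azure CLI versions, and [appId] and [password] are placeholders for values taken from the output of the first command:

# Create a service principal; note the appId and password values in the output
az ad sp create-for-rbac

# Create the cluster by using the service principal that you just created
az aks create -g guideGroup -n guideCluster \
    --service-principal [appId] \
    --client-secret [password]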

Merge the credentials of your cluster into your current Kubernetes configuration by using the az aks get-credentials command. By default, the Kubernetes configuration file that is updated with your cluster credentials is located at ~/.kube/config.

az aks get-credentials -g guideGroup -n guideCluster

You will see an output similar to the following:

Merged "guideCluster" as current context in /Users/.kube/config
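
You can confirm that kubectl now points at your new cluster by checking the current context:

# Show the Kubernetes context that kubectl is currently using
kubectl config current-context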

Run the following command to check the status of the available nodes in your AKS cluster:

kubectl get nodes

The kubectl get nodes command outputs information about three nodes, as the cluster was created with the default number of nodes in a node pool. The STATUS of each node is in the Ready state.

NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-21407934-0   Ready    agent   2m25s   v1.12.8
aks-nodepool1-21407934-1   Ready    agent   2m48s   v1.12.8
aks-nodepool1-21407934-2   Ready    agent   2m34s   v1.12.8

Storing registry credentials in a secret

To be able to pull images from your Azure container registry, the credentials of your registry must be made available to your deployments through a secret.

View the password for your Azure container registry:

az acr credential show -n [registry-name] --query "passwords[0].value" -o tsv

Use the kubectl create secret docker-registry command to create a secret to hold your registry credentials. Replace [password] with the registry password that you viewed with the az acr credential show -n [registry-name] command, and replace [email-address] with the email address that is associated with your Docker account.

kubectl create secret docker-registry guidesecret \
    --docker-server=[registry-server] \
    --docker-username=[registry-name] \
    --docker-password=[password] \
    --docker-email=[email-address]

If the secret is created successfully, you will see the following output:

secret/guidesecret created
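
You can verify that the secret exists in the cluster. This check is optional and does not display the stored credentials:

# List the secret that holds your registry credentials
kubectl get secret guidesecret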

Deploying microservices to AKS

Creating a deployment definition

Now that your container images are built and you have created a Kubernetes cluster, you can deploy the images using a Kubernetes resource definition.

A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All of these resources can also be deleted from the cluster by using the same YAML file that you used to deploy them. The kubernetes.yaml resource definition file is provided for you. If you are interested in learning more about Kubernetes resource definitions, check out the Deploying microservices to Kubernetes guide.

Update the kubernetes.yaml file in the start directory.

Replace [registry-server] with your container registry server. You can get the login server for your registry by running the az acr show -n [registry-name] --query loginServer command.

kubernetes.yaml

link:finish/kubernetes.yaml[role=include]

The image is the name and tag of the container image that you want to use for the container. The kubernetes.yaml file references the images that you pushed to your registry for the system and inventory repositories. These images can be pulled with the secret that you defined before.

The service that is used to expose your deployments has a type of LoadBalancer. This means that each service is assigned an external IP address, and incoming traffic to that address is forwarded to your node pool on a specific port. You can expose your services in other ways, such as by using a NodePort service type.
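
The kubernetes.yaml file provided with this guide is the definition to use. Purely as an illustration of the pattern that the previous paragraphs describe, a resource definition of this shape looks roughly like the following sketch. The values shown here are placeholders and are not the contents of the provided file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
        - name: system-container
          # Image that you pushed to your container registry
          image: [registry-server]/system:1.0-SNAPSHOT
          ports:
            - containerPort: 9080
      # Secret that holds your registry credentials
      imagePullSecrets:
        - name: guidesecret
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  # LoadBalancer assigns an external IP address that forwards traffic to the pods
  type: LoadBalancer
  selector:
    app: system
  ports:
    - port: 9080
      targetPort: 9080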

Deploying your application

To deploy your microservices to Azure Kubernetes Service, you need Kubernetes to create the resources that are defined in the kubernetes.yaml file.

Run the following command to deploy the resources defined in the kubernetes.yaml file:

kubectl create -f kubernetes.yaml

You will see an output similar to the following:

deployment.apps/system-deployment created
deployment.apps/inventory-deployment created
service/system-service created
service/inventory-service created

Run the following command to check the status of your pods:

kubectl get pods

If all the pods are healthy and running, you see an output similar to the following:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s
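
If a pod is not in the Running state, for example if its status is ImagePullBackOff or CrashLoopBackOff, you can inspect it for more detail. Replace [pod-name] with a name from the kubectl get pods output:

# Show events for a pod, such as image pull or scheduling failures
kubectl describe pod [pod-name]

# Show the logs of the container running inside the pod
kubectl logs [pod-name]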

Making requests to the microservices

You need the external IP addresses that are associated with the system and inventory services to try out your microservices.

Take note of the EXTERNAL-IP in the output of the following commands. It is the address that you will later substitute for [EXTERNAL-IP] to access the system and inventory services.

View the information of the system service to see its EXTERNAL-IP address:

kubectl get service/system-service

You need to wait a while for the EXTERNAL-IP to change from <pending> to an IP address.

NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
system-service      LoadBalancer   10.0.27.66     <pending>       9080:32436/TCP    26s
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
system-service      LoadBalancer   10.0.27.66     23.99.223.10    9080:32436/TCP    74s
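
Instead of rerunning the command, you can also watch the service until the external IP address is assigned. Press Ctrl+C to stop watching:

# Watch the service and print a new line whenever its status changes
kubectl get service/system-service -w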

View the information of the inventory service to see its EXTERNAL-IP address:

kubectl get service/inventory-service

You will need to wait a while for the EXTERNAL-IP to change from <pending> to an IP address.

NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
inventory-service   LoadBalancer   10.0.103.223   <pending>       9081:32739/TCP    69s
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
inventory-service   LoadBalancer   10.0.103.223   168.61.174.136  9081:32739/TCP    2m8s

To access your microservices, point your browser to the following URLs, substituting the appropriate EXTERNAL-IP hostnames for the system and inventory services:

  • http://[system-EXTERNAL-IP]:9080/system/properties

  • http://[inventory-EXTERNAL-IP]:9081/inventory/systems

The first URL returns a result in JSON format that contains the system properties of the container JVM. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.

Point your browser to the http://[inventory-EXTERNAL-IP]:9081/inventory/systems/[system-EXTERNAL-IP] URL. When you visit this URL, these system properties are automatically stored in the inventory. Go back to http://[inventory-EXTERNAL-IP]:9081/inventory/systems and you see a new entry for [system-EXTERNAL-IP].
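
If you prefer the command line over a browser, you can make the same requests with curl. The URLs are the same as the ones listed previously:

# Retrieve the JVM system properties of the system service
curl http://[system-EXTERNAL-IP]:9080/system/properties

# Add the system properties to the inventory, then list the inventory contents
curl http://[inventory-EXTERNAL-IP]:9081/inventory/systems/[system-EXTERNAL-IP]
curl http://[inventory-EXTERNAL-IP]:9081/inventory/systems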

Testing the microservices

A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further.

pom.xml

link:finish/inventory/pom.xml[role=include]

The default properties that are defined in the pom.xml file are:

Property        Description
--------------  ------------------------------------------------------------------------------------------
system.ip       Name of the Kubernetes Service wrapping the system pods, system-service by default.
inventory.ip    Name of the Kubernetes Service wrapping the inventory pods, inventory-service by default.
sys.http.port   The HTTP port for the Kubernetes Service system-service, 9080 by default.
inv.http.port   The HTTP port of the Kubernetes Service inventory-service, 9081 by default.

Running the tests

Run the Maven failsafe:integration-test goal to test your microservices, replacing [system-EXTERNAL-IP] and [inventory-EXTERNAL-IP] with the values that you determined in the previous section.

mvn failsafe:integration-test -Dsystem.ip=[system-EXTERNAL-IP] -Dinventory.ip=[inventory-EXTERNAL-IP]

If the tests pass, you will see an output similar to the following for each service:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

It is important to clean up your resources when you are finished with the guide so that you do not incur extra charges for ongoing usage.

When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:

kubectl delete -f kubernetes.yaml

Because you are done testing your cluster, clean up all of its related resources by using the az group delete command. This command removes the resource group, the container service, and all related resources:

az group delete -g guideGroup --yes --no-wait
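
Because the --no-wait flag returns control immediately, the deletion continues in the background. You can check later whether the resource group is gone; false means the deletion is complete:

# Returns true while the resource group still exists, false after deletion completes
az group exists -n guideGroup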

Great work! You’re done!

You have just deployed two microservices running in Open Liberty to Azure Kubernetes Service (AKS). You also learned how to use kubectl to deploy your microservices on a Kubernetes cluster.