
Learning Kubernetes with KubeADM - Part 3: Launching an Online Boutique with Helm

Part 3 of our series on learning Kubernetes with kubeadm where we deploy Google's Online Boutique microservices demo application, showcasing a realistic multi-service architecture in preparation for service mesh implementation.

Aaron Mathis

Welcome back to our series on learning Kubernetes with KubeADM! In Part 2, we transformed our basic three-node cluster into a production-ready platform by adding persistent storage, ingress controllers, and a complete monitoring stack with Prometheus, Grafana, and Loki. Our homelab environment now mirrors enterprise Kubernetes clusters in many key aspects.

In this third installment, we’ll deploy our first real-world application: Google’s Online Boutique (formerly known as Hipster Shop). This cloud-native microservices demo represents exactly the kind of complex, distributed application that Kubernetes was designed to orchestrate. Unlike simple single-container applications, Online Boutique consists of 11 interconnected microservices written in different programming languages, communicating over gRPC, and showcasing modern cloud-native design patterns.

More importantly, this deployment serves as essential preparation for Part 4, where we’ll implement Istio service mesh to add advanced traffic management, security policies, and observability to our microservices architecture. Understanding how these services interact at the network level will be crucial when we introduce mesh capabilities.

By the end of this tutorial, you’ll have a fully functional e-commerce application running in your cluster, complete with product catalogs, shopping carts, payment processing, and load generation. You’ll gain hands-on experience with complex multi-service deployments and understand how modern distributed applications operate in Kubernetes.

You can find all the code examples and configuration files in our GitHub repository.

Prerequisites and Current State

Before we begin deploying Online Boutique, let’s verify that our cluster is healthy and that all the infrastructure components from Part 2 are functioning correctly. SSH into your master-1 node and run these verification commands:

# Verify all nodes are ready
kubectl get nodes

# Check that our monitoring stack is running
kubectl get pods -n monitoring

# Verify ingress controller is operational
kubectl get pods -n ingress-nginx

# Confirm Helm is available
helm version

# Check that our storage class is ready
kubectl get storageclass

# Verify MetalLB is installed (we'll install this next if needed)
kubectl get pods -n metallb-system 2>/dev/null || echo "MetalLB not yet installed"

You should see output confirming:

  • All three nodes (master-1, worker-1, worker-2) are in “Ready” status
  • Monitoring pods (Prometheus, Grafana, Loki) are running
  • NGINX Ingress Controller is operational
  • Helm v3.x is installed
  • The local-path storage class is available and marked as default
  • MetalLB pods are running (or will be installed in the next section)

If any components are missing or not functioning, please refer back to Part 2 to complete the setup before proceeding.

Installing MetalLB for LoadBalancer Services

In cloud environments, LoadBalancer services automatically receive external IP addresses through cloud load balancers. However, in our homelab environment, we need MetalLB to provide this functionality. MetalLB is a load balancer implementation for bare-metal Kubernetes clusters that provides network load balancing capabilities.

We’ll install MetalLB in Layer 2 mode, which is perfect for homelab environments. In Layer 2 mode, MetalLB responds to ARP requests for the external IPs, making the service appear to be hosted directly on the network.

Installing MetalLB with Helm

First, let’s add the MetalLB Helm repository and install it:

# Add the MetalLB Helm repository
helm repo add metallb https://metallb.github.io/metallb
helm repo update

# Create the metallb-system namespace with proper Pod Security labels
kubectl create namespace metallb-system
kubectl label namespace metallb-system pod-security.kubernetes.io/enforce=privileged
kubectl label namespace metallb-system pod-security.kubernetes.io/audit=privileged
kubectl label namespace metallb-system pod-security.kubernetes.io/warn=privileged

# Install MetalLB using Helm
helm install metallb metallb/metallb \
  --namespace metallb-system \
  --wait

The Pod Security Standard labels are important because MetalLB requires privileged access to manage network interfaces and ARP responses on the host nodes.

Configuring MetalLB Layer 2 Mode

Now we need to configure MetalLB with an IP address pool that it can use for LoadBalancer services. We’ll use a range within our VM network (192.168.122.x):

# Create the MetalLB configuration
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.200-192.168.122.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF

This configuration:

  • IPAddressPool: Defines a range of IP addresses (192.168.122.200-220) that MetalLB can assign to LoadBalancer services
  • L2Advertisement: Configures Layer 2 mode to announce these IPs on the local network

Verifying MetalLB Installation

Let’s verify that MetalLB is running correctly:

# Check MetalLB pods
kubectl get pods -n metallb-system

# Verify the IP pool configuration
kubectl get ipaddresspools -n metallb-system

# Check the L2 advertisement
kubectl get l2advertisements -n metallb-system

You should see MetalLB controller and speaker pods running, and the configuration resources created successfully.

Testing LoadBalancer Functionality

Let’s quickly test that MetalLB is working by checking our existing NGINX Ingress Controller service:

# Check if the ingress controller service now has an external IP
kubectl get service -n ingress-nginx

You should see output similar to:

ubuntu@master-1:~$ kubectl get service -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.102.41.163   <none>        80:30080/TCP,443:30443/TCP   15h
ingress-nginx-controller-admission   ClusterIP   10.99.109.157   <none>        443/TCP                      15h
ubuntu@master-1:~$

If the Type is NodePort, we can update it to LoadBalancer:

# Update the ingress controller service to LoadBalancer
kubectl patch service ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'

# Wait a moment and check again
kubectl get service -n ingress-nginx

You should now see an external IP address assigned from the MetalLB pool (192.168.122.200-220 range) to the ingress controller service.
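Because MetalLB is running in Layer 2 mode, one of your nodes answers ARP requests for that address. You can confirm the announcement from your host machine; this is a quick sanity check assuming a Linux host on the same 192.168.122.x network (replace the IP with the one MetalLB actually assigned):

# From your host machine (not the VMs)
ping -c 1 192.168.122.200

# The MAC shown belongs to whichever node is currently announcing the IP
ip neigh show 192.168.122.200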

With MetalLB installed and configured, our cluster can now handle LoadBalancer services properly. This also prevents Helm deployments that include LoadBalancer services from hanging at --wait while a service waits indefinitely for an external IP.

Understanding Online Boutique Architecture

Before diving into the deployment, it’s important to understand what we’re building. Online Boutique is a sophisticated microservices application that demonstrates real-world distributed system patterns. The application simulates a complete e-commerce experience where users can browse products, add items to their cart, and complete purchases.

The Microservices Landscape

Online Boutique consists of 11 distinct microservices, each with specific responsibilities:

Frontend Services:

  • frontend (Go): Serves the web interface and handles user interactions
  • loadgenerator (Python/Locust): Simulates realistic user traffic patterns

Business Logic Services:

  • productcatalogservice (Go): Manages product inventory and search functionality
  • cartservice (C#): Handles shopping cart operations with Redis backend
  • checkoutservice (Go): Orchestrates the purchase workflow
  • paymentservice (Node.js): Processes payment transactions (simulated)
  • shippingservice (Go): Calculates shipping costs and handles delivery

Supporting Services:

  • currencyservice (Node.js): Provides real-time currency conversion
  • emailservice (Python): Sends order confirmation emails (simulated)
  • recommendationservice (Python): Suggests related products
  • adservice (Java): Displays contextual advertisements

This architecture showcases several important microservices patterns:

  • Service communication via gRPC for efficient inter-service calls
  • Polyglot programming with services written in Go, Python, Node.js, C#, and Java
  • Stateful services like the cart service that require persistent storage
  • External dependencies like Redis for session storage
  • Cross-cutting concerns like logging, monitoring, and security

Understanding this architecture will be crucial when we implement service mesh in Part 4, as Istio will provide advanced capabilities for managing the complex network of service-to-service communications.

Deployment Strategy: Helm vs Raw Manifests

Google provides multiple deployment options for Online Boutique, including raw Kubernetes manifests and Helm charts. While the raw manifests are simpler and work well for basic deployments, we’ll use the Helm chart approach for several important reasons:

Advantages of Helm Deployment:

  • Configuration Management: Easily customize deployments through values files
  • Environment Consistency: Standardized approach that scales from development to production
  • Upgrade Management: Built-in rollback and upgrade capabilities
  • Resource Organization: Helm releases provide logical grouping of related resources
  • Production Readiness: Helm charts are the industry standard for complex application deployment

The Helm chart also provides advanced configuration options that we’ll leverage in future tutorials, particularly when integrating with Istio service mesh.
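For example, Helm records a revision history for every release, which makes upgrades and rollbacks one-liners. These commands are shown for illustration; they apply to the onlineboutique release we create in the next section:

# List the revisions Helm has recorded for the release
helm history onlineboutique -n onlineboutique

# Roll back to a previous revision if an upgrade misbehaves
helm rollback onlineboutique 1 -n onlineboutique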


Objective 1: Deploying Online Boutique with Helm

Let’s begin by deploying the Online Boutique application using Google’s official Helm chart. This approach will give us a clean, manageable deployment that’s easy to customize and maintain.

Creating the Application Namespace

First, we’ll create a dedicated namespace for our Online Boutique application. Namespaces provide logical isolation and make it easier to manage resources and apply policies:

# Create the onlineboutique namespace
kubectl create namespace onlineboutique

# Verify the namespace was created
kubectl get namespaces

Deploying with the Official Helm Chart

Google maintains the Online Boutique Helm chart in their container registry. We’ll deploy it directly from there, which ensures we’re using the latest stable version:

# Deploy Online Boutique using the official Helm chart
helm upgrade onlineboutique \
  oci://us-docker.pkg.dev/online-boutique-ci/charts/onlineboutique \
  --install \
  --namespace onlineboutique \
  --create-namespace \
  --wait \
  --timeout=10m

Let’s break down these Helm flags:

  • upgrade --install: Installs the chart if it doesn’t exist, or upgrades it if it does
  • --namespace: Specifies the target namespace for the deployment
  • --create-namespace: Creates the namespace if it doesn’t exist
  • --wait: Waits for all resources to reach a ready state before completing
  • --timeout: Sets a maximum wait time for the deployment

The deployment will take several minutes as Kubernetes pulls container images and starts all the microservices. You can monitor the progress:

# Watch the pods being created
kubectl get pods -n onlineboutique -w

Press Ctrl+C when all pods show a “Running” status. You should see output similar to:

NAME                                     READY   STATUS    RESTARTS   AGE
adservice-76bdd69666-ckc5j               1/1     Running   0          3m58s
cartservice-66d497c6b7-dp5jr             1/1     Running   0          3m59s
checkoutservice-666c784bd6-4jd22         1/1     Running   0          4m1s
currencyservice-5d5d496984-4jmd7         1/1     Running   0          3m59s
emailservice-667457d9d6-75jcq            1/1     Running   0          4m2s
frontend-6b8d69b9fb-wjqdg                1/1     Running   0          4m1s
loadgenerator-665b5cd444-gwqdq           1/1     Running   0          4m
paymentservice-68596d6dd6-bf6bv          1/1     Running   0          4m
productcatalogservice-557d474574-888kr   1/1     Running   0          4m
recommendationservice-69c56b74d4-7z8r5   1/1     Running   0          4m1s
redis-cart-5f59546cdd-5jnqf              1/1     Running   0          3m58s
shippingservice-6ccc89f8fd-v686r         1/1     Running   0          3m58s

Verifying the Deployment

Let’s examine what Helm created for us:

# Check the Helm release status
helm status onlineboutique -n onlineboutique

# List all resources in the namespace
kubectl get all -n onlineboutique

# Check the services that were created
kubectl get services -n onlineboutique

You should see that Helm created:

  • Deployments for each microservice
  • Services for inter-service communication
  • A Redis deployment for the cart service’s data storage
  • A frontend-external service for web access

Understanding the Service Architecture

Let’s examine how the services are configured to communicate with each other:

# Look at the service details
kubectl describe service frontend -n onlineboutique

# Check the endpoints for a service
kubectl get endpoints -n onlineboutique

The services use ClusterIP by default, which means they’re only accessible within the cluster. The microservices communicate with each other using these internal service names and gRPC protocols.
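You can see this wiring directly in the frontend Deployment, which is pointed at its dependencies through environment variables containing plain service DNS names and ports. The exact variable names come from the chart (in current versions most end in _ADDR) and may differ between releases:

# List the environment variables configured on the frontend container
kubectl set env deployment/frontend -n onlineboutique --list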


Objective 2: Configuring External Access

By default, the Online Boutique frontend is not accessible from outside the cluster. We need to configure external access using the ingress controller we set up in Part 2.

Creating an Ingress Resource

Let’s create an ingress resource to expose the frontend service through our NGINX Ingress Controller:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: onlineboutique-ingress
  namespace: onlineboutique
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: boutique.homelab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
EOF

The ingress resource defines how external traffic should be routed to our application:

  • host: Sets up a virtual host for our application
  • path: Routes all requests to the frontend service
  • annotations: Configure NGINX-specific behavior

Configuring Local DNS Resolution

Since we’re using a custom hostname (boutique.homelab.local), we need to configure local DNS resolution. First, let’s get the external IP address that MetalLB assigned to our ingress controller:

# Get the external IP assigned by MetalLB
kubectl get service ingress-nginx-controller -n ingress-nginx

# Note the EXTERNAL-IP value (should be in the 192.168.122.200-220 range)

You should see output similar to:

NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.102.41.163   192.168.122.200   80:30080/TCP,443:30443/TCP   15h

Now, from your host machine (not the VMs), add an entry to your /etc/hosts file using the MetalLB-assigned external IP:

# On your host machine, replace 192.168.122.200 with your actual external IP
echo "192.168.122.200 boutique.homelab.local" | sudo tee -a /etc/hosts

Important: Use the actual EXTERNAL-IP value from the kubectl command output, not the node IP addresses.

Testing External Access

Now let’s test access to our application:

# Check the ingress status
kubectl get ingress -n onlineboutique

# Verify the ingress controller can reach the frontend
kubectl get pods -n ingress-nginx

# Check the LoadBalancer external IP
kubectl get service ingress-nginx-controller -n ingress-nginx

You can now access the Online Boutique application by navigating to http://boutique.homelab.local in your web browser (the LoadBalancer will route to port 80). You should see a modern e-commerce interface with product listings, shopping cart functionality, and a complete checkout process.
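If you prefer to verify from the command line first (or without editing /etc/hosts), you can send the Host header straight to the external IP. Replace the address below with the one MetalLB assigned to your ingress controller:

# Should print 200 if the ingress is routing to the frontend
curl -sS -o /dev/null -w "%{http_code}\n" -H "Host: boutique.homelab.local" http://192.168.122.200/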

Verifying Application Functionality

Let’s test the core functionality of our deployed application:

# Check that all pods are healthy
kubectl get pods -n onlineboutique

# Look at the logs from the frontend service
kubectl logs -n onlineboutique deployment/frontend --tail=20

# Check the cart service is working with Redis
kubectl logs -n onlineboutique deployment/cartservice --tail=20

# Monitor the load generator creating traffic
kubectl logs -n onlineboutique deployment/loadgenerator --tail=10

The load generator continuously creates realistic user traffic, which you can observe in the application’s behavior and in the monitoring dashboards we set up in Part 2.


Objective 3: Monitoring Our Microservices

With our monitoring stack from Part 2 already in place, let’s configure it to observe our Online Boutique microservices. This will give us valuable insights into how distributed applications behave in Kubernetes.

Accessing Grafana Dashboard

With the monitoring stack from Part 2 in place, Prometheus already collects metrics for the new pods, which we can explore in Grafana. If you followed Part 2, you should still have host entries for the monitoring stack in your /etc/hosts file; now that MetalLB assigns the ingress controller an external IP, add or update those records on your host machine so they point at it:

192.168.122.200 boutique.homelab.local
192.168.122.200 grafana.homelab.local
192.168.122.200 alertmanager.homelab.local
192.168.122.200 prometheus.homelab.local
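A quick way to append these from your host machine, following the same tee approach we used earlier (replace 192.168.122.200 with your actual external IP):

# On your host machine
cat <<EOF | sudo tee -a /etc/hosts
192.168.122.200 boutique.homelab.local
192.168.122.200 grafana.homelab.local
192.168.122.200 alertmanager.homelab.local
192.168.122.200 prometheus.homelab.local
EOF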

Now you can access Grafana at http://grafana.homelab.local. To log in, retrieve the admin password:

kubectl --namespace monitoring get secrets prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
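Once logged in, a couple of PromQL queries are enough to see the new workload. These rely on the cAdvisor metrics that kube-prometheus-stack scrapes by default, so treat them as a starting point rather than the only option:

# CPU usage per pod in the onlineboutique namespace
sum(rate(container_cpu_usage_seconds_total{namespace="onlineboutique", container!=""}[5m])) by (pod)

# Working-set memory per pod
sum(container_memory_working_set_bytes{namespace="onlineboutique", container!=""}) by (pod)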

Viewing Application Logs

Our Loki installation from Part 2 automatically collects logs from all pods. You can view application logs through Grafana’s Explore interface:

# View recent logs from the frontend (structured JSON, so jq is handy here)
kubectl logs -n onlineboutique deployment/frontend --tail=50 | jq

# Check load generator activity (plain-text Locust output)
kubectl logs -n onlineboutique deployment/loadgenerator --tail=20

# Monitor cart service interactions
kubectl logs -n onlineboutique deployment/cartservice --tail=30

These logs provide valuable insights into how the microservices interact and process requests, which will be especially important when we implement service mesh observability in Part 4.
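In Grafana’s Explore view, select the Loki data source and start with a simple LogQL selector. The label names below (namespace, pod) follow common Promtail defaults and may differ depending on how Loki was configured in Part 2.

All logs from the namespace, filtered to error lines:

{namespace="onlineboutique"} |= "error"

Logs from just the frontend pods:

{namespace="onlineboutique", pod=~"frontend.*"}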

Understanding Traffic Patterns

The load generator creates realistic traffic patterns that simulate actual user behavior:

# Watch the load generator logs to understand traffic patterns
kubectl logs -n onlineboutique deployment/loadgenerator -f

You’ll see the load generator performing actions like:

  • Browsing product categories
  • Adding items to cart
  • Completing checkout processes
  • Simulating user sessions

This generated traffic provides a consistent workload for testing performance, observability, and service mesh features.


Objective 4: Exploring the Application

Now that Online Boutique is fully deployed and accessible, let’s explore its functionality to understand what we’ve built.

Web Interface Exploration

Navigate to http://boutique.homelab.local and explore the application features:

Key Features to Test:

  • Product Browsing: View the catalog of products with images and descriptions
  • Search Functionality: Search for specific products using the search bar
  • Shopping Cart: Add items to your cart and modify quantities
  • Currency Conversion: Change the currency to see real-time conversion
  • Checkout Process: Complete a full purchase simulation
  • Recommendations: Notice how the system suggests related products

Backend Service Interactions

While you use the web interface, let’s observe the backend service interactions by monitoring logs as you browse:

# Monitor frontend logs during your browsing
kubectl logs -n onlineboutique deployment/frontend -f 

# Watch cart service activity when you add items
kubectl logs -n onlineboutique deployment/cartservice -f 

# Observe currency service calls when changing currencies
kubectl logs -n onlineboutique deployment/currencyservice -f 

These logs show the gRPC communication between services, demonstrating the distributed nature of the application.

Service Communication Patterns

Let’s examine how services discover and communicate with each other. Most containers in Online Boutique do not include DNS tools like nslookup, so we’ll launch a temporary pod with debugging tools:

kubectl run dnsutils -n onlineboutique \
  --image=busybox:1.28 --restart=Never \
  --command -- sleep 3600

Then:

kubectl exec -n onlineboutique dnsutils -- nslookup cartservice

This confirms that Kubernetes DNS can resolve the cartservice name within the cluster.

# View the environment variables that services use for communication
kubectl exec -n onlineboutique dnsutils -- env | grep SERVICE

This reveals how Kubernetes provides service discovery through DNS and environment variables, allowing microservices to find and communicate with each other reliably.
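The short name works because the pod and the service share a namespace; the fully qualified form shows the complete DNS name that Kubernetes constructs for every service:

# Fully qualified service name: <service>.<namespace>.svc.cluster.local
kubectl exec -n onlineboutique dnsutils -- nslookup cartservice.onlineboutique.svc.cluster.local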

Clean up when you’re done:

kubectl delete pod dnsutils -n onlineboutique

Performance and Scalability Testing

Let’s test how our application handles load by scaling some services:

# Scale the frontend to handle more traffic
kubectl scale deployment frontend -n onlineboutique --replicas=3

# Scale the product catalog service
kubectl scale deployment productcatalogservice -n onlineboutique --replicas=2

# Watch the pods being created
kubectl get pods -n onlineboutique -w

You can now refresh the web application and notice that it continues to function normally even as new pods are being created. This demonstrates Kubernetes’ ability to maintain service availability during scaling operations.
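You can also see the effect on service discovery: the frontend Service now load balances across one endpoint per replica.

# Expect three endpoint addresses, one per frontend replica
kubectl get endpoints frontend -n onlineboutique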

To scale all services back down to 1:

kubectl get deployments -n onlineboutique -o name | xargs -n1 -I{} kubectl scale -n onlineboutique {} --replicas=1

Advanced Configuration with Helm Values

One of the key advantages of using Helm is the ability to customize deployments through values files. Let’s explore how to modify our Online Boutique deployment with custom configurations.

Creating a Custom Values File

Let’s create a values file to customize our deployment:

Note: It’s important to understand that while Helm’s values.yaml files offer powerful customization, not every parameter of a Kubernetes resource is automatically exposed for configuration. The extent to which a Helm chart can be customized depends entirely on how the chart’s maintainers have designed and templated it.
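To see exactly which parameters this chart does expose, you can print its default values (this assumes Helm 3.8 or newer for OCI registry support):

# Print the chart's default values.yaml
helm show values oci://us-docker.pkg.dev/online-boutique-ci/charts/onlineboutique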

Create onlineboutique-values.yaml:

cat <<EOF > onlineboutique-values.yaml
# Frontend service resource adjustment
frontend:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 512Mi # Changed from 256Mi to 512Mi for demonstration
EOF

Applying the Custom Configuration

Before we apply some custom values, check the current resource limits for frontend:

kubectl describe deployment frontend -n onlineboutique | grep -A 5 "Limits:"

You should see a memory limit of 256Mi under the Limits section.

Now, apply the changes using helm upgrade:

# Upgrade the deployment with our custom values
helm upgrade onlineboutique \
  oci://us-docker.pkg.dev/online-boutique-ci/charts/onlineboutique \
  --namespace onlineboutique \
  --values onlineboutique-values.yaml \
  --wait

Let’s verify that the change was applied by checking the frontend’s resource limits again:

kubectl describe deployment frontend -n onlineboutique | grep -A 5 "Limits:"

You should now see 512Mi under the Limits section.

Understanding Helm Chart Structure

Let’s examine what the Helm chart created and how it’s organized:

# List all resources managed by the Helm release
helm get manifest onlineboutique -n onlineboutique

# Check the current values being used
helm get values onlineboutique -n onlineboutique

This shows you exactly what Kubernetes resources were created and their current configuration, giving you full visibility into your deployment.


Troubleshooting Common Issues

When working with complex microservices applications, various issues can arise. Let’s cover common problems and their solutions.

Pod Startup Issues

If pods are failing to start, check their status and logs:

# Check pod status
kubectl get pods -n onlineboutique

# Describe a problematic pod
kubectl describe pod <pod-name> -n onlineboutique

# Check logs for startup errors
kubectl logs <pod-name> -n onlineboutique --previous

Common issues include:

  • Image pull errors: Check internet connectivity and image names
  • Resource constraints: Verify the cluster has sufficient CPU and memory
  • Configuration errors: Check environment variables and config maps
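Cluster events often reveal the cause (failed scheduling, image pull backoff, OOM kills) faster than pod logs:

# Recent events in the namespace, newest last
kubectl get events -n onlineboutique --sort-by=.lastTimestamp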

Service Communication Problems

If services can’t communicate with each other:

# Check service endpoints
kubectl get endpoints -n onlineboutique

# Verify service labels and selectors match
kubectl describe service cartservice -n onlineboutique
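If the endpoints look correct but calls still fail, a quick in-cluster HTTP check against the frontend helps separate DNS problems from application problems. This is a sketch using a throwaway pod; curlimages/curl is an assumption, and any image with curl will do:

# Should print 200 if DNS resolution and pod-to-pod networking are healthy
kubectl run curl-test -n onlineboutique --image=curlimages/curl:8.8.0 --rm -it --restart=Never -- \
  curl -sS -o /dev/null -w "%{http_code}\n" http://frontend:80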

External Access Issues

If you can’t access the application externally:

# Check ingress status
kubectl describe ingress onlineboutique-ingress -n onlineboutique

# Verify ingress controller is running
kubectl get pods -n ingress-nginx

# Test NodePort access directly
kubectl get service -n ingress-nginx

Security Considerations

Running microservices in production requires attention to security best practices. Let’s implement some basic security measures for our Online Boutique deployment.

Network Policies

Network policies control traffic flow between pods. Let’s create a basic policy:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: onlineboutique-netpol
  namespace: onlineboutique
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # set automatically on every namespace, so no manual labeling is needed
          kubernetes.io/metadata.name: ingress-nginx
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  - to: []
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 80
EOF

This policy allows:

  • Traffic from the ingress controller
  • Inter-pod communication within the namespace
  • DNS resolution
  • Outbound HTTPS/HTTP for external services

It’s also important to note that because policyTypes includes both Ingress and Egress, any traffic not matched by one of the rules above is blocked for pods in the onlineboutique namespace. In effect, the policy is a default deny plus explicit allows, which is the core security benefit of NetworkPolicies. Keep in mind that NetworkPolicies are only enforced if your CNI plugin supports them (for example Calico or Cilium; plain Flannel does not enforce them).
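To confirm the policy behaves as expected (and that your CNI actually enforces it), try reaching the frontend from a pod outside the namespace and compare with a pod inside it. This is a sketch using curlimages/curl as a convenient test image:

# From the default namespace - should time out, since the policy only allows
# traffic from ingress-nginx and from pods in the onlineboutique namespace
kubectl run np-test --image=curlimages/curl:8.8.0 --rm -it --restart=Never -- \
  curl -m 5 -sS -o /dev/null -w "%{http_code}\n" http://frontend.onlineboutique.svc.cluster.local

# From inside the onlineboutique namespace - should print 200
kubectl run np-test -n onlineboutique --image=curlimages/curl:8.8.0 --rm -it --restart=Never -- \
  curl -m 5 -sS -o /dev/null -w "%{http_code}\n" http://frontend:80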

Resource Limits

Ensure all services have appropriate resource limits to prevent resource exhaustion:

# Check current resource limits
kubectl describe deployments -n onlineboutique | grep -A 10 Limits

Security Scanning

Use built-in Kubernetes security features to scan for vulnerabilities:

# Check pod security standards (if enabled)
kubectl get pods -n onlineboutique -o jsonpath='{.items[*].spec.securityContext}'

# Review service account configurations
kubectl get serviceaccounts -n onlineboutique

These security measures provide a foundation that we’ll build upon when implementing Istio service mesh security policies in Part 4.


Performance Optimization

With our microservices running, let’s optimize performance for better user experience and resource efficiency.

Horizontal Pod Autoscaling

Configure autoscaling for services that may experience variable load:

# Enable autoscaling for the frontend service
kubectl autoscale deployment frontend -n onlineboutique --cpu-percent=70 --min=2 --max=5

# Enable autoscaling for the product catalog service
kubectl autoscale deployment productcatalogservice -n onlineboutique --cpu-percent=70 --min=1 --max=3

# Check autoscaler status
kubectl get hpa -n onlineboutique
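Note that the HPA can only act if the Kubernetes Metrics API is available (via metrics-server or an equivalent adapter); the Prometheus stack from Part 2 does not necessarily provide it. A quick check, with the assumption that metrics-server may still need to be installed:

# If this returns per-pod CPU/memory figures, the HPA has the metrics it needs
kubectl top pods -n onlineboutique

# The TARGETS column should show a percentage rather than <unknown>
kubectl get hpa -n onlineboutique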

Cleanup and Resource Management

When you’re finished exploring Online Boutique, you can clean up the resources while preserving your cluster infrastructure.

Note: In Part 4 of this series we will implement service meshing with Istio and will assume the onlineboutique namespace and its services are still in place.

Removing the Application

To remove just the Online Boutique application:

# Delete the Helm release
helm uninstall onlineboutique -n onlineboutique

# Remove the namespace
kubectl delete namespace onlineboutique

# Remove the ingress host entry from /etc/hosts (on your host machine)
sudo sed -i '/boutique.homelab.local/d' /etc/hosts

Preserving the Infrastructure

Your cluster infrastructure (storage, monitoring, ingress) remains intact for future tutorials:

# Verify infrastructure services are still running
kubectl get pods -n monitoring
kubectl get pods -n ingress-nginx
kubectl get storageclass

This approach allows you to easily redeploy applications while maintaining your cluster’s core capabilities.


Summary and Next Steps

Congratulations! You’ve successfully deployed a sophisticated microservices application in your Kubernetes homelab. Through this tutorial, you’ve gained hands-on experience with:

Technical Achievements:

  • Complex Application Deployment: Deployed an 11-service microservices application using Helm
  • Service Architecture: Understood how modern distributed applications communicate via gRPC
  • External Access Configuration: Set up ingress routing for web application access
  • Monitoring Integration: Connected application metrics and logs to your monitoring stack
  • Performance Optimization: Implemented autoscaling and resource management
  • Security Implementation: Applied network policies and resource constraints

Key Learning Outcomes:

  • Microservices Patterns: Experienced polyglot programming, service discovery, and distributed communication
  • Helm Proficiency: Mastered chart deployment, customization, and lifecycle management
  • Kubernetes Networking: Understood ingress controllers and inter-pod communication
  • Observability: Learned to monitor and troubleshoot distributed applications
  • Production Readiness: Implemented security, scaling, and resource management best practices

Real-World Relevance: The Online Boutique application represents the complexity and challenges of modern cloud-native applications. The skills you’ve developed managing this multi-service architecture directly translate to production environments where applications often consist of dozens or hundreds of microservices.

Preparing for Service Mesh

The foundation we’ve built with Online Boutique is perfect for our next major milestone: implementing Istio service mesh. In Part 4, we’ll transform our microservices architecture by adding:

Advanced Traffic Management:

  • Intelligent load balancing and traffic routing
  • Circuit breakers and retry policies
  • Canary deployments and A/B testing
  • Traffic mirroring for safe testing

Enhanced Security:

  • Mutual TLS encryption between all services
  • Fine-grained authorization policies
  • Service-to-service authentication
  • Security policy enforcement

Deep Observability:

  • Distributed tracing across all service calls
  • Service-level metrics and SLIs
  • Traffic flow visualization
  • Performance bottleneck identification

Operational Excellence:

  • Fault injection for resilience testing
  • Advanced deployment strategies
  • Service dependency mapping
  • Centralized configuration management

The microservices communication patterns you’ve observed in Online Boutique will become much more visible and controllable once we implement Istio. You’ll gain unprecedented insight into how your distributed application behaves and powerful tools to manage its complexity.

Looking Ahead

Your Kubernetes homelab has evolved from a basic cluster to a sophisticated platform capable of running production-grade applications. With persistent storage, monitoring, ingress, and now a complex microservices application, you have the foundation to explore advanced Kubernetes concepts and prepare for real-world container orchestration challenges.

In Part 4, we’ll complete our journey by implementing Istio service mesh, giving you the tools to manage microservices at enterprise scale. The combination of Kubernetes orchestration and Istio service mesh represents the cutting edge of modern application infrastructure, and you’ll have hands-on experience with both.

Continue experimenting with Online Boutique, try different configuration options, and observe how the services interact. This practical experience will prove invaluable as we add the advanced capabilities of service mesh in our next tutorial.

As always, you can find all the configuration files and code examples in our GitHub repository. Happy learning, and see you in Part 4!

Aaron Mathis

Systems administrator and software engineer specializing in cloud development, AI/ML, and modern web technologies. Passionate about building scalable solutions and sharing knowledge with the developer community.
