
Deploy VOD PoC demo on a Single-Node Canonical MicroK8s on Ampere Altra platform

Overview

Ampere is pleased to showcase a cloud-native open source Video-on-Demand (VOD) Service tutorial. This setup runs on Canonical MicroK8s, an open-source lightweight Kubernetes system for automating deployment, scaling, and management of containerized applications. This specific combination of Ampere Altra Family processors and Canonical MicroK8s is ideal for video services, such as VOD streaming, as it provides a scalable platform (from single node to high-availability multi-node production clusters) all while minimizing physical footprint.

We understand the need to support effective and scalable VOD streaming workloads for a variety of clients, such as video service providers and digital service providers. This customer base requires a consistent workload lifecycle and predictable performance to accommodate their rapidly growing audience across the web. Ampere Altra and Altra Max processors are based on a low-power Arm64 architecture, thus offering high core counts per socket along with more scalable and predictable performance. As such, they are ideal for power-sensitive edge locations as well as large-scale datacenters.

MicroK8s can run on edge devices as well as in large-scale HA deployments. It provides the functionality of core Kubernetes components in a small footprint, scalable from a single node to a high-availability production cluster. It is a zero-ops, pure-upstream Kubernetes that runs from developer workstations to production, giving developers a fully featured, conformant, and secure Kubernetes system. It is also designed for local development, IoT appliances, CI/CD, and use at the edge. Single-node deployments are ideal for offline development, prototyping, and testing. The following are the main software components for this VOD cloud-native reference architecture on a single node running MicroK8s:

  • Canonical MicroK8s: a lightweight Kubernetes platform with full-stack automated operations for containerized deployments.
  • NGINX Ingress controller: a production‑grade Ingress controller that can run alongside NGINX Open Source instances in Kubernetes environments.
  • MinIO: a high-performance, open-source, S3 compatible Object Storage system.
Prerequisites
  • One Ampere Altra- or Altra Max-based platform provisioned with the Canonical Ubuntu 22.04 LTS server operating system (with at least an SSH server installed) and the following hardware components:

    • CPU: 1x Ampere Altra or Altra Max up to 128 cores 3.0 GHz
    • Memory: 8x 32GB DDR4 DIMMs
    • NIC: 1x Mellanox ConnectX-6
    • Storage: 1x M.2 NVMe drive and 5x U.2 NVMe drives
  • A laptop or workstation configured as the Bastion node for deploying VOD PoC and preparing VOD demo files.

  • A DNS service such as BIND (named) running on the bastion node to resolve the demo hostnames (see the example zone sketch after this list)

  • The following executable files are required:

    • kubectl version 1.26 or later
    • git
    • tar
    • docker or podman
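For reference, here is a minimal sketch of what the bastion-side name resolution could look like. It assumes BIND is authoritative for the microk8s.hhii.ampere domain used later in this tutorial and that the node IP is 10.76.87.126 as in the examples below; all names and addresses are illustrative, so adapt them to your environment. A similar zone (or plain hosts entries) is needed for the microk8s.ampere hostnames used by the VOD demo.

; /etc/bind/db.microk8s.hhii.ampere (sketch; values are examples only)
$TTL 300
@    IN SOA  ns1.microk8s.hhii.ampere. admin.microk8s.hhii.ampere. ( 1 3600 600 86400 300 )
@    IN NS   ns1.microk8s.hhii.ampere.
ns1  IN A    10.76.87.200    ; bastion node running named
*    IN A    10.76.87.126    ; wildcard: console, grafana, prometheus, alertmanager, minio, ...

Alternatively, plain /etc/hosts entries on the client machine work for a quick test:

10.76.87.126 demo.microk8s.ampere vod.microk8s.ampere grafana.microk8s.hhii.ampere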
Instructions

Deploy single-node MicroK8s on Ampere Altra Family processors:

1. On the bastion node (laptop or workstation), add the IP address of the Ubuntu 22.04 server (the Ampere Altra or Altra Max platform) to the /etc/hosts file.

$ echo "10.76.87.126 node1" | sudo tee -a /etc/hosts

2. Enable HugePages on the server.

$ sudo sysctl vm.nr_hugepages=1024
$ echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf
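To confirm the allocation took effect, you can inspect /proc/meminfo; with the setting above it should report 1024 pages:

$ grep HugePages_Total /proc/meminfo
HugePages_Total:    1024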

3. Reboot the system to make sure the changes are applied permanently.

$ sudo reboot

4. After the system has booted up, verify that the MicroK8s stable releases include v1.26/stable.

$ snap info microk8s | grep stable | head -n8

5. Install MicroK8s on the server.

$ sudo snap install microk8s --classic --channel=1.26/stable
microk8s (1.26/stable) v1.26.11 from Canonical✓ installed

6. Check that the instance reports 'microk8s is running'; note that high availability is disabled and the datastore is localhost.

$ sudo microk8s status --wait-ready | head -n4
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none

7. Add the user to the microk8s group and update the ownership of ~/.kube.

$ sudo usermod -a -G microk8s $USER
$ sudo chown -f -R $USER ~/.kube

8. Create an alias for the MicroK8s embedded kubectl. After running 'newgrp microk8s', you can simply call 'kubectl'.

$ sudo snap alias microk8s.kubectl kubectl
$ newgrp microk8s

9. To check the node status, run kubectl get node on the server.

$ kubectl get node
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    <none>   3d2h   v1.26.11-2+123bfdfc196019

10. Verify that the cluster status is “running”.

$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # (core) Configure high availability on the current node
  disabled:
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    helm                 # (core) Helm 2 - the package manager for Kubernetes
    helm3                # (core) Helm 3 - Kubernetes package manager
    ingress              # (core) Ingress controller for external access
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated

11. Enable the dns, helm3, and ingress add-ons.

$ microk8s enable dns helm3 ingress

12. Clean up the target storage devices on each node in the MicroK8s cluster (skip the NVMe drive that has the OS installed).

$ sudo su -
# umount /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
# for DISK in "/dev/nvme1n1" "/dev/nvme2n1" "/dev/nvme3n1" "/dev/nvme4n1" "/dev/nvme5n1" ; do
    echo $DISK && \
    sgdisk --zap-all $DISK && \
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync && \
    blkdiscard $DISK
  done
# exit
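Before running the loop above, it is worth double-checking which NVMe device holds the OS so it is not wiped by mistake; one way to do so:

$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINTS

The device carrying the / and /boot/efi partitions (nvme0n1 in this guide's layout) must be excluded from the list of disks passed to the loop.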

13. Prepare /dev/nvme1n1 for hostpath storage.

$ sudo mkfs.ext4 /dev/nvme1n1
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 468843606 4k blocks and 117211136 inodes
Filesystem UUID: 43b6d451-8678-4e3d-a2b8-c1e8e6435697
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

$ sudo mkdir -p /mnt/fast-data
$ sudo mount /dev/nvme1n1 /mnt/fast-data/
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0         7:0    0  49.1M  1 loop /snap/core18/2755
loop1         7:1    0  49.1M  1 loop /snap/core18/2788
loop2         7:2    0  59.2M  1 loop /snap/core20/1953
loop3         7:3    0  59.2M  1 loop /snap/core20/1977
loop4         7:4    0  71.8M  1 loop /snap/lxd/22927
loop5         7:5    0 109.6M  1 loop /snap/lxd/24326
loop6         7:6    0 142.2M  1 loop /snap/microk8s/5705
loop8         7:8    0  46.4M  1 loop /snap/snapd/19365
loop9         7:9    0  46.4M  1 loop /snap/snapd/19459
nvme0n1     259:0    0 894.3G  0 disk
├─nvme0n1p1 259:1    0     1G  0 part /boot/efi
└─nvme0n1p2 259:2    0 893.2G  0 part /
nvme1n1     259:3    0   1.7T  0 disk /mnt/fast-data
nvme2n1     259:5    0   1.7T  0 disk
nvme3n1     259:5    0   1.7T  0 disk
nvme4n1     259:5    0   1.7T  0 disk
nvme5n1     259:5    0   1.7T  0 disk
$ echo '/dev/nvme1n1 /mnt/fast-data ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab

14. Enable the MicroK8s hostpath-storage add-on and create a storage class backed by /mnt/fast-data.

ubuntu@node1:~$ microk8s enable hostpath-storage
Infer repository core for addon hostpath-storage
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
ubuntu@node1:~$ cd microservices/kube-system
ubuntu@node1:~/microservices/kube-system$ kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          WaitForFirstConsumer   false                  6s
ubuntu@node1:~/microservices/kube-system$ cat << EOF > fast-data-hostpath-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fastdata-hostpath
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
parameters:
  pvDir: /mnt/fast-data
volumeBindingMode: WaitForFirstConsumer
EOF
ubuntu@node1:~/microservices/kube-system$ kubectl apply -f fast-data-hostpath-sc.yaml
storageclass.storage.k8s.io/fastdata-hostpath created
ubuntu@node1:~/microservices/kube-system$ kubectl patch storageclass fastdata-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
ubuntu@node1:~/microservices/kube-system$ kubectl patch storageclass microk8s-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

15. Check the storage classes.

ubuntu@node1:~/microservices/kube-system$ kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
fastdata-hostpath (default)   microk8s.io/hostpath   Delete          WaitForFirstConsumer   false                  6s
microk8s-hostpath             microk8s.io/hostpath   Delete          WaitForFirstConsumer   false                  1m2s

16. Edit the NGINX Ingress controller DaemonSet and change the published status address.

$ kubectl edit ds -n ingress nginx-ingress-microk8s-controller
## the original:
    - --publish-status-address=127.0.0.1
## replace 127.0.0.1 with the host's IP address:
    - --publish-status-address=10.76.87.126
daemonset.apps/nginx-ingress-microk8s-controller edited
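kubectl edit is interactive; if you prefer a scriptable alternative, one option (a sketch, assuming the argument appears exactly once in the DaemonSet spec) is to rewrite the exported manifest and re-apply it:

$ kubectl -n ingress get ds nginx-ingress-microk8s-controller -o yaml \
    | sed 's/--publish-status-address=127.0.0.1/--publish-status-address=10.76.87.126/' \
    | kubectl apply -f -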

17. Test the storage class with the YAML file test-pod-with-pvc.yaml, which creates a PersistentVolumeClaim for an NGINX container.

ubuntu@node1:~$ cd ~/microservices/tests
ubuntu@node1:~/microservices/tests$ cat << EOF > test-pod-with-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  #storageClassName:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
EOF

18. Create a PersistentVolumeClaim (PVC) on the fastdata-hostpath storage class and a pod with an NGINX container.

ubuntu@node1:~/microservices/tests$ kubectl apply -f test-pod-with-pvc.yaml
persistentvolumeclaim/test-pvc created
pod/test-nginx created
ubuntu@node1:~/microservices/tests$ kubectl get pod -owide
NAME         READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
test-nginx   1/1     Running   0          4m8s   10.1.166.138   node1   <none>           <none>
ubuntu@node1:~/microservices/tests$ kubectl get pvc
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
persistentvolumeclaim/test-pvc   Bound    pvc-57e2338d-e551-4539-8d18-b2db00a0b857   5Gi        RWO            fastdata-hostpath   19s
ubuntu@node1:~/microservices/tests$ kubectl describe pv
Name:              pvc-57e2338d-e551-4539-8d18-b2db00a0b857
Labels:            <none>
Annotations:       hostPathProvisionerIdentity: node1
                   pv.kubernetes.io/provisioned-by: microk8s.io/hostpath
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      fastdata-hostpath
Status:            Bound
Claim:             default/test-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          5Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [node1]
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/fast-data/default-test-pvc-pvc-57e2338d-e551-4539-8d18-b2db00a0b857
    HostPathType:  DirectoryOrCreate
Events:            <none>

19. Run kubectl exec -it pod/test-nginx -- sh to access the NGINX container inside pod/test-nginx and verify that the PVC is bound to the container.

ubuntu@node1:~/microservices/tests$ kubectl exec -it pod/test-nginx -- sh
# cd /usr/share/nginx/html
# ls -al
total 8
drwxrwxrwx 2 root root 4096 Aug 11 03:17 .
drwxr-xr-x 3 root root 4096 Jul 28 14:03 ..
# mount | grep nginx
/dev/nvme1n1 on /usr/share/nginx/html type ext4 (rw,relatime,stripe=32)
# lsblk | grep nvme1n1
nvme1n1 259:3 0 1.7T 0 disk /usr/share/nginx/html
# exit

20. Check the storage pool usage, then delete the PVC and the NGINX pod.

ubuntu@node1:~/microservices/tests$ kubectl delete -f test-pod-with-pvc.yaml
persistentvolumeclaim "test-pvc" deleted
pod "test-nginx" deleted
ubuntu@node1:~/microservices/tests$ kubectl get pvc
No resources found in default namespace.
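Since the hostpath PVs live under /mnt/fast-data on the node, a quick way to inspect the pool usage mentioned in this step (before or after the cleanup) is to check the backing filesystem; the Use% column shows how much of the pool the PVs consume:

$ df -h /mnt/fast-data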

21. If you prefer to use the kubectl command on a local machine instead of the server, the following commands export the kubeconfig from MicroK8s and point the local kubectl at it.

% mkdir -p ~/.kube
% ssh ampere@node1 microk8s config > ~/.kube/config
% export KUBECONFIG=~/.kube/config
% kubectl get nodes

Enable Kubernetes Dashboard add-on

1. Enable the dashboard add-on.

ubuntu@node1:~$ microk8s enable dashboard
Infer repository core for addon dashboard
Enabling Kubernetes Dashboard
Infer repository core for addon metrics-server
Enabling Metrics-Server
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
Adding argument --authentication-token-webhook to nodes.
Restarting nodes.
Metrics-Server is enabled
Applying manifest
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
secret/microk8s-dashboard-token created

If RBAC is not enabled access the dashboard using the token retrieved with:

microk8s kubectl describe secret -n kube-system microk8s-dashboard-token

Use this token in the https login UI of the kubernetes-dashboard service.

In an RBAC enabled setup (microk8s enable RBAC) you need to create a user with restricted permissions as shown in:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

ubuntu@node1:~$ kubectl get svc -n kube-system
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   52m
metrics-server              ClusterIP   10.152.183.120   <none>        443/TCP                  9m43s
kubernetes-dashboard        ClusterIP   10.152.183.138   <none>        443/TCP                  9m36s
dashboard-metrics-scraper   ClusterIP   10.152.183.132   <none>        8000/TCP                 9m36s
ubuntu@node1:~$ kubectl create token -n kube-system default --duration=8544h
[xxxyyyzzz]
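If RBAC is enabled, a token for the default service account may not be sufficient; a minimal sketch of a dedicated dashboard admin, following the pattern in the upstream document linked above (the account name dashboard-admin is illustrative):

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl create token -n kube-system dashboard-admin --duration=8544h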

2. Apply the Ingress YAML file for the Kubernetes dashboard.

ubuntu@node1:~/microservices/kube-system$ cat << EOF > dashboard-ingress.yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-kubernetes-dashboard
  namespace: kube-system
  generation: 1
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |
      chunked_transfer_encoding off;
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-ssl-verify: 'off'
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      client_max_body_size 0;
spec:
  rules:
    - host: console.microk8s.hhii.ampere
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
EOF
ubuntu@node1:~/microservices/kube-system$ kubectl apply -f dashboard-ingress.yaml
ingress.networking.k8s.io/ingress-kubernetes-dashboard created

3. Access the dashboard web UI via https://console.microk8s.hhii.ampere/#/login and click “Advanced” to proceed past the browser certificate warning.

Fig1. Kubernetes dashboard web UI


4. Enter the token created in Step 1, or you can create a kubeconfig with the command microk8s config > kubeconfig.

Fig2. Enter the token


5. Now you can access the MicroK8s dashboard.

Fig3. Access the dashboard

Deploy MicroK8s Observability add-on

1. Enable the observability add-on.

ubuntu@node1:~$ cd ~/microservices/observability
ubuntu@node1:~/microservices/observability$ microk8s enable observability
Infer repository core for addon observability
Addon core/dns is already enabled
Addon core/helm3 is already enabled
Addon core/hostpath-storage is already enabled
Enabling observability
"prometheus-community" already exists with the same configuration, skipping
"grafana" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "kube-prom-stack" does not exist. Installing it now.
NAME: kube-prom-stack
LAST DEPLOYED: Fri Aug 11 03:22:36 2023
NAMESPACE: observability
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace observability get pods -l "release=kube-prom-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
Release "loki" does not exist. Installing it now.
NAME: loki
LAST DEPLOYED: Fri Aug 11 03:23:06 2023
NAMESPACE: observability
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.
See http://docs.grafana.org/features/datasources/loki/ for more detail.
Release "tempo" does not exist. Installing it now.
NAME: tempo
LAST DEPLOYED: Fri Aug 11 03:23:08 2023
NAMESPACE: observability
STATUS: deployed
REVISION: 1
TEST SUITE: None
Adding argument --authentication-kubeconfig to nodes.
Adding argument --authorization-kubeconfig to nodes.
Restarting nodes.
Adding argument --authentication-kubeconfig to nodes.
Adding argument --authorization-kubeconfig to nodes.
Restarting nodes.
Adding argument --metrics-bind-address to nodes.
Restarting nodes.
Note: the observability stack is setup to monitor only the current nodes of the MicroK8s cluster.
For any nodes joining the cluster at a later stage this addon will need to be set up again.
Observability has been enabled (user/pass: admin/prom-operator)
ubuntu@node1:~/microservices/observability$ microk8s status
microk8s is running
high-availability: yes
  datastore master nodes: 192.168.1.101:19001 192.168.1.102:19001 192.168.1.103:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    storage              # (core) Alias to hostpath-storage add-on, deprecated
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    host-access          # (core) Allow Pods connecting to Host services smoothly
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    minio                # (core) MinIO object storage
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
ubuntu@node1:~/microservices/observability$ kubectl get all -n observability
NAME                                                          READY   STATUS    RESTARTS        AGE
pod/kube-prom-stack-kube-prome-operator-79cbdd7979-gjc5r      1/1     Running   0               3m18s
pod/tempo-0                                                   2/2     Running   0               3m3s
pod/alertmanager-kube-prom-stack-kube-prome-alertmanager-0    2/2     Running   1 (3m11s ago)   3m14s
pod/kube-prom-stack-kube-state-metrics-5bf874b44d-8b564       1/1     Running   0               3m18s
pod/kube-prom-stack-prometheus-node-exporter-nkxdg            1/1     Running   0               3m18s
pod/kube-prom-stack-prometheus-node-exporter-cttrq            1/1     Running   0               3m18s
pod/kube-prom-stack-prometheus-node-exporter-lmqqc            1/1     Running   0               3m18s
pod/prometheus-kube-prom-stack-kube-prome-prometheus-0        2/2     Running   0               3m13s
pod/loki-promtail-5vj5r                                       1/1     Running   0               3m4s
pod/loki-promtail-m97dt                                       1/1     Running   0               3m4s
pod/loki-promtail-q7h87                                       1/1     Running   0               3m4s
pod/kube-prom-stack-prometheus-node-exporter-fpxpf            1/1     Running   0               3m18s
pod/kube-prom-stack-grafana-79bff66ffb-fq6x2                  3/3     Running   0               3m18s
pod/loki-promtail-frvfv                                       1/1     Running   0               3m4s
pod/loki-0                                                    1/1     Running   0               3m4s

NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kube-prom-stack-grafana                     ClusterIP   10.152.183.210   <none>        80/TCP                       3m18s
service/kube-prom-stack-kube-prome-prometheus       ClusterIP   10.152.183.228   <none>        9090/TCP                     3m18s
service/kube-prom-stack-prometheus-node-exporter    ClusterIP   10.152.183.242   <none>        9100/TCP                     3m18s
service/kube-prom-stack-kube-prome-alertmanager     ClusterIP   10.152.183.200   <none>        9093/TCP                     3m18s
service/kube-prom-stack-kube-prome-operator         ClusterIP   10.152.183.239   <none>        443/TCP                      3m18s
service/kube-prom-stack-kube-state-metrics          ClusterIP   10.152.183.179   <none>        8080/TCP                     3m18s
service/alertmanager-operated                       ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m15s
service/prometheus-operated                         ClusterIP   None             <none>        9090/TCP                     3m14s
service/loki-headless                               ClusterIP   None             <none>        3100/TCP                     3m4s
service/loki-memberlist                             ClusterIP   None             <none>        7946/TCP                     3m4s
service/loki                                        ClusterIP   10.152.183.19    <none>        3100/TCP                     3m4s
service/tempo                                       ClusterIP   10.152.183.177   <none>        3100/TCP,16687/TCP,16686/TCP,6831/UDP,6832/UDP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   3m3s

NAME                                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-prom-stack-prometheus-node-exporter   4         4         4       4            4           <none>          3m18s
daemonset.apps/loki-promtail                              4         4         4       4            4           <none>          3m4s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-prom-stack-kube-prome-operator   1/1     1            1           3m18s
deployment.apps/kube-prom-stack-kube-state-metrics    1/1     1            1           3m18s
deployment.apps/kube-prom-stack-grafana               1/1     1            1           3m18s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-prom-stack-kube-prome-operator-79cbdd7979   1         1         1       3m18s
replicaset.apps/kube-prom-stack-kube-state-metrics-5bf874b44d    1         1         1       3m18s
replicaset.apps/kube-prom-stack-grafana-79bff66ffb               1         1         1       3m18s

NAME                                                                    READY   AGE
statefulset.apps/alertmanager-kube-prom-stack-kube-prome-alertmanager   1/1     3m16s
statefulset.apps/prometheus-kube-prom-stack-kube-prome-prometheus       1/1     3m15s
statefulset.apps/tempo                                                  1/1     3m4s
statefulset.apps/loki                                                   1/1     3m5s

2. Add a 50GB PVC to the Prometheus pod by patching the CRD object kube-prom-stack-kube-prome-prometheus in the observability namespace.

kubectl -n observability patch prometheus/kube-prom-stack-kube-prome-prometheus --type merge --patch \
  '{"spec": {"paused": true, "storage": {"volumeClaimTemplate": {"spec": {"storageClassName":"microk8s-hostpath","resources": {"requests": {"storage":"50Gi"}}}}}}}'
kubectl -n observability patch prometheus/kube-prom-stack-kube-prome-prometheus --type merge --patch \
  '{"spec": {"paused": true, "resources": {"requests": {"cpu": "1", "memory":"1.6Gi"}}, "retention": "60d"}}'
kubectl -n observability patch prometheus/kube-prom-stack-kube-prome-prometheus --type merge --patch \
  '{"spec": {"paused": false}}'
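Once the final patch unpauses the operator, it rolls the Prometheus StatefulSet and the new volume claim should appear; one way to verify (output will vary by cluster):

kubectl -n observability get pvc
kubectl -n observability get prometheus kube-prom-stack-kube-prome-prometheus -o jsonpath='{.spec.retention}'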

3. Apply an Ingress YAML file for Grafana, Prometheus, and Alertmanager so they can be reached from a browser.

$ cat << EOF > observability-ingress.yaml
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/rewrite-target: / #new
spec:
  rules:
    - host: "grafana.microk8s.hhii.ampere"
      http:
        paths:
          - backend:
              service:
                name: kube-prom-stack-grafana
                port:
                  number: 3000
            path: /
            pathType: Prefix
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: prometheus-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/rewrite-target: / #new
spec:
  rules:
    - host: "prometheus.microk8s.hhii.ampere"
      http:
        paths:
          - backend:
              service:
                name: prometheus-operated
                port:
                  number: 9090
            path: /
            pathType: Prefix
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: alertmanager-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/rewrite-target: / #new
spec:
  rules:
    - host: "alertmanager.microk8s.hhii.ampere"
      http:
        paths:
          - backend:
              service:
                name: alertmanager-operated
                port:
                  number: 9093
            path: /
            pathType: Prefix
EOF
$ kubectl -n observability apply -f observability-ingress.yaml
ingress.networking.k8s.io/grafana-ingress created
ingress.networking.k8s.io/prometheus-ingress created
ingress.networking.k8s.io/alertmanager-ingress created
$ kubectl get ingress -n observability
NAME                   CLASS    HOSTS                               ADDRESS                                                   PORTS   AGE
alertmanager-ingress   <none>   alertmanager.microk8s.hhii.ampere   192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.104   80      7s
grafana-ingress        <none>   grafana.microk8s.hhii.ampere        192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.104   80      7s
prometheus-ingress     <none>   prometheus.microk8s.hhii.ampere     192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.104   80      7s

4. Access them in a browser via the links below:

Fig4. Observability Ingress


5. To access Grafana, open the Grafana host defined in Step 3 (grafana.microk8s.hhii.ampere) and use the credentials admin / prom-operator to log in to its web UI.

Fig5. Grafana web UI


6. Click Dashboard → Manage → Default → “Kubernetes / Compute Resources / Namespace (Workloads)”.

Fig6. Grafana dashboards

Deploy MinIO on MicroK8s

1. Enable MinIO.

$ microk8s enable minio -c 100Gi -t tenant-1
...
Tenant 'tenant-1' created in 'minio-operator' Namespace
  Username: {MinIO Access Key ID}
  Password: {MinIO Access Key}
...
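The Ingress manifests in the next two steps reference the tenant services by name (tenant-1-minio-svc and minio); before applying them, you can confirm the service names and ports that the operator actually created, since these can differ between MinIO add-on versions:

$ kubectl -n minio-operator get svc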

2. Prepare the Ingress for the MinIO console.

cat << EOF > minio-tenant-1-ingress.yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: minio-console-ingress
  namespace: minio-operator
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/configuration-snippet: |
      chunked_transfer_encoding off;
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-ssl-verify: 'off'
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      client_max_body_size 0;
spec:
  rules:
    - host: console.minio.microk8s.hhii.ampere
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-1-minio-svc
                port:
                  number: 9090
EOF

3. Prepare the Ingress for MinIO on port 80, which will be used to share files (video thumbnails).

cat << EOF > minio-link-ingress.yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: minio-link-ingress
  namespace: minio-operator
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/configuration-snippet: |
      chunked_transfer_encoding off;
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-ssl-verify: 'off'
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      client_max_body_size 0;
spec:
  rules:
    - host: minio.microk8s.hhii.ampere
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 80
EOF

4. Deploy the Ingress resources for MinIO.

ubuntu@node1:~/microservices/minio$ kubectl apply -f minio-tenant-1-ingress.yaml -n minio-operator
ingress.networking.k8s.io/minio-console-ingress created
ubuntu@node1:~/microservices/minio$ kubectl apply -f minio-link-ingress.yaml -n minio-operator
ingress.networking.k8s.io/minio-link-ingress created

5. Create a ConfigMap YAML file with the MinIO credentials (it will be applied to the vod-poc and process-ns namespaces in the next step).

ubuntu@node1:~/microservices/vpp$ cat << EOF > minio-credential.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: miniocredentials
data:
  credentials: |
    [default]
    aws_access_key_id={MinIO Access Key ID}
    aws_secret_access_key={MinIO Access Key}
EOF

6. Apply the credentials in the vod-poc and process-ns namespaces.

$ kubectl create namespace vod-poc
ubuntu@node1:~/microservices/vpp$ kubectl -n vod-poc create -f minio-credential.yaml
configmap/miniocredentials created
$ kubectl create namespace process-ns
$ kubectl -n process-ns create -f minio-credential.yaml

7. Create the buckets vpp-src and vpp-dst in MinIO.

Fig7. Create bucket in MinIO web UI
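If you prefer the command line over the web UI, the buckets can also be created with the MinIO client (mc); a sketch, assuming the S3 endpoint exposed by the minio-link Ingress and the tenant credentials printed in Step 1:

% mc alias set tenant-1 http://minio.microk8s.hhii.ampere {MinIO Access Key ID} {MinIO Access Key}
% mc mb tenant-1/vpp-src
% mc mb tenant-1/vpp-dst
% mc ls tenant-1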

Deploy the Video-on-Demand PoC demo on MicroK8s

1. Access the target MicroK8s cluster.

% kubectl get nodes -owide
NAME    STATUS   ROLES    AGE    VERSION                     INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready    <none>   3d3h   v1.26.11-2+123bfdfc196019   10.76.87.209   <none>        Ubuntu 22.04.1 LTS   5.15.0-60-generic   containerd://1.5.13

2. Create the vod-poc namespace (skip this if it was already created in the previous section).

% kubectl create ns vod-poc

3. Obtain the source code from the two GitHub repos:

% git clone https://github.com/AmpereComputing/nginx-hello-container
% git clone https://github.com/AmpereComputing/nginx-vod-module-container
  • There are two YAML files for deploying VOD PoC:
    • nginx-hello-container/MicroK8s/nginx-front-app.yaml
    • nginx-vod-module-container/MicroK8s/nginx-vod-app.yaml

4. Deploy the NGINX web server container as a StatefulSet with its PVC template, service, and Ingress. Make sure to edit the host fields in the nginx-front-app.yaml and nginx-vod-app.yaml files to match the records in your DNS server, and set the replicas field to 1 in both YAML files to deploy one pod each (the replicas field in nginx-vod-app.yaml is currently set to 3).

% kubectl -n vod-poc create -f nginx-hello-container/MicroK8s/nginx-front-app.yaml

5. Deploy the NGINX VOD module container as a StatefulSet with its PVC template, service, and Ingress.

% kubectl -n vod-poc create -f nginx-vod-module-container/MicroK8s/nginx-vod-app.yaml

6. Check the deployment status in the vod-poc namespace.

% kubectl -n vod-poc get pod,pvc -owide
% kubectl -n vod-poc get svc,ingress -owide

7. Prepare a tarball with the pre-transcoded video files (under vod-demo) and subtitle files (.vtt extension), and a second tarball for nginx-front-app with the HTML, CSS, JavaScript, etc. (under nginx-front-demo):

% tar zcvf vod-demo.tgz vod-demo
% tar zcvf nginx-front-demo.tgz nginx-front-demo

8. To deploy the video files to the container in the nginx-vod-app pod, a web server is required to host the video and subtitle files. Create a subdirectory such as nginx-html, move vod-demo.tgz and nginx-front-demo.tgz into it, then run the NGINX web server container serving that directory.

% mkdir nginx-html
% mv vod-demo.tgz nginx-html
% mv nginx-front-demo.tgz nginx-html
% cd nginx-html
% docker run -d --rm -p 8080:8080 --name nginx-html --user 1001 -v $PWD:/usr/share/nginx/html docker.io/mrdojojo/nginx-hello-app:1.1-arm64
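If podman is used instead of docker (both are listed in the prerequisites), the same flags should apply:

% podman run -d --rm -p 8080:8080 --name nginx-html --user 1001 -v $PWD:/usr/share/nginx/html docker.io/mrdojojo/nginx-hello-app:1.1-arm64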

9. Deploy the bundle files to the VOD pods.

  • Access the container in the vod-poc pod by running the commands below, downloading the files with wget from the bastion node's web server into the pod nginx-front-app-0.
    • Download the front-end files from nginx-front-demo.tgz:
% kubectl -n vod-poc exec -it pod/nginx-front-app-0 -- sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # wget http://[Bastion node's IP address]:8080/nginx-front-demo.tgz
/usr/share/nginx/html # tar zxvf nginx-front-demo.tgz
/usr/share/nginx/html # sed -i "s,http://\[vod-demo\]/,http://vod.microk8s.ampere/,g" *.html
/usr/share/nginx/html # rm -rf nginx-front-demo.tgz
  • Download vod-demo.tgz, which contains the video and subtitle files (mp4 and vtt), with wget from the bastion node's web server into the pod nginx-vod-app-0:
% kubectl -n vod-poc exec -it pod/nginx-vod-app-0 -- sh
/ # cd /opt/static/videos/
/opt/static/videos # wget http://[Bastion node's IP address]:8080/vod-demo.tgz
/opt/static/videos # tar zxvf vod-demo.tgz
/opt/static/videos # mv vod-demo/* .
/opt/static/videos # rm -rf vod-demo

10. Finally, since the Ingress is applied in nginx-hello-container/MicroK8s/nginx-front-app.yaml, open a browser and go to http://demo.microk8s.ampere to test the VOD PoC demo on MicroK8s.

Fig8. VOD demo web UI
