
Kedify OTel Scaler Migration

This guide walks you through setting up a Prometheus Scaler that autoscales a target deployment based on application-specific metrics. In the second part of the guide, we migrate from the Prometheus Scaler to the lightweight OpenTelemetry (OTel) Scaler.

1. Prepare Helm Charts and Cluster

Terminal window
# Add the repos and update them
helm repo add kedacore https://kedacore.github.io/charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update kedacore prometheus-community
Terminal window
# Create k3d cluster
k3d cluster create --port "8080:30080@loadbalancer" --k3s-arg "--disable=traefik@server:*"
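If you want a quick sanity check before installing anything, verify that the node is ready:

Terminal window
kubectl get nodes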

2. Install KEDA and Prometheus Stack

Install KEDA

Terminal window
helm upgrade -i keda kedacore/keda --namespace keda --create-namespace
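You can check that the KEDA pods are up before continuing:

Terminal window
kubectl get pods -n keda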

Install Prometheus Stack

Terminal window
helm upgrade -i kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
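Likewise, wait until the monitoring stack pods are running:

Terminal window
kubectl get pods -n monitoring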

3. Deploy Application that Exposes Prometheus Metrics

Deploy the work-simulator application, which exposes Prometheus metrics about its business logic.

Terminal window
kubectl apply -f https://raw.githubusercontent.com/kedify/examples/refs/heads/main/samples/work-simulator/config/manifests.yaml
kubectl expose svc work-simulator --name work-simulator-np --type NodePort --overrides '{ "apiVersion": "v1","spec":{"ports": [{"port":8080,"protocol":"TCP","targetPort":8080,"nodePort":30080}]}}'

This deploys the application and exposes it on node port 30080, which k3d maps to the host's port 8080.
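Before wiring up Prometheus, you can hit the application's metrics endpoint through the mapped port (assuming work-simulator serves its Prometheus metrics on /metrics on the same port):

Terminal window
curl -s http://localhost:8080/metrics | grep work_simulator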

Deploy the ServiceMonitor to configure Prometheus scraping:

Terminal window
kubectl apply -f https://raw.githubusercontent.com/kedify/examples/refs/heads/main/samples/work-simulator/config/servicemonitor.yaml

You can verify that the metrics are scraped correctly by locating the new target in the Prometheus UI. First, port-forward the Prometheus service, then open http://localhost:9090.

Terminal window
kubectl port-forward service/kube-prometheus-stack-prometheus -n monitoring 9090:9090
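If you prefer the command line over the UI, the Prometheus targets API lists the scraped jobs (jq is only used here for readability):

Terminal window
curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[].labels.job'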

4. Deploy ScaledObject for Autoscaling

Create a ScaledObject to define the scaling behavior.

Terminal window
kubectl apply -f https://raw.githubusercontent.com/kedify/examples/refs/heads/main/samples/work-simulator/config/so.yaml

Verify the ScaledObject:

Terminal window
kubectl get scaledobject work-simulator-scaledobject

You should see output where READY is True.
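For reference, the trigger in so.yaml is the Prometheus scaler. A rough sketch of it, derived from the "was ..." notes in Step 8 and from the Prometheus service we port-forwarded in Step 3 (so check so.yaml for the exact values), looks like this:

triggers:
- type: prometheus
  metadata:
    serverAddress: http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090
    query: sum(work_simulator_inprogress_tasks{job="work-simulator"})
    threshold: "5"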


5. Generate Load

Use hey to generate load:

Terminal window
hey -z 2m -c 50 http://localhost:8080/work
  • -z 2m: Duration of the load test (send requests for 2 minutes).
  • -c 50: Number of concurrent workers.

Or use a simple curl:

Terminal window
curl -s http://localhost:8080/work

Monitor the scaling behavior:

Terminal window
watch kubectl get deployment work-simulator

You should see the number of replicas increase as load is applied. Once the load is processed, the replicas will scale back down to 1.
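You can also correlate the scaling with the metric that drives it. With the port-forward from Step 3 still running, query the same PromQL expression the ScaledObject uses:

Terminal window
curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=sum(work_simulator_inprogress_tasks{job="work-simulator"})'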


6. Migrate from Prometheus to OTel Collector & OTel Scaler

Scale Down the Prometheus Stack

We want the OTel Collector to scrape the metrics from our work-simulator app and send them to the KEDA OTel Scaler, so we no longer need the Prometheus stack in this setup. However, the OTel Operator can work with the Prometheus Operator CRDs (PodMonitor & ServiceMonitor), so let's leave the Helm charts installed and only scale the deployments down to 0.

Terminal window
for kind in deploy statefulset; do
  for name in $(kubectl get ${kind} -nmonitoring --no-headers -o custom-columns=":metadata.name"); do
    kubectl scale ${kind} ${name} -nmonitoring --replicas=0
  done
done
kubectl -nmonitoring patch daemonset kube-prometheus-stack-prometheus-node-exporter -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
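After this, the Prometheus stack pods should terminate and stop consuming resources:

Terminal window
kubectl get pods -n monitoring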

7. Install KEDA OTel Scaler, OTel Operator & OTel Collector

Terminal window
cat <<VALUES | helm upgrade -i otel-scaler oci://ghcr.io/kedify/charts/otel-add-on --version=v0.0.11 -f -
otelOperator:
  enabled: true
otelOperatorCrs:
- enabled: true
  targetAllocatorEnabled: true
  # not necessary, but let's ignore {Pod,Service}Monitors from other namespaces
  targetAllocator:
    prometheusCR:
      allowNamespaces: [default]
VALUES
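Once the chart is installed, you should see pods for the OTel scaler, the collector, the target allocator, and the OTel operator in the default namespace:

Terminal window
kubectl get pods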

8. Migrate the ScaledObject

When the Prometheus scaler is used as a trigger in the ScaledObject (SO), it has slightly different fields under the metadata section than the external scaler. To migrate to the OTel scaler, apply the SO below. This also unpauses the ScaledObject, because the annotation that pauses it is not present:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: work-simulator-scaledobject
spec:
  scaleTargetRef:
    kind: Deployment
    name: work-simulator
  minReplicaCount: 1
  maxReplicaCount: 10
  advanced:
    restoreToOriginalReplicaCount: true
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 5
  triggers:
    # was type: prometheus
    - type: external
      metadata:
        scalerAddress: keda-otel-scaler.default.svc:4318 # was serverAddress: http://kube-prometheus-svc..cluster.local:9090
        metricQuery: sum(work_simulator_inprogress_tasks{job="work-simulator"}) # <- was query: .., the PromQL query itself is the same
        targetValue: '5' # <- was threshold: "5"
Terminal window
kubectl apply -f https://kedify.io/assets/yaml/how-to/otel-scaler-migration/new-so.yaml

To verify that the new setup with OpenTelemetry works, generate some load as in Step 5.

Terminal window
hey -z 2m -c 50 http://localhost:8080/work
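Monitor the deployment again to confirm it scales with the OTel-based trigger:

Terminal window
watch kubectl get deployment work-simulator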

Conclusion

By using the OTel Operator, we can migrate the existing monitoring infrastructure smoothly, because the Target Allocator feature allows reusing the Prometheus Operator CRDs. The OpenTelemetry Collector configuration also supports dynamic target allocation based on annotations (example).

Now, let's check the resource savings after switching to the OTel stack:

Terminal window
kubectl top pods -n monitoring
NAME                                                        CPU(cores)   MEMORY(bytes)
alertmanager-kube-prometheus-stack-alertmanager-0           2m           49Mi
kube-prometheus-stack-grafana-746fbdb8d8-p9vjp              52m          388Mi
kube-prometheus-stack-kube-state-metrics-779d68fc98-sqlhz   2m           23Mi
kube-prometheus-stack-operator-75dbf947f8-9rj8h             5m           32Mi
kube-prometheus-stack-prometheus-node-exporter-pzcvt        1m           20Mi
prometheus-kube-prometheus-stack-prometheus-0               19m          377Mi
Terminal window
kubectl top pods
NAME                                               CPU(cores)   MEMORY(bytes)
keda-otel-scaler-548967c87f-878fh                  3m           32Mi
keda-otel-scaler-collector-0                       3m           55Mi
keda-otel-scaler-targetallocator-ff5877d79-2rdjd   3m           35Mi
otel-operator-6545c6bddc-26765                     3m           46Mi