The kube-prometheus-stack Helm chart deploys Alertmanager with a default configuration that routes all alerts to a “null” receiver—effectively discarding them. This post documents configuring Alertmanager to send notifications to Slack.
The Problem
Default Alertmanager configuration:
receivers:
  - name: "null"
route:
  receiver: "null"   # All alerts discarded
Alerts fire, but nobody gets notified.
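To see what the chart actually shipped, you can dump the rendered config from the secret the operator reads (the name below assumes a Helm release called prometheus, as used throughout this post):

kubectl get secret -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d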
Solution Architecture
┌─────────────────────┐     ┌──────────────────────┐     ┌─────────────┐
│     Prometheus      │────▶│     Alertmanager     │────▶│    Slack    │
│   (fires alerts)    │     │  (routes & groups)   │     │  (#alerts)  │
└─────────────────────┘     └──────────────────────┘     └─────────────┘
                                        │
                                        ▼
                            ┌──────────────────────┐
                            │    Routing Rules     │
                            ├──────────────────────┤
                            │ critical → 1h repeat │
                            │ warning  → 4h repeat │
                            │ Watchdog → silenced  │
                            └──────────────────────┘
Directory Structure
monitoring/alertmanager/
├── .env.example        # Webhook URL template
├── .env                # Actual webhook (gitignored)
├── create-secret.sh    # Creates Kubernetes secret
└── README.md           # Setup documentation
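Since .env holds the real webhook URL, keep it out of version control. A one-line ignore file next to it is enough (this assumes a per-directory .gitignore; a repo-level entry works just as well):

# monitoring/alertmanager/.gitignore
.env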
Setup
Step 1: Create Slack Webhook
- Go to https://api.slack.com/apps
- Click “Create New App” → “From scratch”
- Name: Alertmanager, select your workspace
- Go to “Incoming Webhooks” → Toggle “Activate”
- Click “Add New Webhook to Workspace”
- Select the channel for alerts (e.g., #alerts)
- Copy the webhook URL
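Before touching Kubernetes, it's worth confirming the webhook actually works. A quick curl (substitute your real webhook URL) should post a test message to the chosen channel:

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Webhook test from Alertmanager setup"}' \
  https://hooks.slack.com/services/XXX/YYY/ZZZ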
Step 2: Create Kubernetes Secret
# monitoring/alertmanager/.env.example
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/XXX/YYY/ZZZ
SLACK_CHANNEL=#alerts
#!/bin/bash
# monitoring/alertmanager/create-secret.sh
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if [ -f "$SCRIPT_DIR/.env" ]; then
  source "$SCRIPT_DIR/.env"
else
  echo "Error: .env file not found"
  exit 1
fi

kubectl create secret generic alertmanager-slack-config \
  --from-literal=slack-webhook-url="${SLACK_WEBHOOK_URL}" \
  --from-literal=slack-channel="${SLACK_CHANNEL}" \
  --namespace=monitoring \
  --dry-run=client -o yaml | kubectl apply -f -
Run the setup:
cd monitoring/alertmanager
cp .env.example .env
# Edit .env with your webhook URL
./create-secret.sh
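Sanity-check that the secret exists with the keys create-secret.sh sets (describe shows key names and sizes without printing the webhook URL):

kubectl describe secret alertmanager-slack-config -n monitoring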
Step 3: Configure Alertmanager
Add to prometheus-values.yaml:
alertmanager:
  alertmanagerSpec:
    nodeSelector:
      kubernetes.io/hostname: polycephala
    secrets:
      - alertmanager-slack-config
  config:
    global:
      resolve_timeout: 5m
      slack_api_url_file: /etc/alertmanager/secrets/alertmanager-slack-config/slack-webhook-url
    route:
      group_by: ['namespace', 'alertname', 'severity']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 4h
      receiver: 'slack-notifications'
      routes:
        - matchers:
            - severity = critical
          receiver: 'slack-critical'
          repeat_interval: 1h
          continue: false
        - matchers:
            - alertname = Watchdog
          receiver: 'null'
        - matchers:
            - alertname = InfoInhibitor
          receiver: 'null'
    inhibit_rules:
      - source_matchers:
          - severity = critical
        target_matchers:
          - severity =~ warning|info
        equal: ['namespace', 'alertname']
      - source_matchers:
          - severity = warning
        target_matchers:
          - severity = info
        equal: ['namespace', 'alertname']
    receivers:
      - name: 'null'
      - name: 'slack-notifications'
        slack_configs:
          - channel: '#alerts'
            send_resolved: true
            title: '{{ if eq .Status "firing" }}:fire:{{ else }}:white_check_mark:{{ end }} [{{ .Status | toUpper }}] {{ .CommonLabels.alertname }}'
            text: >-
              {{ range .Alerts }}
              *Alert:* {{ .Annotations.summary }}
              *Severity:* {{ .Labels.severity }}
              *Namespace:* {{ .Labels.namespace }}
              {{ if .Annotations.description }}*Description:* {{ .Annotations.description }}{{ end }}
              {{ end }}
      - name: 'slack-critical'
        slack_configs:
          - channel: '#alerts'
            send_resolved: true
            title: ':rotating_light: [CRITICAL] {{ .CommonLabels.alertname }}'
            text: >-
              {{ range .Alerts }}
              *Alert:* {{ .Annotations.summary }}
              *Namespace:* {{ .Labels.namespace }}
              {{ if .Annotations.description }}*Description:* {{ .Annotations.description }}{{ end }}
              *Runbook:* {{ if .Annotations.runbook_url }}{{ .Annotations.runbook_url }}{{ else }}N/A{{ end }}
              {{ end }}
    templates: []
Step 4: Deploy
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
  -f prometheus-values.yaml -n monitoring
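Then watch the operator roll the change out. The StatefulSet name assumes the prometheus release name used throughout this post:

kubectl rollout status statefulset/alertmanager-prometheus-kube-prometheus-alertmanager -n monitoring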
Gotchas
1. Prometheus Operator Does Not Support channel_file
Initial attempt used channel_file to read the channel from the mounted secret:
# This does NOT work with the Prometheus operator
slack_configs:
  - channel_file: /etc/alertmanager/secrets/alertmanager-slack-config/slack-channel
The Prometheus operator’s config validation rejected this with:
yaml: unmarshal errors: field channel_file not found in type config.plain
The operator validates the rendered configuration against Alertmanager's own config schema, and the slack_configs schema it checks has no channel_file field (unlike api_url_file, there is no file-based variant for the channel), so the config is rejected before it ever reaches Alertmanager.
Solution: Use channel directly with a hardcoded value:
slack_configs:
  - channel: '#alerts'
The webhook URL can still be read from a file via slack_api_url_file, because that field does exist in the global schema.
2. Helm Upgrade Timeouts with Linkerd
Helm upgrades kept timing out:
Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition
The admission webhook job pods were getting Linkerd sidecar injection. When the main container completed, the Linkerd proxy sidecar kept the pod alive, preventing job completion.
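While the upgrade hangs, the stuck pods are easy to spot: the hook job's pod keeps running because the linkerd-proxy container never exits after the main container completes (job names vary by release, hence the grep):

kubectl get pods -n monitoring | grep admission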
Workaround: Apply the Alertmanager config directly to the secret:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-prometheus-kube-prometheus-alertmanager
  namespace: monitoring
type: Opaque
stringData:
  alertmanager.yaml: |
    # ... config here ...
EOF
The Prometheus operator picks up the secret change and reconciles the Alertmanager StatefulSet.
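If Alertmanager doesn't pick up a change, the operator's logs usually explain why. The deployment name below assumes the prometheus release name used throughout this post:

kubectl logs -n monitoring deploy/prometheus-kube-prometheus-operator | grep -i alertmanager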
3. Secret Mount Requires Operator Reconciliation
After updating the Alertmanager CR to include secrets, the StatefulSet wasn’t updating:
spec:
  secrets:
    - alertmanager-slack-config
This was because the operator was failing to reconcile due to the channel_file issue above. Once that was fixed, the operator successfully updated the StatefulSet with the secret volume mount.
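Once reconciliation succeeds, the secret shows up among the StatefulSet's volumes (same release-name assumption as above):

kubectl get statefulset -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager \
  -o jsonpath='{.spec.template.spec.volumes[*].name}'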
Alert Routing Summary
| Alert Type | Receiver | Repeat Interval | Notes |
|---|---|---|---|
| Critical | slack-critical | 1 hour | Immediate, special formatting |
| Warning | slack-notifications | 4 hours | Grouped by namespace/alertname |
| Info | slack-notifications | 4 hours | Grouped |
| Watchdog | null | - | Silenced (heartbeat) |
| InfoInhibitor | null | - | Silenced |
Message Format
Standard alerts:
🔥 [FIRING] KubeDaemonSetRolloutStuck
Alert: DaemonSet monitoring/loki-promtail has not finished rolling out
Severity: warning
Namespace: monitoring
Critical alerts:
🚨 [CRITICAL] NodeNotReady
Alert: Node k8s-worker01 is not ready
Namespace: kube-system
Runbook: https://runbooks.prometheus-operator.dev/...
Resolved:
✅ [RESOLVED] KubeDaemonSetRolloutStuck
...
Verification
Check Alertmanager logs:
kubectl logs -n monitoring -l app.kubernetes.io/name=alertmanager -c alertmanager
Check active alerts:
kubectl exec -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 \
  -c alertmanager -- wget -qO- http://localhost:9093/api/v2/alerts | jq '.[].labels.alertname'
Send a test alert:
kubectl exec -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 \
  -c alertmanager -- wget -qO- --post-data='[{
    "labels":{"alertname":"TestAlert","severity":"warning","namespace":"test"},
    "annotations":{"summary":"Test alert"}
  }]' --header='Content-Type: application/json' http://localhost:9093/api/v2/alerts
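Alternatively, the Alertmanager image ships amtool, which avoids hand-writing JSON (a sketch; the labels here are arbitrary):

kubectl exec -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 \
  -c alertmanager -- amtool alert add TestAlert severity=warning namespace=test \
  --alertmanager.url=http://localhost:9093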
Summary
| Component | Status |
|---|---|
| Slack webhook | Stored in Kubernetes secret |
| Alert routing | Critical/Warning/Info differentiated |
| Silencing | Watchdog and InfoInhibitor |
| Message format | Emoji indicators, severity, namespace |
| Resolved notifications | Enabled |
Configuration available at k8s-configs/monitoring/alertmanager.