Compare commits

..

49 Commits

Author SHA1 Message Date
db2d719f34 Merge commit 'cd1165690a9823240549009fc9ee9114742d6448'
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2024-06-25 17:15:03 +00:00
cd1165690a Squashed '.ci/' changes from 22ed100..2c44e4f
2c44e4f Disable concurrent builds
7144a42 Improve Trivy scanning logic
c1a48a6 Remove auto stash push / pop as being too dangerous
318c19e Add merge comment for subtree

git-subtree-dir: .ci
git-subtree-split: 2c44e4fd8550d30fba503a2bcccec8e0bac1c151
2024-06-25 17:15:03 +00:00
03a88e01b2 Feat: add support for group updates
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2024-06-05 10:02:15 +00:00
c324ab03bb Merge pull request 'chore(deps): update all non-major dependencies' (#19) from renovate/all-minor-patch into master
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
Reviewed-on: #19
2024-04-15 14:15:26 +00:00
bf204c8fb4 chore(deps): update all non-major dependencies
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/pr-master This commit looks good
2024-04-14 03:05:28 +00:00
1175a38d8b Upgrade base OS to Alpine 3.19, minor ElastiCache fix
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2024-04-05 13:27:05 +00:00
d7bf6542ce Merge pull request 'chore(deps): update all non-major dependencies' (#18) from renovate/all-minor-patch into master
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
Reviewed-on: #18
2024-04-05 13:02:35 +00:00
56945759a6 chore(deps): update all non-major dependencies
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/pr-master This commit looks good
2024-04-05 03:09:15 +00:00
2d59428b1e Merge pull request 'chore(deps): update all non-major dependencies' (#17) from renovate/all-minor-patch into master
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
Reviewed-on: #17
2023-08-17 09:59:58 +00:00
5f747d5e8b chore(deps): update all non-major dependencies
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/pr-master This commit looks good
2023-08-17 09:56:14 +00:00
27dddfb1b7 Squashed '.ci/' changes from 5023473..22ed100
22ed100 Fix custom branch docker tags
227e39f Allow custom GIT_TAG
38a9cda Debug CI pipeline
3efcc81 Debug CI pipeline

git-subtree-dir: .ci
git-subtree-split: 22ed10034d0a2380085b8f0680a50c2e67f6fada
2023-08-17 09:51:15 +00:00
7043dcb2bf Merge latest ci-tools-lib
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-08-17 09:51:15 +00:00
d0f97044c0 Fix tests for RDS events, add error log for failed notifications
Some checks are pending
ZeroDownTime/sns-alert-hub/pipeline/head Build queued...
2023-08-15 10:28:36 +01:00
6d421cefd1 Squashed '.ci/' changes from cdc32e0..5023473
5023473 Make branch detection work for tagged commits

git-subtree-dir: .ci
git-subtree-split: 50234738d04b5b26d9e067fed0e58e98931c2e9b
2023-08-15 10:26:56 +01:00
a177d6145e Merge commit '6d421cefd191c85f9c7eb9f0f13a12fcc76bec05' 2023-08-15 10:26:56 +01:00
bbbacf6666 Merge pull request 'chore(deps): update all non-major dependencies' (#14) from renovate/all-minor-patch into master
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
Reviewed-on: #14
2023-08-14 10:35:01 +00:00
0152d90502 Squashed '.ci/' changes from 8df60af..cdc32e0
cdc32e0 Improve cleanup flow

git-subtree-dir: .ci
git-subtree-split: cdc32e01eae67cd2635fa89cad114c6f6a11359f
2023-08-14 10:27:13 +00:00
9ef9c150b1 Merge commit '0152d905022e47ccf43ebac18804fb970cb0609c'
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-08-14 10:27:13 +00:00
93faaeb6ce Squashed '.ci/' changes from 748a4bd..8df60af
8df60af Fix derp

git-subtree-dir: .ci
git-subtree-split: 8df60afa12bb349e86eab22250ba5644ec9c6069
2023-08-14 10:24:11 +00:00
cf558ada66 Merge commit '93faaeb6cec341b66e93f5da5d6d6c218351a1aa'
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/head There was a failure building this commit
2023-08-14 10:24:11 +00:00
ef69c37109 Make tests work again, use new CI flow
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/head There was a failure building this commit
2023-08-14 10:22:11 +00:00
b6e34c53c1 Squashed '.ci/' changes from 955afa7..748a4bd
748a4bd Migrate to :: to allow custom make steps, add generic stubs

git-subtree-dir: .ci
git-subtree-split: 748a4bde2f5ab332b34acb356183e43948531609
2023-08-14 10:20:46 +00:00
af7bebdc53 Merge commit 'b6e34c53c1e41709c05d828059ae034b903a3bd3' 2023-08-14 10:20:46 +00:00
478899bb66 chore(deps): update all non-major dependencies
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/pr-master There was a failure building this commit
2023-08-12 03:02:51 +00:00
6588aa823e Test PR grouping
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-08-11 18:07:03 +00:00
785cddf02c Squashed '.ci/' changes from 5819ded..955afa7
955afa7 Apply pep8

git-subtree-dir: .ci
git-subtree-split: 955afa71eec3533962eae428f46d5372a22ab85f
2023-08-11 12:51:56 +00:00
146d1c7a64 Merge commit '785cddf02c35288ac82794e2f0cb20cf8f7cf653'
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-08-11 12:51:56 +00:00
cb8ae01ca4 Squashed '.ci/' changes from 5d4e4ad..5819ded
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/head There was a failure building this commit
5819ded Improve ECR public lifecycle handling via python script

git-subtree-dir: .ci
git-subtree-split: 5819ded812103aa008fd29b7bb98121e9353cec0
2023-08-11 12:40:20 +00:00
67f352cf2e Merge pull request 'Configure Renovate' (#8) from renovate/configure into master
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
Reviewed-on: #8
2023-08-10 23:41:28 +00:00
ececa1b075 pin dependencies, latest ci lib
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-08-09 11:59:19 +00:00
4903ac275e Merge commit '5a1db73f3365a155c9b3123afb531f66e828b15f' 2023-08-09 11:59:00 +00:00
5a1db73f33 Squashed '.ci/' changes from 79eebe4..5d4e4ad
5d4e4ad Make rm-remote-untagged less noisy
f00e541 Add cleanup step to remove untagged images by default
0821e91 Ensure tag names are valid for remote branches like PRs

git-subtree-dir: .ci
git-subtree-split: 5d4e4adce020398a515b5edf73cede0e0f81087b
2023-08-09 11:59:00 +00:00
477b03efd3 chore(deps): add renovate.json
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/pr-master There was a failure building this commit
2023-08-07 13:40:39 +00:00
bfb60003de Prepare Dockerfile for Renovate
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-08-04 13:55:43 +00:00
3b0fffef1a Add support for RDS snapshot events
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-06-18 09:35:46 +00:00
3831ee7774 Tune RDS messages
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-06-16 15:03:52 +00:00
9229d32110 Add support for RDS events
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-06-15 12:20:10 +00:00
bd14d352eb Some more ElastiCache events
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-05-24 10:40:43 +00:00
d32ea3855a Add support for ASG launch event
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
2023-05-22 16:17:59 +00:00
bb13496ac3 Add support for ElastiCache replacement schedule notifications
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-19 10:21:44 +00:00
0d6ef359f5 Add support for ASG Events, fix deep link to AWS UI
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/tag This commit looks good
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-16 14:23:43 +00:00
7544e6a30d Add tests for elasticache snapshot msg
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-16 13:43:21 +00:00
da11514c5f Add support for ElastiCache snapshot notifications, minor code reorg
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-16 14:01:12 +01:00
e2d2a3bb89 Update Alpine to 3.16
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-15 21:08:13 +00:00
d4ccabae2e Squashed '.ci/' changes from aea1ccc..79eebe4
79eebe4 add ARCH support for tests

git-subtree-dir: .ci
git-subtree-split: 79eebe4d3de843d921994db20e20dda801272934
2023-05-15 21:07:54 +00:00
5822e70228 Merge commit 'd4ccabae2ebb13484d6303f9916d28c0aed8d1a2' 2023-05-15 21:07:54 +00:00
174d1f9680 Squashed '.ci/' changes from 98c8ec1..aea1ccc
aea1ccc Only add branch name to tags, if not part of actual tag
a5875db Make EXTRA_TAGS work again
63421d1 fix: prevent branch_name equals tag
47140c2 feat: append branch into tags if not main
4b62ada chore: improve messaging
a49cc0c chore: improve messaging
194afb4 chore: get ci working again
8ec9769 chore: get ci working again
fef4968 fix: do NOT push PRs to registry, other fixes
50a6d67 feat: ensure ARCH is only set to defined values
8fb40c7 fix: adjust trivy call to local podman
7378ea9 fix: fix trivy scan task to match new flow, add BRANCH env to Makefile
38cf7ab fix: more podman/buildah cleanups
aece7fc fix: Improve multi-arch manifest handling
80dabc2 feat: remove implicit dependencies, add help target, cleanup
5554972 feat: multi-arch container images
da15d68 feat: handle nothing to cleanup gracefully
01df38b feat: move ecr-login into its own task
ea9c914 chore: test mermaid
19a782e chore: test mermaid
a47929d feat: switch to latest trivy cli syntax
cb5faca feat: add create-repo task to ease bootstrapping new project
49ea8c8 feat: Add support for custom EXTRA_TAGS
dc2c208 fix: use absolute image URLs for some tasks
bc72735 docs: add quickstart

git-subtree-dir: .ci
git-subtree-split: aea1cccfff35de2ef2eca138379a599c8bde39e0
2023-05-15 12:21:08 +00:00
2b96d7ddc1 Merge commit '174d1f9680e0d3da1c11e94bc1de92eae2e2203a'
All checks were successful
ZeroDownTime/sns-alert-hub/pipeline/head This commit looks good
2023-05-15 12:21:08 +00:00
8fef66c4e9 ci: move pytest.ini and use it within the test container
Some checks failed
ZeroDownTime/sns-alert-hub/pipeline/head There was a failure building this commit
2022-02-24 12:57:14 +01:00
17 changed files with 386 additions and 112 deletions


@@ -2,6 +2,22 @@
 Various toolchain bits and pieces shared between projects
+
+# Quickstart
+Create top-level Makefile
+```
+REGISTRY := <your-registry>
+IMAGE := <image_name>
+REGION := <AWS region of your registry>
+
+include .ci/podman.mk
+```
+
+Add subtree to your project:
+```
+git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
+```
+
 ## Jenkins
 Shared groovy libraries

.ci/ecr_public_lifecycle.py Executable file (+63)

@@ -0,0 +1,63 @@
#!/usr/bin/env python3
import argparse

import boto3

parser = argparse.ArgumentParser(
    description='Implement basic public ECR lifecycle policy')
parser.add_argument('--repo', dest='repositoryName', action='store', required=True,
                    help='Name of the public ECR repository')
parser.add_argument('--keep', dest='keep', action='store', default=10, type=int,
                    help='number of tagged images to keep, default 10')
parser.add_argument('--dev', dest='delete_dev', action='store_true',
                    help='also delete in-development images only having tags like v0.1.1-commitNr-githash')
args = parser.parse_args()

client = boto3.client('ecr-public', region_name='us-east-1')

images = client.describe_images(repositoryName=args.repositoryName)[
    "imageDetails"]

untagged = []
kept = 0

# actual Image
# imageManifestMediaType: 'application/vnd.oci.image.manifest.v1+json'
# image Index
# imageManifestMediaType: 'application/vnd.oci.image.index.v1+json'

# Sort by date uploaded
for image in sorted(images, key=lambda d: d['imagePushedAt'], reverse=True):
    # Remove all untagged
    # if registry uses image index all actual images will be untagged anyways
    if 'imageTags' not in image:
        untagged.append({"imageDigest": image['imageDigest']})
        # print("Delete untagged image {}".format(image["imageDigest"]))
        continue

    # check for dev tags
    if args.delete_dev:
        _delete = True
        for tag in image["imageTags"]:
            # Look for at least one tag NOT being a SemVer dev tag
            if "-" not in tag:
                _delete = False
        if _delete:
            print("Deleting development image {}".format(image["imageTags"]))
            untagged.append({"imageDigest": image['imageDigest']})
            continue

    if kept < args.keep:
        kept = kept + 1
        print("Keeping tagged image {}".format(image["imageTags"]))
        continue
    else:
        print("Deleting tagged image {}".format(image["imageTags"]))
        untagged.append({"imageDigest": image['imageDigest']})

deleted_images = client.batch_delete_image(
    repositoryName=args.repositoryName, imageIds=untagged)
if deleted_images["imageIds"]:
    print("Deleted images: {}".format(deleted_images["imageIds"]))
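The keep/delete selection above can be condensed into a pure function for illustration. This is a hypothetical helper, not part of the script; it mirrors the same rules: delete untagged images, optionally delete images whose every tag looks like a dev tag (contains `-`), and keep only the newest `keep` tagged images.

```python
def select_images_to_delete(images, keep=10, delete_dev=False):
    """Return digests to delete, mirroring ecr_public_lifecycle.py's loop."""
    to_delete = []
    kept = 0
    # newest first, as in the script
    for image in sorted(images, key=lambda d: d["imagePushedAt"], reverse=True):
        # untagged images are always removed
        if "imageTags" not in image:
            to_delete.append(image["imageDigest"])
            continue
        # dev image: every tag contains '-' (e.g. v0.1.1-commitNr-githash)
        if delete_dev and all("-" in t for t in image["imageTags"]):
            to_delete.append(image["imageDigest"])
            continue
        # keep the newest `keep` tagged images, delete the rest
        if kept < keep:
            kept += 1
        else:
            to_delete.append(image["imageDigest"])
    return to_delete
```

Feeding it the `imageDetails` structure returned by `describe_images` would yield the same set of digests the script passes to `batch_delete_image`.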


@ -1,56 +1,84 @@
@@ -1,56 +1,84 @@
 # Parse version from latest git semver tag
-GTAG=$(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
-TAG ?= $(shell echo $(GTAG) | awk -F '-' '{ print $$1 "-" $$2 }' | sed -e 's/-$$//')
-
-ifeq ($(TRIVY_REMOTE),)
-TRIVY_OPTS := image
-else
-TRIVY_OPTS := client --remote ${TRIVY_REMOTE}
+GIT_TAG ?= $(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
+GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
+TAG ::= $(GIT_TAG)
+
+# append branch name to tag if NOT main nor master
+ifeq (,$(filter main master, $(GIT_BRANCH)))
+# If branch is substring of tag, omit branch name
+ifeq ($(findstring $(GIT_BRANCH), $(GIT_TAG)),)
+# only append branch name if not equal tag
+ifneq ($(GIT_TAG), $(GIT_BRANCH))
+# Sanitize GIT_BRANCH to allowed Docker tag character set
+TAG = $(GIT_TAG)-$(shell echo $$GIT_BRANCH | sed -e 's/[^a-zA-Z0-9]/-/g')
+endif
+endif
 endif
 
-.PHONY: build test scan push clean
+ARCH ::= amd64
+ALL_ARCHS ::= amd64 arm64
+_ARCH = $(or $(filter $(ARCH),$(ALL_ARCHS)),$(error $$ARCH [$(ARCH)] must be exactly one of "$(ALL_ARCHS)"))
 
-all: test
+ifneq ($(TRIVY_REMOTE),)
+TRIVY_OPTS ::= --server $(TRIVY_REMOTE)
+endif
 
-build:
-	@docker image exists $(IMAGE):$(TAG) || \
-	docker build --rm -t $(IMAGE):$(TAG) --build-arg TAG=$(TAG) .
+.SILENT: ; # no need for @
+.ONESHELL: ; # recipes execute in same shell
+.NOTPARALLEL: ; # wait for this target to finish
+.EXPORT_ALL_VARIABLES: ; # send all vars to shell
+.PHONY: all # All targets are accessible for user
+.DEFAULT: help # Running Make will run the help target
 
-test: build rm-test-image
-	@test -f Dockerfile.test && \
-	{ docker build --rm -t $(IMAGE):$(TAG)-test --from=$(IMAGE):$(TAG) -f Dockerfile.test . && \
-	docker run --rm --env-host -t $(IMAGE):$(TAG)-test; } || \
-	echo "No Dockerfile.test found, skipping test"
+help: ## Show Help
+	grep -E '^[a-zA-Z_-]+:.*?## .*$$' .ci/podman.mk | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
 
-scan: build
-	@echo "Scanning $(IMAGE):$(TAG) using Trivy"
-	@trivy $(TRIVY_OPTS) $(IMAGE):$(TAG)
+prepare:: ## custom step on the build agent before building
 
-push: build
-	@aws ecr-public get-login-password --region $(REGION) | docker login --username AWS --password-stdin $(REGISTRY)
-	@docker tag $(IMAGE):$(TAG) $(REGISTRY)/$(IMAGE):$(TAG) $(REGISTRY)/$(IMAGE):latest
-	docker push $(REGISTRY)/$(IMAGE):$(TAG)
-	docker push $(REGISTRY)/$(IMAGE):latest
+fmt:: ## auto format source
 
-clean: rm-test-image rm-image
+lint:: ## Lint source
 
-# Delete all untagged images
-.PHONY: rm-remote-untagged
-rm-remote-untagged:
-	@echo "Removing all untagged images from $(IMAGE) in $(REGION)"
-	@aws ecr-public batch-delete-image --repository-name $(IMAGE) --region $(REGION) --image-ids $$(for image in $$(aws ecr-public describe-images --repository-name $(IMAGE) --region $(REGION) --output json | jq -r '.imageDetails[] | select(.imageTags | not ).imageDigest'); do echo -n "imageDigest=$$image "; done)
+build: ## Build the app
+	buildah build --rm --layers -t $(IMAGE):$(TAG)-$(_ARCH) --build-arg TAG=$(TAG) --build-arg ARCH=$(_ARCH) --platform linux/$(_ARCH) .
 
+test:: ## test built artificats
+
+scan: ## Scan image using trivy
+	echo "Scanning $(IMAGE):$(TAG)-$(_ARCH) using Trivy $(TRIVY_REMOTE)"
+	trivy image $(TRIVY_OPTS) --quiet --no-progress localhost/$(IMAGE):$(TAG)-$(_ARCH)
+
+# first tag and push all actual images
+# create new manifest for each tag and add all available TAG-ARCH before pushing
+push: ecr-login ## push images to registry
+	for t in $(TAG) latest $(EXTRA_TAGS); do \
+		echo "Tagging image with $(REGISTRY)/$(IMAGE):$${t}-$(ARCH)"
+		buildah tag $(IMAGE):$(TAG)-$(_ARCH) $(REGISTRY)/$(IMAGE):$${t}-$(_ARCH); \
+		buildah manifest rm $(IMAGE):$$t || true; \
+		buildah manifest create $(IMAGE):$$t; \
+		for a in $(ALL_ARCHS); do \
+			buildah manifest add $(IMAGE):$$t $(REGISTRY)/$(IMAGE):$(TAG)-$$a; \
+		done; \
+		echo "Pushing manifest $(IMAGE):$$t"
+		buildah manifest push --all $(IMAGE):$$t docker://$(REGISTRY)/$(IMAGE):$$t; \
+	done
+
+ecr-login: ## log into AWS ECR public
+	aws ecr-public get-login-password --region $(REGION) | podman login --username AWS --password-stdin $(REGISTRY)
+
+rm-remote-untagged: ## delete all remote untagged and in-dev images, keep 10 tagged
+	echo "Removing all untagged and in-dev images from $(IMAGE) in $(REGION)"
+	.ci/ecr_public_lifecycle.py --repo $(IMAGE) --dev
+
+clean:: ## clean up source folder
 
-.PHONY: rm-image
 rm-image:
-	@test -z "$$(docker image ls -q $(IMAGE):$(TAG))" || docker image rm -f $(IMAGE):$(TAG) > /dev/null
-	@test -z "$$(docker image ls -q $(IMAGE):$(TAG))" || echo "Error: Removing image failed"
+	test -z "$$(podman image ls -q $(IMAGE):$(TAG)-$(_ARCH))" || podman image rm -f $(IMAGE):$(TAG)-$(_ARCH) > /dev/null
+	test -z "$$(podman image ls -q $(IMAGE):$(TAG)-$(_ARCH))" || echo "Error: Removing image failed"
 
-# Ensure we run the tests by removing any previous runs
-.PHONY: rm-test-image
-rm-test-image:
-	@test -z "$$(docker image ls -q $(IMAGE):$(TAG)-test)" || docker image rm -f $(IMAGE):$(TAG)-test > /dev/null
-	@test -z "$$(docker image ls -q $(IMAGE):$(TAG)-test)" || echo "Error: Removing test image failed"
+## some useful tasks during development
+ci-pull-upstream: ## pull latest shared .ci subtree
+	git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash -m "Merge latest ci-tools-lib"
 
-.DEFAULT:
-	@echo "$@ not implemented. NOOP"
+create-repo: ## create new AWS ECR public repository
+	aws ecr-public create-repository --repository-name $(IMAGE) --region $(REGION)
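The tag derivation that podman.mk implements with nested `ifeq` blocks can be sketched in Python for clarity. This is a hypothetical mirror of the Make logic, assuming the same rules: no suffix on main/master, no suffix when the branch is already a substring of the tag, otherwise append the branch with non-alphanumeric characters sanitized to `-`.

```python
import re

def derive_tag(git_tag, git_branch):
    # main/master never get a branch suffix
    if git_branch in ("main", "master"):
        return git_tag
    # branch already part of (or equal to) the tag: nothing to append
    if git_branch in git_tag:
        return git_tag
    # sanitize branch to the allowed Docker tag character set
    return "{}-{}".format(git_tag, re.sub(r"[^a-zA-Z0-9]", "-", git_branch))
```

For example, a build of branch `feature/foo` at tag `v1.2.3` would be tagged `v1.2.3-feature-foo`, while the same tag built on `master` stays `v1.2.3`.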


@@ -2,24 +2,33 @@
 def call(Map config=[:]) {
   pipeline {
+    options {
+      disableConcurrentBuilds()
+    }
     agent {
       node {
         label 'podman-aws-trivy'
       }
     }
     stages {
       stage('Prepare') {
-        // get tags
         steps {
-          sh 'git fetch -q --tags ${GIT_URL} +refs/heads/${BRANCH_NAME}:refs/remotes/origin/${BRANCH_NAME}'
+          sh 'mkdir -p reports'
+
+          // we set pull tags as project adv. options
+          // pull tags
+          //withCredentials([gitUsernamePassword(credentialsId: 'gitea-jenkins-user')]) {
+          //  sh 'git fetch -q --tags ${GIT_URL}'
+          //}
+
+          // Optional project specific preparations
+          sh 'make prepare'
         }
       }
 
       // Build using rootless podman
       stage('Build') {
         steps {
-          sh 'make build'
+          sh 'make build GIT_BRANCH=$GIT_BRANCH'
         }
       }
@@ -31,13 +40,13 @@ def call(Map config=[:]) {
       // Scan via trivy
       stage('Scan') {
-        environment {
-          TRIVY_FORMAT = "template"
-          TRIVY_OUTPUT = "reports/trivy.html"
-        }
         steps {
-          sh 'mkdir -p reports'
-          sh 'make scan'
+          // we always scan and create the full json report
+          sh 'TRIVY_FORMAT=json TRIVY_OUTPUT="reports/trivy.json" make scan'
+
+          // render custom full html report
+          sh 'trivy convert -f template -t @/home/jenkins/html.tpl -o reports/trivy.html reports/trivy.json'
+
           publishHTML target: [
             allowMissing: true,
             alwaysLinkToLastBuild: true,
@@ -47,25 +56,33 @@ def call(Map config=[:]) {
             reportName: 'TrivyScan',
             reportTitles: 'TrivyScan'
           ]
+          sh 'echo "Trivy report at: $BUILD_URL/TrivyScan"'
 
-          // Scan again and fail on CRITICAL vulns, if not overridden
+          // fail build if issues found above trivy threshold
           script {
-            if (config.trivyFail == 'NONE') {
-              echo 'trivyFail == NONE, review Trivy report manually. Proceeding ...'
-            } else {
-              sh "TRIVY_EXIT_CODE=1 TRIVY_SEVERITY=${config.trivyFail} make scan"
+            if ( config.trivyFail ) {
+              sh "TRIVY_SEVERITY=${config.trivyFail} trivy convert --report summary --exit-code 1 reports/trivy.json"
             }
           }
         }
       }
 
-      // Push to ECR
+      // Push to container registry if not PR
+      // incl. basic registry retention removing any untagged images
       stage('Push') {
+        when { not { changeRequest() } }
         steps {
           sh 'make push'
+          sh 'make rm-remote-untagged'
         }
       }
+
+      // generic clean
+      stage('cleanup') {
+        steps {
+          sh 'make clean'
+        }
+      }
     }
   }
 }

.gitignore vendored (+2)

@@ -59,3 +59,5 @@ reports/
 # virtualenv
 venv/
 ENV/
+
+aws-lambda-rie


@@ -1,13 +1,16 @@
 # https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
-ARG RUNTIME_VERSION="3.9"
-ARG DISTRO_VERSION="3.15"
+# libexec is missing from >=3.17
 
 # Stage 1 - bundle base image + runtime
-FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION} AS python-alpine
+FROM python:3.12-alpine3.19 AS python-alpine
+
+ARG ALPINE="v3.19"
 
 # Install GCC (Alpine uses musl but we compile and link dependencies with GCC)
-RUN apk upgrade -U --available --no-cache && \
-    apk add --no-cache \
+RUN echo "@kubezero https://cdn.zero-downtime.net/alpine/${ALPINE}/kubezero" >> /etc/apk/repositories && \
+    wget -q -O /etc/apk/keys/stefan@zero-downtime.net-61bb6bfb.rsa.pub https://cdn.zero-downtime.net/alpine/stefan@zero-downtime.net-61bb6bfb.rsa.pub
+
+RUN apk -U --no-cache upgrade && \
+    apk --no-cache add \
     libstdc++
@@ -16,18 +19,17 @@ FROM python-alpine AS build-image
 ARG TAG="latest"
 
 # Install aws-lambda-cpp build dependencies
-RUN apk upgrade -U --available --no-cache && \
-    apk add --no-cache \
+RUN apk --no-cache add \
     build-base \
     libtool \
     autoconf \
     automake \
-    libexecinfo-dev \
     make \
     cmake \
     libcurl \
     libffi-dev \
-    openssl-dev
+    openssl-dev \
+    libexecinfo-dev@kubezero
 # cargo
 
 # Install requirements
@@ -38,7 +40,7 @@ RUN export MAKEFLAGS="-j$(nproc)" && \
 # Install our app
 COPY app.py /app
 
-# Ser version to our TAG
+# Set internal __version__ to our own container TAG
 RUN sed -i -e "s/^__version__ =.*/__version__ = \"${TAG}\"/" /app/app.py
 
 # Stage 3 - final runtime image


@@ -1,26 +0,0 @@
FROM setviacmdline:latest
# Install additional tools for tests
COPY dev-requirements.txt .flake8 .
RUN export MAKEFLAGS="-j$(nproc)" && \
pip install -r dev-requirements.txt
# Unit Tests / Static / Style etc.
COPY tests/ tests/
RUN flake8 app.py tests && \
codespell app.py tests
# Get aws-lambda run time emulator
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/local/bin/aws-lambda-rie
RUN chmod 0755 /usr/local/bin/aws-lambda-rie && \
mkdir -p tests
# Install pytest
RUN pip install pytest --target /app
# Add our tests
ADD tests /app/tests
# Run tests
ENTRYPOINT []
CMD [ "/usr/local/bin/python", "-m", "pytest", "tests", "--capture=tee-sys" ]


@@ -3,3 +3,21 @@ IMAGE := sns-alert-hub
 REGION := us-east-1
 
 include .ci/podman.mk
+
+SOURCE := app.py tests/test_aws-lambda-rie.py
+
+test:: aws-lambda-rie
+	./run_tests.sh "$(IMAGE):$(TAG)-$(_ARCH)"
+
+fmt::
+	autopep8 -i -a $(SOURCE)
+
+lint::
+	flake8 $(SOURCE)
+	codespell $(SOURCE)
+
+clean::
+	rm -rf .pytest_cache __pycache__ aws-lambda-rie
+
+aws-lambda-rie:
+	wget https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && chmod 0755 aws-lambda-rie


@@ -3,6 +3,10 @@
 ## Abstract
 AWS SNS/Lambda central alert hub taking SNS messages, parsing and formatting them before sending them to any messaging service, like Slack, Matrix, etc
 
+## Tests
+All env variables are forwarded into the test container.
+Simply set WEBHOOK_URL accordingly before running `make test`.
+
 ## Resources
 - https://gallery.ecr.aws/zero-downtime/sns-alert-hub
 - https://github.com/caronc/apprise

app.py (92 changed lines)

@@ -39,7 +39,8 @@ else:
 # Ensure slack URLs use ?blocks=yes
 if "slack.com" in WEBHOOK_URL:
-    scheme, netloc, path, query_string, fragment = urllib.parse.urlsplit(WEBHOOK_URL)
+    scheme, netloc, path, query_string, fragment = urllib.parse.urlsplit(
+        WEBHOOK_URL)
     query_params = urllib.parse.parse_qs(query_string)
     query_params["blocks"] = ["yes"]
     new_query_string = urllib.parse.urlencode(query_params, doseq=True)
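The `blocks=yes` rewrite in this hunk uses only the standard library and can be exercised standalone. A minimal sketch (`force_slack_blocks` is a hypothetical name; app.py performs the same steps inline at module level):

```python
import urllib.parse

def force_slack_blocks(url):
    # split the webhook URL, inject blocks=yes into the query, reassemble
    scheme, netloc, path, query_string, fragment = urllib.parse.urlsplit(url)
    query_params = urllib.parse.parse_qs(query_string)
    query_params["blocks"] = ["yes"]
    new_query_string = urllib.parse.urlencode(query_params, doseq=True)
    return urllib.parse.urlunsplit(
        (scheme, netloc, path, new_query_string, fragment))
```

Any existing query parameters survive the round-trip; `blocks=yes` is simply added or overwritten.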
@@ -56,7 +57,8 @@ asset.app_url = "https://zero-downtime.net"
 asset.image_url_mask = (
     "https://cdn.zero-downtime.net/assets/zdt/apprise/{TYPE}-{XY}{EXTENSION}"
 )
-asset.app_id = "{} / {} {}".format("cloudbender", __version__, "zero-downtime.net")
+asset.app_id = "{} / {} {}".format("cloudbender",
+                                   __version__, "zero-downtime.net")
 
 apobj = apprise.Apprise(asset=asset)
 apobj.add(WEBHOOK_URL)
@@ -96,11 +98,16 @@ def handler(event, context):
         msg = {}
         pass
 
+    body = ""
+    title = ""
+    msg_type = apprise.NotifyType.INFO
+
     # CloudWatch Alarm ?
     if "AlarmName" in msg:
         title = "AWS Cloudwatch Alarm"
-        # Discard NewStateValue == OK && OldStateValue == INSUFFICIENT_DATA as these are triggered by installing new Alarms and only cause confusion
+        # Discard NewStateValue == OK && OldStateValue == INSUFFICIENT_DATA as
+        # these are triggered by installing new Alarms and only cause confusion
         if msg["NewStateValue"] == "OK" and msg["OldStateValue"] == "INSUFFICIENT_DATA":
             logger.info(
                 "Discarding Cloudwatch Metrics Alarm as state is OK and previous state was insufficient data, most likely new alarm being installed"
@@ -133,13 +140,12 @@ def handler(event, context):
             pass
 
         body = body + "\n\n_{}_".format(msg_context)
-        apobj.notify(body=body, title=title, notify_type=msg_type)
 
     elif "Source" in msg and msg["Source"] == "CloudBender":
         title = "AWS EC2 - CloudBender"
 
         try:
-            msg_context = "{account} - {region} - {host} ({instance}) <https://{region}.console.aws.amazon.com/ec2/autoscaling/home?region={region}#AutoScalingGroups:id={asg};view=history|{artifact} ASG>".format(
+            msg_context = "{account} - {region} - {host} ({instance}) <https://{region}.console.aws.amazon.com/ec2/home?region={region}#AutoScalingGroupDetails:id={asg};view=activity|{artifact} ASG>".format(
                 account=get_alias(msg["AWSAccountId"]),
                 region=msg["Region"],
                 asg=msg["Asg"],
@@ -175,7 +181,6 @@ def handler(event, context):
             body = body + "\n```{}```".format(msg["Attachment"])
 
         body = body + "\n\n_{}_".format(msg_context)
-        apobj.notify(body=body, title=title, notify_type=msg_type)
 
     elif "receiver" in msg and msg["receiver"] == "alerthub-notifications":
@ -234,13 +239,74 @@ def handler(event, context):
except KeyError: except KeyError:
pass pass
# Finally send each parsed alert # ElasticCache snapshot notifications
apobj.notify(body=body, title=title, notify_type=msg_type) elif "ElastiCache:SnapshotComplete" in msg:
title = "ElastiCache Snapshot complete."
body = "Snapshot taken on {}".format(
msg["ElastiCache:SnapshotComplete"])
# ElasticCache replacement notifications
elif "ElastiCache:NodeReplacementScheduled" in msg:
title = "ElastiCache node replacement scheduled"
body = "{} will be replaced between {} and {}".format(
msg["ElastiCache:NodeReplacementScheduled"], msg["Start Time"], msg["End Time"])
# ElasticCache replacement notifications
elif "ElastiCache:CacheNodeReplaceStarted" in msg:
title = "ElastiCache fail over stareted"
body = "for node {}".format(msg["ElastiCache:CacheNodeReplaceStarted"])
# ElasticCache replacement notifications
elif "ElastiCache:FailoverComplete" in msg:
title = "ElastiCache fail over complete"
body = "for node {}".format(msg["ElastiCache:FailoverComplete"])
# ElasticCache update notifications
elif "ElastiCache:ServiceUpdateAvailableForNode" in msg:
title = "ElastiCache update available"
body = "for node {}".format(msg["ElastiCache:ServiceUpdateAvailableForNode"])
elif "ElastiCache:ServiceUpdateAvailable" in msg:
title = "ElastiCache update available"
body = "for Group {}".format(msg["ElastiCache:ServiceUpdateAvailable"])
# known RDS events
elif "Event Source" in msg and msg['Event Source'] in ["db-instance", "db-cluster-snapshot", "db-snapshot"]:
try:
title = msg["Event Message"]
try:
name = " ({}).".format(
msg["Tags"]["Name"])
except (KeyError, IndexError):
name = ""
body = "RDS {}: <{}|{}>{}\n<{}|Event docs>".format(msg["Event Source"].replace("db-", ""),
msg["Identifier Link"], msg["Source ID"], name, msg["Event ID"])
except KeyError:
msg_type = apprise.NotifyType.WARNING
body = sns["Message"]
pass
# Basic ASG events
elif "Event" in msg and msg["Event"] in ["autoscaling:EC2_INSTANCE_TERMINATE", "autoscaling:EC2_INSTANCE_LAUNCH"]:
title = msg["Description"]
body = msg["Cause"]
try:
msg_context = "{account} - {region} - <https://{region}.console.aws.amazon.com/ec2/home?region={region}#AutoScalingGroupDetails:id={asg};view=activity|{asg} ASG>".format(
region=region,
account=get_alias(msg["AccountId"]),
asg=msg["AutoScalingGroupName"],
)
body = body + "\n\n_{}_".format(msg_context)
except KeyError:
pass
else:
title = "Unknown message type"
msg_type = apprise.NotifyType.WARNING
body = sns["Message"]
if not apobj.notify(body=body, title=title, notify_type=msg_type):
logger.error("Error during notify!")
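The handler above boils down to a chain of key probes on the decoded SNS message, each deriving a (title, body) pair, with a warning fallback for unknown payloads. A minimal, self-contained sketch of the same pattern (plain dicts, no apprise; the `classify` helper is hypothetical, not part of app.py):

```python
import json

# Hypothetical, trimmed re-implementation of the elif chain above:
# probe the decoded SNS message for a known key and derive a
# (title, body) pair, falling back to the raw message otherwise.
def classify(msg):
    if "ElastiCache:FailoverComplete" in msg:
        return ("ElastiCache failover complete",
                "for node {}".format(msg["ElastiCache:FailoverComplete"]))
    if "ElastiCache:ServiceUpdateAvailable" in msg:
        return ("ElastiCache update available",
                "for Group {}".format(msg["ElastiCache:ServiceUpdateAvailable"]))
    return ("Unknown message type", json.dumps(msg))

# SNS delivers the payload as a JSON string inside the record
sns = {"Message": '{"ElastiCache:FailoverComplete": "redis-prod-0002-001"}'}
title, body = classify(json.loads(sns["Message"]))
```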


@@ -1,3 +1,4 @@
pytest
autopep8
flake8
codespell

renovate.json Normal file

@@ -0,0 +1,10 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended",
":label(renovate)",
":semanticCommits",
"group:allNonMajor"
],
"prHourlyLimit": 0
}


@@ -1,4 +1,4 @@
boto3==1.34.84
apprise==1.7.6
humanize==4.9.0
awslambdaric==2.0.11

run_tests.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/sh -ex
IMAGE=$1
ctr=$(buildah from $IMAGE)
trap "buildah rm $ctr" EXIT
buildah copy $ctr dev-requirements.txt .flake8 .
buildah copy $ctr aws-lambda-rie
buildah copy $ctr tests/ tests/
buildah run $ctr pip install -r dev-requirements.txt --target .
buildah run $ctr python -m flake8 app.py
buildah run $ctr python -m codespell_lib app.py
buildah run $ctr python -m pytest tests -c tests/pytest.ini --capture=tee-sys


@@ -9,8 +9,13 @@ from requests.packages.urllib3.util.retry import Retry
s = requests.Session()
retries = Retry(
total=3,
backoff_factor=1,
status_forcelist=[
502,
503,
504],
allowed_methods="POST")
s.mount("http://", HTTPAdapter(max_retries=retries))
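The retry policy configured in the hunk above can be exercised without a live endpoint: urllib3's `Retry.is_retry()` evaluates whether a given method/status pair would be retried. A sketch under the same settings (the session here is local to the example):

```python
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

# Same policy as the test session above: up to three retries with
# exponential backoff, only for POSTs answered with 502/503/504.
retries = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[502, 503, 504],
    allowed_methods="POST")

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retries))

# is_retry() checks the policy offline: a request is retried only
# when the method is allowed AND the status is in the forcelist.
post_retried = retries.is_retry("POST", 503)
get_retried = retries.is_retry("GET", 503)
```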
@@ -18,7 +23,7 @@ class Test:
@classmethod
def setup_class(cls):
cls.p = subprocess.Popen(
"./aws-lambda-rie python -m awslambdaric app.handler", shell=True
)
@classmethod @classmethod
@@ -60,3 +65,54 @@ class Test:
r' { "Records": [ { "EventSource": "aws:sns", "EventVersion": "1.0", "EventSubscriptionArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub:0e7ce1ba-c3e4-4264-bae1-4eb71c91235a", "Sns": { "Type": "Notification", "MessageId": "10ae86eb-9ddc-5c2f-806c-df6ecb6bde42", "TopicArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub", "Subject": null, "Message": "{\"receiver\":\"alerthub-notifications\",\"status\":\"resolved\",\"alerts\":[{\"status\":\"resolved\",\"labels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"awsAccount\":\"123456789012\",\"awsRegion\":\"us-west-2\",\"clusterName\":\"test-cluster\",\"container\":\"kube-state-metrics\",\"deployment\":\"example-job\",\"endpoint\":\"http\",\"instance\":\"10.244.202.71:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"default\",\"pod\":\"metrics-kube-state-metrics-56546f44c7-h57jx\",\"prometheus\":\"monitoring/metrics-kube-prometheus-st-prometheus\",\"service\":\"metrics-kube-state-metrics\",\"severity\":\"warning\"},\"annotations\":{\"description\":\"Deployment default/example-job has not matched the expected number of replicas for longer than 15 minutes.\",\"runbook_url\":\"https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubedeploymentreplicasmismatch\",\"summary\":\"Deployment has not matched the expected number of replicas.\"},\"startsAt\":\"2021-09-29T12:36:11.394Z\",\"endsAt\":\"2021-09-29T14:51:11.394Z\",\"generatorURL\":\"https://prometheus.dev.example.com/graph?g0.expr=%28kube_deployment_spec_replicas%7Bjob%3D%22kube-state-metrics%22%7D+%3E+kube_deployment_status_replicas_available%7Bjob%3D%22kube-state-metrics%22%7D%29+and+%28changes%28kube_deployment_status_replicas_updated%7Bjob%3D%22kube-state-metrics%22%7D%5B10m%5D%29+%3D%3D+0%29\\u0026g0.tab=1\",\"fingerprint\":\"59ad2f1a4567b43b\"},{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubeVersionMismatch\",\"awsRegion\":\"eu-central-1\",\"clusterName\":\"test\",\"prometheus\":\"monitoring/metrics-kube-prometheus-st-prometheus\",\"severity\":\"warning\"},\"annotations\":{\"description\":\"There are 2 different semantic versions of Kubernetes components running.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeversionmismatch\",\"summary\":\"Different semantic versions of Kubernetes components running.\"},\"startsAt\":\"2021-08-04T13:17:40.31Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"https://prometheus/graph?g0.expr=count%28count+by%28git_version%29+%28label_replace%28kubernetes_build_info%7Bjob%21~%22kube-dns%7Ccoredns%22%7D%2C+%22git_version%22%2C+%22%241%22%2C+%22git_version%22%2C+%22%28v%5B0-9%5D%2A.%5B0-9%5D%2A%29.%2A%22%29%29%29+%3E+1\\u0026g0.tab=1\",\"fingerprint\":\"5f94d4a22730c666\"}],\"groupLabels\":{\"job\":\"kube-state-metrics\"},\"commonLabels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"awsAccount\":\"123456789012\",\"awsRegion\":\"us-west-2\",\"clusterName\":\"test-cluster\",\"container\":\"kube-state-metrics\",\"deployment\":\"example-job\",\"endpoint\":\"http\",\"instance\":\"10.244.202.71:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"default\",\"pod\":\"metrics-kube-state-metrics-56546f44c7-h57jx\",\"prometheus\":\"monitoring/metrics-kube-prometheus-st-prometheus\",\"service\":\"metrics-kube-state-metrics\",\"severity\":\"warning\"},\"commonAnnotations\":{\"description\":\"Deployment default/example-job has not matched the expected number of replicas for longer than 15 minutes.\",\"runbook_url\":\"https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubedeploymentreplicasmismatch\",\"summary\":\"Deployment has not matched the expected number of replicas.\"},\"externalURL\":\"https://alertmanager.dev.example.com\",\"version\":\"4\",\"groupKey\":\"{}:{job=\\\"kube-state-metrics\\\"}\",\"truncatedAlerts\":0}\n", "Timestamp": "2021-08-05T03:01:11.233Z", "SignatureVersion": "1", "Signature": "pSUYO7LDIfzCbBrp/S2HXV3/yzls3vfYy+2di6HsKG8Mf+CV97RLnen15ieAo3eKA8IfviZIzyREasbF0cwfUeruHPbW1B8kO572fDyV206zmUxvR63r6oM6OyLv9XKBmvyYHKawkOgHZHEMP3v1wMIIHK2W5KbJtXoUcks5DVamooVb9iFF58uqTf+Ccy31bOL4tFyMR9nr8NU55vEIlGEVno8A9Q21TujdZTg0V0BmRgPafcic96udWungjmfhZ005378N32u2hlLj6BRneTpHHSXHBw4wKZreKpX+INZwiZ4P8hzVfgRvAIh/4gXN9+0UJSHgnsaqUcLDNoLZTQ==", "SigningCertUrl": "https://sns.eu-central-1.amazonaws.com/SimpleNotificationService-010a507c1833636cd94bdb98bd93083a.pem", "UnsubscribeUrl": "https://sns.eu-central-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-central-1:123456789012:AlertHub:0e7ce1ba-c3e4-4264-bae1-4eb71c91235a", "MessageAttributes": {} } } ] }'
)
self.send_event(event)
# ElastiCache snapshot
def test_elasticache_snapshot(self):
event = json.loads(
r' {"Records": [{"EventSource": "aws:sns", "EventVersion": "1.0", "EventSubscriptionArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub:0e7ce1ba-c3e4-4264-bae1-4eb71c91235a", "Sns": {"Type": "Notification", "MessageId": "10ae86eb-9ddc-5c2f-806c-df6ecb6bde42", "TopicArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub", "Subject": null, "Message": "{\"ElastiCache:SnapshotComplete\":\"redis-prod-0002-001\"}" }}]}'
)
self.send_event(event)
# RDS
def test_rds_event(self):
event = json.loads(
r''' {
"Records": [
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
"EventSubscriptionArn": "arn:aws:sns:us-west-2:123456789012:AlertHub:63470449-620d-44ce-971f-ad9582804b13",
"Sns": {
"Type": "Notification",
"MessageId": "ef1f821c-a04f-5c5c-9dff-df498532069b",
"TopicArn": "arn:aws:sns:us-west-2:123456789012:AlertHub",
"Subject": "RDS Notification Message",
"Message": "{\"Event Source\":\"db-cluster-snapshot\",\"Event Time\":\"2023-08-15 07:03:24.491\",\"Identifier Link\":\"https://console.aws.amazon.com/rds/home?region=us-west-2#snapshot:engine=aurora;id=rds:projectdb-cluster-2023-08-15-07-03\",\"Source ID\":\"rds:projectdb-cluster-2023-08-15-07-03\",\"Source ARN\":\"arn:aws:rds:us-west-2:123456789012:cluster-snapshot:rds:projectdb-cluster-2023-08-15-07-03\",\"Event ID\":\"http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html#RDS-EVENT-0168\",\"Event Message\":\"Creating automated cluster snapshot\",\"Tags\":{}}",
"Timestamp": "2023-08-15T07:03:25.289Z",
"SignatureVersion": "1",
"Signature": "mRtx+ddS1uzF3alGDWnDtUkAz+Gno8iuv0wPwkeBJPe1LAcKTXVteYhQdP2BB5ZunPlWXPSDsNtFl8Eh6v4/fcdukxH/czc6itqgGiciQ3DCICLvOJrvrVVgsVvHgOA/Euh8wryzxeQ3HJ/nmF9sg/PtuKyxvGxyO7NSFJrRKkqwkuG1Wr/8gcN3nrenqNTzKiC16kzVuKISWgXM1jqbsleQ4MyBcjq61LRwODKB8tc8vJ6PLGOs4Lrc3qeruCqF3Tzpl43680RsaRBBn1SLycwFVdB1kpHSXuk+YJQ6BS7s6rbMoyhPOpSCFHMZXC/eEb09wTzgpop0KDE/koiUsg==",
"SigningCertUrl": "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-01d088a6f77103d0fe307c0069e40ed6.pem",
"UnsubscribeUrl": "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:123456789012:AlertHub:63470449-620d-44ce-971f-ad9582804b13",
"MessageAttributes": {
"Resource": {
"Type": "String",
"Value": "arn:aws:rds:us-west-2:123456789012:cluster-snapshot:rds:projectdb-cluster-2023-08-15-07-03"
},
"EventID": {
"Type": "String",
"Value": "RDS-EVENT-0168"
}
}
}
}
]
}
'''
)
self.send_event(event)
def test_asg(self):
event = json.loads(
r' {"Records": [{"EventSource": "aws:sns", "EventVersion": "1.0", "EventSubscriptionArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub:0e7ce1ba-c3e4-4264-bae1-4eb71c91235a", "Sns": {"Type": "Notification", "MessageId": "10ae86eb-9ddc-5c2f-806c-df6ecb6bde42", "TopicArn": "arn:aws:sns:eu-central-1:123456789012:AlertHub", "Subject": null, "Message": "{\"Origin\":\"AutoScalingGroup\",\"Destination\":\"EC2\",\"Progress\":50,\"AccountId\":\"123456789012\",\"Description\":\"Terminating EC2 instance: i-023ca42b188ffd91d\",\"RequestId\":\"1764cac3-224b-46bf-8bed-407a5b868e63\",\"EndTime\":\"2023-05-15T08:51:16.195Z\",\"AutoScalingGroupARN\":\"arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:4a4fb6e3-22b4-487b-8335-3904f02ff9fd:autoScalingGroupName/powerbi\",\"ActivityId\":\"1764cac3-224b-46bf-8bed-407a5b868e63\",\"StartTime\":\"2023-05-15T08:50:14.145Z\",\"Service\":\"AWS Auto Scaling\",\"Time\":\"2023-05-15T08:51:16.195Z\",\"EC2InstanceId\":\"i-023ca42b188ffd91d\",\"StatusCode\":\"InProgress\",\"StatusMessage\":\"\",\"Details\":{\"Subnet ID\":\"subnet-fe2d6189\",\"Availability Zone\":\"us-west-2a\"},\"AutoScalingGroupName\":\"powerbi\",\"Cause\":\"At 2023-05-15T08:50:03Z the scheduled action end executed. Setting min size from 1 to 0. Setting desired capacity from 1 to 0. At 2023-05-15T08:50:03Z a scheduled action update of AutoScalingGroup constraints to min: 0, max: 1, desired: 0 changing the desired capacity from 1 to 0. At 2023-05-15T08:50:13Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 1 to 0. At 2023-05-15T08:50:14Z instance i-023ca42b188ffd91d was selected for termination.\",\"Event\":\"autoscaling:EC2_INSTANCE_TERMINATE\"}" }}]}'
)
self.send_event(event)
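All of the test cases above hand-roll the same SNS record envelope around an inner JSON message. A small sketch showing that shared structure (`sns_event` is a hypothetical helper, not part of the test suite, and only the fields the handler reads are included):

```python
import json

# Hypothetical helper: wrap an inner message dict in the minimal
# SNS record envelope that the tests above construct by hand.
def sns_event(message):
    return {"Records": [{
        "EventSource": "aws:sns",
        "EventVersion": "1.0",
        "Sns": {
            "Type": "Notification",
            # SNS delivers the payload as a JSON *string*, not a dict,
            # which is why app.py has to json-decode sns["Message"].
            "Message": json.dumps(message),
        },
    }]}

event = sns_event({"ElastiCache:SnapshotComplete": "redis-prod-0002-001"})
inner = json.loads(event["Records"][0]["Sns"]["Message"])
```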