Build Profiles and 3.9.4

* Build Profiles (completion of PR #49)
+ auto-updates version profile when new release detected
+ updates releases/<profile>.yaml after successful builds
* Prune AMIs (in AWS and in releases/<profile>.yaml)
+ 'revision' - keep latest revision per release
+ 'release' - keep latest release per version
+ 'version' - remove end-of-life versions
* releases/README.md updater script
* README overhaul
+ Pre-built AMIs --> releases/README.md
+ profiles/README.md for profile configuration details
+ main README.md overhauled to go over how to build and manage custom AMIs
Jake Buchholz 2019-05-27 22:27:55 -07:00 committed by Mike Crute
parent 24144391d6
commit 396bb8ab86
33 changed files with 1595 additions and 735 deletions

.gitignore

@@ -1,8 +1,15 @@
**/*~
**/*.bak
**/*.swp
/build/
/.py3/
/variables.yaml
/variables.yaml_*
/scrub-old-amis.py
/gen-readme.py
/profiles/*
!/profiles/README.md
!/profiles/base/
!/profiles/arch/
!/profiles/version/
!/profiles/alpine.conf
!/profiles/example.conf
!/profiles/test.conf
/releases/*
!/releases/README.md
!/releases/alpine.yaml

LICENSE

@@ -1,4 +1,4 @@
Copyright (c) 2017 Michael Crute
Copyright (c) 2017-2019 Michael Crute, Jake Buchholz
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in

Makefile

@@ -1,44 +1,43 @@
.PHONY: ami
# vim: ts=8 noet:
ami: convert
packer build -var-file=build/variables.json build/alpine-ami.json
ALL_SCRIPTS := $(wildcard scripts/*)
CORE_PROFILES := $(wildcard profiles/*/*)
TARGET_PROFILES := $(wildcard profiles/*.conf)
PROFILE :=
BUILD :=
BUILDS := $(BUILD)
LEVEL :=
edge: convert
@echo '{ "version": "edge", "release": "edge", "revision": "'-`date +%Y%m%d%H%M%S`'" }' > build/edge.json
packer build -var-file=build/variables.json -var-file=build/edge.json build/alpine-ami.json
# by default, use the 'packer' in the path
PACKER := packer
export PACKER
convert: build/convert
[ -f variables.yaml ] || cp variables.yaml-default variables.yaml
build/convert variables.yaml > build/variables.json
build/convert alpine-ami.yaml > build/alpine-ami.json
.PHONY: amis prune release-readme clean
build/convert:
[ -d ".py3" ] || python3 -m venv .py3
.py3/bin/pip install pyyaml boto3
amis: build build/packer.json build/profile/$(PROFILE) build/update-release.py
build/make-amis $(PROFILE) $(BUILDS)
[ -d "build" ] || mkdir build
prune: build build/prune-amis.py
build/prune-amis.py $(LEVEL) $(PROFILE) $(BUILD)
# Make stupid simple little YAML/JSON converter so we can maintain our
# packer configs in a sane format that allows comments but also use packer
# which only supports JSON
@echo "#!`pwd`/.py3/bin/python" > build/convert
@echo "import yaml, json, sys" >> build/convert
@echo "y = yaml.full_load(open(sys.argv[1]))" >> build/convert
@echo "for k in ['ami_access','deploy_regions','add_repos','add_pkgs','add_svcs']:" >> build/convert
@echo " if k in y and isinstance(y[k], list):" >> build/convert
@echo " y[k] = ','.join(str(x) for x in y[k])" >> build/convert
@echo " if k in y and isinstance(y[k], dict):" >> build/convert
@echo " y[k] = ':'.join(str(l) + '=' + ','.join(str(s) for s in ss) for l, ss in y[k].items())" >> build/convert
@echo "json.dump(y, sys.stdout, indent=4, separators=(',', ': '))" >> build/convert
@chmod +x build/convert
release-readme: build build/gen-release-readme.py
build/gen-release-readme.py $(PROFILE)
%.py: %.py.in
sed "s|@PYTHON@|#!`pwd`/.py3/bin/python|" $< > $@
build: $(SCRIPTS)
[ -d build/profile ] || mkdir -p build/profile
python3 -m venv build/.py3
build/.py3/bin/pip install pyhocon pyyaml boto3
(cd build; for i in $(ALL_SCRIPTS); do ln -sf ../$$i .; done)
build/packer.json: build packer.conf
build/.py3/bin/pyhocon -i packer.conf -f json > build/packer.json
build/profile/$(PROFILE): build build/resolve-profile.py $(CORE_PROFILES) $(TARGET_PROFILES)
build/resolve-profile.py $(PROFILE)
%.py: %.py.in build
sed "s|@PYTHON@|#!`pwd`/build/.py3/bin/python|" $< > $@
chmod +x $@
.PHONY: clean
clean:
rm -rf build .py3 scrub-old-amis.py gen-readme.py
distclean: clean
rm -f variables.yaml
rm -rf build

README.md

@@ -1,72 +1,111 @@
# Alpine Linux EC2 AMI Build
# Alpine Linux EC2 AMI Builder
**NOTE: This is not an official Amazon or AWS provided image. This is
community built and supported.**
**NOTE: This is not an official AWS or Alpine project. This is community built
and supported.**
This repository contains a packer file and a script to create an EC2 AMI
containing Alpine Linux. The AMI is designed to work with most EC2 features
such as Elastic Network Adapters and NVME EBS volumes by default. If anything
is missing please report a bug.
## Pre-Built AMIs
This image can be launched on any modern x86_64 instance type, including T3,
M5, C5, I3, R5, P3, X1, X1e, D2, Z1d. Other instances may also work but have
not been tested. If you find an issue with instance support for any current
generation instance please file a bug against this project.
***To get started with one of our pre-built minimalist AMIs, please refer to the
[README](releases/README.md) in the [releases](releases) subdirectory.***
To get started use one of the AMIs below. The default user is `alpine` and
will be configured to use whatever SSH keys you chose when you launched the
image. If user data is specified it must be a shell script that begins with
`#!`. If a script is provided it will be executed as root after the network is
configured.
## Custom AMIs
**NOTE:** *We are working to automate AMI builds and updates to this file and
[release.yaml](https://github.com/mcrute/alpine-ec2-ami/blob/master/release.yaml)
in the not-too-distant future.*
Using the scripts and configuration in this project, you can build your own
custom Alpine Linux AMIs. If you experience any problems building custom AMIs,
please open an [issue](https://github.com/mcrute/alpine-ec2-ami/issues) and
include as much detailed information as possible.
| Alpine Release | Region Code | AMI ID |
| :------------: | ----------- | ------ |
| 3.9.3 | ap-northeast-1 | [ami-001e74131496d0212](https://ap-northeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-001e74131496d0212) |
| 3.9.3 | ap-northeast-2 | [ami-09a26b03424d75667](https://ap-northeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-09a26b03424d75667) |
| 3.9.3 | ap-south-1 | [ami-03534f64f8b87aafc](https://ap-south-1.console.aws.amazon.com/ec2/home#launchAmi=ami-03534f64f8b87aafc) |
| 3.9.3 | ap-southeast-1 | [ami-0d5f2950efcd55b0e](https://ap-southeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0d5f2950efcd55b0e) |
| 3.9.3 | ap-southeast-2 | [ami-0660edcba4ba7c8a0](https://ap-southeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0660edcba4ba7c8a0) |
| 3.9.3 | ca-central-1 | [ami-0bf4ea1f0f86283bb](https://ca-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0bf4ea1f0f86283bb) |
| 3.9.3 | eu-central-1 | [ami-060d9bbde8d5047e8](https://eu-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-060d9bbde8d5047e8) |
| 3.9.3 | eu-north-1 | [ami-0a5284750fcf11d18](https://eu-north-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0a5284750fcf11d18) |
| 3.9.3 | eu-west-1 | [ami-0af60b964eb2f09d3](https://eu-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0af60b964eb2f09d3) |
| 3.9.3 | eu-west-2 | [ami-097405edd3790cf8b](https://eu-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-097405edd3790cf8b) |
| 3.9.3 | eu-west-3 | [ami-0078916a37514bb9a](https://eu-west-3.console.aws.amazon.com/ec2/home#launchAmi=ami-0078916a37514bb9a) |
| 3.9.3 | sa-east-1 | [ami-09e0025e60328ea6d](https://sa-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-09e0025e60328ea6d) |
| 3.9.3 | us-east-1 | [ami-05c8c48601c2303af](https://us-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-05c8c48601c2303af) |
| 3.9.3 | us-east-2 | [ami-064d64386a89de1e6](https://us-east-2.console.aws.amazon.com/ec2/home#launchAmi=ami-064d64386a89de1e6) |
| 3.9.3 | us-west-1 | [ami-04a4711d62db12ba0](https://us-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-04a4711d62db12ba0) |
| 3.9.3 | us-west-2 | [ami-0ff56870cf29d4f02](https://us-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0ff56870cf29d4f02) |
### Build Requirements
* [Packer](https://packer.io) >= 1.4.1
* [Python 3.x](https://python.org) (3.7 is known to work)
* `make` (GNU Make is known to work)
* an AWS account with an existing subnet in an AWS Virtual Private Cloud
### Profile Configuration
Target profile config files reside in the [profiles](profiles) subdirectory,
where you will also find the [config](profiles/alpine.conf) we use for our
pre-built AMIs. Refer to the [README](profiles/README.md) in that subdirectory
for more details and information about how AMI profile configs work.
### AWS Credentials
These scripts use the `boto3` library to interact with AWS, enabling you to
provide your AWS account credentials in a number of different ways. See the
official `boto3` documentation's section on
[configuring credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#configuring-credentials)
for more details. *Please note that these scripts do not implement the first
two methods on the list.*
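
For example, here is a minimal sketch (assuming credentials are already
configured via environment variables or the shared credentials file) that
verifies `boto3` can find them before you kick off a build:
```
import boto3

# boto3 searches AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, then the shared
# credentials file (~/.aws/credentials), among other locations -- see the
# boto3 docs linked above for the full lookup order.
session = boto3.session.Session()
identity = session.client("sts").get_caller_identity()
print("building as", identity["Arn"])
```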
### Building AMIs
To build all build targets in a target profile, simply...
```
make PROFILE=<profile>
```
You can also build specific build targets within a profile:
```
make PROFILE=<profile> BUILDS="<build1> <build2>"
```
If the `packer` binary is not in your `PATH`, or you would like to specify a
different one, use...
```
make PACKER=<packer-path> PROFILE=<profile>
```
Before each build, new Alpine Linux *releases* are detected and the version's
core profile is updated.
If an AMI with the same name as the profile build's already exists, that build
will be skipped and the process moves on to the profile's other build targets
(if any).
After each successful build, `releases/<profile>.yaml` is updated with the
build's details, including (most importantly) the ids of the AMI artifacts that
were built.
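
For example, a hypothetical PyYAML snippet that walks `releases/<profile>.yaml`
and prints every AMI ID it records:
```
import sys
import yaml

# structure: build -> release -> AMI name -> details, incl. per-region artifacts
with open("releases/{}.yaml".format(sys.argv[1])) as f:
    releases = yaml.safe_load(f)

for build, rels in releases.items():
    for release, amis in rels.items():
        for name, info in amis.items():
            for region, ami in info["artifacts"].items():
                print(name, release, region, ami, sep="\t")
```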
Additional information about using your custom AMIs can be found in the
[README](releases/README.md) in the [releases](releases) subdirectory.
### Pruning AMIs
Every now and then, you may want to clean up old AMIs from your EC2 account and
your profile's `releases/<profile>.yaml`. There are three different levels of
pruning:
* `revision` - keep only the latest revision for each release
* `release` - keep only the latest release for each version
* `version` - remove any end-of-life versions
To prune a profile (or, optionally, a single build target of a profile), use...
```
make prune LEVEL=<level> PROFILE=<profile> [BUILD=<build>]
```
Any AMIs in the account which are "unknown" (to the profile/build target, at
least) will be called out as such, but will not be pruned.
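
For instance, the `revision` level boils down to logic along these lines (a
simplified sketch, not the actual `prune-amis.py` implementation):
```
# keep only the newest AMI (by build_time) for each release of a build
def latest_revisions(releases):
    kept = {}
    for release, amis in releases.items():
        newest = max(amis, key=lambda name: amis[name]["build_time"])
        kept[release] = {newest: amis[newest]}
    return kept
```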
### Updating the Release README
This `make` target updates the [releases README](releases/README.md), primarily
to refresh the list of our pre-built AMIs. It may or may not be useful for
other target profiles.
```
make release-readme PROFILE=<profile>
```
### Cleaning up the Build Environment
`make clean` will remove the temporary `build` subdirectory, which contains the
resolved profile and Packer configs, the Python virtual environment, and other
temporary build-related artifacts.
## Caveats
This image is used in production, but it is still somewhat early in its
development, so there are some sharp edges.
* New Alpine Linux *versions* are currently not auto-detected and added as a
core version profile; this process is, at the moment, still a manual task.
- As of 3.9.0-1, this AMI starts `haveged` at the boot runlevel, to provide
additional initial entropy as discussed in issue #39. In the long term, we
hope to find an alternative solution.
- Only EBS-backed HVM instances are supported. While paravirtualized instances
are still available from AWS they are not supported on any of the newer
hardware so it seems unlikely that they will be supported going forward.
Thus this project does not support them.
- [cloud-init](https://cloudinit.readthedocs.io/en/latest/) is not currently
supported on Alpine Linux. Instead this image uses
[tiny-ec2-bootstrap](https://github.com/mcrute/tiny-ec2-bootstrap). Hostname
setting will work, as will setting the ssh keys for the Alpine user based on
what was configured during instance launch. User data is supported as long
as it's a shell script (starts with #!). See the tiny-ec2-bootstrap README
for more details. You can still install cloud-init (from the edge testing
repositories), but we haven't tested whether it will work correctly for this
AMI. If full cloud-init support is important to you please file a bug
against this project.
- CloudFormation support is still forthcoming. This requires patches and
packaging for the upstream cfn tools that have not yet been accepted.
Eventually full CloudFormation support will be available.
* Although it's possible to build "aarch64" (arm64) AMIs, they don't quite work
yet.

alpine-ami.yaml

@@ -1,70 +0,0 @@
variables:
# NOTE: Configuration is done with a 'variables.yaml' file. If it doesn't
# exist, default configuration is copied from 'variables.yaml-default'.
# NOTE: Changing arch/version/release may require modifying 'make_ami.sh'.
arch: x86_64
version: "3.9"
release: "3.9.3"
revision: ""
builders:
- type: "amazon-ebssurrogate"
### Builder Instance Details
region: "{{user `region`}}"
subnet_id: "{{user `subnet`}}"
security_group_id: "{{user `security_group`}}"
instance_type: "t3.nano"
associate_public_ip_address: "{{user `public_ip`}}"
launch_block_device_mappings:
- volume_type: "gp2"
device_name: "/dev/xvdf"
delete_on_termination: "true"
volume_size: "{{user `volume_size`}}"
ssh_username: "ec2-user"
source_ami_filter:
# use the latest Amazon Linux AMI
filters:
virtualization-type: "hvm"
root-device-type: "ebs"
architecture: "x86_64"
name: "amzn2-ami-hvm-2.0.*-gp2"
owners:
- "137112412989"
most_recent: "true"
### AMI Build Details
ami_name: "{{user `ami_name_prefix`}}{{user `release`}}{{user `revision`}}-{{user `arch`}}{{user `ami_name_suffix`}}"
ami_description: "{{user `ami_desc_prefix`}}{{user `release`}}{{user `revision`}} {{user `arch`}}{{user `ami_desc_suffix`}}"
ami_virtualization_type: "hvm"
ami_root_device:
source_device_name: "/dev/xvdf"
device_name: "/dev/xvda"
delete_on_termination: "true"
volume_size: "{{user `volume_size`}}"
volume_type: "gp2"
encrypt_boot: "{{user `encrypt_ami`}}"
ena_support: "true"
sriov_support: "true"
ami_groups: "{{user `ami_access`}}"
ami_regions: "{{user `deploy_regions`}}"
provisioners:
- type: "file"
source: "nvme/"
destination: "/tmp"
- type: "shell"
script: "make_ami.sh"
environment_vars:
- "VERSION={{user `version`}}"
- "RELEASE={{user `release`}}"
- "REVISION={{user `revision`}}"
- "ADD_REPOS='{{user `add_repos`}}'"
- "ADD_PKGS='{{user `add_pkgs`}}'"
- "ADD_SVCS='{{user `add_svcs`}}'"
execute_command: 'sudo sh -c "{{ .Vars }} {{ .Path }}"'

gen-readme.py.in

@@ -1,17 +0,0 @@
@PYTHON@
import yaml
URI_TEMPLATE = "https://{region}.console.aws.amazon.com/ec2/home#launchAmi={ami}"
ROW_TEMPLATE = "| {release} | {region} | [{ami}]({uri}) |"
with open("release.yaml") as fp:
releases = yaml.full_load(fp)
for metadata in releases.values():
release = str(metadata["alpine-release"])
for region, ami in metadata["region-identifiers"].items():
uri = URI_TEMPLATE.format(**locals())
print(ROW_TEMPLATE.format(**locals()))

make_ami.sh

@@ -1,368 +0,0 @@
#!/bin/sh
# vim: set ts=4 et:
set -eu
MIN_VERSION="3.9"
MIN_RELEASE="3.9.0"
: ${VERSION:="${MIN_VERSION}"} # unless otherwise specified
: ${RELEASE:="${MIN_RELEASE}"} # unless otherwise specified
: ${APK_TOOLS_URI:="https://github.com/alpinelinux/apk-tools/releases/download/v2.10.3/apk-tools-2.10.3-x86_64-linux.tar.gz"}
: ${APK_TOOLS_SHA256:="4d0b2cda606720624589e6171c374ec6d138867e03576d9f518dddde85c33839"}
: ${ALPINE_KEYS:="http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/alpine-keys-2.1-r1.apk"}
: ${ALPINE_KEYS_SHA256:="9c7bc5d2e24c36982da7aa49b3cfcb8d13b20f7a03720f25625fa821225f5fbc"}
die() {
printf '\033[1;31mERROR:\033[0m %s\n' "$@" >&2 # bold red
exit 1
}
einfo() {
printf '\n\033[1;36m> %s\033[0m\n' "$@" >&2 # bold cyan
}
rc_add() {
local target="$1"; shift # target directory
local runlevel="$1"; shift # runlevel name
local services="$*" # names of services
local svc; for svc in $services; do
mkdir -p "$target"/etc/runlevels/$runlevel
ln -s /etc/init.d/$svc "$target"/etc/runlevels/$runlevel/$svc
echo " * service $svc added to runlevel $runlevel"
done
}
wgets() (
local url="$1" # url to fetch
local sha256="$2" # expected SHA256 sum of output
local dest="$3" # output path and filename
wget -T 10 -q -O "$dest" "$url"
echo "$sha256 $dest" | sha256sum -c > /dev/null
)
validate_block_device() {
local dev="$1" # target directory
lsblk -P --fs "$dev" >/dev/null 2>&1 || \
die "'$dev' is not a valid block device"
if lsblk -P --fs "$dev" | grep -vq 'FSTYPE=""'; then
die "Block device '$dev' is not blank"
fi
}
fetch_apk_tools() {
local store="$(mktemp -d)"
local tarball="$(basename $APK_TOOLS_URI)"
wgets "$APK_TOOLS_URI" "$APK_TOOLS_SHA256" "$store/$tarball"
tar -C "$store" -xf "$store/$tarball"
find "$store" -name apk
}
make_filesystem() {
local device="$1" # target device path
local target="$2" # mount target
mkfs.ext4 -O ^64bit "$device"
e2label "$device" /
mount "$device" "$target"
}
setup_repositories() {
local target="$1" # target directory
local add_repos="$2" # extra repo lines, comma separated
mkdir -p "$target"/etc/apk/keys
if [ "$VERSION" = 'edge' ]; then
cat > "$target"/etc/apk/repositories <<EOF
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
EOF
else
cat > "$target"/etc/apk/repositories <<EOF
http://dl-cdn.alpinelinux.org/alpine/v$VERSION/main
http://dl-cdn.alpinelinux.org/alpine/v$VERSION/community
EOF
fi
echo "$add_repos" | tr , "\012" >> "$target"/etc/apk/repositories
}
fetch_keys() {
local target="$1"
local tmp="$(mktemp -d)"
wgets "$ALPINE_KEYS" "$ALPINE_KEYS_SHA256" "$tmp/alpine-keys.apk"
tar -C "$target" -xvf "$tmp"/alpine-keys.apk etc/apk/keys
rm -rf "$tmp"
}
install_base() {
local target="$1"
$apk add --root "$target" --no-cache --initdb alpine-base
# verify release matches
if [ "$VERSION" != "edge" ]; then
ALPINE_RELEASE=$(cat "$target/etc/alpine-release")
[ "$RELEASE" = "$ALPINE_RELEASE" ] || \
die "Current Alpine $VERSION release ($ALPINE_RELEASE) does not match build ($RELEASE)"
fi
}
setup_chroot() {
local target="$1"
mount -t proc none "$target"/proc
mount --bind /dev "$target"/dev
mount --bind /sys "$target"/sys
# Don't want to ship this but it's needed for bootstrap. Will be removed in
# the cleanup stage.
install -Dm644 /etc/resolv.conf "$target"/etc/resolv.conf
}
install_core_packages() {
local target="$1" # target directory
local add_pkgs="$2" # extra packages, space separated
# Most from: https://git.alpinelinux.org/cgit/alpine-iso/tree/alpine-virt.packages
#
# sudo - to allow alpine user to become root, disallow root SSH logins
# tiny-ec2-bootstrap - to bootstrap system from EC2 metadata
#
chroot "$target" apk --no-cache add \
linux-virt \
alpine-mirrors \
chrony \
haveged \
nvme-cli \
openssh \
sudo \
tiny-ec2-bootstrap \
tzdata \
$(echo "$add_pkgs" | tr , ' ')
chroot "$target" apk --no-cache add --no-scripts syslinux
# Disable starting getty for physical ttys because they're all inaccessible
# anyhow. With this configuration boot messages will still display in the
# EC2 console.
sed -Ei '/^tty[0-9]/s/^/#/' \
"$target"/etc/inittab
# Make it a little more obvious who is logged in by adding username to the
# prompt
sed -i "s/^export PS1='/&\\\\u@/" "$target"/etc/profile
}
setup_mdev() {
local target="$1"
cp /tmp/nvme-ebs-links "$target"/lib/mdev
sed -n -i -e '/# fallback/r /tmp/nvme-ebs-mdev.conf' -e 1x -e '2,${x;p}' -e '${x;p}' "$target"/etc/mdev.conf
}
create_initfs() {
local target="$1"
# Create ENA feature for mkinitfs
echo "kernel/drivers/net/ethernet/amazon" > \
"$target"/etc/mkinitfs/features.d/ena.modules
# Enable ENA and NVME features; these don't hurt for any instance and are
# hard requirements of the 5 series and i3 series of instances
sed -Ei 's/^features="([^"]+)"/features="\1 nvme ena"/' \
"$target"/etc/mkinitfs/mkinitfs.conf
chroot "$target" /sbin/mkinitfs $(basename $(find "$target"/lib/modules/* -maxdepth 0))
}
setup_extlinux() {
local target="$1"
# Must use disk labels instead of UUID or device paths so that this works
# across instance families. UUID works for many instances but breaks on the
# NVME ones because EBS volumes are hidden behind NVME devices.
#
# Enable ext4 because the root device is formatted ext4
#
# Shorten timeout because EC2 has no way to interact with instance console
#
# ttyS0 is the target for EC2's "Get System Log" feature whereas tty0 is the
# target for EC2's "Get Instance Screenshot" feature. Enabling the serial
# port early in extlinux gives the most complete output in the system log.
sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
-e "s|^[# ]*(default_kernel_opts)=.*|\1=\"console=ttyS0 console=tty0\"|" \
-e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
-e "s|^[# ]*(modules)=.*|\1=sd-mod,usb-storage,ext4|" \
-e "s|^[# ]*(default)=.*|\1=virt|" \
-e "s|^[# ]*(timeout)=.*|\1=1|" \
"$target"/etc/update-extlinux.conf
}
install_extlinux() {
local target="$1"
chroot "$target" /sbin/extlinux --install /boot
chroot "$target" /sbin/update-extlinux --warn-only
}
setup_fstab() {
local target="$1"
cat > "$target"/etc/fstab <<EOF
# <fs> <mountpoint> <type> <opts> <dump/pass>
LABEL=/ / ext4 defaults,noatime 1 1
EOF
}
setup_networking() {
local target="$1"
cat > "$target"/etc/network/interfaces <<EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF
}
enable_services() {
local target="$1"
local add_svcs="$2"
rc_add "$target" default chronyd networking sshd tiny-ec2-bootstrap
rc_add "$target" sysinit devfs dmesg hwdrivers mdev
rc_add "$target" boot acpid bootmisc haveged hostname hwclock modules swap sysctl syslog
rc_add "$target" shutdown killprocs mount-ro savecache
if [ -n "$add_svcs" ]; then
local lvl_svcs; for lvl_svcs in $(echo "$add_svcs" | tr : ' '); do
rc_add "$target" $(echo "$lvl_svcs" | tr =, ' ')
done
fi
}
create_alpine_user() {
local target="$1"
# Allow members of the wheel group to sudo without a password. By default
# this will only be the alpine user. This allows us to ship an AMI that is
# accessible via SSH using the user's configured SSH keys (thanks to
# tiny-ec2-bootstrap) but does not allow remote root access, which is
# best practice.
sed -i '/%wheel .* NOPASSWD: .*/s/^# //' "$target"/etc/sudoers
# There is no real standard EC2 username across AMIs; Amazon uses ec2-user
# for their Amazon Linux AMIs but Ubuntu uses ubuntu, Fedora uses fedora,
# etc... (see: https://alestic.com/2014/01/ec2-ssh-username/). So our user
# and group are alpine because this is Alpine Linux. On instance bootstrap
# the user can create whatever users they want and delete this one.
chroot "$target" /usr/sbin/addgroup alpine
chroot "$target" /usr/sbin/adduser -h /home/alpine -s /bin/sh -G alpine -D alpine
chroot "$target" /usr/sbin/addgroup alpine wheel
chroot "$target" /usr/bin/passwd -u alpine
}
configure_ntp() {
local target="$1"
# EC2 provides an instance-local NTP service synchronized with GPS and
# atomic clocks in-region. Prefer this over external NTP hosts when running
# in EC2.
#
# See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
sed -e 's/^pool /server /' \
-e 's/pool.ntp.org/169.254.169.123/g' \
-i "$target"/etc/chrony/chrony.conf
}
cleanup() {
local target="$1"
# Sweep cruft out of the image that doesn't need to ship or will be
# re-generated when the image boots
rm -f \
"$target"/var/cache/apk/* \
"$target"/etc/resolv.conf \
"$target"/root/.ash_history \
"$target"/etc/*-
umount \
"$target"/dev \
"$target"/proc \
"$target"/sys
umount "$target"
}
version_sorted() {
# falsey if $1 version > $2 version
printf "%s\n%s" $1 $2 | sort -VC
}
main() {
[ "$VERSION" != 'edge' ] && {
version_sorted $MIN_VERSION $VERSION || die "Minimum Alpine version is '$MIN_VERSION'"
version_sorted $MIN_RELEASE $RELEASE || die "Minimum Alpine release is '$MIN_RELEASE'"
}
local add_repos="$ADD_REPOS"
local add_pkgs="$ADD_PKGS"
local add_svcs="$ADD_SVCS"
local device="/dev/xvdf"
local target="/mnt/target"
validate_block_device "$device"
[ -d "$target" ] || mkdir "$target"
einfo "Fetching static APK tools"
apk="$(fetch_apk_tools)"
einfo "Creating root filesystem"
make_filesystem "$device" "$target"
einfo "Configuring Alpine repositories"
setup_repositories "$target" "$add_repos"
einfo "Fetching Alpine signing keys"
fetch_keys "$target"
einfo "Installing base system"
install_base "$target"
setup_chroot "$target"
einfo "Installing core packages"
install_core_packages "$target" "$add_pkgs"
einfo "Configuring and enabling boot loader"
create_initfs "$target"
setup_extlinux "$target"
install_extlinux "$target"
einfo "Configuring system"
setup_mdev "$target"
setup_fstab "$target"
setup_networking "$target"
enable_services "$target" "$add_svcs"
create_alpine_user "$target"
configure_ntp "$target"
einfo "All done, cleaning up"
cleanup "$target"
}
main "$@"

packer.conf Normal file

@@ -0,0 +1,108 @@
# This Packer config file is in HOCON, and is converted to JSON at build time.
# https://github.com/lightbend/config/blob/master/HOCON.md
# vim: ts=2 et:
builders = [
{
type = "amazon-ebssurrogate"
### Builder Instance Details
region = "{{user `build_region`}}"
subnet_id = "{{user `build_subnet`}}"
instance_type = "{{user `build_instance_type`}}"
associate_public_ip_address = "{{user `build_public_ip`}}"
source_ami_filter {
# use the latest Amazon Linux AMI
owners = [ "{{user `build_ami_owner`}}" ]
most_recent = "{{user `build_ami_latest`}}"
filters {
virtualization-type = "hvm"
root-device-type = "ebs"
architecture = "{{user `build_arch`}}"
name = "{{user `build_ami_name`}}"
}
}
launch_block_device_mappings = [
{
volume_type = "gp2"
device_name = "/dev/xvdf"
delete_on_termination = "true"
volume_size = "{{user `ami_volume_size`}}"
}
]
shutdown_behavior = "terminate"
ssh_username = "{{user `build_user`}}"
### AMI Build Details
ami_name = "{{user `ami_name`}}"
ami_description = "{{user `ami_desc`}}"
tags {
Name = "{{user `ami_name`}}"
}
ami_virtualization_type = "hvm"
ami_architecture = "{{user `build_arch`}}" # need packer 1.4.1
ami_root_device {
volume_type = "gp2"
source_device_name = "/dev/xvdf"
device_name = "/dev/xvda"
delete_on_termination = "true"
volume_size = "{{user `ami_volume_size`}}"
}
encrypt_boot = "{{user `ami_encrypt`}}"
ena_support = "true"
sriov_support = "true"
ami_groups = "{{user `ami_access`}}"
ami_regions = "{{user `ami_regions`}}"
}
]
provisioners = [
{
type = "file"
source = "nvme/"
destination = "/tmp"
}
{
type = "shell"
script = "setup-ami"
environment_vars = [
"VERSION={{user `version`}}"
"RELEASE={{user `release`}}"
"REVISION={{user `revision`}}"
"ARCH={{user `arch`}}"
"APK_TOOLS={{user `apk_tools`}}"
"APK_TOOLS_SHA256={{user `apk_tools_sha256`}}"
"ALPINE_KEYS={{user `alpine_keys`}}"
"ALPINE_KEYS_SHA256={{user `alpine_keys_sha256`}}"
"REPOS={{user `repos`}}"
"PKGS={{user `pkgs`}}"
"SVCS={{user `svcs`}}"
"KERNEL_MODS={{user `kernel_modules`}}"
"KERNEL_OPTS={{user `kernel_options`}}"
]
use_env_var_file = "true"
execute_command = "sudo sh -c '. {{.EnvVarFile}} && {{.Path}}'"
}
]
post-processors = [
{
type = "manifest"
output = "profile/{{user `profile`}}/{{user `profile_build`}}/manifest.json"
custom_data {
ami_name = "{{user `ami_name`}}"
ami_desc = "{{user `ami_desc`}}"
profile = "{{user `profile`}}"
profile_build = "{{user `profile_build`}}"
version = "{{user `version`}}"
release = "{{user `release`}}"
arch = "{{user `arch`}}"
revision = "{{user `revision`}}"
end_of_life = "{{user `end_of_life`}}"
}
}
]

profiles/README.md Normal file

@@ -0,0 +1,157 @@
# Profiles
Profiles are collections of related build definitions, which are used to
generate the `variables.yaml` files that [Packer](https://packer.io) consumes
when building AMIs.
Profiles use [HOCON](https://github.com/lightbend/config/blob/master/HOCON.md)
(Human-Optimized Config Object Notation) which allows importing common configs
from other files, simple variable interpolation, and easy merging of objects.
This flexibility helps keep configuration for related build targets
[DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
## Core Profiles
Core profile configurations are found in the `base`, `version`, and `arch`
subdirectories. Core profiles do not have a `.conf` suffix because they're not
meant to be directly used like target profiles with `make`.
Base core profiles define all build vars with default values -- those left
empty or null are usually set in version, arch, or target profile configs.
Base profiles are included in version profiles, and do not need to be included
in target profiles.
Version core profiles expand on the base profile they include, and set the
`version`, `release`, `end_of_life` (if known), and the associated Alpine Linux
`repos`.
Arch core profiles further define architecture-specific variables, such as
which `apk-tools` and `alpine-keys` to use (and their SHA256 checksums).
## Target Profiles
Target profiles, defined in this directory, are the top-level configuration
used with `make PROFILE=<profile>`; they must have a `.conf` suffix. Several
configuration objects are defined and later merged within the `BUILDS` object,
ultimately defining each individual build.
Simple profiles have an object that loads a "version" core profile and
another that loads an "arch" core profile. A more complicated version-arch
matrix profile would have an object for each version and arch.
Additionally, there are one or more objects that define profile-specific
settings.
The `BUILDS` object's elements merge core and profile configs (with optional
inline build settings) into named build definitions; these build names can be
used to specify a subset of a profile's builds:
`make PROFILE=<profile> BUILDS="<build> ..."`
**Please note that merge order matters!** The merge sequence is version -->
architecture --> profile --> build.
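
A small `pyhocon` sketch (the library the build tooling uses to resolve
profiles) illustrating that later objects win; the names here are purely
illustrative:
```
from pyhocon import ConfigFactory

# later objects override earlier ones: version --> arch --> profile --> build
conf = ConfigFactory.parse_string("""
version-demo { revision = "r0", ami_volume_size = "1" }
profile-demo { ami_volume_size = "2" }
BUILDS { demo = ${version-demo} ${profile-demo} }
""")
print(conf.get("BUILDS.demo.ami_volume_size"))  # "2" -- profile wins
print(conf.get("BUILDS.demo.revision"))         # "r0" -- from version
```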
## Customization
The most important variables to set in your custom profile are `build_region`
and `build_subnet`. Without these, Packer will not know where to build.
`version` and `release` are meant to match Alpine; however, `revision` can be
used to track changes to the profile, or situations where the AMIs need to be
rebuilt. The "edge" core version profile sets `revision` to the current
datetime; otherwise, the default is `r0`.
You will probably want to personalize the name and description of your AMI.
Set `ami_name_prefix` and `ami_name_suffix`; setting `ami_desc_prefix` and
`ami_desc_suffix` is optional.
Set `build_instance_type` if you want/need to use a different instance type to
build the image; the default is `t3.nano`.
If 1 GiB is not enough to install the packages in your base AMI, you can set
the `ami_volume_size` to the number of GiB you need. Note, however, that the
[tiny-ec2-bootstrap](https://github.com/mcrute/tiny-ec2-bootstrap) init script
will expand the root partition to use the instance's entire EBS root volume
during the first boot, so you shouldn't need to make space for anything other
than installed packages.
Set `ami_encrypt` to "true" to create an encrypted AMI image. Launching images
from an encrypted AMI results in an encrypted EBS root volume.
To copy newly built AMIs to regions other than the `build_region` region, set
`ami_regions`. This variable is a *hash*, which allows for finer control over
inherited values when merging configs. Region identifiers are the keys, a
value of `true` means the AMI should be copied to that region; `null` or
`false` indicate that it shouldn't be copied to that region. If you want to
ensure that the `ami_regions` hash does not inherit any values, set it to
`null` before configuring your regions. For example:
```
ami_regions = null # don't inherit any previous values
ami_regions {
us-west-2 = true
eu-north-1 = true
}
```
Controlling what packages are installed and enabled in the AMI is the number
one reason for creating a custom profile. The `repos`, `pkgs`, and `svcs` hash
variables serve precisely that purpose. With some exceptions (noted below),
they work the same as the `ami_regions` hash: `true` values enable, `false`
and `null` values disable, and inherited values can be cleared by first setting
the variable itself to `null`.
With `repos`, the keys are double-quoted URLs to the `apk` repos that you want
set up; these are initially set in the "version" core profiles. In addition
to the `true`, `false`, and `null` values, you can also use a "repo alias"
string value, allowing you to pin packages to be sourced from that particular
repo. For example, with a profile based on a non-edge core profile, you may
want to be able to pull packages from the edge testing repo:
```
repos {
"http://dl-cdn.alpinelinux.org/alpine/edge/testing" = "edge-testing"
}
```
The `pkgs` hash's defaults are set in the base core profile; its keys are
simply the Alpine packages to install (or not install, if the value is `false`
or `null`). A `true` value installs the package from the default repos; if the
value is a repo alias string, the package will be pinned to explicitly install
from that repo. For example:
```
pkgs {
# install docker-compose from edge-testing repo
docker-compose = "edge-testing"
}
```
To control when (or whether) a system service starts, use the `svcs` hash
variable. Its keys are the service names, as they appear in `/etc/init.d`;
default values are set in the base core profile. Like the other hash
variables, setting `false` or `null` disables the service; `true` enables
the service at the "default" runlevel. The service can be enabled at a
different runlevel by using that runlevel as the value.
By default, the AMIs built are accessible only by the owning account. To
make your AMIs publicly available, set the `ami_access` hash variable:
```
ami_access {
all = true
}
```
## Limitations and Caveats
* Hash variables that are reset to clear inherited values *must* be
re-defined as a hash, even if it is to remain empty:
```
hash_var = null # drops inherited values
hash_var {} # re-defines as an empty hash
```
* The AMI's login user is currently hard coded to be `alpine`. Changes to
[tiny-ec2-bootstrap](https://github.com/mcrute/tiny-ec2-bootstrap) are
required before we can truly make `ami_user` configurable.
* Currently, it is not possible to add/modify/remove arbitrary files (such as
service config files) on the filesystem which ultimately becomes the AMI.
One workaround is to use a "user data" script to make any necessary changes
(during the "default" runlevel) when an instance first launches.

profiles/alpine.conf Normal file

@@ -0,0 +1,47 @@
### Profile for Building the Publicly-Available Alpine Linux AMIs
# vim: ts=2 et:
version-current { include required("version/current") }
version-edge { include required("version/edge") }
arch-x86_64 { include required("arch/x86_64") }
# profile vars
alpine {
# default profile revision is 'r0', reset for each new version release!
#revision = "r0"
ami_desc_suffix = " - https://github.com/mcrute/alpine-ec2-ami"
build_region = "us-west-2"
build_subnet = "subnet-b80c36e2"
ami_access {
all = true # these AMIs are publicly available
}
ami_regions {
#ap-east-1 = true # needs to be enabled first
ap-northeast-1 = true
ap-northeast-2 = true
#ap-northeast-3 = false # available by subscription only
ap-southeast-1 = true
ap-southeast-2 = true
ap-south-1 = true
ca-central-1 = true
eu-central-1 = true
eu-north-1 = true
eu-west-1 = true
eu-west-2 = true
eu-west-3 = true
sa-east-1 = true
us-east-1 = true
us-east-2 = true
us-west-1 = true
us-west-2 = true
}
}
# Build definitions
BUILDS {
# merge version, arch, and profile vars
current-x86_64 = ${version-current} ${arch-x86_64} ${alpine}
edge-x86_64 = ${version-edge} ${arch-x86_64} ${alpine}
}

profiles/arch/aarch64 Symbolic link

@@ -0,0 +1 @@
aarch64-1

profiles/arch/aarch64-1 Normal file

@@ -0,0 +1,10 @@
### aarch64 vars, revision 1
# vim: ts=2 et:
arch = "aarch64"
build_arch = "arm64"
build_instance_type = "a1.medium"
apk_tools = "https://github.com/alpinelinux/apk-tools/releases/download/v2.10.3/apk-tools-2.10.3-aarch64-linux.tar.gz"
apk_tools_sha256 = "58a07e547c83c3a30eb0a0bd73db57d6bbaf92cc093df7a1d9805631f7d349e3"
alpine_keys = "http://dl-cdn.alpinelinux.org/alpine/v3.9/main/aarch64/alpine-keys-2.1-r1.apk"
alpine_keys_sha256 = "1ae4cebb43adee47a68aa891660e69a1ac6467690daca6f211aabff36a17cad1"

profiles/arch/x86_64 Symbolic link

@@ -0,0 +1 @@
x86_64-1

profiles/arch/x86_64-1 Normal file

@@ -0,0 +1,9 @@
### x86_64 vars, revision 1
# vim: ts=2 et:
arch = "x86_64"
build_arch = "x86_64"
apk_tools = "https://github.com/alpinelinux/apk-tools/releases/download/v2.10.3/apk-tools-2.10.3-x86_64-linux.tar.gz"
apk_tools_sha256 = "4d0b2cda606720624589e6171c374ec6d138867e03576d9f518dddde85c33839"
alpine_keys = "http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/alpine-keys-2.1-r1.apk"
alpine_keys_sha256 = "9c7bc5d2e24c36982da7aa49b3cfcb8d13b20f7a03720f25625fa821225f5fbc"

profiles/base/1 Normal file

@@ -0,0 +1,87 @@
### base vars, revision 1
# vim: ts=2 et:
# Profile/Build
profile = null
profile_build = null
revision = "r0"
# Versioning
version = null
release = null
end_of_life = null
# Architecture
arch = null
build_arch = null
# Builder-instance
build_region = null
build_subnet = null
build_instance_type = "t3.nano"
build_public_ip = null
build_user = "ec2-user"
build_ami_name = "amzn2-ami-hvm-2.0.*-gp2"
build_ami_owner = "137112412989"
build_ami_latest = "true"
# AMI build/deploy
ami_name_prefix = "alpine-ami-"
ami_name_suffix = ""
ami_desc_prefix = "Alpine Linux "
ami_desc_suffix = ""
ami_volume_size = "1"
ami_encrypt = "false"
ami_user = "alpine" # modification currently not supported
ami_access = {}
ami_regions = {}
# NOTE: the following are python format strings, resolved in resolve-profile.py
ami_name = "{var.ami_name_prefix}{var.release}-{var.arch}-{var.revision}{var.ami_name_suffix}"
ami_desc = "{var.ami_desc_prefix}{var.release} {var.arch} {var.revision}{var.ami_desc_suffix}"
# AMI configuration
apk_tools = null
apk_tools_sha256 = null
alpine_keys = null
alpine_keys_sha256 = null
repos {}
pkgs {
linux-virt = true
alpine-mirrors = true
chrony = true
nvme-cli = true
openssh = true
sudo = true
tiny-ec2-bootstrap = true
tzdata = true
}
svcs {
devfs = "sysinit"
dmesg = "sysinit"
hwdrivers = "sysinit"
mdev = "sysinit"
acpid = "boot"
bootmisc = "boot"
hostname = "boot"
hwclock = "boot"
modules = "boot"
swap = "boot"
sysctl = "boot"
syslog = "boot"
chronyd = "default"
networking = "default"
sshd = "default"
tiny-ec2-bootstrap = "default"
killprocs = "shutdown"
mount-ro = "shutdown"
savecache = "shutdown"
}
kernel_modules {
sd-mod = true
usb-storage = true
ext4 = true
}
kernel_options {
"console=ttyS0" = true
"console=tty0" = true
}

profiles/base/current Symbolic link

@@ -0,0 +1 @@
1

profiles/test.conf Normal file

@@ -0,0 +1,31 @@
### Profile for Testing Builds
# vim: ts=2 et:
version-current { include required("version/current") }
version-edge { include required("version/edge") }
arch-x86_64 { include required("arch/x86_64") }
arch-aarch64 { include required("arch/aarch64") }
# specific to this profile's builds
test {
# default revision is 'r0', recomment/reset for each new version release!
#revision = "r0"
ami_name_prefix = "test-"
ami_desc_prefix = "Alpine Test "
build_region = "us-west-2"
build_subnet = "subnet-033a30d7b5220d177"
}
# Build definitions
BUILDS {
# merge version, arch, profile, and build vars
current-x86_64 = ${version-current} ${arch-x86_64} ${test}
edge-x86_64 = ${version-edge} ${arch-x86_64} ${test}
# aarch64 AMI builds are under development
edge-aarch64 = ${version-edge} ${arch-aarch64} ${test} {
# other subnet doesn't do a1.* instances
build_subnet = "subnet-08dfc622745f7d96a"
}
}

profiles/version/3.9 Normal file

@@ -0,0 +1,14 @@
### version 3.9 vars
# vim: ts=2 et:
# start with base vars
include required("../base/current")
# set version-specific vars
version = "3.9"
release = "3.9.4"
end_of_life = "2021-01-01"
repos {
"http://dl-cdn.alpinelinux.org/alpine/v3.9/main" = true
"http://dl-cdn.alpinelinux.org/alpine/v3.9/community" = true
}

profiles/version/current Symbolic link

@@ -0,0 +1 @@
3.9

profiles/version/edge Normal file

@@ -0,0 +1,18 @@
### edge vars
# vim: ts=2 et:
# based on current
include required("current")
# add edge-specific tweaks...
version = "edge"
release = "edge"
end_of_life = "@TOMORROW@"
revision = "@NOW@"
repos = null # remove all values from 'current'
repos {
"http://dl-cdn.alpinelinux.org/alpine/edge/main" = true
"http://dl-cdn.alpinelinux.org/alpine/edge/community" = true
"http://dl-cdn.alpinelinux.org/alpine/edge/testing" = true
}

release.yaml

@@ -1,22 +0,0 @@
alpine-ami-3.9.3-x86_64:
description: "Alpine Linux 3.9.3 x86_64"
alpine-release: 3.9.3
kernel-flavor: virt
ami-release-date: "2019-03-03 01:03:41"
region-identifiers:
ap-northeast-1: ami-001e74131496d0212
ap-northeast-2: ami-09a26b03424d75667
ap-south-1: ami-03534f64f8b87aafc
ap-southeast-1: ami-0d5f2950efcd55b0e
ap-southeast-2: ami-0660edcba4ba7c8a0
ca-central-1: ami-0bf4ea1f0f86283bb
eu-central-1: ami-060d9bbde8d5047e8
eu-north-1: ami-0a5284750fcf11d18
eu-west-1: ami-0af60b964eb2f09d3
eu-west-2: ami-097405edd3790cf8b
eu-west-3: ami-0078916a37514bb9a
sa-east-1: ami-09e0025e60328ea6d
us-east-1: ami-05c8c48601c2303af
us-east-2: ami-064d64386a89de1e6
us-west-1: ami-04a4711d62db12ba0
us-west-2: ami-0ff56870cf29d4f02

releases/README.md Normal file

@@ -0,0 +1,77 @@
# Alpine Linux EC2 AMIs
**These are not official AWS or Alpine images. They are community built and
supported.**
These AMIs should work with most EC2 features such as Elastic Network Adapters
and NVMe EBS volumes. If you find any problems launching them on current
generation instances, please open an [issue](https://github.com/mcrute/alpine-ec2-ami/issues)
and include as much detailed information as possible.
During the *first boot* of instances created with these AMIs, the lightweight
[tiny-ec2-bootstrap](https://github.com/mcrute/tiny-ec2-bootstrap) init
script...
- sets the instance's hostname,
- installs the SSH authorized_keys for the 'alpine' user,
- disables 'root' and 'alpine' users' passwords,
- expands the root partition to use all available EBS volume space,
- and executes a "user data" script (must be a shell script that starts with `#!`).
If you launch these AMIs to build other images (via [Packer](https://packer.io),
etc.), don't forget to remove `/var/lib/cloud/.bootstrap-complete` --
otherwise, instances launched from those second-generation AMIs will not run
`tiny-ec2-bootstrap` on their first boot.
The more popular [cloud-init](https://cloudinit.readthedocs.io/en/latest/)
is currently not supported on Alpine Linux. If `cloud-init` support is
important to you, please open an [issue](https://github.com/mcrute/alpine-ec2-ami/issues).
## AMIs
### Alpine Linux 3.9.4 (2019-05-28)
<details><summary><i>click to show/hide</i></summary><p>
| Region | alpine-ami-3.9.4-x86_64-r0 |
| ------ | --- |
| ap-northeast-1 | [ami-0251fa7f8f8ed0a3b](https://ap-northeast-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0251fa7f8f8ed0a3b) ([launch](https://ap-northeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0251fa7f8f8ed0a3b)) |
| ap-northeast-2 | [ami-0bb32f18ed247323e](https://ap-northeast-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0bb32f18ed247323e) ([launch](https://ap-northeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0bb32f18ed247323e)) |
| ap-south-1 | [ami-0ca42c8d33ec3ef66](https://ap-south-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0ca42c8d33ec3ef66) ([launch](https://ap-south-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0ca42c8d33ec3ef66)) |
| ap-southeast-1 | [ami-032330b6de2f39f75](https://ap-southeast-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-032330b6de2f39f75) ([launch](https://ap-southeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-032330b6de2f39f75)) |
| ap-southeast-2 | [ami-0681743c5235cb677](https://ap-southeast-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0681743c5235cb677) ([launch](https://ap-southeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0681743c5235cb677)) |
| ca-central-1 | [ami-0dfcf967a696ee901](https://ca-central-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0dfcf967a696ee901) ([launch](https://ca-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0dfcf967a696ee901)) |
| eu-central-1 | [ami-07a8060b90f208cf2](https://eu-central-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-07a8060b90f208cf2) ([launch](https://eu-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-07a8060b90f208cf2)) |
| eu-north-1 | [ami-0f25dd1f2ab208b34](https://eu-north-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0f25dd1f2ab208b34) ([launch](https://eu-north-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0f25dd1f2ab208b34)) |
| eu-west-1 | [ami-07453094c6d42a07e](https://eu-west-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-07453094c6d42a07e) ([launch](https://eu-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-07453094c6d42a07e)) |
| eu-west-2 | [ami-03fa8e7cff9293332](https://eu-west-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-03fa8e7cff9293332) ([launch](https://eu-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-03fa8e7cff9293332)) |
| eu-west-3 | [ami-07aad42fdc4a7e79b](https://eu-west-3.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-07aad42fdc4a7e79b) ([launch](https://eu-west-3.console.aws.amazon.com/ec2/home#launchAmi=ami-07aad42fdc4a7e79b)) |
| sa-east-1 | [ami-04cac088d12e5ebf0](https://sa-east-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-04cac088d12e5ebf0) ([launch](https://sa-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-04cac088d12e5ebf0)) |
| us-east-1 | [ami-0c2c618b193741157](https://us-east-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0c2c618b193741157) ([launch](https://us-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0c2c618b193741157)) |
| us-east-2 | [ami-012e1a22371695544](https://us-east-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-012e1a22371695544) ([launch](https://us-east-2.console.aws.amazon.com/ec2/home#launchAmi=ami-012e1a22371695544)) |
| us-west-1 | [ami-00f0f067a7d90b7e4](https://us-west-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-00f0f067a7d90b7e4) ([launch](https://us-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-00f0f067a7d90b7e4)) |
| us-west-2 | [ami-0ed0fed8f127914fb](https://us-west-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0ed0fed8f127914fb) ([launch](https://us-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0ed0fed8f127914fb)) |
</p></details>
### Alpine Linux Edge (2019-05-28)
<details><summary><i>click to show/hide</i></summary><p>
| Region | alpine-ami-edge-x86_64-20190528032210 |
| ------ | --- |
| ap-northeast-1 | [ami-03a19ed410069a4d8](https://ap-northeast-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-03a19ed410069a4d8) ([launch](https://ap-northeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-03a19ed410069a4d8)) |
| ap-northeast-2 | [ami-05988a6c4660792ce](https://ap-northeast-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-05988a6c4660792ce) ([launch](https://ap-northeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-05988a6c4660792ce)) |
| ap-south-1 | [ami-08aaeba360cdab5a4](https://ap-south-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-08aaeba360cdab5a4) ([launch](https://ap-south-1.console.aws.amazon.com/ec2/home#launchAmi=ami-08aaeba360cdab5a4)) |
| ap-southeast-1 | [ami-01ae6c2b20966a358](https://ap-southeast-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-01ae6c2b20966a358) ([launch](https://ap-southeast-1.console.aws.amazon.com/ec2/home#launchAmi=ami-01ae6c2b20966a358)) |
| ap-southeast-2 | [ami-00193ff2f592dc22c](https://ap-southeast-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-00193ff2f592dc22c) ([launch](https://ap-southeast-2.console.aws.amazon.com/ec2/home#launchAmi=ami-00193ff2f592dc22c)) |
| ca-central-1 | [ami-086b7f5aa4cf0194e](https://ca-central-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-086b7f5aa4cf0194e) ([launch](https://ca-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-086b7f5aa4cf0194e)) |
| eu-central-1 | [ami-089db5b316937779b](https://eu-central-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-089db5b316937779b) ([launch](https://eu-central-1.console.aws.amazon.com/ec2/home#launchAmi=ami-089db5b316937779b)) |
| eu-north-1 | [ami-02ed2f6e56115d6f2](https://eu-north-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-02ed2f6e56115d6f2) ([launch](https://eu-north-1.console.aws.amazon.com/ec2/home#launchAmi=ami-02ed2f6e56115d6f2)) |
| eu-west-1 | [ami-0afa00bfa1c870509](https://eu-west-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0afa00bfa1c870509) ([launch](https://eu-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0afa00bfa1c870509)) |
| eu-west-2 | [ami-0b1e309dfd74525f2](https://eu-west-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0b1e309dfd74525f2) ([launch](https://eu-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0b1e309dfd74525f2)) |
| eu-west-3 | [ami-0404d34bb3376e370](https://eu-west-3.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0404d34bb3376e370) ([launch](https://eu-west-3.console.aws.amazon.com/ec2/home#launchAmi=ami-0404d34bb3376e370)) |
| sa-east-1 | [ami-053be80e8c7b1ad62](https://sa-east-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-053be80e8c7b1ad62) ([launch](https://sa-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-053be80e8c7b1ad62)) |
| us-east-1 | [ami-0d1ea89d2b00334f5](https://us-east-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0d1ea89d2b00334f5) ([launch](https://us-east-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0d1ea89d2b00334f5)) |
| us-east-2 | [ami-0939714c9fe9ec10e](https://us-east-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0939714c9fe9ec10e) ([launch](https://us-east-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0939714c9fe9ec10e)) |
| us-west-1 | [ami-0b9c5086efa0f067b](https://us-west-1.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0b9c5086efa0f067b) ([launch](https://us-west-1.console.aws.amazon.com/ec2/home#launchAmi=ami-0b9c5086efa0f067b)) |
| us-west-2 | [ami-0719ffe4d94e67432](https://us-west-2.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId=ami-0719ffe4d94e67432) ([launch](https://us-west-2.console.aws.amazon.com/ec2/home#launchAmi=ami-0719ffe4d94e67432)) |
</p></details>

releases/alpine.yaml Normal file

@@ -0,0 +1,58 @@
current-x86_64:
3.9.4:
alpine-ami-3.9.4-x86_64-r0:
description: Alpine Linux 3.9.4 x86_64 r0 - https://github.com/mcrute/alpine-ec2-ami
profile: alpine
profile_build: current-x86_64
version: '3.9'
release: 3.9.4
arch: x86_64
revision: r0
end_of_life: '2021-01-01T00:00:00'
build_time: 1559014278
artifacts:
ap-northeast-1: ami-0251fa7f8f8ed0a3b
ap-northeast-2: ami-0bb32f18ed247323e
ap-south-1: ami-0ca42c8d33ec3ef66
ap-southeast-1: ami-032330b6de2f39f75
ap-southeast-2: ami-0681743c5235cb677
ca-central-1: ami-0dfcf967a696ee901
eu-central-1: ami-07a8060b90f208cf2
eu-north-1: ami-0f25dd1f2ab208b34
eu-west-1: ami-07453094c6d42a07e
eu-west-2: ami-03fa8e7cff9293332
eu-west-3: ami-07aad42fdc4a7e79b
sa-east-1: ami-04cac088d12e5ebf0
us-east-1: ami-0c2c618b193741157
us-east-2: ami-012e1a22371695544
us-west-1: ami-00f0f067a7d90b7e4
us-west-2: ami-0ed0fed8f127914fb
edge-x86_64:
edge:
alpine-ami-edge-x86_64-20190528032210:
description: Alpine Linux edge x86_64 20190528032210 - https://github.com/mcrute/alpine-ec2-ami
profile: alpine
profile_build: edge-x86_64
version: edge
release: edge
arch: x86_64
revision: '20190528032210'
end_of_life: '2019-05-29T03:22:10'
build_time: 1559014836
artifacts:
ap-northeast-1: ami-03a19ed410069a4d8
ap-northeast-2: ami-05988a6c4660792ce
ap-south-1: ami-08aaeba360cdab5a4
ap-southeast-1: ami-01ae6c2b20966a358
ap-southeast-2: ami-00193ff2f592dc22c
ca-central-1: ami-086b7f5aa4cf0194e
eu-central-1: ami-089db5b316937779b
eu-north-1: ami-02ed2f6e56115d6f2
eu-west-1: ami-0afa00bfa1c870509
eu-west-2: ami-0b1e309dfd74525f2
eu-west-3: ami-0404d34bb3376e370
sa-east-1: ami-053be80e8c7b1ad62
us-east-1: ami-0d1ea89d2b00334f5
us-east-2: ami-0939714c9fe9ec10e
us-west-1: ami-0b9c5086efa0f067b
us-west-2: ami-0719ffe4d94e67432

scripts/gen-release-readme.py.in Normal file

@@ -0,0 +1,115 @@
@PYTHON@
# vim: ts=4 et:
from datetime import datetime
from distutils.version import StrictVersion
import functools
import os
import re
import sys
import yaml
if len(sys.argv) != 2:
sys.exit("Usage: " + os.path.basename(__file__) + "<profile>")
PROFILE = sys.argv[1]
RELEASE_DIR = os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'..', 'releases'
)
README_MD = os.path.join(RELEASE_DIR, 'README.md')
RELEASE_YAML = os.path.join(RELEASE_DIR, PROFILE + '.yaml')
# read in releases/<profile>.yaml
with open(RELEASE_YAML, 'r') as data:
RELEASES = yaml.safe_load(data)
sections = {}
for build, releases in RELEASES.items():
for release, amis in releases.items():
if release in sections:
rel = sections[release]
else:
rel = {
'built': {},
'name': {},
'ami': {}
}
for name, info in amis.items():
arch = info['arch']
built = info['build_time']
if (arch not in rel['built'] or
rel['built'][arch] < built):
rel['name'][arch] = name
rel['built'][arch] = built
for region, ami in info['artifacts'].items():
if region not in rel['ami']:
rel['ami'][region] = {}
rel['ami'][region][arch] = ami
sections[release] = rel
SECTION = """
### Alpine Linux {release} ({date})
<details><summary><i>click to show/hide</i></summary><p>
{rows}
</p></details>
"""
AMI = " [{id}](https://{r}.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId={id}) " + \
"([launch](https://{r}.console.aws.amazon.com/ec2/home#launchAmi={id})) |"
ARCHS = ['x86_64', 'aarch64']
# most -> least recent version, edge at end
def ver_cmp(a, b):
try:
if StrictVersion(a) < StrictVersion(b):
return 1
if StrictVersion(a) > StrictVersion(b):
return -1
return 0
except ValueError:
# "edge" doesn't work with StrictVersion
if a == 'edge':
return 1
if b == 'edge':
return -1
return 0
ami_list = "## AMIs\n"
for release in sorted(list(sections.keys()), key=functools.cmp_to_key(ver_cmp)):
info = sections[release]
rows = []
rows.append('| Region |')
rows.append('| ------ |')
for arch in ARCHS:
if arch in info['name']:
rows[0] += ' {n} |'.format(n=info['name'][arch])
rows[1] += ' --- |'
for region, amis in info['ami'].items():
row = '| {r} |'.format(r=region)
for arch in ARCHS:
if arch in amis:
row += AMI.format(r=region, id=amis[arch])
rows.append(row)
ami_list += SECTION.format(
release=release.capitalize(),
date=datetime.utcfromtimestamp(max(info['built'].values())).date(),
rows="\n".join(rows)
)
with open(README_MD, 'r') as file:
readme = file.read()
readme_re = re.compile('## AMIs.*\Z', re.S)
with open(README_MD, 'w') as file:
file.write(readme_re.sub(ami_list, readme))

scripts/make-amis Executable file

@@ -0,0 +1,61 @@
#!/bin/sh
# vim: set ts=4 et:
export PACKER=${PACKER:-packer}
cd build || exit 1
# we need a profile, at least
if [ $# -eq 0 ]; then
echo "Usage: $(basename "$0") <profile> [ <build> ... ]" >&2
exit 1
fi
PROFILE=$1; shift
# no build(s) specified? do all the builds!
[ $# -gt 0 ] && BUILDS="$*" || BUILDS=$(ls "profile/$PROFILE")
for BUILD in $BUILDS
do
printf "\n*** $BUILD ***\n\n"
BUILD_DIR="profile/$PROFILE/$BUILD"
# get version, release, and arch
eval "$(
grep -E '"(version|release|arch)"' "$BUILD_DIR/vars.json" | \
sed -e 's/[ ",]//g' -e 's/:/=/g'
)"
if [ "$version" != 'edge' ]; then
# get current Alpine release for this version
alpine_release=$(
curl -s "http://dl-cdn.alpinelinux.org/alpine/v$version/main/$arch/" | \
grep '"alpine-base-' | cut -d'"' -f2 | cut -d- -f3
)
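        # e.g. an index entry for alpine-base-3.9.4-r0.apk yields
        # alpine_release=3.9.4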
        # update core version profile's release if necessary
        if [ "$alpine_release" != "$release" ]; then
            printf "=== New release ($alpine_release) detected! ===\n\n"
            sed -i '' -e "s/$release/$alpine_release/" "../profiles/version/$version"
            ./resolve-profile.py "$PROFILE"
            # NOTE: this does NOT update 'revision', it's at target profile/build level
        fi
    fi

    # execute packer, capture output and exit code
    (
        "$PACKER" build -var-file="$BUILD_DIR/vars.json" packer.json
        echo $? >"$BUILD_DIR/exit"
    ) | tee "$BUILD_DIR/output"
    EXIT=$(cat "$BUILD_DIR/exit")

    if [ "$EXIT" = "0" ]; then
        ./update-release.py "$PROFILE" "$BUILD"
    else
        # unless AMI revision already exists, exit
        grep -q 'is used by an existing AMI' "$BUILD_DIR/output" || exit "$EXIT"
    fi
done

# TODO? if PROFILE = alpine-amis then prune?, gen-releases?

View File

@ -2,7 +2,7 @@
[ -x /usr/sbin/nvme ] || exit
PROC="$(basename $0)[$$]"
PROC="$(basename "$0")[$$]"
log() {
FACILITY="kern.$1"
@ -16,8 +16,8 @@ raw_ebs_alias() {
case $ACTION in
add|"")
BASE=$(echo $MDEV | sed -re 's/^(nvme[0-9]+n[0-9]+).*/\1/')
PART=$(echo $MDEV | sed -re 's/nvme[0-9]+n[0-9]+p?//g')
BASE=$(echo "$MDEV" | sed -re 's/^(nvme[0-9]+n[0-9]+).*/\1/')
PART=$(echo "$MDEV" | sed -re 's/nvme[0-9]+n[0-9]+p?//g')
MAXTRY=50
TRY=0
until [ -n "$EBS" ]; do
@ -30,14 +30,15 @@ case $ACTION in
fi
sleep 0.1
done
EBS=${EBS#/dev/}$PART
ln -sf "$MDEV" "${EBS/xvd/sd}" && log notice "Added ${EBS/xvd/sd} symlink for $MDEV"
ln -sf "$MDEV" "${EBS/sd/xvd}" && log notice "Added ${EBS/sd/xvd} symlink for $MDEV"
# remove any leading '/dev/', 'sd', or 'xvd', and append partition
EBS="${${${EBS#/dev/}#sd}#xvd}$PART"
ln -sf "$MDEV" "sd$EBS" && log notice "Added sd$EBS symlink for $MDEV"
ln -sf "$MDEV" "xvd$EBS" && log notice "Added xvd$EBS symlink for $MDEV"
;;
remove)
for TARGET in sd* xvd*
do
[ "$(readlink $TARGET 2>/dev/null)" = "$MDEV" ] && rm -f "$TARGET" && log notice "Removed $TARGET symlink for $MDEV"
[ "$(readlink "$TARGET" 2>/dev/null)" = "$MDEV" ] && rm -f "$TARGET" && log notice "Removed $TARGET symlink for $MDEV"
done
;;
esac

133
scripts/prune-amis.py.in Normal file
View File

@ -0,0 +1,133 @@
@PYTHON@
# vim: ts=4 et:

from datetime import datetime
import os
import sys

import boto3
import yaml

LEVELS = ['revision', 'release', 'version']

if len(sys.argv) < 3 or len(sys.argv) > 4 or sys.argv[1] not in LEVELS:
    sys.exit("Usage: " + os.path.basename(__file__) + """ <level> <profile> [<build>]
    <level> :-
        revision - keep only the latest revision per release
        release  - keep only the latest release per version
        version  - keep only the versions that aren't end-of-life""")
NOW = datetime.utcnow()

LEVEL = sys.argv[1]
PROFILE = sys.argv[2]
BUILD = None if len(sys.argv) == 3 else sys.argv[3]

RELEASE_YAML = os.path.join(
    os.path.dirname(os.path.realpath(__file__)),
    '..', 'releases', PROFILE + '.yaml'
)

with open(RELEASE_YAML, 'r') as data:
    BEFORE = yaml.safe_load(data)

known = {}
prune = {}
after = {}
# for all builds in the profile...
for build_name, releases in BEFORE.items():

    # this is not the build that was specified
    if BUILD is not None and BUILD != build_name:
        print('< skipping {0}/{1}'.format(PROFILE, build_name))
        # ensure its release data remains intact
        after[build_name] = BEFORE[build_name]
        continue
    else:
        print('> PRUNING {0}/{1} for {2}'.format(PROFILE, build_name, LEVEL))

    criteria = {}

    # scan releases for pruning criteria
    for release, amis in releases.items():
        for ami_name, info in amis.items():
            version = info['version']
            if info['end_of_life']:
                eol = datetime.fromisoformat(info['end_of_life'])
            else:
                eol = None
            built = info['build_time']
            for region, ami_id in info['artifacts'].items():
                if region not in known:
                    known[region] = []
                known[region].append(ami_id)
            if LEVEL == 'revision':
                # find build timestamp of most recent revision, per release
                if release not in criteria or built > criteria[release]:
                    criteria[release] = built
            elif LEVEL == 'release':
                # find build timestamp of most recent revision, per version
                if version not in criteria or built > criteria[version]:
                    criteria[version] = built
            elif LEVEL == 'version':
                # find latest EOL date, per version
                if (version not in criteria or not criteria[version]) or (
                        eol and eol > criteria[version]):
                    criteria[version] = eol
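    # e.g. (hypothetical values) at the 'release' level, criteria ends up
    # keyed by version with the newest build_time seen:
    #   {'3.9': 1558900000, '3.8': 1552000000}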
    # rescan again to determine what doesn't make the cut
    for release, amis in releases.items():
        for ami_name, info in amis.items():
            version = info['version']
            if info['end_of_life']:
                eol = datetime.fromisoformat(info['end_of_life'])
            else:
                eol = None
            built = info['build_time']
            if ((LEVEL == 'revision' and built < criteria[release]) or
                    (LEVEL == 'release' and built < criteria[version]) or
                    (LEVEL == 'version' and criteria[version] and (
                        (version != 'edge' and criteria[version] < NOW) or
                        (version == 'edge' and ((not eol) or (eol < NOW)))
                    ))):
                for region, ami_id in info['artifacts'].items():
                    if region not in prune:
                        prune[region] = []
                    prune[region].append(ami_id)
            else:
                if build_name not in after:
                    after[build_name] = {}
                if release not in after[build_name]:
                    after[build_name][release] = {}
                after[build_name][release][ami_name] = info
# scan all regions for AMIs
AWS = boto3.session.Session()

for region in AWS.get_available_regions('ec2'):
    print("* scanning: " + region + '...')
    EC2 = AWS.client('ec2', region_name=region)
    for image in EC2.describe_images(Owners=['self'])['Images']:

        action = '? UNKNOWN'
        if region in prune and image['ImageId'] in prune[region]:
            action = '- REMOVING'
        elif region in known and image['ImageId'] in known[region]:
            action = '+ KEEPING'
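        # NOTE: AMIs not tracked in this profile's releases YAML are
        # reported as UNKNOWN and left untouched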
        print(' ' + action + ': ' + image['Name'] +
              "\n = " + image['ImageId'], end='', flush=True)
        if action[0] == '-':
            EC2.deregister_image(ImageId=image['ImageId'])
        for blockdev in image['BlockDeviceMappings']:
            if 'Ebs' in blockdev:
                print(', ' + blockdev['Ebs']['SnapshotId'],
                      end='', flush=True)
                if action[0] == '-':
                    EC2.delete_snapshot(
                        SnapshotId=blockdev['Ebs']['SnapshotId'])
        print()

# update releases/<profile>.yaml
with open(RELEASE_YAML, 'w') as data:
    yaml.dump(after, data, sort_keys=False)

View File

@ -0,0 +1,105 @@
@PYTHON@
# vim: set ts=4 et:

import json
import os
import shutil
import sys
from datetime import datetime, timedelta

from pyhocon import ConfigFactory

if len(sys.argv) != 2:
    sys.exit("Usage: " + os.path.basename(__file__) + " <profile>")

PROFILE = sys.argv[1]

SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))

# path to the profile config file
PROFILE_CONF = os.path.join(SCRIPT_DIR, '..', 'profiles', PROFILE + '.conf')

# load the profile's build configuration
BUILDS = ConfigFactory.parse_file(PROFILE_CONF)['BUILDS']

# where we store the profile's builds' config/output
PROFILE_DIR = os.path.join(SCRIPT_DIR, 'profile', PROFILE)
if not os.path.exists(PROFILE_DIR):
    os.makedirs(PROFILE_DIR)
# fold these build config keys' dict to scalar
FOLD_DICTS = {
    'ami_access': ',{0}',
    'ami_regions': ',{0}',
    'repos': "\n@{1} {0}",
    'pkgs': ' {0}@{1}',
    'kernel_modules': ',{0}',
    'kernel_options': ' {0}'
}

NOW = datetime.utcnow()
ONE_DAY = timedelta(days=1)

# func to fold dict down to scalar
def fold(fdict, ffmt):
    folded = ''
    for fkey, fval in fdict.items():
        fkey = fkey.strip('"')  # complex keys may be in quotes
        if fval is True:
            folded += ffmt[0] + fkey
        elif not (fval is None or fval is False):
            folded += ffmt.format(fkey, fval)
    return folded[1:]
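# e.g. folding {'us-east-1': True, 'us-west-2': True} with the ',{0}'
# format yields the scalar 'us-east-1,us-west-2'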
# parse/resolve HOCON profile's builds' config
for build, cfg in BUILDS.items():
    build_dir = os.path.join(PROFILE_DIR, build)

    # make a fresh profile build directory
    if os.path.exists(build_dir):
        shutil.rmtree(build_dir)
    os.makedirs(build_dir)

    # populate profile build vars
    cfg['profile'] = PROFILE
    cfg['profile_build'] = build

    # mostly edge-related temporal substitutions
    if cfg['end_of_life'] == '@TOMORROW@':
        cfg['end_of_life'] = (NOW + ONE_DAY).isoformat(timespec='seconds')
    elif cfg['end_of_life'] is not None:
        # to explicitly UTC-ify end_of_life
        cfg['end_of_life'] = datetime.fromisoformat(
            cfg['end_of_life'] + '+00:00').isoformat(timespec='seconds')
    if cfg['revision'] == '@NOW@':
        cfg['revision'] = NOW.strftime('%Y%m%d%H%M%S')

    # fold dict vars to scalars
    for foldkey, foldfmt in FOLD_DICTS.items():
        cfg[foldkey] = fold(cfg[foldkey], foldfmt)

    # fold 'svcs' dict to scalar
    lvls = {}
    for svc, lvl in cfg['svcs'].items():
        if lvl is True:
            # service in default runlevel
            lvls.setdefault('default', []).append(svc)
        elif not (lvl is None or lvl is False):
            # service in specified runlevel (skip svc when false/null)
            if lvl not in lvls.keys():
                lvls[lvl] = []
            lvls[lvl].append(svc)
    cfg['svcs'] = ' '.join(
        str(lvl) + '=' + ','.join(
            str(svc) for svc in svcs
        ) for lvl, svcs in lvls.items()
    )
    # resolve ami_name and ami_desc
    cfg['ami_name'] = cfg['ami_name'].format(var=cfg)
    cfg['ami_desc'] = cfg['ami_desc'].format(var=cfg)

    # write build vars file
    with open(os.path.join(build_dir, 'vars.json'), 'w') as out:
        json.dump(cfg, out, indent=4, separators=(',', ': '))

344
scripts/setup-ami Executable file
View File

@ -0,0 +1,344 @@
#!/bin/sh
# vim: set ts=4 et:

set -eu

DEVICE=/dev/xvdf
TARGET=/mnt/target

# what bootloader should we use?
[ -d "/sys/firmware/efi" ] && BOOTLOADER=grub-efi || BOOTLOADER=syslinux

die() {
    printf '\033[1;31mERROR:\033[0m %s\n' "$@" >&2  # bold red
    exit 1
}

einfo() {
    printf '\n\033[1;36m> %s\033[0m\n' "$@" >&2  # bold cyan
}

rc_add() {
    runlevel="$1"; shift    # runlevel name
    services="$*"           # names of services

    for svc in $services; do
        mkdir -p "$TARGET/etc/runlevels/$runlevel"
        ln -s "/etc/init.d/$svc" "$TARGET/etc/runlevels/$runlevel/$svc"
        echo " * service $svc added to runlevel $runlevel"
    done
}

wgets() (
    url="$1"        # url to fetch
    sha256="$2"     # expected SHA256 sum of output
    dest="$3"       # output path and filename

    wget -T 10 -q -O "$dest" "$url"
    echo "$sha256  $dest" | sha256sum -c > /dev/null
)

validate_block_device() {
    lsblk -P --fs "$DEVICE" >/dev/null 2>&1 || \
        die "'$DEVICE' is not a valid block device"

    if lsblk -P --fs "$DEVICE" | grep -vq 'FSTYPE=""'; then
        die "Block device '$DEVICE' is not blank"
    fi
}
fetch_apk_tools() {
    store="$(mktemp -d)"
    tarball="$(basename "$APK_TOOLS")"

    wgets "$APK_TOOLS" "$APK_TOOLS_SHA256" "$store/$tarball"
    tar -C "$store" -xf "$store/$tarball"

    find "$store" -name apk
}

# mostly from Alpine's /sbin/setup-disk
setup_partitions() {
    start=1M    # TODO: do we really need to waste 1M?
    line=

    # create new partitions
    (
        for line in "$@"; do
            case "$line" in
                0M*) ;;
                *)  echo "$start,$line"; start= ;;
            esac
        done
    ) | sfdisk --quiet --label dos "$DEVICE"

    # we assume that the build host will create the new devices within 5s
    tries=5
    while [ ! -e "${DEVICE}1" ]; do
        [ $tries -eq 0 ] && break
        sleep 1
        tries=$(( tries - 1 ))
    done
    [ -e "${DEVICE}1" ] || die "Expected new device ${DEVICE}1 not created"
}
make_filesystem() {
    root_dev="$DEVICE"

    if [ "$BOOTLOADER" = 'grub-efi' ]; then
        # create a small EFI partition (remainder for root), and mount it
        setup_partitions '5M,EF' ',L'
        root_dev="${DEVICE}2"
        mkfs.vfat -n EFI "${DEVICE}1"
    fi

    mkfs.ext4 -O ^64bit -L / "$root_dev"
    mount "$root_dev" "$TARGET"

    if [ "$BOOTLOADER" = 'grub-efi' ]; then
        mkdir -p "$TARGET/boot/efi"
        mount -t vfat "${DEVICE}1" "$TARGET/boot/efi"
    fi
}

setup_repositories() {
    mkdir -p "$TARGET/etc/apk/keys"
    echo "$REPOS" > "$TARGET/etc/apk/repositories"
}

fetch_keys() {
    tmp="$(mktemp -d)"

    wgets "$ALPINE_KEYS" "$ALPINE_KEYS_SHA256" "$tmp/alpine-keys.apk"
    tar -C "$TARGET" --warning=no-unknown-keyword -xvf "$tmp/alpine-keys.apk" etc/apk/keys
    rm -rf "$tmp"
}
install_base() {
    $apk add --root "$TARGET" --no-cache --initdb alpine-base

    # verify release matches
    if [ "$VERSION" != "edge" ]; then
        ALPINE_RELEASE=$(cat "$TARGET/etc/alpine-release")
        [ "$RELEASE" = "$ALPINE_RELEASE" ] || \
            die "Newer Alpine release detected: $ALPINE_RELEASE"
    fi
}

setup_chroot() {
    mount -t proc none "$TARGET/proc"
    mount --bind /dev "$TARGET/dev"
    mount --bind /sys "$TARGET/sys"

    # Needed for bootstrap, will be removed in the cleanup stage.
    install -Dm644 /etc/resolv.conf "$TARGET/etc/resolv.conf"
}

install_core_packages() {
    chroot "$TARGET" apk --no-cache add $PKGS
    chroot "$TARGET" apk --no-cache add --no-scripts $BOOTLOADER

    # Disable starting getty for physical ttys because they're all inaccessible
    # anyhow. With this configuration boot messages will still display in the
    # EC2 console.
    sed -Ei '/^tty[0-9]/s/^/#/' "$TARGET/etc/inittab"

    # Make it a little more obvious who is logged in by adding username to the
    # prompt
    sed -i "s/^export PS1='/&\\\\u@/" "$TARGET/etc/profile"
}

setup_mdev() {
    cp /tmp/nvme-ebs-links "$TARGET/lib/mdev"

    # insert nvme ebs mdev configs just above "# fallback" comment
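    # (the hold-space shuffle below prints each line one cycle late, so the
    # file appended by 'r' on the match lands just above "# fallback")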
    sed -n -i -e '/# fallback/r /tmp/nvme-ebs-mdev.conf' -e 1x -e '2,${x;p}' -e '${x;p}' "$TARGET/etc/mdev.conf"
}
create_initfs() {
    # Enable ENA and NVME features.  These don't hurt for any instance and are
    # hard requirements of the 5 series and i3 series of instances.
    # TODO: profile-ize?
    sed -Ei 's/^features="([^"]+)"/features="\1 nvme ena"/' \
        "$TARGET/etc/mkinitfs/mkinitfs.conf"

    chroot "$TARGET" /sbin/mkinitfs $(basename $(find "$TARGET/lib/modules/"* -maxdepth 0))
}

install_bootloader() {
    case "$BOOTLOADER" in
        syslinux)  install_extlinux ;;
        grub-efi)  install_grub_efi ;;
        *)         die "unknown bootloader '$BOOTLOADER'" ;;
    esac
}
install_extlinux() {
    # Must use disk labels instead of UUID or device paths so that this works
    # across instance families. UUID works for many instances but breaks on
    # the NVME ones because EBS volumes are hidden behind NVME devices.
    #
    # Enable ext4 because the root device is formatted ext4.
    #
    # Shorten timeout (1/10s), as EC2 has no way to interact with the instance
    # console.
    #
    # ttyS0 is the target for EC2's "Get System Log" feature, whereas tty0 is
    # the target for EC2's "Get Instance Screenshot" feature. Enabling the
    # serial port early in extlinux gives the most complete output in the
    # system log.
    sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
        -e "s|^[# ]*(default_kernel_opts)=.*|\1=\"$KERNEL_OPTS\"|" \
        -e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
        -e "s|^[# ]*(modules)=.*|\1=$KERNEL_MODS|" \
        -e "s|^[# ]*(default)=.*|\1=virt|" \
        -e "s|^[# ]*(timeout)=.*|\1=1|" \
        "$TARGET/etc/update-extlinux.conf"

    chroot "$TARGET" /sbin/extlinux --install /boot
    chroot "$TARGET" /sbin/update-extlinux --warn-only
}
# TODO: this isn't quite working for some reason
install_grub_efi() {
    case "$ARCH" in
        x86_64)   grub_target=x86_64-efi ; fwa=x64 ;;
        aarch64)  grub_target=arm64-efi ; fwa=aa64 ;;
        *)        die "ARCH=$ARCH is currently unsupported" ;;
    esac

    # disable nvram so grub doesn't call efibootmgr
    chroot "$TARGET" /usr/sbin/grub-install --target="$grub_target" --efi-directory=/boot/efi \
        --bootloader-id=alpine --boot-directory=/boot --no-nvram

    # fallback mode (the EFI removable-media path is EFI/boot/boot<arch>.efi)
    install -D "$TARGET/boot/efi/EFI/alpine/grub$fwa.efi" "$TARGET/boot/efi/EFI/boot/boot$fwa.efi"

    # add cmdline linux defaults to /etc/default/grub
    echo "GRUB_CMDLINE_LINUX_DEFAULT=\"modules=$KERNEL_MODS $KERNEL_OPTS\"" >> "$TARGET"/etc/default/grub

    # eliminate grub pause
    sed -i -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=0/' "$TARGET/etc/default/grub"

    # generate/install new config
    [ -e "$TARGET/boot/grub/grub.cfg" ] && cp "$TARGET/boot/grub/grub.cfg" "$TARGET/boot/grub/grub.cfg.backup"
    chroot "$TARGET" grub-mkconfig -o /boot/grub/grub.cfg
}
setup_fstab() {
    cat > "$TARGET/etc/fstab" <<EOF
# <fs>    <mountpoint>  <type>  <opts>            <dump/pass>
LABEL=/   /             ext4    defaults,noatime  1 1
EOF

    # if we're using grub-efi bootloader, add extra line for EFI partition
    if [ "$BOOTLOADER" = 'grub-efi' ]; then
        echo "LABEL=EFI  /boot/efi  vfat  defaults,noatime,uid=0,gid=0,umask=077  0 0" >> "$TARGET/etc/fstab"
    fi
}

setup_networking() {
    cat > "$TARGET/etc/network/interfaces" <<EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF
}
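# enable_services expects $SVCS in the runlevel=svc,svc... format produced by
# resolve-profile.py, e.g. SVCS="default=sshd,chronyd boot=mdev" results in:
#   rc_add default sshd chronyd
#   rc_add boot mdev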
enable_services() {
    for lvl_svcs in $SVCS; do
        rc_add $(echo "$lvl_svcs" | tr '=,' ' ')
    done
}
# TODO: allow profile to specify an alternate ALPINE_USER?
# NOTE: tiny-ec2-bootstrap will need to be updated to support that!
create_alpine_user() {
    # Allow members of the wheel group to sudo without a password. By default
    # this will only be the alpine user. This allows us to ship an AMI that is
    # accessible via SSH using the user's configured SSH keys (thanks to
    # tiny-ec2-bootstrap) but does not allow remote root access, which is best
    # practice.
    sed -i '/%wheel .* NOPASSWD: .*/s/^# //' "$TARGET/etc/sudoers"

    # There is no real standard EC2 username across AMIs; Amazon uses ec2-user
    # for their Amazon Linux AMIs, but Ubuntu uses ubuntu, Fedora uses fedora,
    # etc... (see: https://alestic.com/2014/01/ec2-ssh-username/). So our user
    # and group are alpine because this is Alpine Linux. On instance bootstrap
    # the user can create whatever users they want and delete this one.
    chroot "$TARGET" /usr/sbin/addgroup alpine
    chroot "$TARGET" /usr/sbin/adduser -h /home/alpine -s /bin/sh -G alpine -D alpine
    chroot "$TARGET" /usr/sbin/addgroup alpine wheel
    chroot "$TARGET" /usr/bin/passwd -u alpine
}
configure_ntp() {
    # EC2 provides an instance-local NTP service synchronized with GPS and
    # atomic clocks in-region. Prefer this over external NTP hosts when
    # running in EC2.
    #
    # See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
    sed -e 's/^pool /server /' \
        -e 's/pool.ntp.org/169.254.169.123/g' \
        -i "$TARGET/etc/chrony/chrony.conf"
}
cleanup() {
    # Sweep cruft out of the image that doesn't need to ship or will be
    # re-generated when the image boots
    rm -f \
        "$TARGET/var/cache/apk/"* \
        "$TARGET/etc/resolv.conf" \
        "$TARGET/root/.ash_history" \
        "$TARGET/etc/"*-

    [ "$BOOTLOADER" = 'grub-efi' ] && umount "$TARGET/boot/efi"

    umount \
        "$TARGET/dev" \
        "$TARGET/proc" \
        "$TARGET/sys"

    umount "$TARGET"
}
main() {
    validate_block_device

    [ -d "$TARGET" ] || mkdir "$TARGET"

    einfo "Fetching static APK tools"
    apk="$(fetch_apk_tools)"

    einfo "Creating root filesystem"
    make_filesystem

    einfo "Configuring Alpine repositories"
    setup_repositories

    einfo "Fetching Alpine signing keys"
    fetch_keys

    einfo "Installing base system"
    install_base

    setup_chroot

    einfo "Installing core packages"
    install_core_packages

    einfo "Configuring and enabling boot loader"
    create_initfs
    install_bootloader

    einfo "Configuring system"
    setup_mdev
    setup_fstab
    setup_networking
    enable_services
    create_alpine_user
    configure_ntp

    einfo "All done, cleaning up"
    cleanup
}

main "$@"

View File

@ -0,0 +1,62 @@
@PYTHON@
# vim: set ts=4 et:
import json
import os
import re
import sys
import yaml
if len(sys.argv) != 3:
    sys.exit("Usage: " + os.path.basename(__file__) + " <profile> <build>")

PROFILE = sys.argv[1]
BUILD = sys.argv[2]

SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
MANIFEST_JSON = os.path.join(
    SCRIPT_DIR, 'profile', PROFILE, BUILD, 'manifest.json'
)
RELEASE_DIR = os.path.join(SCRIPT_DIR, '..', 'releases')
RELEASE_YAML = os.path.join(RELEASE_DIR, PROFILE + '.yaml')

if not os.path.exists(RELEASE_DIR):
    os.makedirs(RELEASE_DIR)
releases = {}
if os.path.exists(RELEASE_YAML):
    with open(RELEASE_YAML, 'r') as data:
        releases = yaml.safe_load(data)

with open(MANIFEST_JSON, 'r') as data:
    MANIFEST = json.load(data)
A = re.split(':|,', MANIFEST['builds'][0]['artifact_id'])
ARTIFACTS = dict(zip(A[0::2], A[1::2]))
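# e.g. (hypothetical IDs) an artifact_id of
# 'us-east-1:ami-0a1b2c3d,us-west-2:ami-4e5f6a7b' parses to
# {'us-east-1': 'ami-0a1b2c3d', 'us-west-2': 'ami-4e5f6a7b'}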
BUILD_TIME = MANIFEST['builds'][0]['build_time']
DATA = MANIFEST['builds'][0]['custom_data']
RELEASE = DATA['release']
if BUILD not in releases:
    releases[BUILD] = {}

if RELEASE not in releases[BUILD]:
    releases[BUILD][RELEASE] = {}

REVISION = {
    'description': DATA['ami_desc'],
    'profile': PROFILE,
    'profile_build': BUILD,
    'version': DATA['version'],
    'release': RELEASE,
    'arch': DATA['arch'],
    'revision': DATA['revision'],
    'end_of_life': DATA['end_of_life'],
    'build_time': BUILD_TIME,
    'artifacts': ARTIFACTS
}

releases[BUILD][RELEASE][DATA['ami_name']] = REVISION

with open(RELEASE_YAML, 'w') as data:
    yaml.dump(releases, data, sort_keys=False)

View File

@ -1,72 +0,0 @@
@PYTHON@

import re
import yaml
import boto3

# All Alpine AMIs should match this regex if they're valid
AMI_RE = re.compile(r"^Alpine-(\d+\.\d+)(?:-r(\d+))?-Hardened-EC2")

# Load current AMI version from config
with open("alpine-ami.yaml") as fp:
    ami_cfg = yaml.full_load(fp)["variables"]
    current = (float(ami_cfg["alpine_release"]), int(ami_cfg["ami_release"]))

# Fetch all matching AMIs
amis = []
for region in boto3.session.Session().get_available_regions("ec2"):
    ec2 = boto3.client("ec2", region_name=region)

    for image in ec2.describe_images(Owners=["self"])["Images"]:
        match = AMI_RE.match(image["Name"])
        if not match:
            continue

        os_rel, ami_rel = match.groups()
        amis.append((
            region, image["ImageId"],
            image["BlockDeviceMappings"][0]["Ebs"]["SnapshotId"],
            float(os_rel), int(ami_rel) if ami_rel else 0))

# Determine the set to discard based on region and version
ok_regions = set()
discards = []

# Cluster candidates by region/version pair, newest in a region first.
# This should result in the first match for a region always being the newest
# AMI for that region and all subsequent matches in the region being old.
# Even so we must keep track of regions with current images on the off-chance
# that a region only has old images. In that case we want to preserve the old
# images till we can publish new ones manually so users can still launch
# Alpine systems without interruption.
candidates = sorted(amis, key=lambda i: (i[0], (i[1], i[3])), reverse=True)
for ami in candidates:
    (region, ami, snapshot), version = ami[:3], ami[3:]

    if version > current:
        print("{} has AMI '{}' newer than current".format(region, ami))
        continue
    elif version == current:
        ok_regions.add(region)
        continue
    elif version < current and region in ok_regions:
        discards.append((region, ami, snapshot))
    else:
        print("Not discarding old image in {}".format(region))
        continue

# Scrub the old ones
for region, image, snapshot in discards:
    print("Removing image '{}', snapshot '{}' in {}".format(
        image, snapshot, region))
    ec2 = boto3.client("ec2", region_name=region)
    ec2.deregister_image(ImageId=image)
    ec2.delete_snapshot(SnapshotId=snapshot)

View File

@ -1,77 +0,0 @@
### Builder-Instance Options ###
# Region to build in, if we initiate a build from outside AWS
region:
# Subnet ID in which the builder instance is to be launched. VPC will be
# automatically determined.
subnet:
# Optional security group to apply to the builder instance
security_group:
# By default, public IPs are assigned (or not) per the subnet's configuration.
# Set to "true" or "false" to explicitly override the subnet's public IP auto-
# assign configuration.
public_ip: ""
### Build Options ###
# Uncomment/increment for every rebuild of an Alpine release;
# re-comment/zero for every new Alpine release
#revision: "-1"
# AMI name prefix and suffix
ami_name_prefix: "alpine-ami-"
ami_name_suffix: ""
# AMI description prefix and suffix
ami_desc_prefix: "Alpine Linux "
ami_desc_suffix: " - https://github.com/mcrute/alpine-ec2-ami"
# List of custom lines to add to /etc/apk/repositories
add_repos:
# - "@my-repo http://my-repo.tld/path"
# List of additional packages to add to the AMI.
add_pkgs:
# - package-name
# Additional services to start at the specified level
add_svcs:
# boot:
# - service1
# default:
# - service2
# Size of the AMI image (in GiB).
volume_size: "1"
# Encrypt the AMI?
encrypt_ami: "false"
# List of groups that should have access to the AMI. Only two values are
# currently supported: 'all' for public, '' or unset for private.
ami_access:
- "all"
# List of regions to where the AMI should be copied
deploy_regions:
- "us-east-1"
- "us-east-2"
- "us-west-1"
- "us-west-2"
- "ca-central-1"
- "eu-central-1"
- "eu-north-1"
- "eu-west-1"
- "eu-west-2"
- "eu-west-3"
- "ap-northeast-1"
- "ap-northeast-2"
# - "ap-northeast-3" # skipped, available by subscription only
- "ap-southeast-1"
- "ap-southeast-2"
- "ap-south-1"
- "sa-east-1"