# Alpine Linux Cloud Image Builder
This repository contains the code and configs for the build system used to
create official Alpine Linux images for various cloud providers, in various
configurations. This build system is flexible, enabling others to build their
own customized images.

----
## Pre-Built Official Cloud Images
To get started with official pre-built Alpine Linux cloud images, visit
https://alpinelinux.org/cloud. Currently, we build official images for the
following cloud platforms...
* AWS

...we are also working on publishing official images to other major cloud
providers.

Each published image's name contains the Alpine version release, architecture,
firmware, bootstrap, and image revision. These details (and more) are also
tagged on the images...

| Tag | Description / Values |
|-----|----------------------|
| name | `alpine-`_`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-r`_`revision`_ |
| project | `https://alpinelinux.org/cloud` |
| image_key | _`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-`_`cloud`_ |
| version | Alpine version (_`x.y`_ or `edge`) |
| release | Alpine release (_`x.y.z`_ or _`YYYYMMDD`_ for edge) |
| arch | architecture (`aarch64` or `x86_64`) |
| firmware | boot mode (`bios` or `uefi`) |
| bootstrap | initial bootstrap system (`tiny` = Tiny Cloud) |
| cloud | provider short name (`aws`) |
| revision | image revision number |
| imported | image import timestamp |
| import_id | imported image id |
| import_region | imported image region |
| published | image publication timestamp |
| description | image description |
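
For example, a hypothetical name following this pattern breaks down as follows
(the specific values shown here are illustrative only):
```
# hypothetical image name and its components:
#   alpine-3.17.0-x86_64-uefi-tiny-r0
#   release=3.17.0  arch=x86_64  firmware=uefi  bootstrap=tiny  revision=0
```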
Although AWS does not allow cross-account filtering by tags, the image name can
still be used to filter images. For example, to get a list of available Alpine
3.x aarch64 images in AWS eu-west-2...
```
aws ec2 describe-images \
  --region eu-west-2 \
  --owners 538276064493 \
  --filters \
    Name=name,Values='alpine-3.*-aarch64-*' \
    Name=state,Values=available \
  --output text \
  --query 'reverse(sort_by(Images, &CreationDate))[].[ImageId,Name,CreationDate]'
```
To get just the most recent matching image, use...
```
--query 'max_by(Images, &CreationDate).[ImageId,Name,CreationDate]'
```
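
Combining that query with the command above, the most recent Alpine 3.x
aarch64 image in eu-west-2 can be listed with:
```
aws ec2 describe-images \
  --region eu-west-2 \
  --owners 538276064493 \
  --filters \
    Name=name,Values='alpine-3.*-aarch64-*' \
    Name=state,Values=available \
  --output text \
  --query 'max_by(Images, &CreationDate).[ImageId,Name,CreationDate]'
```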
----
## Build System
The build system consists of a number of components:
* the primary `build` script
* the `configs/` directory, defining the set of images to be built
* the `scripts/` directory, containing scripts and related data used to set up
image contents during provisioning
* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
of images
* the `cloud_helper.py` script that Packer runs in order to do cloud-specific
import and publish operations
### Build Requirements
* [Python](https://python.org) (3.9.7 is known to work)
* [Packer](https://packer.io) (1.7.6 is known to work)
* [QEMU](https://www.qemu.org) (6.1.0 is known to work)
* cloud provider account(s)
### Cloud Credentials
By default, the build system relies on the cloud providers' Python API
libraries to find and use the necessary credentials, usually via configuration
under the user's home directory (e.g. `~/.aws/`, `~/.oci/`, etc.) or via
environment variables (e.g. `AWS_...`, `OCI_...`, etc.).
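
For AWS, for example, credentials might be supplied via the standard
environment variables (a minimal sketch; the values are placeholders, and a
`~/.aws/` profile works just as well):
```
# placeholder values -- use a profile or key with only the needed permissions
export AWS_PROFILE=alpine-image-builder   # hypothetical named profile
export AWS_DEFAULT_REGION=us-east-1       # default region for API calls
```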
The credentials' user/role needs sufficient permission to query, import, and
publish images -- the exact details will vary from cloud to cloud. _It is
recommended that only the minimum required permissions are granted._

_We manage the credentials for publishing official Alpine images with an
"identity broker" service, and retrieve those credentials via the
`--use-broker` argument of the `build` script._
### The `build` Script
```
usage: build [-h] [--debug] [--clean] [--custom DIR [DIR ...]]
             [--skip KEY [KEY ...]] [--only KEY [KEY ...]] [--revise] [--use-broker]
             [--no-color] [--parallel N] [--vars FILE [FILE ...]]
             {configs,state,local,import,publish}

positional arguments: (build up to and including this step)
  configs                 resolve image build configuration
  state                   refresh current image build state
  local                   build images locally
  import                  import local images to cloud provider default region
  publish                 set image permissions and publish to cloud regions

optional arguments:
  -h, --help              show this help message and exit
  --debug                 enable debug output
  --clean                 start with a clean work environment
  --custom DIR [DIR ...]  overlay custom directory in work environment
  --skip KEY [KEY ...]    skip variants with dimension key(s)
  --only KEY [KEY ...]    only variants with dimension key(s)
  --revise                remove existing local/imported image, or bump
                          revision and rebuild if published
  --use-broker            use the identity broker to get credentials
  --no-color              turn off Packer color output
  --parallel N            build N images in parallel (default: 1)
  --vars FILE [FILE ...]  supply Packer with -vars-file(s)
```
The `build` script will automatically create a `work/` directory containing a
Python virtual environment if one does not already exist. This directory also
hosts other data related to building images. The `--clean` argument will
remove everything in the `work/` directory except for things related to the
Python virtual environment.

If `work/configs/` or `work/scripts/` directories do not yet exist, they will
be populated with the base configuration and scripts from `configs/` and/or
`scripts/` directories. If any custom overlay directories are specified with
the `--custom` argument, their `configs/` and `scripts/` subdirectories are
also added to `work/configs/` and `work/scripts/`.
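
For example, to resolve configs into a fresh work environment with a custom
overlay applied (a sketch; the overlay path is hypothetical, and the script is
assumed to be run from the repository root):
```
# start clean and overlay a custom configs/ + scripts/ directory
./build --clean --custom ./my-overlay configs
```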
The "build step" positional argument deterimines the last step the `build`
script should execute -- all steps before this targeted step may also be
executed. That is, `build local` will first execute the `configs` step (if
necessary) and then the `state` step (always) before proceeding to the `local`
step.

The `configs` step resolves configuration for all buildable images, and writes
it to `work/images.yaml`, if it does not already exist.

The `state` step always checks the current state of the image builds,
determines what actions need to be taken, and updates `work/images.yaml`. A
subset of image builds can be targeted by using the `--skip` and `--only`
arguments. The `--revise` argument indicates that any _unpublished_ local
or imported images should be removed and rebuilt; as _published_ images can't
be removed, `--revise` instead increments the _`revision`_ value to rebuild
new images.
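
For example, a sketch of rebuilding only the AWS aarch64 variants, revising
any that are already published (the dimension keys shown are illustrative):
```
# target only variants matching these dimension keys, removing unpublished
# local/imported images and bumping the revision of published ones
./build --only aws aarch64 --revise local
```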
`local`, `import`, and `publish` steps are orchestrated by Packer. By default,
each image will be processed serially; providing the `--parallel` argument with
a value greater than 1 will parallelize operations. The degree to which you
can parallelize `local` image builds will depend on the local build hardware,
since a QEMU virtual machine is launched for each image being built. Image
`import` and `publish` steps are much more lightweight, and can support higher
parallelism.
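
For example, a sketch of importing already-built images four at a time
(assuming cloud credentials are configured as described above):
```
# import local images to the default region, processing 4 images in parallel
./build --parallel 4 import
```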
The `local` step uses QEMU to build any images that are not already built
locally or already imported.

The `import` step imports the local images into the cloud providers' default
regions, unless they've already been imported. At this point the images are
not available publicly, allowing for additional testing prior to publishing.

The `publish` step copies each image from the default region to other regions,
if it hasn't already been copied there. This step will always update
image permissions, descriptions, tags, and deprecation date (if applicable)
in all regions where the image has been published.
### The `cloud_helper.py` Script
This script is meant to be called only by Packer from its `post-processor`
block for image `import` and `publish` steps.

----
## Build Configuration
For more in-depth information about how the build system configuration works,
how to create custom config overlays, and details about individual config
settings, see [CONFIGURATION.md](CONFIGURATION.md).