Update Docs / Misc Refinements

Fleshed out Documentation

alpine.conf
* improve motd readability
* default access = public
* default regions = all
* remove version 3.11 (EOL)

alpine-testing.conf
* access is private
* limit aws regions

build
* improve/refine overlay installation
* rename "actions" step to "state"

image_configs.py
* target step "state" updates images.yaml as if "publish" WOULD be done (but won't be)
Jake Buchholz Göktürk 2021-12-26 21:52:47 +00:00
parent 0cf623f7a5
commit c1469d6c31
8 changed files with 533 additions and 236 deletions

CONFIGURATION.md Normal file

@ -0,0 +1,318 @@
# Configuration
All the configuration for building image variants is defined by multiple
config files; the base configs for official Alpine Linux cloud images are in
the [`configs/`](configs/) directory.
We use [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md) for
configuration -- this primarily facilitates importing deeper configs from
other files, but also allows the extension/concatenation of arrays and maps
(which can be a useful feature for customization), and inline comments.
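For example, a deeper config can import shared settings and then extend them (the file names here are illustrative, not from this repo)...
```
# shared.conf (hypothetical)
packages.openssh-server = true

# variant.conf (hypothetical) -- imports and extends the shared map
include required("shared.conf")
packages.nano = true   # maps merge, so both packages end up set
```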
----
## Resolving Work Environment Configs and Scripts
If `work/configs/` and `work/scripts/` don't exist, the `build` script will
install the contents of the base [`configs/`](configs/) and [`scripts/`](scripts/)
directories, and overlay additional `configs/` and `scripts/` subdirectories
from `--custom` directories (if any).
Files cannot be installed over existing files, with one exception -- the
[`configs/images.conf`](configs/images.conf) same-directory symlink. Because
the `build` script _always_ loads `work/configs/images.conf`, this is the hook
for "rolling your own" custom Alpine Linux cloud images.
The base [`configs/images.conf`](configs/images.conf) symlinks to
[`alpine.conf`](configs/alpine.conf), but this can be overridden using a
`--custom` directory containing a new `configs/images.conf` same-directory
symlink pointing to its custom top-level config.
For example, the configs and scripts in the [`overlays/testing/`](overlays/testing/)
directory can be resolved in a _clean_ work environment with...
```
./build configs --custom overlays/testing
```
This results in the `work/configs/images.conf` symlink pointing to
`work/configs/alpine-testing.conf` instead of `work/configs/alpine.conf`.
If multiple directories are specified with `--custom`, they are applied in
the order given.
----
## Top-Level Config File
Examples of top-level config files are [`configs/alpine.conf`](configs/alpine.conf)
and [`overlays/testing/configs/alpine-testing.conf`](overlays/testing/configs/alpine-testing.conf).
There are three main blocks that need to exist in (or be `import`ed into) the
top-level HOCON configuration; they are merged in this exact order:
### `Default`
All image variant configs start with this block's contents as a starting point.
Arrays and maps can be appended by configs in `Dimensions` and `Mandatory`
blocks.
### `Dimensions`
The sub-blocks in `Dimensions` define the "dimensions" a variant config is
composed of, and the different config values possible for each dimension.
The default [`alpine.conf`](configs/alpine.conf) defines the following
dimensional configs:
* `version` - Alpine Linux _x_._y_ (plus `edge`) versions
* `arch` - machine architectures, `x86_64` or `aarch64`
* `firmware` - supports launching via legacy BIOS or UEFI
* `bootstrap` - the system/scripts responsible for setting up an instance
during its initial launch
* `cloud` - for specific cloud platforms
The specific dimensional configs for an image variant are merged in the order
that the dimensions are listed.
### `Mandatory`
After a variant's dimensional configs have been applied, this is the last block
that's merged to the image variant configuration. This block is the ultimate
enforcer of any non-overrideable configuration across all variants, and can
also provide the last element to array config items.
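A minimal top-level config might look like the following sketch (block contents are elided; this is an illustration, not a working config)...
```
Default {
  # settings every image variant starts with
}
Dimensions {
  version { "3.15" { ... }  edge { ... } }
  cloud   { aws { ... } }
}
Mandatory {
  # merged last -- non-overrideable settings
}
```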
----
## Dimensional Config Directives
Because a full cross-product across all dimensional configs may produce image
variants that are not viable (e.g. `aarch64` simply does not support legacy
`bios`), or may require further adjustments (e.g. the `aws` `aarch64` images
require an additional kernel module from `3.15` forward, which isn't available
in earlier versions), we have two special directives which may appear in
dimensional configs.
### `EXCLUDE` array
This directive provides an array of dimensional config keys which are
incompatible with the current dimensional config. For example,
[`configs/arch/aarch64.conf`](configs/arch/aarch64.conf) specifies...
```
# aarch64 is UEFI only
EXCLUDE = [bios]
```
...which indicates that any image variant that includes both `aarch64` (the
current dimensional config) and `bios` configuration should be skipped.
### `WHEN` block
This directive conditionally merges additional configuration ***IF*** the
image variant also includes a specific dimensional config key (or keys). In
order to handle more complex situations, `WHEN` blocks may be nested. For
example, [`configs/cloud/aws.conf`](configs/cloud/aws.conf) has...
```
WHEN {
  aarch64 {
    # new AWS aarch64 default...
    kernel_modules.gpio_pl061 = true
    initfs_features.gpio_pl061 = true
    WHEN {
      "3.14 3.13 3.12" {
        # ...but not supported for older versions
        kernel_modules.gpio_pl061 = false
        initfs_features.gpio_pl061 = false
      }
    }
  }
}
```
This configures AWS `aarch64` images to use the `gpio_pl061` kernel module in
order to cleanly shut down or reboot instances from the web console, CLI, or
SDK. However, this module is unavailable on older Alpine versions.
Spaces in `WHEN` block keys serve as an "OR" operator; nested `WHEN` blocks
function as "AND" operators.
----
## Config Settings
**Scalar** values can be simply overridden in later configs.
**Array** and **map** settings in later configs are merged with the previous
values, _or entirely reset if it's first set to `null`_, for example...
```
some_array = [ thing ]
# [...]
some_array = null
some_array = [ other_thing ]
```
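In Python terms, the merge rule above behaves roughly like this simplified sketch (a hypothetical illustration, not the build system's actual implementation)...
```python
def merge_setting(previous, update):
    """Merge a later config value onto an earlier one (simplified sketch)."""
    if update is None:
        return None                   # null resets; the next value starts fresh
    if isinstance(previous, list) and isinstance(update, list):
        return previous + update      # arrays append to previous values
    if isinstance(previous, dict) and isinstance(update, dict):
        merged = dict(previous)
        for key, value in update.items():
            merged[key] = merge_setting(merged.get(key), value)
        return merged
    return update                     # scalars are simply overridden

value = ["thing"]
value = merge_setting(value, None)             # some_array = null
value = merge_setting(value, ["other_thing"])  # some_array = [ other_thing ]
assert value == ["other_thing"]
```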
Mostly in order of appearance, as we walk through
[`configs/alpine.conf`](configs/alpine.conf) and the deeper configs it
imports...
### `project` string
This is a unique identifier for the whole collection of images being built.
For the official Alpine Linux cloud images, this is set to
`https://alpinelinux.org/cloud`.
When building custom images, you **MUST** override **AT LEAST** this setting to
avoid image import and publishing collisions.
### `name` array
The ultimate contents of this array contribute to the overall naming of the
resultant image. Almost all dimensional configs add to the `name` array, with
two notable exceptions. First, a **version** config's contribution is
determined when `work/images.yaml` is resolved, and is set to the current
Alpine Linux release (_x.y.z_, or _YYYYMMDD_ for edge). Second, because
**cloud** images are isolated from each other, including the cloud in the
image name would be redundant.
### `description` array
Similar to the `name` array, the elements of this array contribute to the final
image description. However, for the official Alpine configs, only the
**version** dimension adds to this array, via the same mechanism that sets the
revision for the `name` array.
### `motd` map
This setting controls the contents of what ultimately gets written into the
variant image's `/etc/motd` file. Later configs can add additional messages,
replace existing contents, or remove them entirely (by setting the value to
`null`).
The `motd.version_notes` and `motd.release_notes` settings have slightly
different behavior:
* if the Alpine release (_x.y.z_) ends with `.0`, `release_notes` is dropped
to avoid redundancy
* edge versions are technically not released, so both of these notes are
dropped from `/etc/motd`
* otherwise, `version_notes` and `release_notes` are concatenated together as
`release_notes` to avoid a blank line between them
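The bullet points above amount to roughly the following logic (a hypothetical Python sketch, not the actual implementation)...
```python
def resolve_motd_notes(version, release, version_notes, release_notes):
    """Decide which notes survive into /etc/motd (simplified sketch)."""
    if version == "edge":
        return {}  # edge is technically not released; drop both notes
    if release.endswith(".0"):
        return {"version_notes": version_notes}  # release_notes is redundant
    # concatenate as release_notes to avoid a blank line between them
    return {"release_notes": version_notes + "\n" + release_notes}
```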
### `scripts` array
These are the scripts that will be executed by Packer, in order, to do various
setup tasks inside a variant's image. The `work/scripts/` directory contains
all scripts, including those that may have been added via `build --custom`.
### `script_dirs` array
Directories (under `work/scripts/`) that contain additional data that the
`scripts` will need. Packer will copy these to the VM responsible for setting
up the variant image.
### `size` string
The size of the image disk, by default we use `1G` (1 GiB). This disk may (or
may not) be further partitioned, based on other factors.
### `login` string
The image's primary login user, set to `alpine`.
### `local_format` string
The local VM's disk image format, set to `qcow2`.
### `repos` map
Defines the contents of the image's `/etc/apk/repositories` file. The map's
key is the URL of the repo, and the value determines how that URL will be
represented in the `repositories` file...
| value | result |
|-|-|
| `null` | make no reference to this repo |
| `false` | this repo is commented out (disabled) |
| `true` | this repo is enabled for use |
| _tag_ | enable this repo with `@`_`tag`_ |
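For example (the URLs and tag here are illustrative)...
```
repos {
  "https://dl-cdn.alpinelinux.org/alpine/v3.15/main"      = true
  "https://dl-cdn.alpinelinux.org/alpine/v3.15/community" = true
  "https://dl-cdn.alpinelinux.org/alpine/edge/testing"    = edge-testing
}
```
...which enables the first two repos, and enables the third as `@edge-testing`.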
### `packages` map
Defines what APK packages to add/delete. The map's key is the package
name, and the value determines whether (or not) to install/uninstall the
package...
| value | result |
|-|-|
| `null` | don't add or delete |
| `false` | explicitly delete |
| `true` | add from default repos |
| _tag_ | add from `@`_`tag`_ repo |
| `--no-scripts` | add with `--no-scripts` option |
| `--no-scripts` _tag_ | add from `@`_`tag`_ repo, with `--no-scripts` option |
### `services` map of maps
Defines what services are enabled/disabled at various runlevels. The first
map's key is the runlevel, the second key is the service. The service value
determines whether (or not) to enable/disable the service at that runlevel...
| value | result |
|-|-|
| `null` | don't enable or disable |
| `false` | explicitly disable |
| `true` | explicitly enable |
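For example (the runlevels and services here are illustrative)...
```
services {
  sysinit {
    acpid = true
  }
  default {
    sshd  = true
    crond = false   # explicitly disable at the "default" runlevel
  }
}
```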
### `kernel_modules` map
Defines what kernel modules are specified in the boot loader. The key is the
kernel module, and the value determines whether or not it's in the final
list...
| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |
### `kernel_options` map
Defines what kernel options are specified on the kernel command line. The keys
are the kernel options, and the values determine whether or not they're in the
final list...
| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |
### `initfs_features` map
Defines what initfs features are included when making the image's initramfs
file. The keys are the initfs features, and the values determine whether or
not they're included in the final list...
| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |
### `builder` string
The Packer builder that's used to build images. This is set to `qemu`.
### `qemu.machine_type` string
The QEMU machine type to use when building local images. For x86_64, this is
set to `null`; for aarch64, we use `virt`.
### `qemu.args` list of lists
Additional QEMU arguments. For x86_64, this is set to `null`; but aarch64
requires several additional arguments to start an operational VM.
### `qemu.firmware` string
The path to the QEMU firmware (installed in `work/firmware/`). This is only
used when creating UEFI images.
### `bootloader` string
The bootloader to use, currently `extlinux` or `grub-efi`.
### `access` map
When images are published, this determines who has access to those images.
The key is the cloud account (or `PUBLIC`), and the value is whether or not
access is granted, `true` or `false`/`null`.
### `regions` map
Determines where images should be published. The key is the region
identifier (or `ALL`), and the value is whether or not to publish to that
region, `true` or `false`/`null`.
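Consistent with the defaults described for `alpine.conf` (public access, all regions), a sketch of these two settings might be...
```
access.PUBLIC = true   # grant everyone access to published images
regions.ALL   = true   # publish to all available regions
```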

README.md

@ -1,11 +1,3 @@
## _**NOTE: This is a Work-in-Progress**_
_It is intended that this will eventually replace
https://gitlab.alpinelinux.org/alpine/cloud/alpine-ec2-ami
as the offical multi-cloud image builder for Alpine Linux._
----
# Alpine Linux Cloud Image Builder
This repository contains the code and configs for the build system used to
@ -18,26 +10,64 @@ own customized images.
To get started with official pre-built Alpine Linux cloud images, visit
https://alpinelinux.org/cloud. Currently, we build official images for the
following providers:
following cloud platforms...
* AWS
You should also be able to find the most recently published Alpine Linux
images via your cloud provider's web console, or programmatically query their
API with a CLI tool or library.
...we are working on also publishing official images to other major cloud
providers.
_(TODO: examples)_
Each published image's name contains the Alpine release, architecture,
firmware, bootstrap, and image revision. These details (and more) are also
tagged on the images...
| Tag | Description / Values |
|-----|----------------------|
| name | `alpine-`_`release-arch-firmware-bootstrap-`_`r`_`revision`_ |
| project | `https://alpinelinux.org/cloud` |
| image_key | _`release-arch-firmware-bootstrap-cloud`_ |
| version | Alpine version (_`x.y`_ or `edge`) |
| release | Alpine release (_`x.y.z`_ or _`YYYYMMDD`_ for edge) |
| arch | architecture (`aarch64` or `x86_64`) |
| firmware | boot mode (`bios` or `uefi`) |
| bootstrap | initial bootstrap system (`tiny` = tiny-ec2-bootstrap) |
| cloud | provider short name (`aws`) |
| revision | image revision number |
| imported | image import timestamp |
| import_id | imported image id |
| import_region | imported image region |
| published | image publication timestamp |
| description | image description |
Although AWS does not allow cross-account filtering by tags, the image name can
still be used to filter images. For example, to get a list of available Alpine
3.x aarch64 images in AWS eu-west-2...
```
aws ec2 describe-images \
--region eu-west-2 \
--owners 538276064493 \
--filters \
Name=name,Values='alpine-3.*-aarch64-*' \
Name=state,Values=available \
--output text \
--query 'reverse(sort_by(Images, &CreationDate))[].[ImageId,Name,CreationDate]'
```
To get just the most recent matching image, use...
```
--query 'max_by(Images, &CreationDate).[ImageId,Name,CreationDate]'
```
----
## Build System
The build system consists of a number of components:
* the primary `build` script and related cloud-specific helpers
* a directory of `configs/` defining the set of images to be built
* a Packer `alpine.pkr.hcl` orchestrating the images' local build, as well as
importing them to cloud providers and publishing them to destination regions
* a directory of `scripts/` which set up the images' contents during
provisioning
* the primary `build` script
* the `configs/` directory, defining the set of images to be built
* the `scripts/` directory, containing scripts and related data used to set up
image contents during provisioning
* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
of images
* the `cloud_helper.py` script that Packer runs in order to do cloud-specific
import and publish operations
### Build Requirements
* [Python](https://python.org) (3.9.7 is known to work)
@ -47,182 +77,105 @@ The build system consists of a number of components:
### Cloud Credentials
This build system relies on the cloud providers' Python API libraries to find
and use the necessary credentials -- via configuration in the user's home
directory (i.e. `~/.aws/...`, `~/.oci/...`, etc.) or with special environment
variables (i.e. `AWS_...`, `OCI_...`, etc.)
By default, the build system relies on the cloud providers' Python API
libraries to find and use the necessary credentials, usually via configuration
under the user's home directory (i.e. `~/.aws/`, `~/.oci/`, etc.) or via
environment variables (i.e. `AWS_...`, `OCI_...`, etc.)
It is expected that each cloud provider's user/role will have been set up with
sufficient permission in order to accomplish the operations necessary to query,
import, and publish images; _it is highly recommended that no permissions are
granted beyond what is absolutely necessary_.
The credentials' user/role needs sufficient permission to query, import, and
publish images -- the exact details will vary from cloud to cloud. _It is
recommended that only the minimum required permissions are granted._
_We manage the credentials for publishing official Alpine images with an
"identity broker" service, and retrieve those credentials via the
`--use-broker` argument of the `build` script._
### The `build` Script
```
usage: build [-h] [--debug] [--clean] [--revise] {configs,local,import,publish}
[--custom DIR [DIR ...]] [--skip KEY [KEY ...]] [--only KEY [KEY ...]]
usage: build [-h] [--debug] [--clean] [--custom DIR [DIR ...]]
[--skip KEY [KEY ...]] [--only KEY [KEY ...]] [--revise] [--use-broker]
[--no-color] [--parallel N] [--vars FILE [FILE ...]]
{configs,state,local,import,publish}
build steps:
configs resolve build configuration
local build local images
import import to cloud providers
publish set permissions and publish to cloud regions
positional arguments: (build up to and including this step)
configs resolve image build configuration
state refresh current image build state
local build images locally
import import local images to cloud provider default region
publish set image permissions and publish to cloud regions
optional arguments:
-h, --help show this help message and exit
--debug enable debug output (False)
--clean start with a clean work environment (False)
--revise bump revisions if images already published (False)
--debug enable debug output
--clean start with a clean work environment
--custom DIR [DIR ...] overlay custom directory in work environment
--skip KEY [KEY ...] skip variants with dimension key(s)
--only KEY [KEY ...] only variants with dimension key(s)
--no-color turn off Packer color output (False)
--parallel N build N images in parallel (1)
--vars FILE [FILE ...] supply Packer with additional -vars-file(s)
--revise remove existing local/imported image, or bump
revision and rebuild if published
--use-broker use the identity broker to get credentials
--no-color turn off Packer color output
--parallel N build N images in parallel (default: 1)
--vars FILE [FILE ...] supply Packer with -vars-file(s)
```
A `work/` directory will be created for its Python virtual environment, any
necessary Python libraries will be `pip install`ed, and `build` will execute
itself to ensure that it's running in the work environment.
The `build` script will automatically create a `work/` directory containing a
Python virtual environment if one does not already exist. This directory also
hosts other data related to building images. The `--clean` argument will
remove everything in the `work/` directory except for things related to the
Python virtual environment.
This directory also contains `configs/` and `scripts/` subdirs (with custom
overlays), UEFI firmware for QEMU, Packer cache, the generated `configs.yaml`
and `actions.yaml` configs, and the `images/` tree for local image builds.
If `work/configs/` or `work/scripts/` directories do not yet exist, they will
be populated with the base configuration and scripts from `configs/` and/or
`scripts/` directories. If any custom overlay directories are specified with
the `--custom` argument, their `configs/` and `scripts/` subdirectories are
also added to `work/configs/` and `work/scripts/`.
Use `--clean` if you want to re-overlay, re-download, re-generate, or rebuild
anything in the `work/` directory. To redo the Python virtual environment,
simply remove the `work/` directory and its contents, and it will be recreated
the next time `build` is run.
The "build step" positional argument determines the last step the `build`
script should execute -- all steps before this targeted step may also be
executed. That is, `build local` will first execute the `configs` step (if
necessary) and then the `state` step (always) before proceeding to the `local`
step.
### Build Steps
The `configs` step resolves configuration for all buildable images, and writes
it to `work/images.yaml`, if it does not already exist.
When executing `build` you also provide the target step you wish to reach. For
example, if you only want to build local images, use `build local`. Any
predecessor steps which haven't been done will also be executed -- that is,
`build local` also implies `build configs` if that step hasn't completed yet.
The `state` step always checks the current state of the image builds,
determines what actions need to be taken, and updates `work/images.yaml`. A
subset of image builds can be targeted by using the `--skip` and `--only`
arguments. The `--revise` argument indicates that any _unpublished_ local
or imported images should be removed and rebuilt; as _published_ images can't
be removed, `--revise` instead increments the _`revision`_ value to rebuild
new images.
The **configs** step determines the latest stable Alpine Linux release, and
ensures that the `configs/` and `scripts/` overlays, UEFI firmware, and
`configs.yaml` exist. This allows you to validate the generated build variant
configuration before attempting to build any images locally.
`local`, `import`, and `publish` steps are orchestrated by Packer. By default,
each image will be processed serially; providing the `--parallel` argument with
a value greater than 1 will parallelize operations. The degree to which you
can parallelize `local` image builds will depend on the local build hardware --
as QEMU virtual machines are launched for each image being built. Image
`import` and `publish` steps are much more lightweight, and can support higher parallelism.
If `build` is moving on past **configs** to other steps, it will determine which
image variants to work on (based on `--skip` and `--only` values) and what
actions will be taken, based on existence of local/imported/published images, and
generate the `actions.yaml` file. Providing the `--revise` flag allows you to
rebuild local images that were previously built, reimport unpublished images to
cloud providers, and bump the "revision" value of previously published images --
this is useful if published images require fixes but the Alpine release itself
isn't changing; published images are not removed (though they may be pruned once
their "end-of-life" date has passed).
The `local` step builds local images with QEMU, for those that are not already
built locally or have already been imported.
At this point, `build` executes Packer, which is responsible for the remaining
**local**, **import**, and **publish** steps -- and also for parallelization, if
the `--parallel` argument is given. Because build hardware varies, it is also
possible to tune a number of QEMU timeouts and memory requirements by providing
an HCL2 Packer Vars file and specifying `--vars <filename>` to override the
defaults in `alpine.pkr.hcl`.
The `import` step imports the local images into the cloud providers' default
regions, unless they've already been imported. At this point the images are
not available publicly, allowing for additional testing prior to publishing.
### Packer and `alpine.pkr.hcl`
Packer loads and merges `actions.yaml` and `configs.yaml`, and iterates the
resulting object in order to determine what it should do with each image
variant configuration.
`alpine.pkr.hcl` defines two base `source` blocks -- `null` is used when an
image variant is already built locally and/or already imported to the
destination cloud provider; otherwise, the `qemu` source is used.
The `qemu` builder spins up a QEMU virtual machine with a blank virtual disk
attached, using the latest stable Alpine Linux Virtual ISO, brings up the VM's
network, enables the SSH daemon, and sets a random password for root.
If an image variant is to be **built locally**, the two dynamic provisioners copy
the required data for the setup scripts to the VM's `/tmp/` directory, and then
run those setup scripts. It's these scripts that are ultimately responsible for
installing and configuring the desired image on the attached virtual disk.
When the setup scripts are complete, the virtual machine is shut down, and the
resulting local disk image can be found at
`work/images/<cloud>/<build-name>/image.qcow2`.
The dynamic post-processor uses the `cloud_helper.py` script to **import** a
local image to the cloud provider, and/or **publish** an imported image to the
cloud provider's destination regions, based on what actions are applicable for
that image variant. When the **publish** step is reapplied to an
already-published image, the script ensures that images have been copied to all
destination regions (for example, if the cloud provider recently added a new
region), and that all launch permissions are set as expected.
The `publish` step copies the image from the default region to other regions,
if they haven't already been copied there. This step will always update
image permissions, descriptions, tags, and deprecation date (if applicable)
in all regions where the image has been published.
### The `cloud_helper.py` Script
This script is only meant to be imported by `build` and called from Packer, and
provides a normalized cloud-agnostic way of doing common cloud operations --
getting details about a variant's latest imported image, importing a new local
image to the cloud, removing a previously imported (but unpublished) image so
it can be replaced, or publishing an imported image to destination regions.
This script is meant to be called only by Packer from its `post-processor`
block for image `import` and `publish` steps.
----
## Build Configuration
The `build` script generates `work/configs.yaml` based on the contents of the
top-level config file, `work/configs/configs.conf`; normally this is a symlink to
`alpine.conf`, but can be overridden for custom builds. All configs are
declared in [HOCON](https://github.com/lightbend/config/blob/master/HOCON.md)
format, which allows importing from other files, simple variable interpolation,
and easy merging of objects. This flexibility helps keep configuration
[DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
The top-level `build.conf` has three main blocks, `Default` (default/starting
values), `Dimensions` (with configs that apply in different circumstances), and
`Mandatory` (required/final values). The configuration for these blocks is
merged in this exact order.
### Dimensions and Build Variants
Build variants _(I was watching Loki™ at the time...)_ are the sets of
dimensional "features" and related configuration details produced from a
Cartesian product across each of the dimensional keys. Dimensional configs are
merged together in the order they appear in `build.conf`.
If two dimensional keys are incompatible (for example, **version/3.11** did not
yet support **arch/aarch64**), an `EXCLUDE` directive indicates that such a
variant is non-viable, and will be skipped.
Likewise, if one dimension's configuration depends on the value of a different
dimensional key, the `WHEN` directive will supply the conditional config
details when that other dimensional key is part of the variant.
Currently the base set of dimensions (and dimension keys) are...
**version** - current "release" value for each is autodetected, and always a
component of an image's name
* **edge** ("release" value is the current UTC date)
* all *non-EOL* Alpine Linux versions
**arch** - machine architecture
* **x86_64** (aka "amd64")
* **aarch64** (aka "arm64")
**firmware** - machine boot firmware
* **bios** (legacy BIOS)
* **uefi**
**bootstrap** - image instantiation bootstrap is provided by...
* **tiny** (tiny-cloud-bootstrap)
* **cloudinit** (cloud-init)
**cloud** - cloud provider or platform
* **aws** - Amazon Web Services / EC2
* **oci** - Oracle Cloud Infrastructure _(WiP)_
* **gcp** - Google Cloud Platform _(WiP)_
* **azure** - Microsoft Azure _(WiP)_
...each dimension contributes to the image name or description only if the
dimensional key's config adds to the `name` or `description` array values.
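Following the tag table earlier in this README, the way dimensional contributions assemble into an image name can be sketched as (hypothetical helper, not the builder's actual code)...
```python
def image_name(release, arch, firmware, bootstrap, revision):
    """Assemble alpine-<release>-<arch>-<firmware>-<bootstrap>-r<revision>."""
    return "-".join(["alpine", release, arch, firmware, bootstrap, f"r{revision}"])

print(image_name("3.15.0", "aarch64", "uefi", "tiny", 0))
# alpine-3.15.0-aarch64-uefi-tiny-r0
```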
### Customized Builds
_(TODO)_
For more in-depth information about how the build system configuration works,
how to create custom config overlays, and details about individual config
settings, see [CONFIGURATION.md](CONFIGURATION.md).


@ -1,13 +1,19 @@
# Enable script debug output, set via 'packer build -var DEBUG=1'
# Alpine Cloud Images Packer Configuration
### Variables
# include debug output from provisioning/post-processing scripts
variable "DEBUG" {
default = 0
}
# indicates cloud_helper.py should be run with --use-broker
variable "USE_BROKER" {
default = 0
}
# Tuneable based on performance of whatever Packer's running on,
# override with './build --vars <pkrvars-file>'
# tuneable QEMU VM parameters, based on performance of the local machine;
# overrideable via build script --vars parameter referencing a Packer
# ".vars.hcl" file containing alternate settings
variable "qemu" {
default = {
boot_wait = {
@ -20,6 +26,7 @@ variable "qemu" {
}
}
### Local Data
locals {
debug_arg = var.DEBUG == 0 ? "" : "--debug"

build

@ -37,6 +37,7 @@ import logging
import shutil
import time
from glob import glob
from subprocess import Popen, PIPE
from urllib.request import urlopen
@ -47,7 +48,7 @@ from image_configs import ImageConfigManager
### Constants & Variables
STEPS = ['configs', 'actions', 'local', 'import', 'publish']
STEPS = ['configs', 'state', 'local', 'import', 'publish']
LOGFORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
WORK_CLEAN = {'bin', 'include', 'lib', 'pyvenv.cfg', '__pycache__'}
WORK_OVERLAYS = ['configs', 'scripts']
@ -100,46 +101,54 @@ def clean_work():
os.unlink(x)
def is_same_dir_symlink(x):
if not os.path.islink(x):
def is_images_conf(o, x):
if not all([
o == 'configs',
x.endswith('/images.conf'),
os.path.islink(x),
]):
return False
# must also link to file in the same directory
x_link = os.path.normpath(os.readlink(x))
return x_link == os.path.basename(x_link)
# TODO: revisit this to improve --custom overlay implementation
def install_overlay(overlay):
log.info("Installing '%s' overlay in work environment", overlay)
dest_dir = os.path.join('work', overlay)
os.makedirs(dest_dir, exist_ok=True)
for src in unique_list(['.'] + args.custom):
src_dir = os.path.join(src, overlay)
for x in os.listdir(src_dir):
for x in glob(os.path.join(src_dir, '**'), recursive=True):
x = x.removeprefix(src_dir + '/')
src_x = os.path.join(src_dir, x)
dest_x = os.path.join(dest_dir, x)
if is_same_dir_symlink(src_x):
rel_x = os.readlink(src_x)
else:
rel_x = os.path.relpath(src_x, dest_dir)
# TODO: only images.conf symlink can be overridden, in reality
if os.path.islink(dest_x):
# only same-dir symlinks can be overridden
if not is_same_dir_symlink(dest_x):
log.warning("Can't override %s with %s", dest_x, src_x)
if is_images_conf(overlay, src_x):
rel_x = os.readlink(src_x)
if os.path.islink(dest_x):
print(f"\toverriding {dest_x}")
os.unlink(dest_x)
print(f"\tln -s {rel_x} {dest_x}")
os.symlink(rel_x, dest_x)
continue
if os.path.isdir(src_x):
if not os.path.exists(dest_x):
log.debug('makedirs %s', dest_x)
os.makedirs(dest_x)
if os.path.isdir(dest_x):
continue
log.debug('overriding %s with %s', dest_x, src_x)
os.unlink(dest_x)
elif os.path.exists(dest_x):
# we expect only symlinks in the overlay destination!
log.critical('Config overlay non-symlink detected: %s', dest_x)
if os.path.exists(dest_x):
log.critical('Unallowable destination overwrite detected: %s', dest_x)
sys.exit(1)
log.debug('ln -sf %s %s', rel_x, dest_x)
os.symlink(rel_x, dest_x)
log.debug('cp -p %s %s', src_x, dest_x)
shutil.copy(src_x, dest_x)
def install_overlays():
@ -188,22 +197,24 @@ parser.add_argument(
'--debug', action='store_true', help='enable debug output')
parser.add_argument(
'--clean', action='store_true', help='start with a clean work environment')
parser.add_argument(
'--revise', action='store_true',
help='bump revisions if images already published')
# positional argument
parser.add_argument(
'step', choices=STEPS, help='build up to and including this step')
# config options
parser.add_argument(
'--custom', metavar='DIR', nargs='+', action=are_args_valid(os.path.isdir),
default=[], help='overlay custom directory in work environment')
# state options
parser.add_argument(
'--skip', metavar='KEY', nargs='+', action=remove_dupe_args(),
default=[], help='skip variants with dimension key(s)')
parser.add_argument(
'--only', metavar='KEY', nargs='+', action=remove_dupe_args(),
default=[], help='only variants with dimension key(s)')
parser.add_argument(
'--revise', action='store_true',
help='remove existing local/imported image, or bump revision and rebuild '
'if published')
parser.add_argument(
'--use-broker', action='store_true',
help='use the identity broker to get credentials')
# packer options
parser.add_argument(
'--no-color', action='store_true', help='turn off Packer color output')
@@ -213,10 +224,9 @@ parser.add_argument(
parser.add_argument(
'--vars', metavar='FILE', nargs='+', action=are_args_valid(os.path.isfile),
default=[], help='supply Packer with -vars-file(s)')
# positional argument
parser.add_argument(
'--use-broker', action='store_true',
help='use the identity broker to get credentials')
# perhaps others?
'step', choices=STEPS, help='build up to and including this step')
args = parser.parse_args()
log = logging.getLogger('build')
@@ -261,7 +271,7 @@ if not image_configs.refresh_state(
log.info('No pending actions to take at this time.')
sys.exit(0)
if args.step == 'actions':
if args.step == 'state':
sys.exit(0)
# install firmware if missing

configs/alpine.conf

@@ -5,7 +5,7 @@
# *AT LEAST* the 'project' setting with a unique identifier string value
# via a "config overlay" to avoid image import and publishing collisions.
project = https://alpinelinux.org/cloud
project = "https://alpinelinux.org/cloud"
# all build configs start with these
Default {
@@ -17,13 +17,14 @@ Default {
motd {
welcome = "Welcome to Alpine!"
wiki = \
"The Alpine Wiki contains a large amount of how-to guides and general\n"\
"information about administrating Alpine systems.\n"\
"See <https://wiki.alpinelinux.org/>."
version_notes = "Release Notes:\n"\
"* <https://alpinelinux.org/posts/alpine-{version}.0/released.html>"
release_notes = "* <https://alpinelinux.org/posts/{release}/released.html>"
wiki = "The Alpine Wiki contains a large amount of how-to guides and general\n"
wiki += "information about administrating Alpine systems.\n"
wiki += "See <https://wiki.alpinelinux.org/>."
version_notes = "Release Notes:\n"
version_notes += "* <https://alpinelinux.org/posts/alpine-{version}.0/released.html>"
release_notes = "* <https://alpinelinux.org/posts/{release}/released.html>"
}
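The `{version}` and `{release}` tokens in the motd strings above are placeholders filled in later by the build tooling. Assuming ordinary `str.format`-style substitution (the actual mechanism lives elsewhere in the scripts), expansion would look like:

```python
# assumed str.format-style expansion of the motd placeholders above;
# the real substitution happens later in the build pipeline
version_notes = ("Release Notes:\n"
                 "* <https://alpinelinux.org/posts/alpine-{version}.0/released.html>")
print(version_notes.format(version="3.15"))
```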
# initial provisioning script and data directory
@@ -34,6 +35,12 @@ Default {
login = alpine
local_format = qcow2
# image access
access.PUBLIC = true
# image publication
regions.ALL = true
}
# profile build matrix
@@ -43,7 +50,6 @@ Dimensions {
"3.14" { include required("version/3.14.conf") }
"3.13" { include required("version/3.13.conf") }
"3.12" { include required("version/3.12.conf") }
"3.11" { include required("version/3.11.conf") }
edge { include required("version/edge.conf") }
}
arch {
@@ -65,11 +71,10 @@ Dimensions {
# all build configs merge these at the very end
Mandatory {
name = [ "r{revision}" ]
description = [ - https://alpinelinux.org/cloud ]
description = [ "- https://alpinelinux.org/cloud" ]
motd {
motd_change = "You may change this message by editing /etc/motd."
}
# final motd message
motd.motd_change = "You may change this message by editing /etc/motd."
# final provisioning script
scripts = [ cleanup ]
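`Default` seeds every build config, the dimension files layer on top, and `Mandatory` is merged at the very end -- which is how a config overlay such as `alpine-testing.conf` can flip the `access.PUBLIC = true` / `regions.ALL = true` defaults shown above. A hypothetical deep-merge sketch (the real merging is performed by the HOCON loader, not this code):

```python
def deep_merge(base, override):
    # later maps win; nested maps merge key-by-key
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

default = {"access": {"PUBLIC": True}, "regions": {"ALL": True}}
mandatory = {"access": {"PUBLIC": False}, "regions": {"ALL": False}}  # e.g. testing overrides
print(deep_merge(default, mandatory))
```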

configs/version/3.11.conf

@@ -1,6 +0,0 @@
# vim: ts=2 et:
include required("base/1.conf")
# Alpine 3.11 doesn't support aarch64
EXCLUDE = [ aarch64 ]

image_configs.py

@@ -366,32 +366,41 @@ class ImageConfig():
actions = {}
revision = 0
remote_image = clouds.latest_build_image(self)
step_state = step == 'state'
# enable actions based on the specified step
if step in ['local', 'import', 'publish']:
if step in ['local', 'import', 'publish', 'state']:
actions['build'] = True
if step in ['import', 'publish']:
if step in ['import', 'publish', 'state']:
actions['import'] = True
if step == 'publish':
if step in ['publish', 'state']:
# we will resolve publish destinations (if any) later
actions['publish'] = True
if revise:
if self.local_path.exists():
# remove previously built local image artifacts
log.warning('Removing existing local image dir %s', self.local_dir)
shutil.rmtree(self.local_dir)
log.warning('%s existing local image dir %s',
'Would remove' if step_state else 'Removing',
self.local_dir)
if not step_state:
shutil.rmtree(self.local_dir)
if remote_image and remote_image.published:
log.warning('Bumping image revision for %s', self.image_key)
log.warning('%s image revision for %s',
'Would bump' if step_state else 'Bumping',
self.image_key)
revision = int(remote_image.revision) + 1
elif remote_image and remote_image.imported:
# remove existing imported (but unpublished) image
log.warning('Removing unpublished remote image %s', remote_image.import_id)
clouds.remove_image(self, remote_image.import_id)
log.warning('%s unpublished remote image %s',
'Would remove' if step_state else 'Removing',
remote_image.import_id)
if not step_state:
clouds.remove_image(self, remote_image.import_id)
remote_image = None
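The new `state` step threads through every gate above but, via the `step_state` flag, skips the side effects -- so `images.yaml` is updated as if `publish` would be done without anything actually being removed, imported, or published. A condensed, hypothetical sketch of the step-to-actions gating:

```python
STEPS = ['local', 'import', 'publish', 'state']

def actions_for(step):
    # each step enables its own action plus those of earlier steps;
    # 'state' enables everything, but the caller suppresses side effects
    actions = {}
    if step in ['local', 'import', 'publish', 'state']:
        actions['build'] = True
    if step in ['import', 'publish', 'state']:
        actions['import'] = True
    if step in ['publish', 'state']:
        actions['publish'] = True
    return actions

print(actions_for('import'))  # {'build': True, 'import': True}
```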

configs/alpine-testing.conf

@@ -23,6 +23,11 @@ Dimensions {
#cloudinit { include required("testing/cloudinit.conf") }
}
cloud {
# just test in these regions
aws.regions {
us-west-2 = true
us-east-1 = true
}
# adapters need to be written
#oci { include required("testing/oci.conf") }
#gcp { include required("testing/gcp.conf") }
@@ -30,10 +35,6 @@ Dimensions {
}
}
# test in private, and only in a couple regions
# test in private, and only in regions specified above
Mandatory.access.PUBLIC = false
Mandatory.regions = {
ALL = false
us-west-2 = true
us-east-1 = true
}
Mandatory.regions.ALL = false
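Taken together, `aws.regions` marks `us-west-2` and `us-east-1` as true while `Mandatory.regions.ALL = false` turns off the publish-everywhere default. A hypothetical resolver illustrating the intended effect (`resolve_regions` and its arguments are illustrative, not the project's API):

```python
def resolve_regions(flags, available):
    # ALL = true publishes to every available region; otherwise only
    # explicitly enabled regions survive
    if flags.get('ALL'):
        return sorted(available)
    return sorted(r for r, on in flags.items() if r != 'ALL' and on)

testing = {'ALL': False, 'us-west-2': True, 'us-east-1': True}
print(resolve_regions(testing, ['us-west-2', 'us-east-1', 'eu-west-1']))
# ['us-east-1', 'us-west-2']
```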