ci: switch to upstream alpine-cloud-images repo
parent 8bff1d5b0f
commit 3d09aeda97
@ -1,2 +0,0 @@
[flake8]
ignore = E265,E266,E402,E501

alpine-cloud-images/.gitignore (vendored)
@ -1,7 +0,0 @@
*~
*.bak
*.swp
.DS_Store
.vscode/
/work/
releases*yaml
@ -1,310 +0,0 @@
# Configuration

All the configuration for building image variants is defined by multiple
config files; the base configs for official Alpine Linux cloud images are in
the [`configs/`](configs/) directory.

We use [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md) for
configuration -- this primarily facilitates importing deeper configs from
other files, but also allows the extension/concatenation of arrays and maps
(which can be a useful feature for customization), and inline comments.

----
## Resolving Work Environment Configs and Scripts

If `work/configs/` and `work/scripts/` don't exist, the `build` script will
install the contents of the base [`configs/`](configs/) and [`scripts/`](scripts/)
directories, and overlay additional `configs/` and `scripts/` subdirectories
from `--custom` directories (if any).

Files cannot be installed over existing files, with one exception -- the
[`configs/images.conf`](configs/images.conf) same-directory symlink. Because
the `build` script _always_ loads `work/configs/images.conf`, this is the hook
for "rolling your own" custom Alpine Linux cloud images.

The base [`configs/images.conf`](configs/images.conf) symlinks to
[`alpine.conf`](configs/alpine.conf), but this can be overridden using a
`--custom` directory containing a new `configs/images.conf` same-directory
symlink pointing to its custom top-level config.

For example, the configs and scripts in the [`overlays/testing/`](overlays/testing/)
directory can be resolved in a _clean_ work environment with...
```
./build configs --custom overlays/testing
```
This results in the `work/configs/images.conf` symlink pointing to
`work/configs/alpine-testing.conf` instead of `work/configs/alpine.conf`.

If multiple directories are specified with `--custom`, they are applied in
the order given.

----
## Top-Level Config File

Examples of top-level config files are [`configs/alpine.conf`](configs/alpine.conf)
and [`overlays/testing/configs/alpine-testing.conf`](overlays/testing/configs/alpine-testing.conf).

There are three main blocks that need to exist in (or be `import`ed into) the
top-level HOCON configuration; they are merged in this exact order:

### `Default`

All image variant configs start with this block's contents as a starting point.
Arrays and maps can be appended by configs in the `Dimensions` and `Mandatory`
blocks.

### `Dimensions`

The sub-blocks in `Dimensions` define the "dimensions" a variant config is
comprised of, and the different config values possible for each dimension.
The default [`alpine.conf`](configs/alpine.conf) defines the following
dimensional configs:

* `version` - Alpine Linux _x_._y_ (plus `edge`) versions
* `arch` - machine architectures, `x86_64` or `aarch64`
* `firmware` - supports launching via legacy BIOS or UEFI
* `bootstrap` - the system/scripts responsible for setting up an instance
  during its initial launch
* `cloud` - for specific cloud platforms

The specific dimensional configs for an image variant are merged in the order
that the dimensions are listed.

### `Mandatory`

After a variant's dimensional configs have been applied, this is the last block
that's merged into the image variant configuration. This block is the ultimate
enforcer of any non-overridable configuration across all variants, and can
also provide the last element to array config items.

----
## Dimensional Config Directives

Because a full cross-product across all dimensional configs may produce image
variants that are not viable (e.g. `aarch64` simply does not support legacy
`bios`), or may require further adjustments (e.g. the `aws` `aarch64` images
require an additional kernel module from `3.15` forward, which isn't available
in previous versions), we have two special directives which may appear in
dimensional configs.

### `EXCLUDE` array

This directive provides an array of dimensional config keys which are
incompatible with the current dimensional config. For example,
[`configs/arch/aarch64.conf`](configs/arch/aarch64.conf) specifies...
```
# aarch64 is UEFI only
EXCLUDE = [bios]
```
...which indicates that any image variant that includes both `aarch64` (the
current dimensional config) and `bios` configuration should be skipped.

### `WHEN` block

This directive conditionally merges additional configuration ***IF*** the
image variant also includes a specific dimensional config key (or keys). In
order to handle more complex situations, `WHEN` blocks may be nested. For
example, [`configs/cloud/aws.conf`](configs/cloud/aws.conf) has...
```
WHEN {
  aarch64 {
    # new AWS aarch64 default...
    kernel_modules.gpio_pl061 = true
    initfs_features.gpio_pl061 = true
    WHEN {
      "3.14 3.13 3.12" {
        # ...but not supported for older versions
        kernel_modules.gpio_pl061 = false
        initfs_features.gpio_pl061 = false
      }
    }
  }
}
```
This configures AWS `aarch64` images to use the `gpio_pl061` kernel module in
order to cleanly shutdown/reboot instances from the web console, CLI, or SDK.
However, this module is unavailable on older Alpine versions.

Spaces in `WHEN` block keys serve as an "OR" operator; nested `WHEN` blocks
function as "AND" operators.
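
The `EXCLUDE` pruning described above can be sketched in a few lines of
Python. This is a minimal illustration, not the build system's actual
implementation; the dimension and key names are hypothetical stand-ins.

```python
from itertools import product

# Hypothetical dimensional configs: each dimension maps config keys to
# settings, which may carry an EXCLUDE array naming incompatible keys
# from other dimensions.
dimensions = {
    "arch": {
        "x86_64": {},
        "aarch64": {"EXCLUDE": ["bios"]},  # aarch64 is UEFI only
    },
    "firmware": {
        "bios": {},
        "uefi": {},
    },
}

def viable_variants(dims):
    """Yield dimension-key combinations, skipping any combination that
    contains a key vetoed by another key's EXCLUDE array."""
    names = list(dims)
    for combo in product(*(dims[n] for n in names)):
        keys = set(combo)
        excluded = set()
        for name, key in zip(names, combo):
            excluded |= set(dims[name][key].get("EXCLUDE", []))
        if keys & excluded:
            continue  # variant includes an incompatible key -- skip it
        yield combo

variants = list(viable_variants(dimensions))
```

Of the four possible `arch` x `firmware` combinations, only `aarch64`+`bios`
is excluded, leaving three viable variants.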

----
## Config Settings

**Scalar** values can be simply overridden in later configs.

**Array** and **map** settings in later configs are merged with the previous
values, _or entirely reset if first set to `null`_, for example...
```
some_array = [ thing ]
# [...]
some_array = null
some_array = [ other_thing ]
```
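
These merge semantics can be modeled in plain Python. The following is a
simplified sketch of the behavior described above (scalars override, arrays
concatenate, maps merge recursively, `null` resets), not the HOCON library's
actual algorithm.

```python
def merge(base, override):
    """Merge one config layer over another, HOCON-style (sketch)."""
    # a None (HOCON null) resets whatever came before it
    if base is None or override is None:
        return override
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        return base + override  # arrays extend/concatenate
    return override  # scalar (or type change): the later config wins

# later configs are applied left to right, as in the example above
cfg = {}
for layer in [
    {"some_array": ["thing"]},
    {"some_array": None},             # reset
    {"some_array": ["other_thing"]},
]:
    cfg = merge(cfg, layer)
```

After the three layers, `cfg["some_array"]` holds only `["other_thing"]`,
because the `null` layer discarded the earlier contents.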

Mostly in order of appearance, as we walk through
[`configs/alpine.conf`](configs/alpine.conf) and the deeper configs it
imports...

### `project` string

This is a unique identifier for the whole collection of images being built.
For the official Alpine Linux cloud images, this is set to
`https://alpinelinux.org/cloud`.

When building custom images, you **MUST** override **AT LEAST** this setting to
avoid image import and publishing collisions.

### `name` array

The ultimate contents of this array contribute to the overall naming of the
resultant image. Almost all dimensional configs will add to the `name` array,
with two notable exceptions. First, the **version** configs' contribution to
this array is determined when `work/images.yaml` is resolved, and is set to
the current Alpine Linux release (_x.y.z_, or _YYYYMMDD_ for edge). Second,
because **cloud** images are isolated from each other, it would be redundant
to include the cloud's name in the image name.
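
As an illustration only (the element values below are hypothetical, and the
exact assembly mechanics are an assumption), a resolved `name` array joins
into an image name matching the `alpine-release-arch-firmware-bootstrap-rN`
pattern documented in the README:

```python
# Hypothetical resolved `name` array for one variant; the version
# dimension's element has already been replaced by the concrete release.
name = ["alpine", "3.15.0", "aarch64", "uefi", "tiny", "r0"]

# joining the elements yields the final image name
image_name = "-".join(name)
```

Here `image_name` becomes `alpine-3.15.0-aarch64-uefi-tiny-r0`.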

### `description` array

Similar to the `name` array, the elements of this array contribute to the final
image description. However, for the official Alpine configs, only the
**version** dimension adds to this array, via the same mechanism that sets the
revision for the `name` array.

### `motd` map

This setting controls the contents of what ultimately gets written into the
variant image's `/etc/motd` file. Later configs can add additional messages,
replace existing contents, or remove them entirely (by setting the value to
`null`).

The `motd.version_notes` and `motd.release_notes` settings have slightly
different behavior:
* if the Alpine release (_x.y.z_) ends with `.0`, `release_notes` is dropped
  to avoid redundancy
* edge versions are technically not released, so both of these notes are
  dropped from `/etc/motd`
* otherwise, `version_notes` and `release_notes` are concatenated together as
  `release_notes` to avoid a blank line between them
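
The three rules above can be sketched as a small Python function. This is an
assumed, simplified model of the behavior, not the build system's actual code;
the function name is invented for illustration.

```python
def resolve_notes(motd, release, version):
    """Apply the version_notes/release_notes rules to a motd map (sketch)."""
    motd = dict(motd)  # work on a copy
    if version == "edge":
        # edge is never formally released: drop both notes
        motd.pop("version_notes", None)
        motd.pop("release_notes", None)
    elif release.endswith(".0"):
        # an x.y.0 release's notes would duplicate the version notes
        motd.pop("release_notes", None)
    elif "version_notes" in motd and "release_notes" in motd:
        # fold both into release_notes, avoiding a blank line between them
        motd["release_notes"] = (
            motd.pop("version_notes") + "\n" + motd["release_notes"]
        )
    return motd
```

For a `3.15.2` release both notes survive (combined); for `3.15.0` only the
version notes remain; for edge both are dropped.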

### `scripts` array

These are the scripts that will be executed by Packer, in order, to do various
setup tasks inside a variant's image. The `work/scripts/` directory contains
all scripts, including those that may have been added via `build --custom`.

### `script_dirs` array

Directories (under `work/scripts/`) that contain additional data that the
`scripts` will need. Packer will copy these to the VM responsible for setting
up the variant image.

### `size` string

The size of the image disk, by default we use `1G` (1 GiB). This disk may (or
may not) be further partitioned, based on other factors.

### `login` string

The image's primary login user, set to `alpine`.

### `repos` map

Defines the contents of the image's `/etc/apk/repositories` file. The map's
key is the URL of the repo, and the value determines how that URL will be
represented in the `repositories` file...

| value | result |
|-|-|
| `null` | make no reference to this repo |
| `false` | this repo is commented out (disabled) |
| `true` | this repo is enabled for use |
| _tag_ | enable this repo with `@`_`tag`_ |
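
A sketch of how such a map could be rendered into `repositories` lines (this
is an illustration of the table's semantics, not the project's actual setup
script; the URLs are hypothetical):

```python
def repositories_lines(repos):
    """Render a `repos` map into /etc/apk/repositories lines (sketch)."""
    lines = []
    for url, value in repos.items():
        if value is None:
            continue                         # no reference to this repo
        elif value is False:
            lines.append(f"#{url}")          # present, but commented out
        elif value is True:
            lines.append(url)                # enabled
        else:
            lines.append(f"@{value} {url}")  # enabled, apk tagged-repo syntax
    return lines

lines = repositories_lines({
    "https://example.org/alpine/v3.15/main": True,
    "https://example.org/alpine/edge/testing": "testing",
    "https://example.org/alpine/v3.15/old": False,
    "https://example.org/alpine/skipped": None,
})
```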

### `packages` map

Defines what APK packages to add/delete. The map's key is the package
name, and the value determines whether (or not) to install/uninstall the
package...

| value | result |
|-|-|
| `null` | don't add or delete |
| `false` | explicitly delete |
| `true` | add from default repos |
| _tag_ | add from `@`_`tag`_ repo |
| `--no-scripts` | add with `--no-scripts` option |
| `--no-scripts` _tag_ | add from `@`_`tag`_ repo, with `--no-scripts` option |
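
This map ultimately feeds the `PACKAGES_ADD`/`PACKAGES_DEL`/`PACKAGES_NOSCRIPTS`
values that `alpine.pkr.hcl` passes to the setup scripts. A sketch of the
split (assumed behavior, hypothetical package names, not the project's actual
resolver):

```python
def split_packages(packages):
    """Split a `packages` map into (add, delete, noscripts) lists (sketch)."""
    add, delete, noscripts = [], [], []
    for pkg, value in packages.items():
        if value is None:
            continue                     # don't add or delete
        if value is False:
            delete.append(pkg)           # explicitly delete
            continue
        if isinstance(value, str) and "--no-scripts" in value:
            noscripts.append(pkg)        # install with --no-scripts
            tag = value.replace("--no-scripts", "").strip()
            add.append(f"{pkg}@{tag}" if tag else pkg)
        elif value is True:
            add.append(pkg)              # add from default repos
        else:
            add.append(f"{pkg}@{value}")  # apk tagged-repo install syntax
    return add, delete, noscripts

add, delete, noscripts = split_packages({
    "tiny-cloud": True,
    "ntpsec": "testing",
    "openssl": None,
    "alpine-base": False,
    "linux-virt": "--no-scripts",
})
```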

### `services` map of maps

Defines what services are enabled/disabled at various runlevels. The first
map's key is the runlevel, the second key is the service. The service value
determines whether (or not) to enable/disable the service at that runlevel...

| value | result |
|-|-|
| `null` | don't enable or disable |
| `false` | explicitly disable |
| `true` | explicitly enable |
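
On an OpenRC system like Alpine, the table above maps naturally onto
`rc-update` commands. A sketch (the service and runlevel names are
illustrative; this is not the project's actual setup script):

```python
def rc_update_commands(services):
    """Turn a `services` map of maps into rc-update commands (sketch)."""
    cmds = []
    for runlevel, svcs in services.items():
        for svc, enabled in svcs.items():
            if enabled is True:
                cmds.append(f"rc-update add {svc} {runlevel}")
            elif enabled is False:
                cmds.append(f"rc-update del {svc} {runlevel}")
            # None: leave the service alone at this runlevel
    return cmds

cmds = rc_update_commands({
    "default": {"sshd": True, "crond": None},
    "boot": {"mdev": False},
})
```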

### `kernel_modules` map

Defines what kernel modules are specified in the boot loader. The key is the
kernel module, and the value determines whether or not it's in the final
list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |

### `kernel_options` map

Defines what kernel options are specified on the kernel command line. The keys
are the kernel options, and the values determine whether or not they're in the
final list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |

### `initfs_features` map

Defines what initfs features are included when making the image's initramfs
file. The keys are the initfs features, and the values determine whether or
not they're included in the final list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |
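
All three of these maps share the same keep-only-`true` semantics, which also
explains why the `WHEN` blocks earlier can veto a module by flipping it to
`false`. A one-line sketch (the module names below are hypothetical):

```python
def enabled_keys(mapping):
    """Shared pattern for kernel_modules, kernel_options, and
    initfs_features: only keys explicitly set to true survive (sketch)."""
    return [key for key, value in mapping.items() if value is True]

# null and false both mean "skip" -- only true entries make the final list
kernel_modules = {"sd-mod": True, "usb-storage": True,
                  "gpio_pl061": False, "nvme": None}
final_modules = enabled_keys(kernel_modules)
```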

### `qemu.machine_type` string

The QEMU machine type to use when building local images. For x86_64, this is
set to `null`; for aarch64, we use `virt`.

### `qemu.args` list of lists

Additional QEMU arguments. For x86_64, this is set to `null`; but aarch64
requires several additional arguments to start an operational VM.

### `qemu.firmware` string

The path to the QEMU firmware (installed in `work/firmware/`). This is only
used when creating UEFI images.

### `bootloader` string

The bootloader to use, currently `extlinux` or `grub-efi`.

### `access` map

When images are published, this determines who has access to those images.
The key is the cloud account (or `PUBLIC`), and the value is whether or not
access is granted, `true` or `false`/`null`.

### `regions` map

Determines where images should be published. The key is the region
identifier (or `ALL`), and the value is whether or not to publish to that
region, `true` or `false`/`null`.
@ -1,19 +0,0 @@
Copyright (c) 2017-2022 Jake Buchholz Göktürk, Michael Crute

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -1,183 +0,0 @@
# Alpine Linux Cloud Image Builder

This repository contains the code and configs for the build system used to
create official Alpine Linux images for various cloud providers, in various
configurations. This build system is flexible, enabling others to build their
own customized images.

----
## Pre-Built Official Cloud Images

To get started with official pre-built Alpine Linux cloud images, visit
https://alpinelinux.org/cloud. Currently, we build official images for the
following cloud platforms...
* AWS

...we are working on also publishing official images to other major cloud
providers.

Each published image's name contains the Alpine version release, architecture,
firmware, bootstrap, and image revision. These details (and more) are also
tagged on the images...

| Tag | Description / Values |
|-----|----------------------|
| name | `alpine-`_`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-r`_`revision`_ |
| project | `https://alpinelinux.org/cloud` |
| image_key | _`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-`_`cloud`_ |
| version | Alpine version (_`x.y`_ or `edge`) |
| release | Alpine release (_`x.y.z`_ or _`YYYYMMDD`_ for edge) |
| arch | architecture (`aarch64` or `x86_64`) |
| firmware | boot mode (`bios` or `uefi`) |
| bootstrap | initial bootstrap system (`tiny` = Tiny Cloud) |
| cloud | provider short name (`aws`) |
| revision | image revision number |
| imported | image import timestamp |
| import_id | imported image id |
| import_region | imported image region |
| published | image publication timestamp |
| description | image description |

Although AWS does not allow cross-account filtering by tags, the image name can
still be used to filter images. For example, to get a list of available Alpine
3.x aarch64 images in AWS eu-west-2...
```
aws ec2 describe-images \
    --region eu-west-2 \
    --owners 538276064493 \
    --filters \
        Name=name,Values='alpine-3.*-aarch64-*' \
        Name=state,Values=available \
    --output text \
    --query 'reverse(sort_by(Images, &CreationDate))[].[ImageId,Name,CreationDate]'
```
To get just the most recent matching image, use...
```
    --query 'max_by(Images, &CreationDate).[ImageId,Name,CreationDate]'
```

----
## Build System

The build system consists of a number of components:

* the primary `build` script
* the `configs/` directory, defining the set of images to be built
* the `scripts/` directory, containing scripts and related data used to set up
  image contents during provisioning
* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
  of images
* the `cloud_helper.py` script that Packer runs in order to do cloud-specific
  import and publish operations

### Build Requirements
* [Python](https://python.org) (3.9.7 is known to work)
* [Packer](https://packer.io) (1.7.6 is known to work)
* [QEMU](https://www.qemu.org) (6.1.0 is known to work)
* cloud provider account(s)

### Cloud Credentials

By default, the build system relies on the cloud providers' Python API
libraries to find and use the necessary credentials, usually via configuration
under the user's home directory (e.g. `~/.aws/`, `~/.oci/`, etc.) or via
environment variables (e.g. `AWS_...`, `OCI_...`, etc.)

The credentials' user/role needs sufficient permission to query, import, and
publish images -- the exact details will vary from cloud to cloud. _It is
recommended that only the minimum required permissions are granted._

_We manage the credentials for publishing official Alpine images with an
"identity broker" service, and retrieve those credentials via the
`--use-broker` argument of the `build` script._

### The `build` Script

```
usage: build [-h] [--debug] [--clean] [--custom DIR [DIR ...]]
             [--skip KEY [KEY ...]] [--only KEY [KEY ...]] [--revise]
             [--use-broker] [--no-color] [--parallel N]
             [--vars FILE [FILE ...]]
             {configs,state,local,import,publish}

positional arguments: (build up to and including this step)
  configs      resolve image build configuration
  state        refresh current image build state
  local        build images locally
  import       import local images to cloud provider default region
  publish      set image permissions and publish to cloud regions

optional arguments:
  -h, --help              show this help message and exit
  --debug                 enable debug output
  --clean                 start with a clean work environment
  --custom DIR [DIR ...]  overlay custom directory in work environment
  --skip KEY [KEY ...]    skip variants with dimension key(s)
  --only KEY [KEY ...]    only variants with dimension key(s)
  --revise                remove existing local/imported image, or bump
                          revision and rebuild if published
  --use-broker            use the identity broker to get credentials
  --no-color              turn off Packer color output
  --parallel N            build N images in parallel (default: 1)
  --vars FILE [FILE ...]  supply Packer with -vars-file(s)
```

The `build` script will automatically create a `work/` directory containing a
Python virtual environment if one does not already exist. This directory also
hosts other data related to building images. The `--clean` argument will
remove everything in the `work/` directory except for things related to the
Python virtual environment.

If the `work/configs/` or `work/scripts/` directories do not yet exist, they
will be populated with the base configuration and scripts from the `configs/`
and/or `scripts/` directories. If any custom overlay directories are specified
with the `--custom` argument, their `configs/` and `scripts/` subdirectories
are also added to `work/configs/` and `work/scripts/`.

The "build step" positional argument determines the last step the `build`
script should execute -- all steps before this targeted step may also be
executed. That is, `build local` will first execute the `configs` step (if
necessary) and then the `state` step (always) before proceeding to the `local`
step.
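
The step ordering behaves like a simple pipeline slice. A sketch of the idea
(the real script's internals may differ, e.g. `state` always re-runs):

```python
# build steps form an ordered pipeline; targeting a step implies
# running every step before it
STEPS = ["configs", "state", "local", "import", "publish"]

def steps_to_run(target):
    """Return the pipeline up to and including the targeted step."""
    return STEPS[: STEPS.index(target) + 1]
```

So `steps_to_run("local")` yields `configs`, `state`, and `local`.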

The `configs` step resolves configuration for all buildable images, and writes
it to `work/images.yaml`, if it does not already exist.

The `state` step always checks the current state of the image builds,
determines what actions need to be taken, and updates `work/images.yaml`. A
subset of image builds can be targeted by using the `--skip` and `--only`
arguments. The `--revise` argument indicates that any _unpublished_ local
or imported images should be removed and rebuilt; as _published_ images can't
be removed, `--revise` instead increments the _`revision`_ value to rebuild
new images.

The `local`, `import`, and `publish` steps are orchestrated by Packer. By
default, each image is processed serially; providing the `--parallel` argument
with a value greater than 1 will parallelize operations. The degree to which
you can parallelize `local` image builds depends on the local build hardware,
as a QEMU virtual machine is launched for each image being built. The
`import` and `publish` steps are much more lightweight, and can support higher
parallelism.

The `local` step builds local images with QEMU, for those that are not already
built locally or have already been imported.

The `import` step imports the local images into the cloud providers' default
regions, unless they've already been imported. At this point the images are
not available publicly, allowing for additional testing prior to publishing.

The `publish` step copies the image from the default region to other regions,
if it hasn't already been copied there. This step will always update image
permissions, descriptions, tags, and deprecation date (if applicable) in all
regions where the image has been published.

### The `cloud_helper.py` Script

This script is meant to be called only by Packer, from its `post-processor`
block, for the image `import` and `publish` steps.

----
## Build Configuration

For more in-depth information about how the build system configuration works,
how to create custom config overlays, and details about individual config
settings, see [CONFIGURATION.md](CONFIGURATION.md).
@ -1,200 +0,0 @@
# Alpine Cloud Images Packer Configuration

### Variables

# include debug output from provisioning/post-processing scripts
variable "DEBUG" {
  default = 0
}
# indicates cloud_helper.py should be run with --use-broker
variable "USE_BROKER" {
  default = 0
}

# tuneable QEMU VM parameters, based on performance of the local machine;
# overrideable via build script --vars parameter referencing a Packer
# ".vars.hcl" file containing alternate settings
variable "qemu" {
  default = {
    boot_wait = {
      aarch64 = "1m"
      x86_64  = "1m"
    }
    cmd_wait    = "5s"
    ssh_timeout = "1m"
    memory      = 1024  # MiB
  }
}

### Local Data

locals {
  # possible actions for the post-processor
  actions = [
    "build", "upload", "import", "publish", "release"
  ]

  debug_arg  = var.DEBUG == 0 ? "" : "--debug"
  broker_arg = var.USE_BROKER == 0 ? "" : "--use-broker"

  # randomly generated password
  password = uuidv4()

  # resolve actionable build configs
  configs = { for b, cfg in yamldecode(file("work/images.yaml")):
    b => cfg if contains(keys(cfg), "actions")
  }
}

### Build Sources

# Don't build
source "null" "alpine" {
  communicator = "none"
}

# Common to all QEMU builds
source "qemu" "alpine" {
  # qemu machine
  headless       = true
  memory         = var.qemu.memory
  net_device     = "virtio-net"
  disk_interface = "virtio"

  # build environment
  boot_command = [
    "root<enter>",
    "setup-interfaces<enter><enter><enter><enter>",
    "ifup eth0<enter><wait${var.qemu.cmd_wait}>",
    "setup-sshd openssh<enter><wait${var.qemu.cmd_wait}>",
    "echo PermitRootLogin yes >> /etc/ssh/sshd_config<enter>",
    "service sshd restart<enter>",
    "echo 'root:${local.password}' | chpasswd<enter>",
  ]
  ssh_username     = "root"
  ssh_password     = local.password
  ssh_timeout      = var.qemu.ssh_timeout
  shutdown_command = "poweroff"
}

build {
  name = "alpine"

  ## Builders

  # QEMU builder
  dynamic "source" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "build")
    }
    iterator = B
    labels   = ["qemu.alpine"]  # links us to the base source

    content {
      name = B.key

      # qemu machine
      qemu_binary  = "qemu-system-${B.value.arch}"
      qemuargs     = B.value.qemu.args
      machine_type = B.value.qemu.machine_type
      firmware     = B.value.qemu.firmware

      # build environment
      iso_url      = B.value.qemu.iso_url
      iso_checksum = "file:${B.value.qemu.iso_url}.sha512"
      boot_wait    = var.qemu.boot_wait[B.value.arch]

      # results
      output_directory = "work/images/${B.value.cloud}/${B.value.image_key}"
      disk_size        = B.value.size
      format           = "qcow2"
      vm_name          = "image.qcow2"
    }
  }

  # Null builder (don't build, but we might import and/or publish)
  dynamic "source" {
    for_each = { for b, c in local.configs:
      b => c if !contains(c.actions, "build")
    }
    iterator = B
    labels   = ["null.alpine"]
    content {
      name = B.key
    }
  }

  ## build provisioners

  # install setup files
  dynamic "provisioner" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "build")
    }
    iterator = B
    labels   = ["file"]
    content {
      only = [ "qemu.${B.key}" ]  # configs specific to one build

      sources     = [ for d in B.value.script_dirs: "work/scripts/${d}" ]
      destination = "/tmp/"
    }
  }

  # run setup scripts
  dynamic "provisioner" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "build")
    }
    iterator = B
    labels   = ["shell"]
    content {
      only = [ "qemu.${B.key}" ]  # configs specific to one build

      scripts          = [ for s in B.value.scripts: "work/scripts/${s}" ]
      use_env_var_file = true
      environment_vars = [
        "DEBUG=${var.DEBUG}",
        "ARCH=${B.value.arch}",
        "BOOTLOADER=${B.value.bootloader}",
        "BOOTSTRAP=${B.value.bootstrap}",
        "BUILD_NAME=${B.value.name}",
        "BUILD_REVISION=${B.value.revision}",
        "CLOUD=${B.value.cloud}",
        "END_OF_LIFE=${B.value.end_of_life}",
        "FIRMWARE=${B.value.firmware}",
        "IMAGE_LOGIN=${B.value.login}",
        "INITFS_FEATURES=${B.value.initfs_features}",
        "KERNEL_MODULES=${B.value.kernel_modules}",
        "KERNEL_OPTIONS=${B.value.kernel_options}",
        "MOTD=${B.value.motd}",
        "NTP_SERVER=${B.value.ntp_server}",
        "PACKAGES_ADD=${B.value.packages.add}",
        "PACKAGES_DEL=${B.value.packages.del}",
        "PACKAGES_NOSCRIPTS=${B.value.packages.noscripts}",
        "RELEASE=${B.value.release}",
        "REPOS=${B.value.repos}",
        "SERVICES_ENABLE=${B.value.services.enable}",
        "SERVICES_DISABLE=${B.value.services.disable}",
        "VERSION=${B.value.version}",
      ]
    }
  }

  ## build post-processor

  # import and/or publish cloud images
  dynamic "post-processor" {
    for_each = { for b, c in local.configs:
      b => c if length(setintersection(c.actions, local.actions)) > 0
    }
    iterator = B
    labels   = ["shell-local"]
    content {
      only = [ "qemu.${B.key}", "null.${B.key}" ]
      inline = [ for action in local.actions:
        "./cloud_helper.py ${action} ${local.debug_arg} ${local.broker_arg} ${B.key}" if contains(B.value.actions, action)
      ]
    }
  }
}
@ -1,104 +0,0 @@
# vim: ts=4 et:

import json
import re
from datetime import datetime, timedelta
from urllib.request import urlopen


class Alpine():

    DEFAULT_RELEASES_URL = 'https://alpinelinux.org/releases.json'
    DEFAULT_CDN_URL = 'https://dl-cdn.alpinelinux.org/alpine'
    DEFAULT_WEB_TIMEOUT = 5

    def __init__(self, releases_url=None, cdn_url=None, web_timeout=None):
        self.now = datetime.utcnow()
        self.release_today = self.now.strftime('%Y%m%d')
        self.eol_tomorrow = (self.now + timedelta(days=1)).strftime('%F')
        self.latest = None
        self.versions = {}
        self.releases_url = releases_url or self.DEFAULT_RELEASES_URL
        self.web_timeout = web_timeout or self.DEFAULT_WEB_TIMEOUT
        self.cdn_url = cdn_url or self.DEFAULT_CDN_URL

        # get all Alpine versions, and their EOL and latest release
        res = urlopen(self.releases_url, timeout=self.web_timeout)
        r = json.load(res)
        branches = sorted(
            r['release_branches'], reverse=True,
            key=lambda x: x.get('branch_date', '0000-00-00')
        )
        for b in branches:
            ver = b['rel_branch'].lstrip('v')
            if not self.latest:
                self.latest = ver

            rel = None
            if releases := b.get('releases', None):
                rel = sorted(
                    releases, reverse=True, key=lambda x: x['date']
                )[0]['version']
            elif ver == 'edge':
                # edge "releases" is today's YYYYMMDD
                rel = self.release_today

            self.versions[ver] = {
                'version': ver,
                'release': rel,
                'end_of_life': b.get('eol_date', self.eol_tomorrow),
                'arches': b.get('arches'),
            }

    def _ver(self, ver=None):
        if not ver or ver == 'latest' or ver == 'latest-stable':
            ver = self.latest

        return ver

    def repo_url(self, repo, arch, ver=None):
        ver = self._ver(ver)
        if ver != 'edge':
            ver = 'v' + ver

        return f"{self.cdn_url}/{ver}/{repo}/{arch}"

    def virt_iso_url(self, arch, ver=None):
        ver = self._ver(ver)
        rel = self.versions[ver]['release']
        return f"{self.cdn_url}/v{ver}/releases/{arch}/alpine-virt-{rel}-{arch}.iso"

    def version_info(self, ver=None):
        ver = self._ver(ver)
        if ver not in self.versions:
            # perhaps a release candidate?
            apk_ver = self.apk_version('main', 'x86_64', 'alpine-base', ver=ver)
            rel = apk_ver.split('-')[0]
            ver = '.'.join(rel.split('.')[:2])
            self.versions[ver] = {
                'version': ver,
                'release': rel,
                'end_of_life': self.eol_tomorrow,
                'arches': self.versions['edge']['arches'],  # reasonable assumption
            }

        return self.versions[ver]

    # TODO? maybe implement apk_info() to read from APKINDEX, but for now
    # this apk_version() seems faster and gets what we need

    def apk_version(self, repo, arch, apk, ver=None):
        ver = self._ver(ver)
        repo_url = self.repo_url(repo, arch, ver=ver)
        apks_re = re.compile(f'"{apk}-(\\d.*)\\.apk"')
        res = urlopen(repo_url, timeout=self.web_timeout)
        for line in map(lambda x: x.decode('utf8'), res):
            if not line.startswith('<a href="'):
                continue

            match = apks_re.search(line)
            if match:
                return match.group(1)

        # didn't find it?
        raise RuntimeError(f"Unable to find {apk} APK via {repo_url}")
@ -1,343 +0,0 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import subprocess

sys.pycache_prefix = 'work/__pycache__'

# Create the work environment if it doesn't exist.
if not os.path.exists('work'):
    import venv

    PIP_LIBS = [
        'mergedeep',
        'pyhocon',
        'python-dateutil',
        'ruamel.yaml',
    ]
    print('Work environment does not exist, creating...', file=sys.stderr)
    venv.create('work', with_pip=True)
    subprocess.run(['work/bin/pip', 'install', '-U', 'pip', 'wheel'])
    subprocess.run(['work/bin/pip', 'install', '-U', *PIP_LIBS])

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment...

import argparse
import io
import logging
import shutil
import time

from glob import glob
from subprocess import Popen, PIPE
from urllib.request import urlopen

import clouds
from alpine import Alpine
from image_configs import ImageConfigManager


### Constants & Variables

STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'publish', 'release']
LOGFORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
WORK_CLEAN = {'bin', 'include', 'lib', 'pyvenv.cfg', '__pycache__'}
WORK_OVERLAYS = ['configs', 'scripts']
UEFI_FIRMWARE = {
    'aarch64': {
        'apk': 'aavmf',
        'bin': 'usr/share/AAVMF/QEMU_EFI.fd',
    },
    'x86_64': {
        'apk': 'ovmf',
        'bin': 'usr/share/OVMF/OVMF.fd',
    }
}
alpine = Alpine()


### Functions

# ensure list has unique values, preserving order
def unique_list(x):
    d = {e: 1 for e in x}
    return list(d.keys())


def remove_dupe_args():
    class RemoveDupeArgs(argparse.Action):
        def __call__(self, parser, args, values, option_string=None):
            setattr(args, self.dest, unique_list(values))

    return RemoveDupeArgs


def are_args_valid(checker):
    class AreArgsValid(argparse.Action):
        def __call__(self, parser, args, values, option_string=None):
            # remove duplicates
            values = unique_list(values)
            for x in values:
                if not checker(x):
                    parser.error(f"{option_string} value is not a {self.metavar}: {x}")

            setattr(args, self.dest, values)

    return AreArgsValid


def clean_work():
    log.info('Cleaning work environment')

    for x in (set(os.listdir('work')) - WORK_CLEAN):
        x = os.path.join('work', x)
        log.debug('removing %s', x)
        if os.path.isdir(x) and not os.path.islink(x):
            shutil.rmtree(x)
        else:
            os.unlink(x)


def is_images_conf(o, x):
    if not all([
        o == 'configs',
        x.endswith('/images.conf'),
        os.path.islink(x),
    ]):
        return False

    # must also link to file in the same directory
    x_link = os.path.normpath(os.readlink(x))
    return x_link == os.path.basename(x_link)


def install_overlay(overlay):
    log.info("Installing '%s' overlay in work environment", overlay)
    dest_dir = os.path.join('work', overlay)
    os.makedirs(dest_dir, exist_ok=True)
    for src in unique_list(['.'] + args.custom):
        src_dir = os.path.join(src, overlay)
        if not os.path.exists(src_dir):
            log.debug('%s does not exist, skipping', src_dir)
            continue
        for x in glob(os.path.join(src_dir, '**'), recursive=True):
            x = x.removeprefix(src_dir + '/')
            src_x = os.path.join(src_dir, x)
            dest_x = os.path.join(dest_dir, x)

            if is_images_conf(overlay, src_x):
                rel_x = os.readlink(src_x)
                if os.path.islink(dest_x):
                    log.debug('overriding %s', dest_x)
                    os.unlink(dest_x)

                log.debug('ln -s %s %s', rel_x, dest_x)
                os.symlink(rel_x, dest_x)
                continue

            if os.path.isdir(src_x):
                if not os.path.exists(dest_x):
                    log.debug('makedirs %s', dest_x)
                    os.makedirs(dest_x)

            if os.path.isdir(dest_x):
                continue

            if os.path.exists(dest_x):
                log.critical('Unallowable destination overwrite detected: %s', dest_x)
                sys.exit(1)

            log.debug('cp -p %s %s', src_x, dest_x)
            shutil.copy(src_x, dest_x)


def install_overlays():
    for overlay in WORK_OVERLAYS:
        if not os.path.isdir(os.path.join('work', overlay)):
            install_overlay(overlay)

        else:
            log.info("Using existing '%s' in work environment", overlay)


def install_qemu_firmware():
    firm_dir = 'work/firmware'
    if os.path.isdir(firm_dir):
        log.info('Using existing UEFI firmware in work environment')
        return

    log.info('Installing UEFI firmware in work environment')

    os.makedirs(firm_dir)
    for arch, a_cfg in UEFI_FIRMWARE.items():
        apk = a_cfg['apk']
        bin = a_cfg['bin']
        v = alpine.apk_version('community', arch, apk)
        apk_url = f"{alpine.repo_url('community', arch)}/{apk}-{v}.apk"
        data = urlopen(apk_url).read()

        # Python tarfile library can't extract from APKs
        tar_cmd = ['tar', '-zxf', '-', '-C', firm_dir, bin]
        p = Popen(tar_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
        out, err = p.communicate(input=data)
        if p.returncode:
            log.critical('Unable to untar %s to get %s', apk_url, bin)
            log.error('%s = %s', p.returncode, ' '.join(tar_cmd))
            log.error('STDOUT:\n%s', out.decode('utf8'))
            log.error('STDERR:\n%s', err.decode('utf8'))
            sys.exit(1)

        firm_bin = os.path.join(firm_dir, f"uefi-{arch}.bin")
        os.symlink(bin, firm_bin)


### Command Line & Logging

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
# general options
parser.add_argument(
    '--debug', action='store_true', help='enable debug output')
parser.add_argument(
    '--clean', action='store_true', help='start with a clean work environment')
# config options
parser.add_argument(
    '--custom', metavar='DIR', nargs='+', action=are_args_valid(os.path.isdir),
    default=[], help='overlay custom directory in work environment')
# state options
parser.add_argument(
    '--skip', metavar='KEY', nargs='+', action=remove_dupe_args(),
    default=[], help='skip variants with dimension key(s)')
parser.add_argument(
    '--only', metavar='KEY', nargs='+', action=remove_dupe_args(),
    default=[], help='only variants with dimension key(s)')
parser.add_argument(
    '--revise', action='store_true',
    help='remove existing local/uploaded/imported image, or bump revision and '
         'rebuild if published or released')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
# packer options
parser.add_argument(
    '--no-color', action='store_true', help='turn off Packer color output')
parser.add_argument(
    '--parallel', metavar='N', type=int, default=1,
    help='build N images in parallel')
parser.add_argument(
    '--vars', metavar='FILE', nargs='+', action=are_args_valid(os.path.isfile),
    default=[], help='supply Packer with -vars-file(s)')
# positional argument
parser.add_argument(
    'step', choices=STEPS, help='build up to and including this step')
args = parser.parse_args()

log = logging.getLogger('build')
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
console = logging.StreamHandler()
logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
logfmt.converter = time.gmtime
console.setFormatter(logfmt)
log.addHandler(console)
log.debug(args)

if args.step == 'rollback':
    log.warning('"rollback" step enables --revise option')
    args.revise = True

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

### Setup Configs

latest = alpine.version_info()
log.info('Latest Alpine version %s and release %s', latest['version'], latest['release'])

if args.clean:
    clean_work()

# install overlay(s) if missing
install_overlays()

image_configs = ImageConfigManager(
    conf_path='work/configs/images.conf',
    yaml_path='work/images.yaml',
    log='build',
    alpine=alpine,
)

log.info('Configuration Complete')
if args.step == 'configs':
    sys.exit(0)

### What needs doing?

if not image_configs.refresh_state(
        step=args.step, only=args.only, skip=args.skip, revise=args.revise):
    log.info('No pending actions to take at this time.')
    sys.exit(0)

if args.step == 'state' or args.step == 'rollback':
    sys.exit(0)

# install firmware if missing
install_qemu_firmware()

### Build/Import/Publish with Packer

env = os.environ | {
    'TZ': 'UTC',
    'PACKER_CACHE_DIR': 'work/packer_cache'
}

packer_cmd = [
    'packer', 'build', '-timestamp-ui',
    '-parallel-builds', str(args.parallel)
]
if args.no_color:
    packer_cmd.append('-color=false')

if args.use_broker:
    packer_cmd += ['-var', 'USE_BROKER=1']

if args.debug:
    # do not add '-debug', it will pause between steps
    packer_cmd += ['-var', 'DEBUG=1']

for var_file in args.vars:
    packer_cmd.append(f"-var-file={var_file}")

packer_cmd += ['.']
log.info('Executing Packer...')
log.debug(packer_cmd)
out = io.StringIO()
p = Popen(packer_cmd, stdout=PIPE, encoding='utf8', env=env)
while p.poll() is None:
    text = p.stdout.readline()
    out.write(text)
    print(text, end="")

if p.returncode != 0:
    log.critical('Packer Failure')
    sys.exit(p.returncode)

log.info('Packer Completed')

# update final state in work/images.yaml
image_configs.refresh_state(
    step='final',
    only=args.only,
    skip=args.skip
)

log.info('Build Finished')
@ -1,103 +0,0 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import textwrap

NOTE = textwrap.dedent("""
    This script is meant to be run as a Packer post-processor, and Packer is only
    meant to be executed from the main 'build' script, which is responsible for
    setting up the work environment.
    """)

sys.pycache_prefix = 'work/__pycache__'

if not os.path.exists('work'):
    print('FATAL: Work directory does not exist.', file=sys.stderr)
    print(NOTE, file=sys.stderr)
    exit(1)

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment

import argparse
import logging
from pathlib import Path
from ruamel.yaml import YAML

import clouds
from image_configs import ImageConfigManager


### Constants & Variables

ACTIONS = ['build', 'upload', 'import', 'publish', 'release']
LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'


### Command Line & Logging

parser = argparse.ArgumentParser(description=NOTE)
parser.add_argument('--debug', action='store_true', help='enable debug output')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
parser.add_argument('action', choices=ACTIONS)
parser.add_argument('image_keys', metavar='IMAGE_KEY', nargs='+')
args = parser.parse_args()

log = logging.getLogger(args.action)
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
# log to STDOUT so that it's not all red when executed by packer
console = logging.StreamHandler(sys.stdout)
console.setFormatter(logging.Formatter(LOGFORMAT))
log.addHandler(console)
log.debug(args)

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

# load build configs
configs = ImageConfigManager(
    conf_path='work/configs/images.conf',
    yaml_path='work/images.yaml',
    log=args.action
)

yaml = YAML()
yaml.explicit_start = True

for image_key in args.image_keys:
    image_config = configs.get(image_key)

    if args.action == 'build':
        image_config.convert_image()

    elif args.action == 'upload':
        # TODO: image_config.upload_image()
        pass

    elif args.action == 'import':
        clouds.import_image(image_config)

    elif args.action == 'publish':
        # TODO: we should probably always ensure the directory exists
        os.makedirs(image_config.local_dir, exist_ok=True)
        # TODO: save artifacts to image_config itself
        artifacts = clouds.publish_image(image_config)
        yaml.dump(artifacts, image_config.artifacts_yaml)

    elif args.action == 'release':
        pass
        # TODO: image_config.release_image() - configurable steps to take on remote host

    # save per-image metadata
    image_config.save_metadata(upload=(args.action != 'build'))
@ -1,44 +0,0 @@
# vim: ts=4 et:

from . import aws  # , oci, gcp, azure

ADAPTERS = {}


def register(*mods):
    for mod in mods:
        cloud = mod.__name__.split('.')[-1]
        if p := mod.register(cloud):
            ADAPTERS[cloud] = p


register(aws)  # , oci, azure, gcp)


# using a credential provider is optional, set across all adapters
def set_credential_provider(debug=False):
    from .identity_broker_client import IdentityBrokerClient
    cred_provider = IdentityBrokerClient(debug=debug)
    for adapter in ADAPTERS.values():
        adapter.cred_provider = cred_provider


### forward to the correct adapter

def latest_build_image(config):
    return ADAPTERS[config.cloud].latest_build_image(
        config.project,
        config.image_key
    )


def import_image(config):
    return ADAPTERS[config.cloud].import_image(config)


def delete_image(config, image_id):
    return ADAPTERS[config.cloud].delete_image(image_id)


def publish_image(config):
    return ADAPTERS[config.cloud].publish_image(config)
@ -1,392 +0,0 @@
# NOTE: not meant to be executed directly
# vim: ts=4 et:

import logging
import hashlib
import os
import time

from datetime import datetime
from subprocess import run

from .interfaces.adapter import CloudAdapterInterface
from image_configs import Tags, DictObj


class AWSCloudAdapter(CloudAdapterInterface):
    IMAGE_INFO = [
        'revision', 'imported', 'import_id', 'import_region', 'published',
    ]
    CRED_MAP = {
        'access_key': 'aws_access_key_id',
        'secret_key': 'aws_secret_access_key',
        'session_token': 'aws_session_token',
    }
    ARCH = {
        'aarch64': 'arm64',
        'x86_64': 'x86_64',
    }
    BOOT_MODE = {
        'bios': 'legacy-bios',
        'uefi': 'uefi',
    }

    @property
    def sdk(self):
        # delayed import/install of SDK until we want to use it
        if not self._sdk:
            try:
                import boto3
            except ModuleNotFoundError:
                run(['work/bin/pip', 'install', '-U', 'boto3'])
                import boto3

            self._sdk = boto3

        return self._sdk

    def session(self, region=None):
        if region not in self._sessions:
            creds = {'region_name': region} | self.credentials(region)
            self._sessions[region] = self.sdk.session.Session(**creds)

        return self._sessions[region]

    @property
    def regions(self):
        if self.cred_provider:
            return self.cred_provider.get_regions(self.cloud)

        # list of all subscribed regions
        return {r['RegionName']: True for r in self.session().client('ec2').describe_regions()['Regions']}

    @property
    def default_region(self):
        if self.cred_provider:
            return self.cred_provider.get_default_region(self.cloud)

        # rely on our env or ~/.aws config for the default
        return None

    def credentials(self, region=None):
        if not self.cred_provider:
            # use the cloud SDK's default credential discovery
            return {}

        creds = self.cred_provider.get_credentials(self.cloud, region)
        # return dict suitable to use for session()
        return {self.CRED_MAP[k]: v for k, v in creds.items() if k in self.CRED_MAP}

    def _get_images_with_tags(self, project, image_key, tags={}, region=None):
        ec2r = self.session(region).resource('ec2')
        req = {'Owners': ['self'], 'Filters': []}
        tags |= {
            'project': project,
            'image_key': image_key,
        }
        for k, v in tags.items():
            req['Filters'].append({'Name': f"tag:{k}", 'Values': [str(v)]})

        return sorted(
            ec2r.images.filter(**req), key=lambda k: k.creation_date, reverse=True)

    # necessary cloud-agnostic image info
    def _image_info(self, i):
        tags = Tags(from_list=i.tags)
        return DictObj({k: tags.get(k, None) for k in self.IMAGE_INFO})

    # get the latest imported image for a given build name
    def latest_build_image(self, project, image_key):
        images = self._get_images_with_tags(
            project=project,
            image_key=image_key,
        )
        if images:
            # first one is the latest
            return self._image_info(images[0])

        return None

    # import an image
    # NOTE: requires 'vmimport' role with read/write of <s3_bucket>.* and its objects
    def import_image(self, ic):
        log = logging.getLogger('import')
        description = ic.image_description

        session = self.session()
        s3r = session.resource('s3')
        ec2c = session.client('ec2')
        ec2r = session.resource('ec2')

        bucket_name = 'alpine-cloud-images.' + hashlib.sha1(os.urandom(40)).hexdigest()

        bucket = s3r.Bucket(bucket_name)
        log.info('Creating S3 bucket %s', bucket.name)
        bucket.create(
            CreateBucketConfiguration={'LocationConstraint': ec2c.meta.region_name}
        )
        bucket.wait_until_exists()
        s3_url = f"s3://{bucket.name}/{ic.image_file}"

        try:
            log.info('Uploading %s to %s', ic.image_path, s3_url)
            bucket.upload_file(str(ic.image_path), ic.image_file)

            # import snapshot from S3
            log.info('Importing EC2 snapshot from %s', s3_url)
            ss_import_opts = {
                'DiskContainer': {
                    'Description': description,  # https://github.com/boto/boto3/issues/2286
                    'Format': 'VHD',
                    'Url': s3_url,
                },
                'Encrypted': True if ic.encrypted else False,
                # NOTE: TagSpecifications -- doesn't work with ResourceType: snapshot?
            }
            if type(ic.encrypted) is str:
                ss_import_opts['KmsKeyId'] = ic.encrypted

            ss_import = ec2c.import_snapshot(**ss_import_opts)
            ss_task_id = ss_import['ImportTaskId']
            while True:
                ss_task = ec2c.describe_import_snapshot_tasks(
                    ImportTaskIds=[ss_task_id]
                )
                task_detail = ss_task['ImportSnapshotTasks'][0]['SnapshotTaskDetail']
                if task_detail['Status'] not in ['pending', 'active', 'completed']:
                    msg = f"Bad EC2 snapshot import: {task_detail['Status']} - {task_detail['StatusMessage']}"
                    log.error(msg)
                    raise RuntimeError(msg)

                if task_detail['Status'] == 'completed':
                    snapshot_id = task_detail['SnapshotId']
                    break

                time.sleep(15)
        except Exception:
            log.error('Unable to import snapshot from S3:', exc_info=True)
            raise
        finally:
            # always cleanup S3, even if there was an exception raised
            log.info('Cleaning up %s', s3_url)
            bucket.Object(ic.image_file).delete()
            bucket.delete()

        # tag snapshot
        snapshot = ec2r.Snapshot(snapshot_id)
        try:
            log.info('Tagging EC2 snapshot %s', snapshot_id)
            tags = ic.tags
            tags.Name = tags.name  # because AWS is special
            snapshot.create_tags(Tags=tags.as_list())
        except Exception:
            log.error('Unable to tag snapshot:', exc_info=True)
            log.info('Removing snapshot')
            snapshot.delete()
            raise

        # register AMI
        try:
            log.info('Registering EC2 AMI from snapshot %s', snapshot_id)
            img = ec2c.register_image(
                Architecture=self.ARCH[ic.arch],
                BlockDeviceMappings=[{
                    'DeviceName': '/dev/xvda',
                    'Ebs': {
                        'SnapshotId': snapshot_id,
                        'VolumeType': 'gp3'
                    }
                }],
                Description=description,
                EnaSupport=True,
                Name=ic.image_name,
                RootDeviceName='/dev/xvda',
                SriovNetSupport='simple',
                VirtualizationType='hvm',
                BootMode=self.BOOT_MODE[ic.firmware],
            )
        except Exception:
            log.error('Unable to register image:', exc_info=True)
            log.info('Removing snapshot')
            snapshot.delete()
            raise

        image_id = img['ImageId']
        image = ec2r.Image(image_id)

        try:
            # tag image (adds imported tag)
            log.info('Tagging EC2 AMI %s', image_id)
            tags.imported = datetime.utcnow().isoformat()
            tags.import_id = image_id
            tags.import_region = ec2c.meta.region_name
            image.create_tags(Tags=tags.as_list())
        except Exception:
            log.error('Unable to tag image:', exc_info=True)
            log.info('Removing image and snapshot')
            image.delete()
            snapshot.delete()
            raise

        return self._image_info(image)

    # delete an (unpublished) image
    def delete_image(self, image_id):
        log = logging.getLogger('build')
        ec2r = self.session().resource('ec2')
        image = ec2r.Image(image_id)
        snapshot_id = image.block_device_mappings[0]['Ebs']['SnapshotId']
        snapshot = ec2r.Snapshot(snapshot_id)
        log.info('Deregistering %s', image_id)
        image.deregister()
        log.info('Deleting %s', snapshot_id)
        snapshot.delete()

    # publish an image
    def publish_image(self, ic):
        log = logging.getLogger('publish')
        source_image = self.latest_build_image(
            ic.project,
            ic.image_key,
        )
        if not source_image:
            log.error('No source image for %s', ic.image_key)
            raise RuntimeError('Missing source image')

        source_id = source_image.import_id
        source_region = source_image.import_region
        log.info('Publishing source: %s/%s', source_region, source_id)
        source = self.session().resource('ec2').Image(source_id)

        # we may be updating tags, get them from image config
        tags = ic.tags

        # sort out published image access permissions
        perms = {'groups': [], 'users': []}
        if ic.access.get('PUBLIC', None):
            perms['groups'] = ['all']
        else:
            for k, v in ic.access.items():
                if v:
                    log.debug('users: %s', k)
                    perms['users'].append(str(k))

        log.debug('perms: %s', perms)

        # resolve destination regions
        regions = self.regions
        if ic.regions.pop('ALL', None):
            log.info('Publishing to ALL available regions')
        else:
            # clear ALL out of the way if it's still there
            ic.regions.pop('ALL', None)
            regions = {r: regions[r] for r in ic.regions}

        publishing = {}
        for r in regions.keys():
            if not regions[r]:
                log.warning('Skipping unsubscribed AWS region %s', r)
                continue

            images = self._get_images_with_tags(
                region=r,
                project=ic.project,
                image_key=ic.image_key,
                tags={'revision': ic.revision}
            )
            if images:
                image = images[0]
                log.info('%s: Already exists as %s', r, image.id)
            else:
                ec2c = self.session(r).client('ec2')
                copy_image_opts = {
                    'Description': source.description,
|
|
||||||
'Name': source.name,
|
|
||||||
'SourceImageId': source_id,
|
|
||||||
'SourceRegion': source_region,
|
|
||||||
'Encrypted': True if ic.encrypted else False,
|
|
||||||
}
|
|
||||||
if type(ic.encrypted) is str:
|
|
||||||
copy_image_opts['KmsKeyId'] = ic.encrypted
|
|
||||||
|
|
||||||
try:
|
|
||||||
res = ec2c.copy_image(**copy_image_opts)
|
|
||||||
except Exception:
|
|
||||||
log.warning('Skipping %s, unable to copy image:', r, exc_info=True)
|
|
||||||
continue
|
|
||||||
|
|
||||||
image_id = res['ImageId']
|
|
||||||
log.info('%s: Publishing to %s', r, image_id)
|
|
||||||
image = self.session(r).resource('ec2').Image(image_id)
|
|
||||||
|
|
||||||
publishing[r] = image
|
|
||||||
|
|
||||||
artifacts = {}
|
|
||||||
copy_wait = 180
|
|
||||||
while len(artifacts) < len(publishing):
|
|
||||||
for r, image in publishing.items():
|
|
||||||
if r not in artifacts:
|
|
||||||
image.reload()
|
|
||||||
if image.state == 'available':
|
|
||||||
# tag image
|
|
||||||
log.info('%s: Adding tags to %s', r, image.id)
|
|
||||||
image_tags = Tags(from_list=image.tags)
|
|
||||||
fresh = False
|
|
||||||
if 'published' not in image_tags:
|
|
||||||
fresh = True
|
|
||||||
|
|
||||||
if fresh:
|
|
||||||
tags.published = datetime.utcnow().isoformat()
|
|
||||||
|
|
||||||
tags.Name = tags.name # because AWS is special
|
|
||||||
image.create_tags(Tags=tags.as_list())
|
|
||||||
|
|
||||||
# tag image's snapshot, too
|
|
||||||
snapshot = self.session(r).resource('ec2').Snapshot(
|
|
||||||
image.block_device_mappings[0]['Ebs']['SnapshotId']
|
|
||||||
)
|
|
||||||
snapshot.create_tags(Tags=tags.as_list())
|
|
||||||
|
|
||||||
# update image description to match description in tags
|
|
||||||
log.info('%s: Updating description to %s', r, tags.description)
|
|
||||||
image.modify_attribute(
|
|
||||||
Description={'Value': tags.description},
|
|
||||||
)
|
|
||||||
|
|
||||||
# apply launch perms
|
|
||||||
if perms['groups'] or perms['users']:
|
|
||||||
log.info('%s: Applying launch perms to %s', r, image.id)
|
|
||||||
image.reset_attribute(Attribute='launchPermission')
|
|
||||||
image.modify_attribute(
|
|
||||||
Attribute='launchPermission',
|
|
||||||
OperationType='add',
|
|
||||||
UserGroups=perms['groups'],
|
|
||||||
UserIds=perms['users'],
|
|
||||||
)
|
|
||||||
|
|
||||||
# set up AMI deprecation
|
|
||||||
ec2c = image.meta.client
|
|
||||||
log.info('%s: Setting EOL deprecation time on %s', r, image.id)
|
|
||||||
try:
|
|
||||||
ec2c.enable_image_deprecation(
|
|
||||||
ImageId=image.id,
|
|
||||||
DeprecateAt=f"{tags.end_of_life}T23:59:00Z"
|
|
||||||
)
|
|
||||||
except Exception:
|
|
||||||
log.warning('Unable to set EOL Deprecation on %s image:', r, exc_info=True)
|
|
||||||
|
|
||||||
artifacts[r] = image.id
|
|
||||||
|
|
||||||
if image.state == 'failed':
|
|
||||||
log.error('%s: %s - %s - %s', r, image.id, image.state, image.state_reason)
|
|
||||||
artifacts[r] = None
|
|
||||||
|
|
||||||
remaining = len(publishing) - len(artifacts)
|
|
||||||
if remaining > 0:
|
|
||||||
log.info('Waiting %ds for %d images to complete', copy_wait, remaining)
|
|
||||||
time.sleep(copy_wait)
|
|
||||||
copy_wait = 30
|
|
||||||
|
|
||||||
return artifacts
|
|
||||||
|
|
||||||
|
|
||||||
def register(cloud, cred_provider=None):
|
|
||||||
return AWSCloudAdapter(cloud, cred_provider)
|
|
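The polling loop above checks copied AMIs with a long first wait and shorter follow-ups (180s, then 30s). A minimal standalone sketch of that pacing, using an invented region/AMI and a scripted state sequence in place of real EC2 calls:

```python
import itertools

# fake image state: 'pending' on the first two polls, then 'available'
states = itertools.chain(['pending', 'pending'], itertools.repeat('available'))

publishing = {'us-east-1': 'ami-12345'}   # hypothetical region -> AMI id
artifacts = {}
waits = []          # records each wait; stands in for time.sleep(copy_wait)
copy_wait = 180
while len(artifacts) < len(publishing):
    for region, image_id in publishing.items():
        if region not in artifacts and next(states) == 'available':
            artifacts[region] = image_id
    if len(artifacts) < len(publishing):
        waits.append(copy_wait)
        copy_wait = 30  # subsequent polls come faster

# waits is [180, 30]: one long initial wait, then short re-checks
```

The long first wait matches how long a cross-region `copy_image` typically takes before re-polling becomes worthwhile.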
@ -1,135 +0,0 @@
# vim: ts=4 et:

import json
import logging
import os
import sys
import time
import urllib.error

from datetime import datetime
from email.utils import parsedate
from urllib.request import Request, urlopen


class IdentityBrokerClient:
    """Client for identity broker

    Export IDENTITY_BROKER_ENDPOINT to override the default broker endpoint.
    Export IDENTITY_BROKER_API_KEY to specify an API key for the broker.

    See README_BROKER.md for more information and a spec.
    """

    _DEFAULT_ENDPOINT = 'https://aws-access.crute.us/api/account'
    _DEFAULT_ACCOUNT = 'alpine-amis-user'
    _LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'

    def __init__(self, endpoint=None, key=None, account=None, debug=False):
        # log to STDOUT so that it's not all red when executed by Packer
        self._logger = logging.getLogger('identity-broker')
        self._logger.setLevel(logging.DEBUG if debug else logging.INFO)
        console = logging.StreamHandler(sys.stdout)
        console.setFormatter(logging.Formatter(self._LOGFORMAT))
        self._logger.addHandler(console)

        self._endpoint = os.environ.get('IDENTITY_BROKER_ENDPOINT') or endpoint \
            or self._DEFAULT_ENDPOINT
        self._key = os.environ.get('IDENTITY_BROKER_API_KEY') or key
        self._account = account or self._DEFAULT_ACCOUNT
        if not self._key:
            raise Exception('No identity broker key found')

        self._headers = {
            'Accept': 'application/vnd.broker.v2+json',
            'X-API-Key': self._key
        }
        self._cache = {}
        self._expires = {}
        self._default_region = {}

    def _is_cache_valid(self, path):
        if path not in self._cache:
            return False

        # path is subject to expiry AND its time has passed
        if self._expires[path] and self._expires[path] < datetime.utcnow():
            return False

        return True

    def _get(self, path):
        self._logger.debug("request: %s", path)
        if not self._is_cache_valid(path):
            while True:  # to handle rate limits
                try:
                    res = urlopen(Request(path, headers=self._headers))
                except urllib.error.HTTPError as ex:
                    if ex.status == 401:
                        raise Exception('Expired or invalid identity broker token')

                    if ex.status == 406:
                        raise Exception('Invalid or malformed identity broker token')

                    # TODO: will this be entirely handled by the 401 above?
                    if ex.headers.get('Location') == '/logout':
                        raise Exception('Identity broker token is expired')

                    if ex.status == 429:
                        self._logger.warning(
                            'Rate-limited by identity broker, sleeping 30 seconds')
                        time.sleep(30)
                        continue

                    raise ex

                if res.status not in {200, 429}:
                    raise Exception(res.reason)

                # never expires without valid RFC 1123 Expires header
                if expires := res.getheader('Expires'):
                    expires = parsedate(expires)
                    # convert RFC 1123 to datetime, if parsed successfully
                    expires = datetime(*expires[:6])

                self._expires[path] = expires
                self._cache[path] = json.load(res)
                break

        self._logger.debug("response: %s", self._cache[path])
        return self._cache[path]

    def get_credentials_url(self, vendor):
        accounts = self._get(self._endpoint)
        if vendor not in accounts:
            raise Exception(f'No {vendor} credentials found')

        for account in accounts[vendor]:
            if account['short_name'] == self._account:
                return account['credentials_url']

        raise Exception('No account credentials found')

    def get_regions(self, vendor):
        out = {}

        for region in self._get(self.get_credentials_url(vendor)):
            if region['enabled']:
                out[region['name']] = region['credentials_url']

                if region['default']:
                    self._default_region[vendor] = region['name']

        return out

    def get_default_region(self, vendor):
        if vendor not in self._default_region:
            self.get_regions(vendor)

        return self._default_region.get(vendor)

    def get_credentials(self, vendor, region=None):
        if not region:
            region = self.get_default_region(vendor)

        return self._get(self.get_regions(vendor)[region])
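The `_get` cache logic above converts the broker's RFC 1123 `Expires` header into a `datetime` for expiry checks. A minimal standalone sketch of just that conversion (the header value is a made-up example, not a real broker response):

```python
from datetime import datetime
from email.utils import parsedate

header = 'Wed, 01 Jun 2022 08:00:00 GMT'  # hypothetical Expires header value
parsed = parsedate(header)                # 9-tuple, or None if unparseable
expires = datetime(*parsed[:6])           # first six fields: y, mo, d, h, mi, s

# expires can now be compared against datetime.utcnow(), as in _is_cache_valid()
```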
@ -1,40 +0,0 @@
# vim: ts=4 et:


class CloudAdapterInterface:

    def __init__(self, cloud, cred_provider=None):
        self._sdk = None
        self._sessions = {}
        self.cloud = cloud
        self.cred_provider = cred_provider
        self._default_region = None

    @property
    def sdk(self):
        raise NotImplementedError

    @property
    def regions(self):
        raise NotImplementedError

    @property
    def default_region(self):
        raise NotImplementedError

    def credentials(self, region=None):
        raise NotImplementedError

    def session(self, region=None):
        raise NotImplementedError

    def latest_build_image(self, project, image_key):
        raise NotImplementedError

    def import_image(self, config):
        raise NotImplementedError

    def delete_image(self, config, image_id):
        raise NotImplementedError

    def publish_image(self, config):
        raise NotImplementedError
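Concrete cloud modules subclass this interface and expose a module-level `register()` factory, as the AWS adapter above does. A minimal sketch of that pattern with a trimmed-down base class and an invented "nop" cloud (every name besides `CloudAdapterInterface` and `register` is hypothetical):

```python
class CloudAdapterInterface:
    # trimmed-down copy of the interface above, for a self-contained sketch
    def __init__(self, cloud, cred_provider=None):
        self.cloud = cloud
        self.cred_provider = cred_provider

    @property
    def regions(self):
        raise NotImplementedError


class NopCloudAdapter(CloudAdapterInterface):
    # hypothetical adapter: one fake subscribed region, no real SDK calls

    @property
    def regions(self):
        return {'nop-east-1': True}


def register(cloud, cred_provider=None):
    # each clouds/<name>.py module exposes a register() factory like this
    return NopCloudAdapter(cloud, cred_provider)


adapter = register('nop')
```

The build tooling only ever talks to adapters through the interface, so adding a cloud is a matter of filling in these methods.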
@ -1,88 +0,0 @@
# vim: ts=2 et:

# NOTE: If you are using alpine-cloud-images to build public cloud images
#   for something/someone other than Alpine Linux, you *MUST* override
#   *AT LEAST* the 'project' setting with a unique identifier string value
#   via a "config overlay" to avoid image import and publishing collisions.

project = "https://alpinelinux.org/cloud"

# all build configs start with these
Default {
  project = ${project}

  # image name/description components
  name = [ alpine ]
  description = [ Alpine Linux ]

  motd {
    welcome = "Welcome to Alpine!"

    wiki = "The Alpine Wiki contains a large amount of how-to guides and general\n"\
           "information about administrating Alpine systems.\n"\
           "See <https://wiki.alpinelinux.org/>."

    version_notes = "Release Notes:\n"\
                    "* <https://alpinelinux.org/posts/Alpine-{version}.0-released.html>"
    release_notes = "* <https://alpinelinux.org/posts/Alpine-{release}-released.html>"
  }

  # initial provisioning script and data directory
  scripts = [ setup ]
  script_dirs = [ setup.d ]

  size = 1G
  login = alpine

  image_format = qcow2

  # these paths are subject to change, as image downloads are developed
  storage_url = "ssh://tomalok@dev.alpinelinux.org/public_html/alpine-cloud-images/{v_version}/cloud/{cloud}"
  download_url = "https://dev.alpinelinux.org/~tomalok/alpine-cloud-images/{v_version}/cloud/{cloud}"  # development
  #download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/cloud/{cloud}"

  # image access
  access.PUBLIC = true

  # image publication
  regions.ALL = true
}

# profile build matrix
Dimensions {
  version {
    "3.16" { include required("version/3.16.conf") }
    "3.15" { include required("version/3.15.conf") }
    "3.14" { include required("version/3.14.conf") }
    "3.13" { include required("version/3.13.conf") }
    edge { include required("version/edge.conf") }
  }
  arch {
    x86_64 { include required("arch/x86_64.conf") }
    aarch64 { include required("arch/aarch64.conf") }
  }
  firmware {
    bios { include required("firmware/bios.conf") }
    uefi { include required("firmware/uefi.conf") }
  }
  bootstrap {
    tiny { include required("bootstrap/tiny.conf") }
    cloudinit { include required("bootstrap/cloudinit.conf") }
  }
  cloud {
    aws { include required("cloud/aws.conf") }
  }
}

# all build configs merge these at the very end
Mandatory {
  name = [ "r{revision}" ]
  description = [ "- https://alpinelinux.org/cloud" ]
  encrypted = false

  # final motd message
  motd.motd_change = "You may change this message by editing /etc/motd."

  # final provisioning script
  scripts = [ cleanup ]
}
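The `Dimensions` block above is expanded by the build tooling into the cross product of its keys, one build config per combination (as `ImageConfigManager._resolve()` does with `itertools.product`). A minimal sketch of that expansion over a hypothetical subset of the dimensions:

```python
import itertools

# a small stand-in for the Dimensions block; the real one also has
# firmware and bootstrap dimensions
dimensions = {
    'version': ['3.16', 'edge'],
    'arch': ['x86_64', 'aarch64'],
    'cloud': ['aws'],
}

# one config key per combination, joined with '-' as the build script does
config_keys = ['-'.join(keys) for keys in itertools.product(*dimensions.values())]
# yields '3.16-x86_64-aws', '3.16-aarch64-aws', 'edge-x86_64-aws', ...
```

Per-dimension EXCLUDE lists (like `EXCLUDE = [bios]` in the aarch64 config) then prune combinations from this product.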
@ -1,15 +0,0 @@
# vim: ts=2 et:
name = [aarch64]
arch_name = aarch64

# aarch64 is UEFI only
EXCLUDE = [bios]

qemu.machine_type = virt
qemu.args = [
  [-cpu, cortex-a57],
  [-boot, d],
  [-device, virtio-gpu-pci],
  [-device, usb-ehci],
  [-device, usb-kbd],
]
@ -1,6 +0,0 @@
# vim: ts=2 et:
name = [x86_64]
arch_name = x86_64

qemu.machine_type = null
qemu.args = null
@ -1,16 +0,0 @@
# vim: ts=2 et:
name = [cloudinit]
bootstrap_name = cloud-init
bootstrap_url = "https://cloud-init.io"

# start cloudinit images with 3.15
EXCLUDE = ["3.12", "3.13", "3.14"]

packages {
  cloud-init = true
  openssh-server-pam = true
  e2fsprogs-extra = true  # for resize2fs
}
services.default.cloud-init-hotplugd = true

scripts = [ setup-cloudinit ]
@ -1,33 +0,0 @@
# vim: ts=2 et:
name = [tiny]
bootstrap_name = Tiny Cloud
bootstrap_url = "https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud"

services {
  sysinit.tiny-cloud-early = true
  default.tiny-cloud = true
  default.tiny-cloud-final = true
}

WHEN {
  aws {
    packages.tiny-cloud-aws = true
    WHEN {
      "3.12" {
        # tiny-cloud-network requires ifupdown-ng (unavailable in 3.12)
        packages.tiny-cloud-aws = null
        services.sysinit.tiny-cloud-early = null
        services.default.tiny-cloud = null
        services.default.tiny-cloud-final = null
        # fall back to tiny-ec2-bootstrap instead
        packages.tiny-ec2-bootstrap = true
        services.default.tiny-ec2-bootstrap = true
      }
    }
  }
  # azure.packages.tiny-cloud-azure = true
  # gcp.packages.tiny-cloud-gcp = true
  # oci.packages.tiny-cloud-oci = true
}

scripts = [ setup-tiny ]
@ -1,38 +0,0 @@
# vim: ts=2 et:
cloud_name = Amazon Web Services
image_format = vhd

kernel_modules {
  ena = true
  nvme = true
}
kernel_options {
  "nvme_core.io_timeout=4294967295" = true
}
initfs_features {
  ena = true
  nvme = true
}

ntp_server = 169.254.169.123

access.PUBLIC = true
regions.ALL = true

cloud_region_url = "https://{region}.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId={image_id}"
cloud_launch_url = "https://{region}.console.aws.amazon.com/ec2/home#launchAmi={image_id}"

WHEN {
  aarch64 {
    # new AWS aarch64 default...
    kernel_modules.gpio_pl061 = true
    initfs_features.gpio_pl061 = true
    WHEN {
      "3.14 3.13 3.12" {
        # ...but not supported for older versions
        kernel_modules.gpio_pl061 = false
        initfs_features.gpio_pl061 = false
      }
    }
  }
}
@ -1,7 +0,0 @@
# vim: ts=2 et:
name = [bios]
firmware_name = BIOS

bootloader = extlinux
packages.syslinux = --no-scripts
qemu.firmware = null
@ -1,18 +0,0 @@
# vim: ts=2 et:
name = [uefi]
firmware_name = UEFI

bootloader = grub-efi
packages {
  grub-efi = --no-scripts
  dosfstools = true
}

WHEN {
  aarch64 {
    qemu.firmware = work/firmware/uefi-aarch64.bin
  }
  x86_64 {
    qemu.firmware = work/firmware/uefi-x86_64.bin
  }
}
@ -1 +0,0 @@
alpine.conf
@ -1,5 +0,0 @@
# vim: ts=2 et:

include required("base/1.conf")

# NOTE: EOL 2022-05-01
@ -1,3 +0,0 @@
# vim: ts=2 et:

include required("base/2.conf")
@ -1,3 +0,0 @@
# vim: ts=2 et:

include required("base/2.conf")
@ -1,7 +0,0 @@
# vim: ts=2 et:

include required("base/3.conf")

motd {
  sudo_deprecated = "NOTE: 'sudo' has been deprecated, please use 'doas' instead."
}
@ -1,7 +0,0 @@
# vim: ts=2 et:

include required("base/4.conf")

motd {
  sudo_removed = "NOTE: 'sudo' is no longer installed by default, please use 'doas' instead."
}
@ -1,60 +0,0 @@
# vim: ts=2 et:

repos {
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/main" = true
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/community" = true
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/testing" = false
}

packages {
  alpine-base = true
  linux-virt = true
  alpine-mirrors = true
  chrony = true
  e2fsprogs = true
  openssh = true
  sudo = true
  tzdata = true
}

services {
  sysinit {
    devfs = true
    dmesg = true
    hwdrivers = true
    mdev = true
  }
  boot {
    acpid = true
    bootmisc = true
    hostname = true
    hwclock = true
    modules = true
    swap = true
    sysctl = true
    syslog = true
  }
  default {
    chronyd = true
    networking = true
    sshd = true
  }
  shutdown {
    killprocs = true
    mount-ro = true
    savecache = true
  }
}

kernel_modules {
  sd-mod = true
  usb-storage = true
  ext4 = true
}

kernel_options {
  "console=ttyS0,115200n8" = true
}

initfs_features {
}
@ -1,8 +0,0 @@
# vim: ts=2 et:

include required("1.conf")

packages {
  # drop old alpine-mirrors
  alpine-mirrors = null
}
@ -1,8 +0,0 @@
# vim: ts=2 et:

include required("2.conf")

packages {
  # doas will officially replace sudo in 3.16
  doas = true
}
@ -1,8 +0,0 @@
# vim: ts=2 et:

include required("3.conf")

packages {
  # doas officially replaces sudo in 3.16
  sudo = false
}
@ -1,15 +0,0 @@
# vim: ts=2 et:

include required("base/4.conf")

motd {
  sudo_removed = "NOTE: 'sudo' is no longer installed by default, please use 'doas' instead."
}

# clear out inherited repos
repos = null
repos {
  "https://dl-cdn.alpinelinux.org/alpine/edge/main" = true
  "https://dl-cdn.alpinelinux.org/alpine/edge/community" = true
  "https://dl-cdn.alpinelinux.org/alpine/edge/testing" = true
}
@ -1,214 +0,0 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# TODO: perhaps integrate into "./build release"

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import textwrap

NOTE = textwrap.dedent("""
    This script's output provides a mustache-ready datasource to alpine-mksite
    (https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite) and should be
    run after the main 'build' script has published ALL images.
    STDOUT from this script should be saved as 'cloud/releases.yaml' in the
    above alpine-mksite repo.
    """)

sys.pycache_prefix = 'work/__pycache__'

if not os.path.exists('work'):
    print('FATAL: Work directory does not exist.', file=sys.stderr)
    print(NOTE, file=sys.stderr)
    exit(1)

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment

import argparse
import logging

from collections import defaultdict
from ruamel.yaml import YAML

import clouds
from image_configs import ImageConfigManager


### Constants & Variables

LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'


### Functions

# allows us to set values deep within an object that might not be fully defined
def dictfactory():
    return defaultdict(dictfactory)


# undo dictfactory() objects to normal objects
def undictfactory(o):
    if isinstance(o, defaultdict):
        o = {k: undictfactory(v) for k, v in o.items()}
    return o


### Command Line & Logging

parser = argparse.ArgumentParser(description=NOTE)
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
parser.add_argument('--debug', action='store_true', help='enable debug output')
args = parser.parse_args()

log = logging.getLogger('gen_mksite_releases')
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
console = logging.StreamHandler(sys.stderr)
console.setFormatter(logging.Formatter(LOGFORMAT))
log.addHandler(console)
log.debug(args)

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider()

# load build configs
configs = ImageConfigManager(
    conf_path='work/configs/images.conf',
    yaml_path='work/images.yaml',
    log='gen_mksite_releases'
)
# make sure images.yaml is up-to-date with reality
configs.refresh_state('final')

yaml = YAML()

filters = dictfactory()
versions = dictfactory()
data = {}

log.info('Transforming image data')
for i_key, i_cfg in configs.get().items():
    if not i_cfg.published:
        continue

    version = i_cfg.version
    if version == 'edge':
        continue

    image_name = i_cfg.image_name
    release = i_cfg.release
    arch = i_cfg.arch
    firmware = i_cfg.firmware
    bootstrap = i_cfg.bootstrap
    cloud = i_cfg.cloud

    if cloud not in filters['clouds']:
        filters['clouds'][cloud] = {
            'cloud': cloud,
            'cloud_name': i_cfg.cloud_name,
        }

    filters['regions'] = {}

    if arch not in filters['archs']:
        filters['archs'][arch] = {
            'arch': arch,
            'arch_name': i_cfg.arch_name,
        }

    if firmware not in filters['firmwares']:
        filters['firmwares'][firmware] = {
            'firmware': firmware,
            'firmware_name': i_cfg.firmware_name,
        }

    if bootstrap not in filters['bootstraps']:
        filters['bootstraps'][bootstrap] = {
            'bootstrap': bootstrap,
            'bootstrap_name': i_cfg.bootstrap_name,
        }

    if i_cfg.artifacts:
        for region, image_id in {r: i_cfg.artifacts[r] for r in sorted(i_cfg.artifacts)}.items():
            if region not in filters['regions']:
                filters['regions'][region] = {
                    'region': region,
                    'clouds': [cloud],
                }

            if cloud not in filters['regions'][region]['clouds']:
                filters['regions'][region]['clouds'].append(cloud)

            versions[version] |= {
                'version': version,
                'release': release,
                'end_of_life': i_cfg.end_of_life,
            }
            versions[version]['images'][image_name] |= {
                'image_name': image_name,
                'arch': arch,
                'firmware': firmware,
                'bootstrap': bootstrap,
                'published': i_cfg.published.split('T')[0],  # just the date
            }
            versions[version]['images'][image_name]['downloads'][cloud] |= {
                'cloud': cloud,
                'image_url': i_cfg.download_url,
            }
            versions[version]['images'][image_name]['regions'][region] |= {
                'cloud': cloud,
                'region': region,
                'region_url': i_cfg.region_url(region, image_id),
                'launch_url': i_cfg.launch_url(region, image_id),
            }

log.info('Making data mustache-compatible')

# convert filters to mustache-compatible format
data['filters'] = {}
for f in ['clouds', 'regions', 'archs', 'firmwares', 'bootstraps']:
    data['filters'][f] = [
        filters[f][k] for k in filters[f]  # order as they appear in work/images.yaml
    ]

for r in data['filters']['regions']:
    c = r.pop('clouds')
    r['clouds'] = [{'cloud': v} for v in c]

# convert versions to mustache-compatible format
data['versions'] = []
versions = undictfactory(versions)
for version in sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.split('.')]):
    images = versions[version].pop('images')
    i = []
    for image_name in images:  # order as they appear in work/images.yaml
        downloads = images[image_name].pop('downloads')
        d = []
        for download in downloads:
            d.append(downloads[download])

        images[image_name]['downloads'] = d

        regions = images[image_name].pop('regions')
        r = []
        for region in sorted(regions):
            r.append(regions[region])

        images[image_name]['regions'] = r
        i.append(images[image_name])

    versions[version]['images'] = i
    data['versions'].append(versions[version])

log.info('Dumping YAML')
yaml.dump(data, sys.stdout)
log.info('Done')
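The `dictfactory()`/`undictfactory()` helpers above let the script assign deeply-nested keys without pre-creating intermediate dicts, then flatten the result back to plain dicts for YAML dumping. A minimal round-trip sketch (the image name is illustrative):

```python
from collections import defaultdict

def dictfactory():
    # recursive defaultdict: missing keys materialize as nested dictfactories
    return defaultdict(dictfactory)

def undictfactory(o):
    # convert back to plain dicts so the YAML dumper emits ordinary mappings
    if isinstance(o, defaultdict):
        o = {k: undictfactory(v) for k, v in o.items()}
    return o

versions = dictfactory()
# deep assignment works without creating '3.16' or 'images' first
versions['3.16']['images']['alpine-3.16.0-x86_64']['arch'] = 'x86_64'

plain = undictfactory(versions)
```

Without the flattening step, dumping a `defaultdict` tree would carry the defaultdict type (and its factory) into the serializer.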
@ -1,584 +0,0 @@
|
|||||||
# vim: ts=4 et:

import hashlib
import itertools
import logging
import mergedeep
import os
import pyhocon
import shutil

from copy import deepcopy
from datetime import datetime
from pathlib import Path
from ruamel.yaml import YAML
from subprocess import Popen, PIPE
from urllib.parse import urlparse

import clouds


class ImageConfigManager():

    def __init__(self, conf_path, yaml_path, log=__name__, alpine=None):
        self.conf_path = Path(conf_path)
        self.yaml_path = Path(yaml_path)
        self.log = logging.getLogger(log)
        self.alpine = alpine

        self.now = datetime.utcnow()
        self._configs = {}

        self.yaml = YAML()
        self.yaml.register_class(ImageConfig)
        self.yaml.explicit_start = True
        # hide !ImageConfig tag from Packer
        self.yaml.representer.org_represent_mapping = self.yaml.representer.represent_mapping
        self.yaml.representer.represent_mapping = self._strip_yaml_tag_type

        # load resolved YAML, if exists
        if self.yaml_path.exists():
            self._load_yaml()
        else:
            self._resolve()

    def get(self, key=None):
        if not key:
            return self._configs

        return self._configs[key]

    # load already-resolved YAML configs, restoring ImageConfig objects
    def _load_yaml(self):
        self.log.info('Loading existing %s', self.yaml_path)
        for key, config in self.yaml.load(self.yaml_path).items():
            self._configs[key] = ImageConfig(key, config, log=self.log, yaml=self.yaml)
        # TODO: also pull in additional per-image metadata from the build process?

    # save resolved configs to YAML
    def _save_yaml(self):
        self.log.info('Saving %s', self.yaml_path)
        self.yaml.dump(self._configs, self.yaml_path)

    # hide !ImageConfig tag from Packer
    def _strip_yaml_tag_type(self, tag, mapping, flow_style=None):
        if tag == '!ImageConfig':
            tag = u'tag:yaml.org,2002:map'

        return self.yaml.representer.org_represent_mapping(tag, mapping, flow_style=flow_style)

    # resolve from HOCON configs
    def _resolve(self):
        self.log.info('Generating configs.yaml in work environment')
        cfg = pyhocon.ConfigFactory.parse_file(self.conf_path)
        # set version releases
        for v, vcfg in cfg.Dimensions.version.items():
            # version keys are quoted to protect dots
            self._set_version_release(v.strip('"'), vcfg)

        dimensions = list(cfg.Dimensions.keys())
        self.log.debug('dimensions: %s', dimensions)

        for dim_keys in (itertools.product(*cfg['Dimensions'].values())):
            config_key = '-'.join(dim_keys).replace('"', '')

            # dict of dimension -> dimension_key
            dim_map = dict(zip(dimensions, dim_keys))

            # replace version with release, and make image_key from that
            release = cfg.Dimensions.version[dim_map['version']].release
            (rel_map := dim_map.copy())['version'] = release
            image_key = '-'.join(rel_map.values())

            image_config = ImageConfig(
                config_key,
                {
                    'image_key': image_key,
                    'release': release
                } | dim_map,
                log=self.log,
                yaml=self.yaml
            )

            # merge in the Default config
            image_config._merge(cfg.Default)
            skip = False
            # merge in each dimension key's configs
            for dim, dim_key in dim_map.items():
                dim_cfg = deepcopy(cfg.Dimensions[dim][dim_key])

                image_config._merge(dim_cfg)

                # now that we're done with ConfigTree/dim_cfg, remove " from dim_keys
                dim_keys = set(k.replace('"', '') for k in dim_keys)

                # WHEN blocks inside WHEN blocks are considered "and" operations
                while (when := image_config._pop('WHEN', None)):
                    for when_keys, when_conf in when.items():
                        # WHEN keys with spaces are considered "or" operations
                        if len(set(when_keys.split(' ')) & dim_keys) > 0:
                            image_config._merge(when_conf)

                exclude = image_config._pop('EXCLUDE', None)
                if exclude and set(exclude) & set(dim_keys):
                    self.log.debug('%s SKIPPED, %s excludes %s', config_key, dim_key, exclude)
                    skip = True
                    break

                if eol := image_config._get('end_of_life', None):
                    if self.now > datetime.fromisoformat(eol):
                        self.log.warning('%s SKIPPED, %s end_of_life %s', config_key, dim_key, eol)
                        skip = True
                        break

            if skip is True:
                continue

            # merge in the Mandatory configs at the end
            image_config._merge(cfg.Mandatory)

            # clean stuff up
            image_config._normalize()
            image_config.qemu['iso_url'] = self.alpine.virt_iso_url(arch=image_config.arch)

            # we've resolved everything, add tags attribute to config
            self._configs[config_key] = image_config

        self._save_yaml()
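An illustrative sketch (not part of the original file) of the WHEN merge semantics used in `_resolve()` above: a WHEN key containing spaces matches if any of its words is a dimension key ("or"), and since merged config can itself contain a WHEN block, the `while` loop makes nesting behave as "and". The dimension keys and config values below are invented, with a plain `dict.update` standing in for `image_config._merge()`.

```python
# invented dimension keys for one image variant
dim_keys = {'3.15', 'x86_64', 'aws', 'tiny'}

config = {
    'ntp_server': '',
    'WHEN': {
        'aws azure': {'ntp_server': '169.254.169.123'},    # "or": matches, 'aws' is a dim key
        'gcp': {'ntp_server': 'metadata.google.internal'}  # no match, dropped
    },
}

# keep popping WHEN blocks until none remain (nested WHEN = "and")
while (when := config.pop('WHEN', None)):
    for when_keys, when_conf in when.items():
        if set(when_keys.split(' ')) & dim_keys:
            config.update(when_conf)   # stand-in for image_config._merge()

print(config['ntp_server'])   # the aws branch won
```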

    # set current version release
    def _set_version_release(self, v, c):
        info = self.alpine.version_info(v)
        c.put('release', info['release'])
        c.put('end_of_life', info['end_of_life'])

        # release is also appended to name & description arrays
        c.put('name', [c.release])
        c.put('description', [c.release])

    # update current config status
    def refresh_state(self, step, only=[], skip=[], revise=False):
        self.log.info('Refreshing State')
        has_actions = False
        for ic in self._configs.values():
            # clear away any previous actions
            if hasattr(ic, 'actions'):
                delattr(ic, 'actions')

            dim_keys = set(ic.config_key.split('-'))
            if only and len(set(only) & dim_keys) != len(only):
                self.log.debug("%s SKIPPED, doesn't match --only", ic.config_key)
                continue

            if skip and len(set(skip) & dim_keys) > 0:
                self.log.debug('%s SKIPPED, matches --skip', ic.config_key)
                continue

            ic.refresh_state(step, revise)
            if not has_actions and len(ic.actions):
                has_actions = True

        # re-save with updated actions
        self._save_yaml()
        return has_actions


class ImageConfig():

    CONVERT_CMD = {
        'qcow2': ['ln', '-f'],
        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc', '-o', 'force_size=on'],
    }
    # these tags may-or-may-not exist at various times
    OPTIONAL_TAGS = [
        'built', 'uploaded', 'imported', 'import_id', 'import_region', 'published', 'released'
    ]

    def __init__(self, config_key, obj={}, log=None, yaml=None):
        self._log = log
        self._yaml = yaml
        self.config_key = str(config_key)
        tags = obj.pop('tags', None)
        self.__dict__ |= self._deep_dict(obj)
        # ensure tag values are str() when loading
        if tags:
            self.tags = tags

    @classmethod
    def to_yaml(cls, representer, node):
        d = {}
        for k in node.__dict__:
            # don't serialize attributes starting with _
            if k.startswith('_'):
                continue

            d[k] = node.__getattribute__(k)

        return representer.represent_mapping('!ImageConfig', d)

    @property
    def v_version(self):
        return 'edge' if self.version == 'edge' else 'v' + self.version

    @property
    def local_dir(self):
        return Path('work/images') / self.cloud / self.image_key

    @property
    def local_path(self):
        return self.local_dir / ('image.qcow2')

    @property
    def artifacts_yaml(self):
        return self.local_dir / 'artifacts.yaml'

    @property
    def image_name(self):
        return self.name.format(**self.__dict__)

    @property
    def image_description(self):
        return self.description.format(**self.__dict__)

    @property
    def image_file(self):
        return '.'.join([self.image_name, self.image_format])

    @property
    def image_path(self):
        return self.local_dir / self.image_file

    @property
    def image_metadata_file(self):
        return '.'.join([self.image_name, 'yaml'])

    @property
    def image_metadata_path(self):
        return self.local_dir / self.image_metadata_file

    def region_url(self, region, image_id):
        return self.cloud_region_url.format(region=region, image_id=image_id, **self.__dict__)

    def launch_url(self, region, image_id):
        return self.cloud_launch_url.format(region=region, image_id=image_id, **self.__dict__)

    @property
    def tags(self):
        # stuff that really ought to be there
        t = {
            'arch': self.arch,
            'bootstrap': self.bootstrap,
            'cloud': self.cloud,
            'description': self.image_description,
            'end_of_life': self.end_of_life,
            'firmware': self.firmware,
            'image_key': self.image_key,
            'name': self.image_name,
            'project': self.project,
            'release': self.release,
            'revision': self.revision,
            'version': self.version
        }
        # stuff that might not be there yet
        for k in self.OPTIONAL_TAGS:
            if self.__dict__.get(k, None):
                t[k] = self.__dict__[k]

        return Tags(t)

    # recursively convert a ConfigTree object to a dict object
    def _deep_dict(self, layer):
        obj = deepcopy(layer)
        if isinstance(layer, pyhocon.ConfigTree):
            obj = dict(obj)

        try:
            for key, value in layer.items():
                # some HOCON keys are quoted to preserve dots
                if '"' in key:
                    obj.pop(key)
                    key = key.strip('"')

                # version values were HOCON keys at one point, too
                if key == 'version' and '"' in value:
                    value = value.strip('"')

                obj[key] = self._deep_dict(value)
        except AttributeError:
            pass

        return obj

    def _merge(self, obj={}):
        mergedeep.merge(self.__dict__, self._deep_dict(obj), strategy=mergedeep.Strategy.ADDITIVE)

    def _get(self, attr, default=None):
        return self.__dict__.get(attr, default)

    def _pop(self, attr, default=None):
        return self.__dict__.pop(attr, default)

    # make data ready for Packer ingestion
    def _normalize(self):
        # stringify arrays
        self.name = '-'.join(self.name)
        self.description = ' '.join(self.description)
        self._resolve_motd()
        self._stringify_repos()
        self._stringify_packages()
        self._stringify_services()
        self._stringify_dict_keys('kernel_modules', ',')
        self._stringify_dict_keys('kernel_options', ' ')
        self._stringify_dict_keys('initfs_features', ' ')

    def _resolve_motd(self):
        # merge version/release notes, as appropriate
        if self.motd.get('version_notes', None) and self.motd.get('release_notes', None):
            if self.version == 'edge':
                # edge is, by definition, not released
                self.motd.pop('version_notes', None)
                self.motd.pop('release_notes', None)

            elif self.release == self.version + '.0':
                # no point in showing the same URL twice
                self.motd.pop('release_notes')

            else:
                # combine version and release notes
                self.motd['release_notes'] = self.motd.pop('version_notes') + '\n' + \
                    self.motd['release_notes']

        # TODO: be rid of null values
        self.motd = '\n\n'.join(self.motd.values()).format(**self.__dict__)

    def _stringify_repos(self):
        # stringify repos map
        #   <repo>: <tag>   # @<tag> <repo> enabled
        #   <repo>: false   # <repo> disabled (commented out)
        #   <repo>: true    # <repo> enabled
        #   <repo>: null    # skip <repo> entirely
        #   ...and interpolate {version}
        self.repos = "\n".join(filter(None, (
            f"@{v} {r}" if isinstance(v, str) else
            f"#{r}" if v is False else
            r if v is True else None
            for r, v in self.repos.items()
        ))).format(version=self.version)
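A stand-alone sketch (repository URLs invented) of the repos-map flattening that `_stringify_repos()` performs, one line per enabled/commented/tagged repository:

```python
# same map shape as the HOCON repos config: value selects the behavior
repos = {
    'https://example.org/alpine/{version}/main': True,        # enabled
    'https://example.org/alpine/{version}/community': False,  # commented out
    'https://example.org/alpine/edge/testing': 'edge',        # pinned: @edge <repo>
    'https://example.org/private': None,                      # skipped entirely
}

# identical expression to the method body, minus `self`
flattened = "\n".join(filter(None, (
    f"@{v} {r}" if isinstance(v, str) else
    f"#{r}" if v is False else
    r if v is True else None
    for r, v in repos.items()
))).format(version='3.15')

print(flattened)
```

The result is a ready-to-write `/etc/apk/repositories` fragment with `{version}` interpolated.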

    def _stringify_packages(self):
        # resolve/stringify packages map
        #   <pkg>: true                 # add <pkg>
        #   <pkg>: <tag>                # add <pkg>@<tag>
        #   <pkg>: --no-scripts         # add --no-scripts <pkg>
        #   <pkg>: --no-scripts <tag>   # add --no-scripts <pkg>@<tag>
        #   <pkg>: false                # del <pkg>
        #   <pkg>: null                 # skip explicit add/del <pkg>
        pkgs = {'add': '', 'del': '', 'noscripts': ''}
        for p, v in self.packages.items():
            k = 'add'
            if isinstance(v, str):
                if '--no-scripts' in v:
                    k = 'noscripts'
                    v = v.replace('--no-scripts', '')
                v = v.strip()
                if len(v):
                    p += f"@{v}"
            elif v is False:
                k = 'del'
            elif v is None:
                continue

            pkgs[k] = p if len(pkgs[k]) == 0 else pkgs[k] + ' ' + p

        self.packages = pkgs
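A stand-alone sketch (package names invented) of the packages-map resolution in `_stringify_packages()`, sorting entries into the `add` / `del` / `noscripts` buckets the setup script consumes:

```python
packages = {
    'sudo': True,                  # add
    'doas': False,                 # del
    'tiny-cloud': '--no-scripts',  # add, but skip install scripts
    'foo': 'edge',                 # add pinned: foo@edge
    'linux-virt': None,            # skip entirely
}

# same loop as the method body, minus `self`
pkgs = {'add': '', 'del': '', 'noscripts': ''}
for p, v in packages.items():
    k = 'add'
    if isinstance(v, str):
        if '--no-scripts' in v:
            k = 'noscripts'
            v = v.replace('--no-scripts', '')
        v = v.strip()
        if len(v):
            p += f"@{v}"
    elif v is False:
        k = 'del'
    elif v is None:
        continue

    pkgs[k] = p if len(pkgs[k]) == 0 else pkgs[k] + ' ' + p

print(pkgs)
```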

    def _stringify_services(self):
        # stringify services map
        #   <level>:
        #     <svc>: true   # enable <svc> at <level>
        #     <svc>: false  # disable <svc> at <level>
        #     <svc>: null   # skip explicit en/disable <svc> at <level>
        self.services = {
            'enable': ' '.join(filter(lambda x: not x.endswith('='), (
                '{}={}'.format(lvl, ','.join(filter(None, (
                    s if v is True else None
                    for s, v in svcs.items()
                ))))
                for lvl, svcs in self.services.items()
            ))),
            'disable': ' '.join(filter(lambda x: not x.endswith('='), (
                '{}={}'.format(lvl, ','.join(filter(None, (
                    s if v is False else None
                    for s, v in svcs.items()
                ))))
                for lvl, svcs in self.services.items()
            )))
        }
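A stand-alone sketch (runlevels and service names invented) of the services-map flattening in `_stringify_services()`; the trailing `lambda` drops runlevels that end up with no services (e.g. a runlevel with nothing to disable):

```python
services = {
    'boot': {'chronyd': True, 'acpid': None},
    'default': {'sshd': True, 'mdev': False},
}

# same nested comprehension as the method body, minus `self`
flat = {
    'enable': ' '.join(filter(lambda x: not x.endswith('='), (
        '{}={}'.format(lvl, ','.join(filter(None, (
            s if v is True else None
            for s, v in svcs.items()
        ))))
        for lvl, svcs in services.items()
    ))),
    'disable': ' '.join(filter(lambda x: not x.endswith('='), (
        '{}={}'.format(lvl, ','.join(filter(None, (
            s if v is False else None
            for s, v in svcs.items()
        ))))
        for lvl, svcs in services.items()
    ))),
}

print(flat)
```

Note how `boot=` (no disabled boot services) is filtered out of the `disable` string.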

    def _stringify_dict_keys(self, d, sep):
        self.__dict__[d] = sep.join(filter(None, (
            m if v is True else None
            for m, v in self.__dict__[d].items()
        )))

    # TODO: this needs to be sorted out for 'upload' and 'release' steps
    def refresh_state(self, step, revise=False):
        log = self._log
        actions = {}
        revision = 0
        remote_image = clouds.latest_build_image(self)
        log.debug('\n%s', remote_image)
        step_state = step == 'state'

        # enable actions based on the specified step
        if step in ['local', 'import', 'publish', 'state']:
            actions['build'] = True

        if step in ['import', 'publish', 'state']:
            actions['import'] = True

        if step in ['publish', 'state']:
            # we will resolve publish destinations (if any) later
            actions['publish'] = True

        if revise:
            if self.local_path.exists():
                # remove previously built local image artifacts
                log.warning('%s existing local image dir %s',
                    'Would remove' if step_state else 'Removing',
                    self.local_dir)
                if not step_state:
                    shutil.rmtree(self.local_dir)

            if remote_image and remote_image.published:
                log.warning('%s image revision for %s',
                    'Would bump' if step_state else 'Bumping',
                    self.image_key)
                revision = int(remote_image.revision) + 1

            elif remote_image and remote_image.imported:
                # remove existing imported (but unpublished) image
                log.warning('%s unpublished remote image %s',
                    'Would remove' if step_state else 'Removing',
                    remote_image.import_id)
                if not step_state:
                    clouds.delete_image(self, remote_image.import_id)

            remote_image = None

        elif remote_image:
            if remote_image.imported:
                # already imported, don't build/import again
                log.debug('%s - already imported', self.image_key)
                actions.pop('build', None)
                actions.pop('import', None)

            if remote_image.published:
                # NOTE: re-publishing can update perms or push to new regions
                log.debug('%s - already published', self.image_key)

        if self.local_path.exists():
            # local image's already built, don't rebuild
            log.debug('%s - already locally built', self.image_key)
            actions.pop('build', None)

        # merge remote_image data into image state
        if remote_image:
            self.__dict__ |= dict(remote_image)

        else:
            self.__dict__ |= {
                'revision': revision,
                'imported': None,
                'import_id': None,
                'import_region': None,
                'published': None,
            }

        # update artifacts, if we've got 'em
        if self.artifacts_yaml.exists():
            self.artifacts = self._yaml.load(self.artifacts_yaml)

        else:
            self.artifacts = None

        self.actions = list(actions)
        log.info('%s/%s = %s', self.cloud, self.image_name, self.actions)

        self.state_updated = datetime.utcnow().isoformat()

    def _run(self, cmd, errmsg=None, errvals=[]):
        log = self._log
        p = Popen(cmd, stdout=PIPE, stdin=PIPE, encoding='utf8')
        out, err = p.communicate()
        if p.returncode:
            if log:
                if errmsg:
                    log.error(errmsg, *errvals)

                log.error('COMMAND: %s', ' '.join(cmd))
                log.error('EXIT: %d', p.returncode)
                log.error('STDOUT:\n%s', out)
                log.error('STDERR:\n%s', err)

            raise RuntimeError

        return out, err

    def _save_checksum(self, file):
        self._log.info("Calculating checksum for '%s'", file)
        sha256_hash = hashlib.sha256()
        with open(file, 'rb') as f:
            for block in iter(lambda: f.read(4096), b''):
                sha256_hash.update(block)

        with open(str(file) + '.sha256', 'w') as f:
            print(sha256_hash.hexdigest(), file=f)
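A stand-alone sketch of the block-wise hashing pattern `_save_checksum()` uses, which avoids reading a whole image file into memory; the file contents here are invented:

```python
import hashlib
import tempfile

# stand-in for an image file on disk
data = b'not really a qcow2 image' * 4096
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

# hash the file 4 KiB at a time, as _save_checksum() does
sha256_hash = hashlib.sha256()
with open(path, 'rb') as f:
    for block in iter(lambda: f.read(4096), b''):
        sha256_hash.update(block)

# streaming gives the same digest as hashing everything at once
assert sha256_hash.hexdigest() == hashlib.sha256(data).hexdigest()
```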

    # convert local QCOW2 to format appropriate for a cloud
    def convert_image(self):
        self._log.info('Converting %s to %s', self.local_path, self.image_path)
        self._run(
            self.CONVERT_CMD[self.image_format] + [self.local_path, self.image_path],
            errmsg='Unable to convert %s to %s', errvals=[self.local_path, self.image_path],
        )
        self._save_checksum(self.image_path)
        self.built = datetime.utcnow().isoformat()

    def save_metadata(self, upload=True):
        os.makedirs(self.local_dir, exist_ok=True)
        self._log.info('Saving image metadata')
        self._yaml.dump(dict(self.tags), self.image_metadata_path)
        self._save_checksum(self.image_metadata_path)


class DictObj(dict):

    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        del self[key]


class Tags(DictObj):

    def __init__(self, d={}, from_list=None, key_name='Key', value_name='Value'):
        for key, value in d.items():
            self.__setattr__(key, value)

        if from_list:
            self.from_list(from_list, key_name, value_name)

    def __setattr__(self, key, value):
        self[key] = str(value)

    def as_list(self, key_name='Key', value_name='Value'):
        return [{key_name: k, value_name: v} for k, v in self.items()]

    def from_list(self, list=[], key_name='Key', value_name='Value'):
        for tag in list:
            self.__setattr__(tag[key_name], tag[value_name])
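A minimal usage sketch for the `DictObj`/`Tags` helpers (tag values invented, classes repeated here so the snippet is self-contained): `Tags` coerces every value to `str` and round-trips through the Key/Value list form that cloud tagging APIs typically expect.

```python
class DictObj(dict):
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value


class Tags(DictObj):
    def __init__(self, d={}, from_list=None, key_name='Key', value_name='Value'):
        for key, value in d.items():
            self.__setattr__(key, value)
        if from_list:
            self.from_list(from_list, key_name, value_name)

    def __setattr__(self, key, value):
        self[key] = str(value)

    def as_list(self, key_name='Key', value_name='Value'):
        return [{key_name: k, value_name: v} for k, v in self.items()]

    def from_list(self, list=[], key_name='Key', value_name='Value'):
        for tag in list:
            self.__setattr__(tag[key_name], tag[value_name])


t = Tags({'arch': 'x86_64', 'revision': 3})
assert t.revision == '3'           # values are stringified
pairs = t.as_list()                # [{'Key': 'arch', 'Value': 'x86_64'}, ...]
assert Tags(from_list=pairs) == t  # round-trip is lossless
```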
@ -1,38 +0,0 @@
# vim: ts=2 et:

# Overlay for testing alpine-cloud-images

# start with the production alpine config
include required("alpine.conf")

# override specific things...

project = alpine-cloud-images__test

Default {
  # unset before resetting
  name = null
  name = [ test ]
  description = null
  description = [ Alpine Test ]
}

Dimensions {
  cloud {
    # just test in these regions
    aws.regions {
      us-west-2 = true
      us-east-1 = true
    }
    # adapters need to be written
    #oci { include required("testing/oci.conf") }
    #gcp { include required("testing/gcp.conf") }
    #azure { include required("testing/azure.conf") }
    #generic
    #nocloud
  }
}

# test in private, and only in regions specified above
Mandatory.access.PUBLIC = false
Mandatory.regions.ALL = false
@ -1 +0,0 @@
alpine-testing.conf
@ -1,4 +0,0 @@
# vim: ts=2 et:
builder = qemu

# TBD
@ -1,42 +0,0 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

export \
    TARGET=/mnt


die() {
    printf '\033[1;7;31m FATAL: %s \033[0m\n' "$@" >&2  # bold reversed red
    exit 1
}
einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

cleanup() {
    # Sweep cruft out of the image that doesn't need to ship or will be
    # re-generated when the image boots
    rm -f \
        "$TARGET/var/cache/apk/"* \
        "$TARGET/etc/resolv.conf" \
        "$TARGET/root/.ash_history" \
        "$TARGET/etc/"*-

    # unmount extra EFI mount
    if [ "$FIRMWARE" = uefi ]; then
        umount "$TARGET/boot/efi"
    fi

    umount \
        "$TARGET/dev" \
        "$TARGET/proc" \
        "$TARGET/sys"

    umount "$TARGET"
}

einfo "Cleaning up and unmounting image volume..."
cleanup
einfo "Done!"
@ -1,256 +0,0 @@
#!/bin/sh -eu
|
|
||||||
# vim: ts=4 et:
|
|
||||||
|
|
||||||
[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x
|
|
||||||
|
|
||||||
export \
|
|
||||||
DEVICE=/dev/vda \
|
|
||||||
TARGET=/mnt \
|
|
||||||
SETUP=/tmp/setup.d
|
|
||||||
|
|
||||||
|
|
||||||
die() {
|
|
||||||
printf '\033[1;7;31m FATAL: %s \033[0m\n' "$@" >&2 # bold reversed red
|
|
||||||
exit 1
|
|
||||||
}
|
|
||||||
einfo() {
|
|
||||||
printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2 # bold reversed cyan
|
|
||||||
}
|
|
||||||
|
|
||||||
# set up the builder's environment
|
|
||||||
setup_builder() {
|
|
||||||
einfo "Setting up Builder Instance"
|
|
||||||
setup-apkrepos -1 # main repo via dl-cdn
|
|
||||||
# ODO? also uncomment community repo?
|
|
||||||
# Always use latest versions within the release, security patches etc.
|
|
||||||
apk upgrade --no-cache --available
|
|
||||||
apk --no-cache add \
|
|
||||||
e2fsprogs \
|
|
||||||
dosfstools \
|
|
||||||
gettext \
|
|
||||||
lsblk \
|
|
||||||
parted
|
|
||||||
}
|
|
||||||
|
|
||||||
make_filesystem() {
|
|
||||||
einfo "Making the Filesystem"
|
|
||||||
root_dev=$DEVICE
|
|
||||||
|
|
||||||
# make sure we're using a blank block device
|
|
||||||
lsblk -P --fs "$DEVICE" >/dev/null 2>&1 || \
|
|
||||||
die "'$DEVICE' is not a valid block device"
|
|
||||||
if lsblk -P --fs "$DEVICE" | grep -vq 'FSTYPE=""'; then
|
|
||||||
die "Block device '$DEVICE' is not blank"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ "$FIRMWARE" = uefi ]; then
|
|
||||||
# EFI partition isn't optimally aligned, but is rarely used after boot
|
|
||||||
parted "$DEVICE" -- \
|
|
||||||
mklabel gpt \
|
|
||||||
mkpart EFI fat32 512KiB 1MiB \
|
|
||||||
mkpart / ext4 1MiB 100% \
|
|
||||||
set 1 esp on \
|
|
||||||
unit MiB print
|
|
||||||
|
|
||||||
root_dev="${DEVICE}2"
|
|
||||||
mkfs.fat -n EFI "${DEVICE}1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
mkfs.ext4 -O ^64bit -L / "$root_dev"
|
|
||||||
mkdir -p "$TARGET"
|
|
||||||
mount -t ext4 "$root_dev" "$TARGET"
|
|
||||||
|
|
||||||
if [ "$FIRMWARE" = uefi ]; then
|
|
||||||
mkdir -p "$TARGET/boot/efi"
|
|
||||||
mount -t vfat "${DEVICE}1" "$TARGET/boot/efi"
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
install_base() {
|
|
||||||
einfo "Installing Alpine Base"
|
|
||||||
mkdir -p "$TARGET/etc/apk"
|
|
||||||
echo "$REPOS" > "$TARGET/etc/apk/repositories"
|
|
||||||
cp -a /etc/apk/keys "$TARGET/etc/apk"
|
|
||||||
# shellcheck disable=SC2086
|
|
||||||
apk --root "$TARGET" --initdb --no-cache add $PACKAGES_ADD
|
|
||||||
# shellcheck disable=SC2086
|
|
||||||
[ -z "$PACKAGES_NOSCRIPTS" ] || \
|
|
||||||
apk --root "$TARGET" --no-cache --no-scripts add $PACKAGES_NOSCRIPTS
|
|
||||||
# shellcheck disable=SC2086
|
|
||||||
[ -z "$PACKAGES_DEL" ] || \
|
|
||||||
apk --root "$TARGET" --no-cache del $PACKAGES_DEL
|
|
||||||
}
|
|
||||||
|
|
||||||
setup_chroot() {
|
|
||||||
mount -t proc none "$TARGET/proc"
|
|
||||||
mount --bind /dev "$TARGET/dev"
|
|
||||||
mount --bind /sys "$TARGET/sys"
|
|
||||||
|
|
||||||
# Needed for bootstrap, will be removed in the cleanup stage.
|
|
||||||
install -Dm644 /etc/resolv.conf "$TARGET/etc/resolv.conf"
|
|
||||||
}
|
|
||||||
|
|
||||||
install_bootloader() {
|
|
||||||
einfo "Installing Bootloader"
|
|
||||||
|
|
||||||
# create initfs
|
|
||||||
|
|
||||||
# shellcheck disable=SC2046
|
|
||||||
kernel=$(basename $(find "$TARGET/lib/modules/"* -maxdepth 0))
|
|
||||||
|
|
||||||
# ensure features can be found by mkinitfs
|
|
||||||
for FEATURE in $INITFS_FEATURES; do
|
|
||||||
# already taken care of?
|
|
||||||
[ -f "$TARGET/etc/mkinitfs/features.d/$FEATURE.modules" ] || \
|
|
||||||
[ -f "$TARGET/etc/mkinitfs/features.d/$FEATURE.files" ] && continue
|
|
||||||
# find the kernel module directory
|
|
||||||
module=$(chroot "$TARGET" /sbin/modinfo -k "$kernel" -n "$FEATURE")
|
|
||||||
[ -z "$module" ] && die "initfs_feature '$FEATURE' kernel module not found"
|
|
||||||
# replace everything after .ko with a *
|
|
||||||
echo "$module" | cut -d/ -f5- | sed -e 's/\.ko.*/.ko*/' \
|
|
||||||
> "$TARGET/etc/mkinitfs/features.d/$FEATURE.modules"
|
|
||||||
done
|
|
||||||
|
|
||||||
# TODO? this appends INITFS_FEATURES, we may want to allow removal someday?
|
|
||||||
sed -Ei "s/^features=\"([^\"]+)\"/features=\"\1 $INITFS_FEATURES\"/" \
|
|
||||||
"$TARGET/etc/mkinitfs/mkinitfs.conf"
|
|
||||||
|
|
||||||
chroot "$TARGET" /sbin/mkinitfs "$kernel"
|
|
||||||
|
|
||||||
if [ "$FIRMWARE" = uefi ]; then
|
|
||||||
install_grub_efi
|
|
||||||
else
|
|
||||||
install_extlinux
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
install_extlinux() {
|
|
||||||
# Use disk labels instead of UUID or devices paths so that this works across
|
|
||||||
# instance familes. UUID works for many instances but breaks on the NVME
|
|
||||||
# ones because EBS volumes are hidden behind NVME devices.
|
|
||||||
#
|
|
||||||
# Shorten timeout (1/10s), eliminating delays for instance launches.
|
|
||||||
#
|
|
||||||
# ttyS0 is for EC2 Console "Get system log" and "EC2 Serial Console"
|
|
||||||
# features, whereas tty0 is for "Get Instance screenshot" feature. Enabling
|
|
||||||
# the port early in extlinux gives the most complete output in the log.
|
|
||||||
#
|
|
||||||
# TODO: review for other clouds -- this may need to be cloud-specific.
|
|
||||||
sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
|
|
||||||
-e "s|^[# ]*(default_kernel_opts)=.*|\1=\"$KERNEL_OPTIONS\"|" \
|
|
||||||
-e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
|
|
||||||
-e "s|^[# ]*(modules)=.*|\1=$KERNEL_MODULES|" \
|
|
||||||
-e "s|^[# ]*(default)=.*|\1=virt|" \
|
|
||||||
-e "s|^[# ]*(timeout)=.*|\1=1|" \
|
|
||||||
"$TARGET/etc/update-extlinux.conf"
|
|
||||||
|
|
||||||
chroot "$TARGET" /sbin/extlinux --install /boot
|
|
||||||
# TODO: is this really necessary? can we set all this stuff during --install?
|
|
||||||
chroot "$TARGET" /sbin/update-extlinux --warn-only
|
|
||||||
}
|
|
||||||
|
|
||||||
install_grub_efi() {
|
|
||||||
[ -d "/sys/firmware/efi" ] || die "/sys/firmware/efi does not exist"
|
|
||||||
|
|
||||||
case "$ARCH" in
|
|
||||||
x86_64) grub_target=x86_64-efi ; fwa=x64 ;;
|
|
||||||
aarch64) grub_target=arm64-efi ; fwa=aa64 ;;
|
|
||||||
*) die "ARCH=$ARCH is currently unsupported" ;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# disable nvram so grub doesn't call efibootmgr
|
|
||||||
chroot "$TARGET" /usr/sbin/grub-install --target="$grub_target" --efi-directory=/boot/efi \
|
|
||||||
--bootloader-id=alpine --boot-directory=/boot --no-nvram
|
|
||||||
|
|
||||||
# fallback mode
|
|
||||||
install -D "$TARGET/boot/efi/EFI/alpine/grub$fwa.efi" "$TARGET/boot/efi/EFI/boot/boot$fwa.efi"
|
|
||||||
|
|
||||||
# install default grub config
|
|
||||||
envsubst < "$SETUP/grub.template" > "$SETUP/grub"
|
|
||||||
install -o root -g root -Dm644 -t "$TARGET/etc/default" \
|
|
||||||
"$SETUP/grub"
|
|
||||||
|
|
||||||
# generate/install new config
|
|
||||||
chroot "$TARGET" grub-mkconfig -o /boot/grub/grub.cfg
|
|
||||||
}
|
|
||||||
|
|
||||||
configure_system() {
    einfo "Configuring System"

    # default network configuration
    install -o root -g root -Dm644 -t "$TARGET/etc/network" "$SETUP/interfaces"

    # configure NTP server, if specified
    [ -n "$NTP_SERVER" ] && \
        sed -e 's/^pool /server /' -e "s/pool.ntp.org/$NTP_SERVER/g" \
            -i "$TARGET/etc/chrony/chrony.conf"

    # setup fstab
    install -o root -g root -Dm644 -t "$TARGET/etc" "$SETUP/fstab"
    # if we're using an EFI bootloader, add extra line for EFI partition
    if [ "$FIRMWARE" = uefi ]; then
        cat "$SETUP/fstab.grub-efi" >> "$TARGET/etc/fstab"
    fi

    # Disable getty for physical ttys, enable getty for serial ttyS0.
    sed -Ei -e '/^tty[0-9]/s/^/#/' -e '/^#ttyS0:/s/^#//' "$TARGET/etc/inittab"

    # setup sudo and/or doas
    if grep -q '^sudo$' "$TARGET/etc/apk/world"; then
        echo '%wheel ALL=(ALL) NOPASSWD: ALL' > "$TARGET/etc/sudoers.d/wheel"
    fi
    if grep -q '^doas$' "$TARGET/etc/apk/world"; then
        echo 'permit nopass :wheel' > "$TARGET/etc/doas.d/wheel.conf"
    fi

    # explicitly lock the root account
    chroot "$TARGET" /bin/sh -c "/bin/echo 'root:*' | /usr/sbin/chpasswd -e"
    chroot "$TARGET" /usr/bin/passwd -l root

    # set up image user
    user="${IMAGE_LOGIN:-alpine}"
    chroot "$TARGET" /usr/sbin/addgroup "$user"
    chroot "$TARGET" /usr/sbin/adduser -h "/home/$user" -s /bin/sh -G "$user" -D "$user"
    chroot "$TARGET" /usr/sbin/addgroup "$user" wheel
    chroot "$TARGET" /bin/sh -c "echo '$user:*' | /usr/sbin/chpasswd -e"

    # modify PS1s in /etc/profile to add user
    sed -Ei \
        -e "s/(^PS1=')(\\$\\{HOSTNAME%)/\\1\\$\\USER@\\2/" \
        -e "s/( PS1=')(\\\\h:)/\\1\\\\u@\\2/" \
        -e "s/( PS1=')(%m:)/\\1%n@\\2/" \
        "$TARGET"/etc/profile

    # write /etc/motd
    echo "$MOTD" > "$TARGET"/etc/motd

    setup_services
}

# shellcheck disable=SC2046
setup_services() {
    for lvl_svcs in $SERVICES_ENABLE; do
        rc add $(echo "$lvl_svcs" | tr '=,' ' ')
    done
    for lvl_svcs in $SERVICES_DISABLE; do
        rc del $(echo "$lvl_svcs" | tr '=,' ' ')
    done
}

rc() {
    op="$1"         # add or del
    runlevel="$2"   # runlevel name
    shift 2
    services="$*"   # names of services

    for svc in $services; do
        chroot "$TARGET" rc-update "$op" "$svc" "$runlevel"
    done
}

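# Worked example (hypothetical values, not taken from this repo's configs):
# a setting like SERVICES_ENABLE="boot=networking,urandom default=sshd" is
# split on whitespace, then tr '=,' ' ' turns each word into rc() arguments
# ("boot networking urandom", "default sshd"), so the loop would run:
#   rc-update add networking boot
#   rc-update add urandom boot
#   rc-update add sshd default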
setup_builder
make_filesystem
install_base
setup_chroot
install_bootloader
configure_system
@@ -1,36 +0,0 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

TARGET=/mnt

einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

einfo "Installing cloud-init bootstrap components..."

# This adds the init scripts at the correct boot phases
chroot "$TARGET" /sbin/setup-cloud-init

# cloud-init locks our user by default, which means alpine can't log in via
# SSH.  This seems like a bug in cloud-init that should be fixed, but we can
# hack around it for now here.
if [ -f "$TARGET"/etc/cloud/cloud.cfg ]; then
    sed -i '/lock_passwd:/s/True/False/' "$TARGET"/etc/cloud/cloud.cfg
fi

# configure the image for a particular cloud datasource
case "$CLOUD" in
    aws)
        DATASOURCE="Ec2"
        ;;
    *)
        echo "Unsupported Cloud '$CLOUD'" >&2
        exit 1
        ;;
esac

printf '\n\n# Cloud-Init will use default configuration for this DataSource\n' >> "$TARGET"/etc/cloud/cloud.cfg
printf 'datasource_list: ["%s"]\n' "$DATASOURCE" >> "$TARGET"/etc/cloud/cloud.cfg
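# For reference, assuming CLOUD=aws, the printf lines above are meant to
# append this fragment to the image's /etc/cloud/cloud.cfg:
#
#   # Cloud-Init will use default configuration for this DataSource
#   datasource_list: ["Ec2"]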
@@ -1,21 +0,0 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

TARGET=/mnt

einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

if [ "$VERSION" = "3.12" ]; then
    # tiny-cloud-network requires ifupdown-ng, not in 3.12
    einfo "Configuring Tiny EC2 Bootstrap..."
    echo "EC2_USER=$IMAGE_LOGIN" > "$TARGET"/etc/conf.d/tiny-ec2-bootstrap
else
    einfo "Configuring Tiny Cloud..."
    sed -i.bak -Ee "s/^#?CLOUD_USER=.*/CLOUD_USER=$IMAGE_LOGIN/" \
        "$TARGET"/etc/conf.d/tiny-cloud
    rm "$TARGET"/etc/conf.d/tiny-cloud.bak
fi
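# Illustration (hypothetical value): with IMAGE_LOGIN=alpine, the sed above
# rewrites the packaged default in the image's /etc/conf.d/tiny-cloud, e.g.
#   #CLOUD_USER=...
# becomes
#   CLOUD_USER=alpine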
@@ -1,2 +0,0 @@
# <fs>      <mountpoint>    <type>  <opts>              <dump/pass>
LABEL=/     /               ext4    defaults,noatime    1 1
@@ -1 +0,0 @@
LABEL=EFI   /boot/efi       vfat    defaults,noatime,uid=0,gid=0,umask=077  0 0
@@ -1,5 +0,0 @@
GRUB_CMDLINE_LINUX_DEFAULT="modules=$KERNEL_MODULES $KERNEL_OPTIONS"
GRUB_DISABLE_RECOVERY=true
GRUB_DISABLE_SUBMENU=y
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
GRUB_TIMEOUT=0
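# Illustration (hypothetical values): with KERNEL_MODULES="sd-mod,usb-storage,ext4"
# and KERNEL_OPTIONS="console=ttyS0,115200n8", grub-mkconfig would render a
# kernel command line containing:
#   modules=sd-mod,usb-storage,ext4 console=ttyS0,115200n8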
@@ -1,7 +0,0 @@
# default alpine-cloud-images network configuration

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp