Merge commit '5d0009b5f8878e33a8e9b0ab12f806179e2916d6' as 'alpine-cloud-images'
This commit is contained in: commit 3ba7b3dc1f

alpine-cloud-images/.flake8 (new file, +2)
@@ -0,0 +1,2 @@
[flake8]
ignore = E265,E266,E402,E501

alpine-cloud-images/.gitignore (vendored, new file, +7)
@@ -0,0 +1,7 @@
*~
*.bak
*.swp
.DS_Store
.vscode/
/work/
releases*yaml

alpine-cloud-images/CONFIGURATION.md (new file, +314)
@@ -0,0 +1,314 @@
# Configuration

All the configuration for building image variants is defined by multiple
config files; the base configs for official Alpine Linux cloud images are in
the [`configs/`](configs/) directory.

We use [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md) for
configuration -- this primarily facilitates importing deeper configs from
other files, but also allows the extension/concatenation of arrays and maps
(which can be a useful feature for customization), and inline comments.

----
## Resolving Work Environment Configs and Scripts

If `work/configs/` and `work/scripts/` don't exist, the `build` script will
install the contents of the base [`configs/`](configs/) and [`scripts/`](scripts/)
directories, and overlay additional `configs/` and `scripts/` subdirectories
from `--custom` directories (if any).

Files cannot be installed over existing files, with one exception -- the
[`configs/images.conf`](configs/images.conf) same-directory symlink. Because
the `build` script _always_ loads `work/configs/images.conf`, this is the hook
for "rolling your own" custom Alpine Linux cloud images.

The base [`configs/images.conf`](configs/images.conf) symlinks to
[`alpine.conf`](configs/alpine.conf), but this can be overridden using a
`--custom` directory containing a new `configs/images.conf` same-directory
symlink pointing to its custom top-level config.

For example, the configs and scripts in the [`overlays/testing/`](overlays/testing/)
directory can be resolved in a _clean_ work environment with...
```
./build configs --custom overlays/testing
```
This results in the `work/configs/images.conf` symlink pointing to
`work/configs/alpine-testing.conf` instead of `work/configs/alpine.conf`.

If multiple directories are specified with `--custom`, they are applied in
the order given.

----
## Top-Level Config File

Examples of top-level config files are [`configs/alpine.conf`](configs/alpine.conf)
and [`overlays/testing/configs/alpine-testing.conf`](overlays/testing/configs/alpine-testing.conf).

There are three main blocks that need to exist in (or be `import`ed into) the
top-level HOCON configuration; they are merged in this exact order:

### `Default`

All image variant configs start with this block's contents as a starting point.
Arrays and maps can be appended to by configs in the `Dimensions` and
`Mandatory` blocks.

### `Dimensions`

The sub-blocks in `Dimensions` define the "dimensions" a variant config is
comprised of, and the different config values possible for each dimension.
The default [`alpine.conf`](configs/alpine.conf) defines the following
dimensional configs:

* `version` - Alpine Linux _x_._y_ (plus `edge`) versions
* `arch` - machine architectures, `x86_64` or `aarch64`
* `firmware` - supports launching via legacy BIOS or UEFI
* `bootstrap` - the system/scripts responsible for setting up an instance
  during its initial launch
* `cloud` - for specific cloud platforms

The specific dimensional configs for an image variant are merged in the order
that the dimensions are listed.

### `Mandatory`

After a variant's dimensional configs have been applied, this is the last block
that's merged into the image variant configuration. This block is the ultimate
enforcer of any non-overrideable configuration across all variants, and can
also provide the last element of array config items.
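
Putting the three blocks together, a minimal top-level config might look like
the following sketch (the project URL, dimension keys, and values here are
illustrative, not copied from the real `alpine.conf`):
```
Default {
  project = "https://example.org/my-cloud-images"  # hypothetical project id
  name    = [ alpine ]     # later configs append to this array
}

Dimensions {
  version { "3.17" {}  edge {} }
  arch    { x86_64 {}  aarch64 {} }
}

Mandatory {
  encrypted = false        # enforced last, across all variants
}
```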

----
## Dimensional Config Directives

Because a full cross-product across all dimensional configs may produce image
variants that are not viable (e.g. `aarch64` simply does not support legacy
`bios`), or may require further adjustments (e.g. the `aws` `aarch64` images
require an additional kernel module from `3.15` forward, which isn't available
in earlier versions), we have two special directives which may appear in
dimensional configs.

### `EXCLUDE` array

This directive provides an array of dimensional config keys which are
incompatible with the current dimensional config. For example,
[`configs/arch/aarch64.conf`](configs/arch/aarch64.conf) specifies...
```
# aarch64 is UEFI only
EXCLUDE = [bios]
```
...which indicates that any image variant that includes both `aarch64` (the
current dimensional config) and `bios` configuration should be skipped.

### `WHEN` block

This directive conditionally merges additional configuration ***IF*** the
image variant also includes a specific dimensional config key (or keys). In
order to handle more complex situations, `WHEN` blocks may be nested. For
example, [`configs/cloud/aws.conf`](configs/cloud/aws.conf) has...
```
WHEN {
  aarch64 {
    # new AWS aarch64 default...
    kernel_modules.gpio_pl061 = true
    initfs_features.gpio_pl061 = true
    WHEN {
      "3.14 3.13 3.12" {
        # ...but not supported for older versions
        kernel_modules.gpio_pl061 = false
        initfs_features.gpio_pl061 = false
      }
    }
  }
}
```
This configures AWS `aarch64` images to use the `gpio_pl061` kernel module in
order to cleanly shutdown/reboot instances from the web console, CLI, or SDK.
However, this module is unavailable on older Alpine versions.

Spaces in `WHEN` block keys serve as an "OR" operator; nested `WHEN` blocks
function as "AND" operators.
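
As a sketch of those semantics (the setting name is hypothetical), the
following merges `some_setting` only for variants that include `aws` AND
either `3.12` OR `3.13`:
```
WHEN {
  aws {
    WHEN {
      "3.12 3.13" {
        some_setting = true
      }
    }
  }
}
```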

----
## Config Settings

**Scalar** values can simply be overridden in later configs.

**Array** and **map** settings in later configs are merged with the previous
values, _or entirely reset if first set to `null`_, for example...
```
some_array = [ thing ]
# [...]
some_array = null
some_array = [ other_thing ]
```

Mostly in order of appearance, as we walk through
[`configs/alpine.conf`](configs/alpine.conf) and the deeper configs it
imports...

### `project` string

This is a unique identifier for the whole collection of images being built.
For the official Alpine Linux cloud images, this is set to
`https://alpinelinux.org/cloud`.

When building custom images, you **MUST** override **AT LEAST** this setting to
avoid image import and publishing collisions.

### `name` array

The ultimate contents of this array contribute to the overall naming of the
resultant image. Almost all dimensional configs will add to the `name` array,
with two notable exceptions: **version** configs' contribution to this array is
determined when `work/images.yaml` is resolved, and is set to the current
Alpine Linux release (_x.y.z_, or _YYYYMMDD_ for edge); and because
**cloud** images are isolated from each other, it's redundant to include that
in the image name.

### `description` array

Similar to the `name` array, the elements of this array contribute to the final
image description. However, for the official Alpine configs, only the
**version** dimension adds to this array, via the same mechanism that sets the
revision for the `name` array.

### `motd` map

This setting controls the contents of what ultimately gets written into the
variant image's `/etc/motd` file. Later configs can add additional messages,
replace existing contents, or remove them entirely (by setting the value to
`null`).

The `motd.release_notes` setting will be ignored if the Alpine release does
not have a release notes web page associated with it.
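
For instance, a custom overlay might add a message and drop one of the
defaults (the `my_notice` key is hypothetical):
```
motd {
  my_notice = "This image is managed by Example Corp."
  release_notes = null    # remove the default release-notes message
}
```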

### `scripts` array

These are the scripts that will be executed by Packer, in order, to do various
setup tasks inside a variant's image. The `work/scripts/` directory contains
all scripts, including those that may have been added via `build --custom`.

### `script_dirs` array

Directories (under `work/scripts/`) that contain additional data that the
`scripts` will need. Packer will copy these to the VM responsible for setting
up the variant image.

### `size` string

The size of the image disk; by default we use `1G` (1 GiB). This disk may (or
may not) be further partitioned, based on other factors.

### `login` string

The image's primary login user, set to `alpine`.

### `repos` map

Defines the contents of the image's `/etc/apk/repositories` file. The map's
key is the URL of the repo, and the value determines how that URL will be
represented in the `repositories` file...

| value | result |
|-|-|
| `null` | make no reference to this repo |
| `false` | this repo is commented out (disabled) |
| `true` | this repo is enabled for use |
| _tag_ | enable this repo with `@`_`tag`_ |
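
As an illustrative sketch (the repo URLs and `edge` tag here are examples, not
the official defaults)...
```
repos {
  "https://dl-cdn.alpinelinux.org/alpine/v3.17/main"      = true   # enabled
  "https://dl-cdn.alpinelinux.org/alpine/v3.17/community" = true   # enabled
  "https://dl-cdn.alpinelinux.org/alpine/edge/testing"    = edge   # enabled as @edge
  "https://example.org/private/apk"                       = false  # commented out
}
```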

### `packages` map

Defines what APK packages to add/delete. The map's key is the package
name, and the value determines whether (or not) to install/uninstall the
package...

| value | result |
|-|-|
| `null` | don't add or delete |
| `false` | explicitly delete |
| `true` | add from default repos |
| _tag_ | add from `@`_`tag`_ repo |
| `--no-scripts` | add with `--no-scripts` option |
| `--no-scripts` _tag_ | add from `@`_`tag`_ repo, with `--no-scripts` option |
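
For example, a custom overlay might adjust the package set like this (the
package names are illustrative):
```
packages {
  nginx = true    # add from the default repos
  vim   = edge    # add from the repo tagged @edge
  chrony = false  # explicitly delete
}
```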

### `services` map of maps

Defines what services are enabled/disabled at various runlevels. The first
map's key is the runlevel, the second key is the service. The service value
determines whether (or not) to enable/disable the service at that runlevel...

| value | result |
|-|-|
| `null` | don't enable or disable |
| `false` | explicitly disable |
| `true` | explicitly enable |
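
For example (`boot` and `default` are standard OpenRC runlevels; the service
choices are illustrative):
```
services {
  boot {
    acpid = true    # enable acpid at the boot runlevel
  }
  default {
    crond = false   # explicitly disable crond at the default runlevel
  }
}
```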

### `kernel_modules` map

Defines what kernel modules are specified in the boot loader. The key is the
kernel module, and the value determines whether or not it's in the final
list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |

### `kernel_options` map

Defines what kernel options are specified on the kernel command line. The keys
are the kernel options, and the values determine whether or not they're in the
final list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |

### `initfs_features` map

Defines what initfs features are included when making the image's initramfs
file. The keys are the initfs features, and the values determine whether or
not they're included in the final list...

| value | result |
|-|-|
| `null` | skip |
| `false` | skip |
| `true` | include |

### `qemu.machine_type` string

The QEMU machine type to use when building local images. For x86_64, this is
set to `null`; for aarch64, we use `virt`.

### `qemu.args` list of lists

Additional QEMU arguments. For x86_64, this is set to `null`; but aarch64
requires several additional arguments to start an operational VM.

### `qemu.firmware` string

The path to the QEMU firmware (installed in `work/firmware/`). This is only
used when creating UEFI images.

### `bootloader` string

The bootloader to use, currently `extlinux` or `grub-efi`.

### `access` map

When images are published, this determines who has access to those images.
The key is the cloud account (or `PUBLIC`), and the value is whether or not
access is granted, `true` or `false`/`null`.

### `regions` map

Determines where images should be published. The key is the region
identifier (or `ALL`), and the value is whether or not to publish to that
region, `true` or `false`/`null`.
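
A sketch combining the two (the region name is a placeholder):
```
access {
  PUBLIC = true       # anyone may launch the published images
}

regions {
  ALL           = true    # publish to all available regions...
  some-region-1 = false   # ...except this placeholder region
}
```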

### `encrypted` string

Determines whether the image will be encrypted when imported and published.
Currently, only the **aws** cloud module supports this.

### `repo_keys` array

List of additional repository keys to trust during the package installation phase.
This allows pulling in custom apk packages by simply specifying the repository name
in the `packages` block.

alpine-cloud-images/LICENSE.txt (new file, +19)
@@ -0,0 +1,19 @@
Copyright (c) 2017-2022 Jake Buchholz Göktürk, Michael Crute

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

alpine-cloud-images/README.md (new file, +208)
@@ -0,0 +1,208 @@
# Alpine Linux Cloud Image Builder

This repository contains the code and configs for the build system used to
create official Alpine Linux images for various cloud providers, in various
configurations. This build system is flexible, enabling others to build their
own customized images.

----
## Pre-Built Official Cloud Images

To get started with official pre-built Alpine Linux cloud images, visit
https://alpinelinux.org/cloud. Currently, we build official images for the
following cloud platforms...
* AWS

...we are working on also publishing official images to other major cloud
providers.

Each published image's name contains the Alpine version release, architecture,
firmware, bootstrap, and image revision. These details (and more) are also
tagged on the images...

| Tag | Description / Values |
|-----|----------------------|
| name | `alpine-`_`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-r`_`revision`_ |
| project | `https://alpinelinux.org/cloud` |
| image_key | _`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-`_`cloud`_ |
| version | Alpine version (_`x.y`_ or `edge`) |
| release | Alpine release (_`x.y.z`_ or _`YYYYMMDD`_ for edge) |
| arch | architecture (`aarch64` or `x86_64`) |
| firmware | boot mode (`bios` or `uefi`) |
| bootstrap | initial bootstrap system (`tiny` = Tiny Cloud) |
| cloud | provider short name (`aws`) |
| revision | image revision number |
| imported | image import timestamp |
| import_id | imported image id |
| import_region | imported image region |
| published | image publication timestamp |
| description | image description |

Although AWS does not allow cross-account filtering by tags, the image name can
still be used to filter images. For example, to get a list of available Alpine
3.x aarch64 images in AWS eu-west-2...
```
aws ec2 describe-images \
  --region eu-west-2 \
  --owners 538276064493 \
  --filters \
    Name=name,Values='alpine-3.*-aarch64-*' \
    Name=state,Values=available \
  --output text \
  --query 'reverse(sort_by(Images, &CreationDate))[].[ImageId,Name,CreationDate]'
```
To get just the most recent matching image, use...
```
--query 'max_by(Images, &CreationDate).[ImageId,Name,CreationDate]'
```

----
## Build System

The build system consists of a number of components:

* the primary `build` script
* the `configs/` directory, defining the set of images to be built
* the `scripts/` directory, containing scripts and related data used to set up
  image contents during provisioning
* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
  of images
* the `cloud_helper.py` script that Packer runs in order to do cloud-specific
  import and publish operations

### Build Requirements
* [Python](https://python.org) (3.9.7 is known to work)
* [Packer](https://packer.io) (1.7.6 is known to work)
* [QEMU](https://www.qemu.org) (6.1.0 is known to work)
* cloud provider account(s)

### Cloud Credentials

By default, the build system relies on the cloud providers' Python API
libraries to find and use the necessary credentials, usually via configuration
under the user's home directory (i.e. `~/.aws/`, `~/.oci/`, etc.) or via
environment variables (i.e. `AWS_...`, `OCI_...`, etc.)

The credentials' user/role needs sufficient permission to query, import, and
publish images -- the exact details will vary from cloud to cloud. _It is
recommended that only the minimum required permissions are granted._

_We manage the credentials for publishing official Alpine images with an
"identity broker" service, and retrieve those credentials via the
`--use-broker` argument of the `build` script._

### The `build` Script

```
usage: build [-h] [--debug] [--clean] [--pad-uefi-bin-arch ARCH [ARCH ...]]
    [--custom DIR [DIR ...]] [--skip KEY [KEY ...]] [--only KEY [KEY ...]]
    [--revise] [--use-broker] [--no-color] [--parallel N]
    [--vars FILE [FILE ...]]
    {configs,state,rollback,local,upload,import,publish,release}

positional arguments: (build up to and including this step)
  configs   resolve image build configuration
  state     refresh current image build state
  rollback  remove existing local/uploaded/imported images if un-published/released
  local     build images locally
  upload    upload images and metadata to storage
* import    import local images to cloud provider default region (*)
* publish   set image permissions and publish to cloud regions (*)
  release   mark images as being officially released

(*) may not apply to or be implemented for all cloud providers

optional arguments:
  -h, --help            show this help message and exit
  --debug               enable debug output
  --clean               start with a clean work environment
  --pad-uefi-bin-arch ARCH [ARCH ...]
                        pad out UEFI firmware to 64 MiB ('aarch64')
  --custom DIR [DIR ...]
                        overlay custom directory in work environment
  --skip KEY [KEY ...]  skip variants with dimension key(s)
  --only KEY [KEY ...]  only variants with dimension key(s)
  --revise              remove existing local/uploaded/imported images if
                        un-published/released, or bump revision and rebuild
  --use-broker          use the identity broker to get credentials
  --no-color            turn off Packer color output
  --parallel N          build N images in parallel
  --vars FILE [FILE ...]
                        supply Packer with -vars-file(s) (default: [])
```

The `build` script will automatically create a `work/` directory containing a
Python virtual environment if one does not already exist. This directory also
hosts other data related to building images. The `--clean` argument will
remove everything in the `work/` directory except for things related to the
Python virtual environment.

If the `work/configs/` or `work/scripts/` directories do not yet exist, they
will be populated with the base configuration and scripts from the `configs/`
and/or `scripts/` directories. If any custom overlay directories are specified
with the `--custom` argument, their `configs/` and `scripts/` subdirectories
are also added to `work/configs/` and `work/scripts/`.

The "build step" positional argument determines the last step the `build`
script should execute -- all steps before this targeted step may also be
executed. That is, `build local` will first execute the `configs` step (if
necessary) and then the `state` step (always) before proceeding to the `local`
step.

The `configs` step resolves configuration for all buildable images, and writes
it to `work/images.yaml`, if it does not already exist.

The `state` step always checks the current state of the image builds,
determines what actions need to be taken, and updates `work/images.yaml`. A
subset of image builds can be targeted by using the `--skip` and `--only`
arguments.

The `rollback` step, when used with the `--revise` argument, indicates that any
_unpublished_ and _unreleased_ local, imported, or uploaded images should be
removed and rebuilt.

As _published_ and _released_ images can't be removed, `--revise` can be used
with `configs` or `state` to increment the _`revision`_ value to rebuild newly
revised images.

`local`, `upload`, `import`, `publish`, and `release` steps are orchestrated by
Packer. By default, each image will be processed serially; providing the
`--parallel` argument with a value greater than 1 will parallelize operations.
The degree to which you can parallelize `local` image builds will depend on the
local build hardware -- as QEMU virtual machines are launched for each image
being built. Image `upload`, `import`, `publish`, and `release` steps are much
more lightweight, and can support higher parallelism.

The `local` step builds local images with QEMU, for those that are not already
built locally or have already been imported. Images are converted to formats
amenable for import into the cloud provider (if necessary) and checksums are
generated.

The `upload` step uploads the local image, checksum, and metadata to the
defined `storage_url`. The `import`, `publish`, and `release` steps will
also upload updated image metadata.

The `import` step imports the local images into the cloud providers' default
regions, unless they've already been imported. At this point the images are
not available publicly, allowing for additional testing prior to publishing.

The `publish` step copies the image from the default region to other regions,
if they haven't already been copied there. This step will always update
image permissions, descriptions, tags, and deprecation date (if applicable)
in all regions where the image has been published.

***NOTE:*** The `import` and `publish` steps are skipped for those cloud
providers where this does not make sense (i.e. NoCloud) or for those where
they have not yet been implemented.

The `release` step marks the images as being fully released.

### The `cloud_helper.py` Script

This script is meant to be called only by Packer from its `post-processor`
block.

----
## Build Configuration

For more in-depth information about how the build system configuration works,
how to create custom config overlays, and details about individual config
settings, see [CONFIGURATION.md](CONFIGURATION.md).

alpine-cloud-images/alpine.pkr.hcl (new file, +201)
@@ -0,0 +1,201 @@
# Alpine Cloud Images Packer Configuration

### Variables

# include debug output from provisioning/post-processing scripts
variable "DEBUG" {
  default = 0
}
# indicates cloud_helper.py should be run with --use-broker
variable "USE_BROKER" {
  default = 0
}

# tuneable QEMU VM parameters, based on performance of the local machine;
# overrideable via build script --vars parameter referencing a Packer
# ".vars.hcl" file containing alternate settings
variable "qemu" {
  default = {
    boot_wait = {
      aarch64 = "1m"
      x86_64  = "1m"
    }
    cmd_wait    = "5s"
    ssh_timeout = "1m"
    memory      = 1024  # MiB
  }
}

### Local Data

locals {
  # possible actions for the post-processor
  actions = [
    "local", "upload", "import", "publish", "release"
  ]

  debug_arg  = var.DEBUG == 0 ? "" : "--debug"
  broker_arg = var.USE_BROKER == 0 ? "" : "--use-broker"

  # randomly generated password
  password = uuidv4()

  # resolve actionable build configs
  configs = { for b, cfg in yamldecode(file("work/images.yaml")):
    b => cfg if contains(keys(cfg), "actions")
  }
}

### Build Sources

# Don't build
source null alpine {
  communicator = "none"
}

# Common to all QEMU builds
source qemu alpine {
  # qemu machine
  headless       = true
  memory         = var.qemu.memory
  net_device     = "virtio-net"
  disk_interface = "virtio"

  # build environment
  boot_command = [
    "root<enter>",
    "setup-interfaces<enter><enter><enter><enter>",
    "ifup eth0<enter><wait${var.qemu.cmd_wait}>",
    "setup-sshd openssh<enter><wait${var.qemu.cmd_wait}>",
    "echo PermitRootLogin yes >> /etc/ssh/sshd_config<enter>",
    "service sshd restart<enter>",
    "echo 'root:${local.password}' | chpasswd<enter>",
  ]
  ssh_username     = "root"
  ssh_password     = local.password
  ssh_timeout      = var.qemu.ssh_timeout
  shutdown_command = "poweroff"
}

build {
  name = "alpine"

  ## Builders

  # QEMU builder
  dynamic "source" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "local")
    }
    iterator = B
    labels   = ["qemu.alpine"]  # links us to the base source

    content {
      name = B.key

      # qemu machine
      qemu_binary  = "qemu-system-${B.value.arch}"
      qemuargs     = B.value.qemu.args
      machine_type = B.value.qemu.machine_type
      firmware     = B.value.qemu.firmware

      # build environment
      iso_url      = B.value.qemu.iso_url
      iso_checksum = "file:${B.value.qemu.iso_url}.sha512"
      boot_wait    = var.qemu.boot_wait[B.value.arch]

      # results
      output_directory = "work/images/${B.value.cloud}/${B.value.image_key}"
      disk_size        = B.value.size
      format           = "qcow2"
      vm_name          = "image.qcow2"
    }
  }

  # Null builder (don't build, but we might do other actions)
  dynamic "source" {
    for_each = { for b, c in local.configs:
      b => c if !contains(c.actions, "local")
    }
    iterator = B
    labels   = ["null.alpine"]
    content {
      name = B.key
    }
  }

  ## build provisioners

  # install setup files
  dynamic "provisioner" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "local")
    }
    iterator = B
    labels   = ["file"]
    content {
      only = [ "qemu.${B.key}" ]  # configs specific to one build

      sources     = [ for d in B.value.script_dirs: "work/scripts/${d}" ]
      destination = "/tmp/"
    }
  }

  # run setup scripts
  dynamic "provisioner" {
    for_each = { for b, c in local.configs:
      b => c if contains(c.actions, "local")
    }
    iterator = B
    labels   = ["shell"]
    content {
      only = [ "qemu.${B.key}" ]  # configs specific to one build

      scripts          = [ for s in B.value.scripts: "work/scripts/${s}" ]
      use_env_var_file = true
      environment_vars = [
        "DEBUG=${var.DEBUG}",
        "ARCH=${B.value.arch}",
        "BOOTLOADER=${B.value.bootloader}",
        "BOOTSTRAP=${B.value.bootstrap}",
        "BUILD_NAME=${B.value.name}",
        "BUILD_REVISION=${B.value.revision}",
        "CLOUD=${B.value.cloud}",
        "END_OF_LIFE=${B.value.end_of_life}",
        "FIRMWARE=${B.value.firmware}",
        "IMAGE_LOGIN=${B.value.login}",
        "INITFS_FEATURES=${B.value.initfs_features}",
        "KERNEL_MODULES=${B.value.kernel_modules}",
        "KERNEL_OPTIONS=${B.value.kernel_options}",
        "MOTD=${B.value.motd}",
        "NTP_SERVER=${B.value.ntp_server}",
        "PACKAGES_ADD=${B.value.packages.add}",
        "PACKAGES_DEL=${B.value.packages.del}",
        "PACKAGES_NOSCRIPTS=${B.value.packages.noscripts}",
        "RELEASE=${B.value.release}",
        "REPOS=${B.value.repos}",
        "REPO_KEYS=${B.value.repo_keys}",
        "SERVICES_ENABLE=${B.value.services.enable}",
        "SERVICES_DISABLE=${B.value.services.disable}",
        "VERSION=${B.value.version}",
      ]
    }
  }

  ## build post-processor

  # import and/or publish cloud images
  dynamic "post-processor" {
    for_each = { for b, c in local.configs:
      b => c if length(setintersection(c.actions, local.actions)) > 0
    }
    iterator = B
    labels   = ["shell-local"]
    content {
      only = [ "qemu.${B.key}", "null.${B.key}" ]
      inline = [ for action in local.actions:
        "./cloud_helper.py ${action} ${local.debug_arg} ${local.broker_arg} ${B.key}" if contains(B.value.actions, action)
      ]
    }
  }
}
alpine-cloud-images/alpine.py (new file, +114)
@@ -0,0 +1,114 @@
# vim: ts=4 et:

import json
import re
from datetime import datetime, timedelta
from urllib.request import urlopen


class Alpine():

    DEFAULT_RELEASES_URL = 'https://alpinelinux.org/releases.json'
    DEFAULT_POSTS_URL = 'https://alpinelinux.org/posts/'
    DEFAULT_CDN_URL = 'https://dl-cdn.alpinelinux.org/alpine'
    DEFAULT_WEB_TIMEOUT = 5

    def __init__(self, releases_url=None, posts_url=None, cdn_url=None, web_timeout=None):
        self.now = datetime.utcnow()
        self.release_today = self.now.strftime('%Y%m%d')
        self.eol_tomorrow = (self.now + timedelta(days=1)).strftime('%F')
        self.latest = None
        self.versions = {}
        self.releases_url = releases_url or self.DEFAULT_RELEASES_URL
        self.posts_url = posts_url or self.DEFAULT_POSTS_URL
        self.web_timeout = web_timeout or self.DEFAULT_WEB_TIMEOUT
        self.cdn_url = cdn_url or self.DEFAULT_CDN_URL

        # get all Alpine versions, and their EOL and latest release
        res = urlopen(self.releases_url, timeout=self.web_timeout)
        r = json.load(res)
        branches = sorted(
            r['release_branches'], reverse=True,
            key=lambda x: x.get('branch_date', '0000-00-00')
        )
        for b in branches:
            ver = b['rel_branch'].lstrip('v')
            if not self.latest:
                self.latest = ver

            rel = None
            notes = None
            if releases := b.get('releases', None):
                r = sorted(
                    releases, reverse=True, key=lambda x: x['date']
                )[0]
                rel = r['version']
                notes = r.get('notes', None)
                if notes:
                    notes = self.posts_url + notes.removeprefix('posts/').replace('.md', '.html')

            elif ver == 'edge':
                # edge "releases" is today's YYYYMMDD
                rel = self.release_today

            self.versions[ver] = {
                'version': ver,
                'release': rel,
                'end_of_life': b.get('eol_date', self.eol_tomorrow),
                'arches': b.get('arches'),
|
||||
'notes': notes,
|
||||
}
|
||||
|
||||
def _ver(self, ver=None):
|
||||
if not ver or ver == 'latest' or ver == 'latest-stable':
|
||||
ver = self.latest
|
||||
|
||||
return ver
|
||||
|
||||
def repo_url(self, repo, arch, ver=None):
|
||||
ver = self._ver(ver)
|
||||
if ver != 'edge':
|
||||
ver = 'v' + ver
|
||||
|
||||
return f"{self.cdn_url}/{ver}/{repo}/{arch}"
|
||||
|
||||
def virt_iso_url(self, arch, ver=None):
|
||||
ver = self._ver(ver)
|
||||
rel = self.versions[ver]['release']
|
||||
return f"{self.cdn_url}/v{ver}/releases/{arch}/alpine-virt-{rel}-{arch}.iso"
|
||||
|
||||
def version_info(self, ver=None):
|
||||
ver = self._ver(ver)
|
||||
if ver not in self.versions:
|
||||
# perhaps a release candidate?
|
||||
apk_ver = self.apk_version('main', 'x86_64', 'alpine-base', ver=ver)
|
||||
rel = apk_ver.split('-')[0]
|
||||
ver = '.'.join(rel.split('.')[:2])
|
||||
self.versions[ver] = {
|
||||
'version': ver,
|
||||
'release': rel,
|
||||
'end_of_life': self.eol_tomorrow,
|
||||
'arches': self.versions['edge']['arches'], # reasonable assumption
|
||||
'notes': None,
|
||||
}
|
||||
|
||||
return self.versions[ver]
|
||||
|
||||
# TODO? maybe implement apk_info() to read from APKINDEX, but for now
|
||||
# this apk_version() seems faster and gets what we need
|
||||
|
||||
def apk_version(self, repo, arch, apk, ver=None):
|
||||
ver = self._ver(ver)
|
||||
repo_url = self.repo_url(repo, arch, ver=ver)
|
||||
apks_re = re.compile(f'"{apk}-(\\d.*)\\.apk"')
|
||||
res = urlopen(repo_url, timeout=self.web_timeout)
|
||||
for line in map(lambda x: x.decode('utf8'), res):
|
||||
if not line.startswith('<a href="'):
|
||||
continue
|
||||
|
||||
match = apks_re.search(line)
|
||||
if match:
|
||||
return match.group(1)
|
||||
|
||||
# didn't find it?
|
||||
raise RuntimeError(f"Unable to find {apk} APK via {repo_url}")
|
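A minimal sketch of the release-branch selection done in `Alpine.__init__` above, run offline against a fabricated stand-in for the `releases.json` payload (versions and dates here are made up):

```python
# Fabricated releases.json data; real data comes from DEFAULT_RELEASES_URL.
data = {'release_branches': [
    {'rel_branch': 'v3.14', 'branch_date': '2021-06-15',
     'releases': [{'version': '3.14.0', 'date': '2021-06-15'},
                  {'version': '3.14.3', 'date': '2021-11-12'}]},
    {'rel_branch': 'v3.15', 'branch_date': '2021-11-24',
     'releases': [{'version': '3.15.0', 'date': '2021-11-24'}]},
    {'rel_branch': 'edge'},  # no branch_date, so it sorts last
]}

# newest branch_date first, so the first branch is the latest version
branches = sorted(
    data['release_branches'], reverse=True,
    key=lambda x: x.get('branch_date', '0000-00-00')
)
latest = branches[0]['rel_branch'].lstrip('v')

# within a branch, the release with the newest date wins
newest = sorted(
    branches[1]['releases'], reverse=True, key=lambda x: x['date']
)[0]['version']
```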
349
alpine-cloud-images/build
Executable file
@ -0,0 +1,349 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import subprocess

sys.pycache_prefix = 'work/__pycache__'

# Create the work environment if it doesn't exist.
if not os.path.exists('work'):
    import venv

    PIP_LIBS = [
        'mergedeep',
        'pyhocon',
        'python-dateutil',
        'ruamel.yaml',
    ]
    print('Work environment does not exist, creating...', file=sys.stderr)
    venv.create('work', with_pip=True)
    subprocess.run(['work/bin/pip', 'install', '-U', 'pip', 'wheel'])
    subprocess.run(['work/bin/pip', 'install', '-U', *PIP_LIBS])

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment...

import argparse
import io
import logging
import shutil
import time

from glob import glob
from subprocess import Popen, PIPE
from urllib.request import urlopen

import clouds
from alpine import Alpine
from image_config_manager import ImageConfigManager


### Constants & Variables

STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'publish', 'release']
LOGFORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
WORK_CLEAN = {'bin', 'include', 'lib', 'pyvenv.cfg', '__pycache__'}
WORK_OVERLAYS = ['configs', 'scripts']
UEFI_FIRMWARE = {
    'aarch64': {
        'apk': 'aavmf',
        'bin': 'usr/share/AAVMF/QEMU_EFI.fd',
    },
    'x86_64': {
        'apk': 'ovmf',
        'bin': 'usr/share/OVMF/OVMF.fd',
    }
}
alpine = Alpine()


### Functions

# ensure list has unique values, preserving order
def unique_list(x):
    d = {e: 1 for e in x}
    return list(d.keys())


def remove_dupe_args():
    class RemoveDupeArgs(argparse.Action):
        def __call__(self, parser, args, values, option_string=None):
            setattr(args, self.dest, unique_list(values))

    return RemoveDupeArgs


def are_args_valid(checker):
    class AreArgsValid(argparse.Action):
        def __call__(self, parser, args, values, option_string=None):
            # remove duplicates
            values = unique_list(values)
            for x in values:
                if not checker(x):
                    parser.error(f"{option_string} value is not a {self.metavar}: {x}")

            setattr(args, self.dest, values)

    return AreArgsValid


def clean_work():
    log.info('Cleaning work environment')

    for x in (set(os.listdir('work')) - WORK_CLEAN):
        x = os.path.join('work', x)
        log.debug('removing %s', x)
        if os.path.isdir(x) and not os.path.islink(x):
            shutil.rmtree(x)
        else:
            os.unlink(x)


def is_images_conf(o, x):
    if not all([
        o == 'configs',
        x.endswith('/images.conf'),
        os.path.islink(x),
    ]):
        return False

    # must also link to file in the same directory
    x_link = os.path.normpath(os.readlink(x))
    return x_link == os.path.basename(x_link)


def install_overlay(overlay):
    log.info("Installing '%s' overlay in work environment", overlay)
    dest_dir = os.path.join('work', overlay)
    os.makedirs(dest_dir, exist_ok=True)
    for src in unique_list(['.'] + args.custom):
        src_dir = os.path.join(src, overlay)
        if not os.path.exists(src_dir):
            log.debug('%s does not exist, skipping', src_dir)
            continue
        for x in glob(os.path.join(src_dir, '**'), recursive=True):
            x = x.removeprefix(src_dir + '/')
            src_x = os.path.join(src_dir, x)
            dest_x = os.path.join(dest_dir, x)

            if is_images_conf(overlay, src_x):
                rel_x = os.readlink(src_x)
                if os.path.islink(dest_x):
                    log.debug('overriding %s', dest_x)
                    os.unlink(dest_x)

                log.debug('ln -s %s %s', rel_x, dest_x)
                os.symlink(rel_x, dest_x)
                continue

            if os.path.isdir(src_x):
                if not os.path.exists(dest_x):
                    log.debug('makedirs %s', dest_x)
                    os.makedirs(dest_x)

            if os.path.isdir(dest_x):
                continue

            if os.path.exists(dest_x):
                log.critical('Unallowable destination overwrite detected: %s', dest_x)
                sys.exit(1)

            log.debug('cp -p %s %s', src_x, dest_x)
            shutil.copy(src_x, dest_x)


def install_overlays():
    for overlay in WORK_OVERLAYS:
        if not os.path.isdir(os.path.join('work', overlay)):
            install_overlay(overlay)

        else:
            log.info("Using existing '%s' in work environment", overlay)


def install_qemu_firmware():
    firm_dir = 'work/firmware'
    if os.path.isdir(firm_dir):
        log.info('Using existing UEFI firmware in work environment')
        return

    log.info('Installing UEFI firmware in work environment')

    os.makedirs(firm_dir)
    for arch, a_cfg in UEFI_FIRMWARE.items():
        apk = a_cfg['apk']
        bin = a_cfg['bin']
        v = alpine.apk_version('community', arch, apk)
        apk_url = f"{alpine.repo_url('community', arch)}/{apk}-{v}.apk"
        data = urlopen(apk_url).read()

        # Python tarfile library can't extract from APKs
        tar_cmd = ['tar', '-zxf', '-', '-C', firm_dir, bin]
        p = Popen(tar_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
        out, err = p.communicate(input=data)
        if p.returncode:
            log.critical('Unable to untar %s to get %s', apk_url, bin)
            log.error('%s = %s', p.returncode, ' '.join(tar_cmd))
            log.error('STDOUT:\n%s', out.decode('utf8'))
            log.error('STDERR:\n%s', err.decode('utf8'))
            sys.exit(1)

        firm_bin = os.path.join(firm_dir, f"uefi-{arch}.bin")
        os.symlink(bin, firm_bin)
        if arch in args.pad_uefi_bin_arch:
            log.info('Padding "%s" to 67108864 bytes', firm_bin)
            subprocess.run(['truncate', '-s', '67108864', firm_bin])


### Command Line & Logging

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
# general options
parser.add_argument(
    '--debug', action='store_true', help='enable debug output')
parser.add_argument(
    '--clean', action='store_true', help='start with a clean work environment')
parser.add_argument(
    '--pad-uefi-bin-arch', metavar='ARCH', nargs='+', action=remove_dupe_args(),
    default=['aarch64'], help='pad out UEFI firmware binaries to 64 MiB')
# config options
parser.add_argument(
    '--custom', metavar='DIR', nargs='+', action=are_args_valid(os.path.isdir),
    default=[], help='overlay custom directory in work environment')
# state options
parser.add_argument(
    '--skip', metavar='KEY', nargs='+', action=remove_dupe_args(),
    default=[], help='skip variants with dimension key(s)')
parser.add_argument(
    '--only', metavar='KEY', nargs='+', action=remove_dupe_args(),
    default=[], help='only variants with dimension key(s)')
parser.add_argument(
    '--revise', action='store_true',
    help='remove existing local/uploaded/imported image, or bump revision and '
        'rebuild if published or released')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
# packer options
parser.add_argument(
    '--no-color', action='store_true', help='turn off Packer color output')
parser.add_argument(
    '--parallel', metavar='N', type=int, default=1,
    help='build N images in parallel')
parser.add_argument(
    '--vars', metavar='FILE', nargs='+', action=are_args_valid(os.path.isfile),
    default=[], help='supply Packer with -vars-file(s)')
# positional argument
parser.add_argument(
    'step', choices=STEPS, help='build up to and including this step')
args = parser.parse_args()

log = logging.getLogger('build')
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
console = logging.StreamHandler()
logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
logfmt.converter = time.gmtime
console.setFormatter(logfmt)
log.addHandler(console)
log.debug(args)

if args.step == 'rollback':
    log.warning('"rollback" step enables --revise option')
    args.revise = True

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

### Setup Configs

latest = alpine.version_info()
log.info('Latest Alpine version %s, release %s, and notes: %s', latest['version'], latest['release'], latest['notes'])
if args.clean:
    clean_work()

# install overlay(s) if missing
install_overlays()

image_configs = ImageConfigManager(
    conf_path='work/configs/images.conf',
    yaml_path='work/images.yaml',
    log='build',
    alpine=alpine,
)

log.info('Configuration Complete')
if args.step == 'configs':
    sys.exit(0)

### What needs doing?

if not image_configs.refresh_state(
        step=args.step, only=args.only, skip=args.skip, revise=args.revise):
    log.info('No pending actions to take at this time.')
    sys.exit(0)

if args.step == 'state' or args.step == 'rollback':
    sys.exit(0)

# install firmware if missing
install_qemu_firmware()

### Build/Import/Publish with Packer

env = os.environ | {
    'TZ': 'UTC',
    'PACKER_CACHE_DIR': 'work/packer_cache'
}

packer_cmd = [
    'packer', 'build', '-timestamp-ui',
    '-parallel-builds', str(args.parallel)
]
if args.no_color:
    packer_cmd.append('-color=false')

if args.use_broker:
    packer_cmd += ['-var', 'USE_BROKER=1']

if args.debug:
    # do not add '-debug', it will pause between steps
    packer_cmd += ['-var', 'DEBUG=1']

for var_file in args.vars:
    packer_cmd.append(f"-var-file={var_file}")

packer_cmd += ['.']
log.info('Executing Packer...')
log.debug(packer_cmd)
out = io.StringIO()
p = Popen(packer_cmd, stdout=PIPE, encoding='utf8', env=env)
while p.poll() is None:
    text = p.stdout.readline()
    out.write(text)
    print(text, end="")

if p.returncode != 0:
    log.critical('Packer Failure')
    sys.exit(p.returncode)

log.info('Packer Completed')

# update final state in work/images.yaml
# TODO: do we need to do all of this or just save all the image_configs?
image_configs.refresh_state(
    step='final',
    only=args.only,
    skip=args.skip
)

log.info('Build Finished')
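The Packer invocation near the end of `build` streams the child's stdout line by line while also capturing it. A minimal offline sketch of the same pattern, with `seq 3` standing in for the `packer` command, and an extra drain after the loop to catch any output buffered when the process exits:

```python
import io
from subprocess import Popen, PIPE

# 'seq 3' stands in for the real packer command.
cmd = ['seq', '3']
out = io.StringIO()
p = Popen(cmd, stdout=PIPE, encoding='utf8')
while p.poll() is None:
    text = p.stdout.readline()
    out.write(text)       # capture...
    print(text, end="")   # ...while echoing live

# drain anything still buffered after the process exits
out.write(p.stdout.read())
```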
98
alpine-cloud-images/cloud_helper.py
Executable file
@ -0,0 +1,98 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import textwrap

NOTE = textwrap.dedent("""
    This script is meant to be run as a Packer post-processor, and Packer is only
    meant to be executed from the main 'build' script, which is responsible for
    setting up the work environment.
    """)

sys.pycache_prefix = 'work/__pycache__'

if not os.path.exists('work'):
    print('FATAL: Work directory does not exist.', file=sys.stderr)
    print(NOTE, file=sys.stderr)
    exit(1)

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment

import argparse
import logging
from ruamel.yaml import YAML

import clouds
from image_config_manager import ImageConfigManager


### Constants & Variables

ACTIONS = ['local', 'upload', 'import', 'publish', 'release']
LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'


### Command Line & Logging

parser = argparse.ArgumentParser(description=NOTE)
parser.add_argument('--debug', action='store_true', help='enable debug output')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
parser.add_argument('action', choices=ACTIONS)
parser.add_argument('image_keys', metavar='IMAGE_KEY', nargs='+')
args = parser.parse_args()

log = logging.getLogger(args.action)
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
# log to STDOUT so that it's not all red when executed by packer
console = logging.StreamHandler(sys.stdout)
console.setFormatter(logging.Formatter(LOGFORMAT))
log.addHandler(console)
log.debug(args)

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

# load build configs
configs = ImageConfigManager(
    conf_path='work/configs/images.conf',
    yaml_path='work/images.yaml',
    log=args.action
)

yaml = YAML()
yaml.explicit_start = True

for image_key in args.image_keys:
    image_config = configs.get(image_key)

    if args.action == 'local':
        image_config.convert_image()

    elif args.action == 'upload':
        if image_config.storage:
            image_config.upload_image()

    elif args.action == 'import':
        clouds.import_image(image_config)

    elif args.action == 'publish':
        clouds.publish_image(image_config)

    elif args.action == 'release':
        pass
        # TODO: image_config.release_image() - configurable steps to take on remote host

    # save per-image metadata
    image_config.save_metadata(args.action)
51
alpine-cloud-images/clouds/__init__.py
Normal file
@ -0,0 +1,51 @@
# vim: ts=4 et:

from . import aws, nocloud, azure, gcp, oci

ADAPTERS = {}


def register(*mods):
    for mod in mods:
        cloud = mod.__name__.split('.')[-1]
        if p := mod.register(cloud):
            ADAPTERS[cloud] = p


register(
    aws,        # well-tested and fully supported
    nocloud,    # beta, but supported
    azure,      # alpha, needs testing, lacks import and publish
    gcp,        # alpha, needs testing, lacks import and publish
    oci,        # alpha, needs testing, lacks import and publish
)


# using a credential provider is optional, set across all adapters
def set_credential_provider(debug=False):
    from .identity_broker_client import IdentityBrokerClient
    cred_provider = IdentityBrokerClient(debug=debug)
    for adapter in ADAPTERS.values():
        adapter.cred_provider = cred_provider


### forward to the correct adapter

# TODO: latest_imported_tags(...)
def get_latest_imported_tags(config):
    return ADAPTERS[config.cloud].get_latest_imported_tags(
        config.project,
        config.image_key
    )


def import_image(config):
    return ADAPTERS[config.cloud].import_image(config)


def delete_image(config, image_id):
    return ADAPTERS[config.cloud].delete_image(image_id)


def publish_image(config):
    return ADAPTERS[config.cloud].publish_image(config)
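The `register()` helper above derives each adapter's cloud name from its module name and lets a module opt out by returning a falsy value. A minimal self-contained sketch of that pattern, using throwaway stand-in modules instead of the real `clouds.*` adapters:

```python
import types

ADAPTERS = {}

def make_module(name, enabled=True):
    # Stand-in for a clouds.<name> adapter module; the real ones
    # return a CloudAdapterInterface instance or None.
    mod = types.ModuleType(f"clouds.{name}")
    mod.register = (lambda cloud: {'cloud': cloud}) if enabled else (lambda cloud: None)
    return mod

def register(*mods):
    for mod in mods:
        cloud = mod.__name__.split('.')[-1]
        if p := mod.register(cloud):
            ADAPTERS[cloud] = p

# the disabled module returns None and is skipped
register(make_module('aws'), make_module('fake', enabled=False))
print(sorted(ADAPTERS))   # ['aws']
```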
397
alpine-cloud-images/clouds/aws.py
Normal file
@ -0,0 +1,397 @@
# NOTE: not meant to be executed directly
# vim: ts=4 et:

import logging
import hashlib
import os
import subprocess
import time

from datetime import datetime

from .interfaces.adapter import CloudAdapterInterface
from image_tags import DictObj, ImageTags


class AWSCloudAdapter(CloudAdapterInterface):
    IMAGE_INFO = [
        'revision', 'imported', 'import_id', 'import_region', 'published',
    ]
    CRED_MAP = {
        'access_key': 'aws_access_key_id',
        'secret_key': 'aws_secret_access_key',
        'session_token': 'aws_session_token',
    }
    ARCH = {
        'aarch64': 'arm64',
        'x86_64': 'x86_64',
    }
    BOOT_MODE = {
        'bios': 'legacy-bios',
        'uefi': 'uefi',
    }

    @property
    def sdk(self):
        # delayed import/install of SDK until we want to use it
        if not self._sdk:
            try:
                import boto3
            except ModuleNotFoundError:
                subprocess.run(['work/bin/pip', 'install', '-U', 'boto3'])
                import boto3

            self._sdk = boto3

        return self._sdk

    def session(self, region=None):
        if region not in self._sessions:
            creds = {'region_name': region} | self.credentials(region)
            self._sessions[region] = self.sdk.session.Session(**creds)

        return self._sessions[region]

    @property
    def regions(self):
        if self.cred_provider:
            return self.cred_provider.get_regions(self.cloud)

        # list of all subscribed regions
        return {r['RegionName']: True for r in self.session().client('ec2').describe_regions()['Regions']}

    @property
    def default_region(self):
        if self.cred_provider:
            return self.cred_provider.get_default_region(self.cloud)

        # rely on our env or ~/.aws config for the default
        return None

    def credentials(self, region=None):
        if not self.cred_provider:
            # use the cloud SDK's default credential discovery
            return {}

        creds = self.cred_provider.get_credentials(self.cloud, region)
        # return dict suitable to use for session()
        return {self.CRED_MAP[k]: v for k, v in creds.items() if k in self.CRED_MAP}

    def _get_images_with_tags(self, project, image_key, tags={}, region=None):
        ec2r = self.session(region).resource('ec2')
        req = {'Owners': ['self'], 'Filters': []}
        tags |= {
            'project': project,
            'image_key': image_key,
        }
        for k, v in tags.items():
            req['Filters'].append({'Name': f"tag:{k}", 'Values': [str(v)]})

        return sorted(
            ec2r.images.filter(**req), key=lambda k: k.creation_date, reverse=True)

    # necessary cloud-agnostic image info
    # TODO: still necessary? maybe just incorporate into new latest_imported_tags()?
    def _image_info(self, i):
        tags = ImageTags(from_list=i.tags)
        return DictObj({k: tags.get(k, None) for k in self.IMAGE_INFO})

    # get the latest imported image's tags for a given build key
    def get_latest_imported_tags(self, project, image_key):
        images = self._get_images_with_tags(
            project=project,
            image_key=image_key,
        )
        if images:
            # first one is the latest
            return ImageTags(from_list=images[0].tags)

        return None

    # import an image
    # NOTE: requires 'vmimport' role with read/write of <s3_bucket>.* and its objects
    def import_image(self, ic):
        log = logging.getLogger('import')
        description = ic.image_description

        session = self.session()
        s3r = session.resource('s3')
        ec2c = session.client('ec2')
        ec2r = session.resource('ec2')

        bucket_name = 'alpine-cloud-images.' + hashlib.sha1(os.urandom(40)).hexdigest()

        bucket = s3r.Bucket(bucket_name)
        log.info('Creating S3 bucket %s', bucket.name)
        bucket.create(
            CreateBucketConfiguration={'LocationConstraint': ec2c.meta.region_name}
        )
        bucket.wait_until_exists()
        s3_url = f"s3://{bucket.name}/{ic.image_file}"

        try:
            log.info('Uploading %s to %s', ic.image_path, s3_url)
            bucket.upload_file(str(ic.image_path), ic.image_file)

            # import snapshot from S3
            log.info('Importing EC2 snapshot from %s', s3_url)
            ss_import_opts = {
                'DiskContainer': {
                    'Description': description,     # https://github.com/boto/boto3/issues/2286
                    'Format': 'VHD',
                    'Url': s3_url,
                },
                'Encrypted': True if ic.encrypted else False,
                # NOTE: TagSpecifications -- doesn't work with ResourceType: snapshot?
            }
            if type(ic.encrypted) is str:
                ss_import_opts['KmsKeyId'] = ic.encrypted

            ss_import = ec2c.import_snapshot(**ss_import_opts)
            ss_task_id = ss_import['ImportTaskId']
            while True:
                ss_task = ec2c.describe_import_snapshot_tasks(
                    ImportTaskIds=[ss_task_id]
                )
                task_detail = ss_task['ImportSnapshotTasks'][0]['SnapshotTaskDetail']
                if task_detail['Status'] not in ['pending', 'active', 'completed']:
                    msg = f"Bad EC2 snapshot import: {task_detail['Status']} - {task_detail['StatusMessage']}"
                    log.error(msg)
                    raise RuntimeError(msg)

                if task_detail['Status'] == 'completed':
                    snapshot_id = task_detail['SnapshotId']
                    break

                time.sleep(15)
        except Exception:
            log.error('Unable to import snapshot from S3:', exc_info=True)
            raise
        finally:
            # always cleanup S3, even if there was an exception raised
            log.info('Cleaning up %s', s3_url)
            bucket.Object(ic.image_file).delete()
            bucket.delete()

        # tag snapshot
        snapshot = ec2r.Snapshot(snapshot_id)
        try:
            log.info('Tagging EC2 snapshot %s', snapshot_id)
            tags = ic.tags
            tags.Name = tags.name   # because AWS is special
            snapshot.create_tags(Tags=tags.as_list())
        except Exception:
            log.error('Unable to tag snapshot:', exc_info=True)
            log.info('Removing snapshot')
            snapshot.delete()
            raise

        # register AMI
        try:
            log.info('Registering EC2 AMI from snapshot %s', snapshot_id)
            img = ec2c.register_image(
                Architecture=self.ARCH[ic.arch],
                BlockDeviceMappings=[{
                    'DeviceName': '/dev/xvda',
                    'Ebs': {
                        'SnapshotId': snapshot_id,
                        'VolumeType': 'gp3'
                    }
                }],
                Description=description,
                EnaSupport=True,
                Name=ic.image_name,
                RootDeviceName='/dev/xvda',
                SriovNetSupport='simple',
                VirtualizationType='hvm',
                BootMode=self.BOOT_MODE[ic.firmware],
            )
        except Exception:
            log.error('Unable to register image:', exc_info=True)
            log.info('Removing snapshot')
            snapshot.delete()
            raise

        image_id = img['ImageId']
        image = ec2r.Image(image_id)

        try:
            # tag image (adds imported tag)
            log.info('Tagging EC2 AMI %s', image_id)
            tags.imported = datetime.utcnow().isoformat()
            tags.import_id = image_id
            tags.import_region = ec2c.meta.region_name
            image.create_tags(Tags=tags.as_list())
        except Exception:
            log.error('Unable to tag image:', exc_info=True)
            log.info('Removing image and snapshot')
            image.delete()
            snapshot.delete()
            raise

        # update ImageConfig with imported tag values, minus special AWS 'Name'
        tags.pop('Name', None)
        ic.__dict__ |= tags

    # delete an (unpublished) image
    def delete_image(self, image_id):
        log = logging.getLogger('build')
        ec2r = self.session().resource('ec2')
        image = ec2r.Image(image_id)
        snapshot_id = image.block_device_mappings[0]['Ebs']['SnapshotId']
        snapshot = ec2r.Snapshot(snapshot_id)
        log.info('Deregistering %s', image_id)
        image.deregister()
        log.info('Deleting %s', snapshot_id)
        snapshot.delete()

    # publish an image
    def publish_image(self, ic):
        log = logging.getLogger('publish')
        source_image = self.get_latest_imported_tags(
            ic.project,
            ic.image_key,
        )
        if not source_image:
            log.error('No source image for %s', ic.image_key)
            raise RuntimeError('Missing source image')

        source_id = source_image.import_id
        source_region = source_image.import_region
        log.info('Publishing source: %s/%s', source_region, source_id)
        source = self.session().resource('ec2').Image(source_id)

        # we may be updating tags, get them from image config
        tags = ic.tags

        # sort out published image access permissions
        perms = {'groups': [], 'users': []}
        if ic.access.get('PUBLIC', None):
            perms['groups'] = ['all']
        else:
            for k, v in ic.access.items():
                if v:
                    log.debug('users: %s', k)
                    perms['users'].append(str(k))

        log.debug('perms: %s', perms)

        # resolve destination regions
        regions = self.regions
        if ic.regions.pop('ALL', None):
            log.info('Publishing to ALL available regions')
        else:
            # clear ALL out of the way if it's still there
            ic.regions.pop('ALL', None)
            regions = {r: regions[r] for r in ic.regions}

        publishing = {}
        for r in regions.keys():
            if not regions[r]:
                log.warning('Skipping unsubscribed AWS region %s', r)
                continue

            images = self._get_images_with_tags(
                region=r,
                project=ic.project,
                image_key=ic.image_key,
                tags={'revision': ic.revision}
            )
            if images:
                image = images[0]
                log.info('%s: Already exists as %s', r, image.id)
            else:
                ec2c = self.session(r).client('ec2')
                copy_image_opts = {
                    'Description': source.description,
                    'Name': source.name,
                    'SourceImageId': source_id,
                    'SourceRegion': source_region,
                    'Encrypted': True if ic.encrypted else False,
                }
                if type(ic.encrypted) is str:
                    copy_image_opts['KmsKeyId'] = ic.encrypted

                try:
                    res = ec2c.copy_image(**copy_image_opts)
                except Exception:
                    log.warning('Skipping %s, unable to copy image:', r, exc_info=True)
                    continue

                image_id = res['ImageId']
                log.info('%s: Publishing to %s', r, image_id)
                image = self.session(r).resource('ec2').Image(image_id)

            publishing[r] = image

        artifacts = {}
        copy_wait = 180
        while len(artifacts) < len(publishing):
            for r, image in publishing.items():
                if r not in artifacts:
                    image.reload()
                    if image.state == 'available':
                        # tag image
                        log.info('%s: Adding tags to %s', r, image.id)
                        image_tags = ImageTags(from_list=image.tags)
                        fresh = False
                        if 'published' not in image_tags:
                            fresh = True

                        if fresh:
                            tags.published = datetime.utcnow().isoformat()

                        tags.Name = tags.name   # because AWS is special
                        image.create_tags(Tags=tags.as_list())

                        # tag image's snapshot, too
                        snapshot = self.session(r).resource('ec2').Snapshot(
                            image.block_device_mappings[0]['Ebs']['SnapshotId']
                        )
                        snapshot.create_tags(Tags=tags.as_list())

                        # update image description to match description in tags
                        log.info('%s: Updating description to %s', r, tags.description)
                        image.modify_attribute(
                            Description={'Value': tags.description},
                        )

                        # apply launch perms
                        if perms['groups'] or perms['users']:
                            log.info('%s: Applying launch perms to %s', r, image.id)
                            image.reset_attribute(Attribute='launchPermission')
                            image.modify_attribute(
                                Attribute='launchPermission',
                                OperationType='add',
                                UserGroups=perms['groups'],
                                UserIds=perms['users'],
                            )

                        # set up AMI deprecation
                        ec2c = image.meta.client
                        log.info('%s: Setting EOL deprecation time on %s', r, image.id)
                        try:
                            ec2c.enable_image_deprecation(
                                ImageId=image.id,
                                DeprecateAt=f"{tags.end_of_life}T23:59:00Z"
                            )
                        except Exception:
                            log.warning('Unable to set EOL Deprecation on %s image:', r, exc_info=True)

                        artifacts[r] = image.id

                    if image.state == 'failed':
                        log.error('%s: %s - %s - %s', r, image.id, image.state, image.state_reason)
                        artifacts[r] = None

            remaining = len(publishing) - len(artifacts)
            if remaining > 0:
                log.info('Waiting %ds for %d images to complete', copy_wait, remaining)
                time.sleep(copy_wait)
                copy_wait = 30

        ic.artifacts = artifacts


def register(cloud, cred_provider=None):
    return AWSCloudAdapter(cloud, cred_provider)
|
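The publish loop above polls every pending regional copy, tags each AMI once it reaches `available`, and drops the wait from 180s to 30s after the first pass. A minimal, standalone sketch of that polling pattern (hypothetical names, not the repo's code):

```python
import time


def poll_until_done(pending, check, first_wait=180, next_wait=30):
    """Poll every item in `pending` until each reports a terminal result.

    `check(item)` returns a result once the item is finished, else None.
    Mirrors the publish loop above: a long first wait (the copy takes a
    while to even start), then shorter waits between subsequent passes.
    """
    done = {}
    wait = first_wait
    while len(done) < len(pending):
        for name, item in pending.items():
            if name not in done:
                result = check(item)
                if result is not None:
                    done[name] = result
        if len(done) < len(pending):
            time.sleep(wait)
            wait = next_wait
    return done
```

The asymmetric wait keeps the first pass cheap (nothing finishes that fast) while later passes stay responsive.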
22
alpine-cloud-images/clouds/azure.py
Normal file
@@ -0,0 +1,22 @@
from .interfaces.adapter import CloudAdapterInterface

# NOTE: This stub allows images to be built locally and uploaded to storage,
# but code for automated importing and publishing of images for this cloud
# publisher has not yet been written.


class AzureCloudAdapter(CloudAdapterInterface):

    def get_latest_imported_tags(self, project, image_key):
        return None

    def import_image(self, ic):
        pass

    def delete_image(self, config, image_id):
        pass

    def publish_image(self, ic):
        pass


def register(cloud, cred_provider=None):
    return AzureCloudAdapter(cloud, cred_provider)
22
alpine-cloud-images/clouds/gcp.py
Normal file
@@ -0,0 +1,22 @@
from .interfaces.adapter import CloudAdapterInterface

# NOTE: This stub allows images to be built locally and uploaded to storage,
# but code for automated importing and publishing of images for this cloud
# publisher has not yet been written.


class GCPCloudAdapter(CloudAdapterInterface):

    def get_latest_imported_tags(self, project, image_key):
        return None

    def import_image(self, ic):
        pass

    def delete_image(self, config, image_id):
        pass

    def publish_image(self, ic):
        pass


def register(cloud, cred_provider=None):
    return GCPCloudAdapter(cloud, cred_provider)
135
alpine-cloud-images/clouds/identity_broker_client.py
Normal file
@@ -0,0 +1,135 @@
# vim: ts=4 et:

import json
import logging
import os
import sys
import time
import urllib.error

from datetime import datetime
from email.utils import parsedate
from urllib.request import Request, urlopen


class IdentityBrokerClient:
    """Client for identity broker

    Export IDENTITY_BROKER_ENDPOINT to override the default broker endpoint.
    Export IDENTITY_BROKER_API_KEY to specify an API key for the broker.

    See README_BROKER.md for more information and a spec.
    """

    _DEFAULT_ENDPOINT = 'https://aws-access.crute.us/api/account'
    _DEFAULT_ACCOUNT = 'alpine-amis-user'
    _LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'

    def __init__(self, endpoint=None, key=None, account=None, debug=False):
        # log to STDOUT so that it's not all red when executed by Packer
        self._logger = logging.getLogger('identity-broker')
        self._logger.setLevel(logging.DEBUG if debug else logging.INFO)
        console = logging.StreamHandler(sys.stdout)
        console.setFormatter(logging.Formatter(self._LOGFORMAT))
        self._logger.addHandler(console)

        self._endpoint = os.environ.get('IDENTITY_BROKER_ENDPOINT') or endpoint \
            or self._DEFAULT_ENDPOINT
        self._key = os.environ.get('IDENTITY_BROKER_API_KEY') or key
        self._account = account or self._DEFAULT_ACCOUNT
        if not self._key:
            raise Exception('No identity broker key found')

        self._headers = {
            'Accept': 'application/vnd.broker.v2+json',
            'X-API-Key': self._key
        }
        self._cache = {}
        self._expires = {}
        self._default_region = {}

    def _is_cache_valid(self, path):
        if path not in self._cache:
            return False

        # path is subject to expiry AND its time has passed
        if self._expires[path] and self._expires[path] < datetime.utcnow():
            return False

        return True

    def _get(self, path):
        self._logger.debug("request: %s", path)
        if not self._is_cache_valid(path):
            while True:  # to handle rate limits
                try:
                    res = urlopen(Request(path, headers=self._headers))
                except urllib.error.HTTPError as ex:
                    if ex.status == 401:
                        raise Exception('Expired or invalid identity broker token')

                    if ex.status == 406:
                        raise Exception('Invalid or malformed identity broker token')

                    # TODO: will this be entirely handled by the 401 above?
                    if ex.headers.get('Location') == '/logout':
                        raise Exception('Identity broker token is expired')

                    if ex.status == 429:
                        self._logger.warning(
                            'Rate-limited by identity broker, sleeping 30 seconds')
                        time.sleep(30)
                        continue

                    raise ex

                if res.status not in {200, 429}:
                    raise Exception(res.reason)

                # never expires without valid RFC 1123 Expires header
                if expires := res.getheader('Expires'):
                    expires = parsedate(expires)
                    # convert RFC 1123 to datetime, if parsed successfully
                    expires = datetime(*expires[:6])

                self._expires[path] = expires
                self._cache[path] = json.load(res)
                break

        self._logger.debug("response: %s", self._cache[path])
        return self._cache[path]

    def get_credentials_url(self, vendor):
        accounts = self._get(self._endpoint)
        if vendor not in accounts:
            raise Exception(f'No {vendor} credentials found')

        for account in accounts[vendor]:
            if account['short_name'] == self._account:
                return account['credentials_url']

        raise Exception('No account credentials found')

    def get_regions(self, vendor):
        out = {}

        for region in self._get(self.get_credentials_url(vendor)):
            if region['enabled']:
                out[region['name']] = region['credentials_url']

                if region['default']:
                    self._default_region[vendor] = region['name']

        return out

    def get_default_region(self, vendor):
        if vendor not in self._default_region:
            self.get_regions(vendor)

        return self._default_region.get(vendor)

    def get_credentials(self, vendor, region=None):
        if not region:
            region = self.get_default_region(vendor)

        return self._get(self.get_regions(vendor)[region])
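The broker client above caches each response and honors an RFC 1123 `Expires` header when deciding whether a cached entry is still valid. A small standalone sketch of just that parse-and-compare step (illustrative helper functions, not part of the repo):

```python
from datetime import datetime
from email.utils import parsedate


def parse_expires(header_value):
    """Convert an RFC 1123 Expires header value to a datetime, or None."""
    if not header_value:
        return None
    parsed = parsedate(header_value)  # 9-tuple, or None on parse failure
    return datetime(*parsed[:6]) if parsed else None


def cache_valid(expires, now):
    """A cached entry is stale only if it has an expiry that has passed."""
    return not (expires and expires < now)
```

An entry with no `Expires` header never goes stale, which matches `_is_cache_valid()` above.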
0
alpine-cloud-images/clouds/interfaces/__init__.py
Normal file
40
alpine-cloud-images/clouds/interfaces/adapter.py
Normal file
@@ -0,0 +1,40 @@
# vim: ts=4 et:

class CloudAdapterInterface:

    def __init__(self, cloud, cred_provider=None):
        self._sdk = None
        self._sessions = {}
        self.cloud = cloud
        self.cred_provider = cred_provider
        self._default_region = None

    @property
    def sdk(self):
        raise NotImplementedError

    @property
    def regions(self):
        raise NotImplementedError

    @property
    def default_region(self):
        raise NotImplementedError

    def credentials(self, region=None):
        raise NotImplementedError

    def session(self, region=None):
        raise NotImplementedError

    def get_latest_imported_tags(self, project, image_key):
        raise NotImplementedError

    def import_image(self, config):
        raise NotImplementedError

    def delete_image(self, config, image_id):
        raise NotImplementedError

    def publish_image(self, config):
        raise NotImplementedError
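Every cloud module follows the same shape: subclass `CloudAdapterInterface`, override what the cloud supports, and expose a module-level `register()` factory. A hypothetical module for a new cloud would look like this (the inline base class is a stand-in so the sketch runs on its own; the real one is defined above):

```python
# Stand-in for clouds.interfaces.adapter.CloudAdapterInterface so this
# sketch is self-contained; in the repo you'd import the real class.
class CloudAdapterInterface:
    def __init__(self, cloud, cred_provider=None):
        self.cloud = cloud
        self.cred_provider = cred_provider

    def get_latest_imported_tags(self, project, image_key):
        raise NotImplementedError


# hypothetical adapter for a new cloud, following the stub pattern above
class ExampleCloudAdapter(CloudAdapterInterface):

    def get_latest_imported_tags(self, project, image_key):
        return None  # nothing has been imported yet

    def import_image(self, ic):
        pass

    def delete_image(self, config, image_id):
        pass

    def publish_image(self, ic):
        pass


def register(cloud, cred_provider=None):
    return ExampleCloudAdapter(cloud, cred_provider)
```

Methods left unoverridden keep raising `NotImplementedError`, which is exactly how the azure/gcp/oci stubs defer import/publish support.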
21
alpine-cloud-images/clouds/nocloud.py
Normal file
@@ -0,0 +1,21 @@
from .interfaces.adapter import CloudAdapterInterface

# NOTE: NoCloud images are never imported or published because there's
# no actual cloud provider associated with them.


class NoCloudAdapter(CloudAdapterInterface):

    def get_latest_imported_tags(self, project, image_key):
        return None

    def import_image(self, ic):
        pass

    def delete_image(self, config, image_id):
        pass

    def publish_image(self, ic):
        pass


def register(cloud, cred_provider=None):
    return NoCloudAdapter(cloud, cred_provider)
|
22
alpine-cloud-images/clouds/oci.py
Normal file
22
alpine-cloud-images/clouds/oci.py
Normal file
@ -0,0 +1,22 @@
|
||||
from .interfaces.adapter import CloudAdapterInterface
|
||||
|
||||
# NOTE: This stub allows images to be built locally and uploaded to storage,
|
||||
# but code for automated importing and publishing of images for this cloud
|
||||
# publisher has not yet been written.
|
||||
|
||||
class OCICloudAdapter(CloudAdapterInterface):
|
||||
|
||||
def get_latest_imported_tags(self, project, image_key):
|
||||
return None
|
||||
|
||||
def import_image(self, ic):
|
||||
pass
|
||||
|
||||
def delete_image(self, config, image_id):
|
||||
pass
|
||||
|
||||
def publish_image(self, ic):
|
||||
pass
|
||||
|
||||
def register(cloud, cred_provider=None):
|
||||
return OCICloudAdapter(cloud, cred_provider)
|
105
alpine-cloud-images/configs/alpine.conf
Normal file
105
alpine-cloud-images/configs/alpine.conf
Normal file
@ -0,0 +1,105 @@
|
||||
# vim: ts=2 et:
|
||||
|
||||
# NOTE: If you are using alpine-cloud-images to build public cloud images
|
||||
# for something/someone other than Alpine Linux, you *MUST* override
|
||||
# *AT LEAST* the 'project' setting with a unique identifier string value
|
||||
# via a "config overlay" to avoid image import and publishing collisions.
|
||||
|
||||
project = "https://alpinelinux.org/cloud"
|
||||
|
||||
# all build configs start with these
|
||||
Default {
|
||||
project = ${project}
|
||||
|
||||
# image name/description components
|
||||
name = [ alpine ]
|
||||
description = [ Alpine Linux ]
|
||||
|
||||
motd {
|
||||
welcome = "Welcome to Alpine!"
|
||||
|
||||
wiki = [
|
||||
"The Alpine Wiki contains a large amount of how-to guides and general"
|
||||
"information about administrating Alpine systems."
|
||||
"See <https://wiki.alpinelinux.org>."
|
||||
]
|
||||
|
||||
release_notes = [
|
||||
"Alpine release notes:"
|
||||
"* <{release_notes}>"
|
||||
]
|
||||
}
|
||||
|
||||
# initial provisioning script and data directory
|
||||
scripts = [ setup ]
|
||||
script_dirs = [ setup.d ]
|
||||
|
||||
size = 1G
|
||||
login = alpine
|
||||
|
||||
image_format = qcow2
|
||||
|
||||
# these paths are subject to change, as image downloads are developed
|
||||
storage_url = "ssh://tomalok@dev.alpinelinux.org/public_html/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
|
||||
#storage_url = "file://~jake/tmp/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
|
||||
download_url = "https://dev.alpinelinux.org/~tomalok/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}" # development
|
||||
#download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/cloud/{cloud}/{arch}"
|
||||
|
||||
# image access
|
||||
access.PUBLIC = true
|
||||
|
||||
# image publication
|
||||
regions.ALL = true
|
||||
}
|
||||
|
||||
# profile build matrix
|
||||
Dimensions {
|
||||
version {
|
||||
"3.17" { include required("version/3.17.conf") }
|
||||
"3.16" { include required("version/3.16.conf") }
|
||||
"3.15" { include required("version/3.15.conf") }
|
||||
"3.14" { include required("version/3.14.conf") }
|
||||
edge { include required("version/edge.conf") }
|
||||
}
|
||||
arch {
|
||||
x86_64 { include required("arch/x86_64.conf") }
|
||||
aarch64 { include required("arch/aarch64.conf") }
|
||||
}
|
||||
firmware {
|
||||
bios { include required("firmware/bios.conf") }
|
||||
uefi { include required("firmware/uefi.conf") }
|
||||
}
|
||||
bootstrap {
|
||||
tiny { include required("bootstrap/tiny.conf") }
|
||||
cloudinit { include required("bootstrap/cloudinit.conf") }
|
||||
}
|
||||
cloud {
|
||||
aws { include required("cloud/aws.conf") }
|
||||
nocloud { include required("cloud/nocloud.conf") }
|
||||
# these are considered "alpha"
|
||||
azure { include required("cloud/azure.conf") }
|
||||
gcp { include required("cloud/gcp.conf") }
|
||||
oci { include required("cloud/oci.conf") }
|
||||
}
|
||||
}
|
||||
|
||||
# all build configs merge these at the very end
|
||||
Mandatory {
|
||||
name = [ "r{revision}" ]
|
||||
description = [ "- https://alpinelinux.org/cloud" ]
|
||||
encrypted = false
|
||||
|
||||
# final motd message
|
||||
motd.motd_change = "You may change this message by editing /etc/motd."
|
||||
|
||||
# final provisioning script
|
||||
scripts = [ cleanup ]
|
||||
|
||||
# TODO: remove this after testing
|
||||
#access.PUBLIC = false
|
||||
#regions {
|
||||
# ALL = false
|
||||
# us-west-2 = true
|
||||
# us-east-1 = true
|
||||
#}
|
||||
}
|
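The `Default`, `Dimensions`, and `Mandatory` sections above are composed by HOCON-style deep merging, where setting a key to `null` removes an inherited value. A rough pure-Python illustration of that merge order (not the build script's actual code; the sample values are made up):

```python
def deep_merge(base, overlay):
    """Recursively merge `overlay` into a copy of `base`.

    A None value removes the key, mirroring HOCON's `key = null`.
    """
    out = dict(base)
    for k, v in overlay.items():
        if v is None:
            out.pop(k, None)
        elif isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out


# Default settings, then one Dimensions profile, then Mandatory -- last wins.
default = {'image_format': 'qcow2',
           'packages': {'alpine-base': True, 'sudo': True}}
dimension = {'image_format': 'vhd',
             'packages': {'sudo': None, 'doas': True}}   # null removes sudo
mandatory = {'encrypted': False}

config = deep_merge(deep_merge(default, dimension), mandatory)
```

This is why `version/base/4.conf` can drop `sudo` and `edge.conf` can clear inherited `repos` with `repos = null` before redefining them.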
15
alpine-cloud-images/configs/arch/aarch64.conf
Normal file
@@ -0,0 +1,15 @@
# vim: ts=2 et:
name = [aarch64]
arch_name = aarch64

# aarch64 is UEFI only
EXCLUDE = [bios]

qemu.machine_type = virt
qemu.args = [
  [-cpu, cortex-a57],
  [-boot, d],
  [-device, virtio-gpu-pci],
  [-device, usb-ehci],
  [-device, usb-kbd],
]
6
alpine-cloud-images/configs/arch/x86_64.conf
Normal file
@@ -0,0 +1,6 @@
# vim: ts=2 et:
name = [x86_64]
arch_name = x86_64

qemu.machine_type = null
qemu.args = null
17
alpine-cloud-images/configs/bootstrap/cloudinit.conf
Normal file
@@ -0,0 +1,17 @@
# vim: ts=2 et:
name = [cloudinit]
bootstrap_name = cloud-init
bootstrap_url = "https://cloud-init.io"

# start cloudinit images with 3.15
EXCLUDE = ["3.12", "3.13", "3.14"]

packages {
  cloud-init = true
  dhclient = true  # officially supported, for now
  openssh-server-pam = true
  e2fsprogs-extra = true  # for resize2fs
}
services.default.cloud-init-hotplugd = true

scripts = [ setup-cloudinit ]
35
alpine-cloud-images/configs/bootstrap/tiny.conf
Normal file
@@ -0,0 +1,35 @@
# vim: ts=2 et:
name = [tiny]
bootstrap_name = Tiny Cloud
bootstrap_url = "https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud"

services {
  sysinit.tiny-cloud-early = true
  default.tiny-cloud = true
  default.tiny-cloud-final = true
}

WHEN {
  aws {
    packages.tiny-cloud-aws = true
    WHEN {
      "3.12" {
        # tiny-cloud-network requires ifupdown-ng (unavailable in 3.12)
        packages.tiny-cloud-aws = null
        services.sysinit.tiny-cloud-early = null
        services.default.tiny-cloud = null
        services.default.tiny-cloud-final = null
        # fall back to tiny-ec2-bootstrap instead
        packages.tiny-ec2-bootstrap = true
        services.default.tiny-ec2-bootstrap = true
      }
    }
  }
  # other per-cloud packages
  nocloud.packages.tiny-cloud-nocloud = true
  azure.packages.tiny-cloud-azure = true
  gcp.packages.tiny-cloud-gcp = true
  oci.packages.tiny-cloud-oci = true
}

scripts = [ setup-tiny ]
40
alpine-cloud-images/configs/cloud/aws.conf
Normal file
@@ -0,0 +1,40 @@
# vim: ts=2 et:
cloud_name = Amazon Web Services
image_format = vhd

kernel_modules {
  ena = true
  nvme = true
}
kernel_options {
  "nvme_core.io_timeout=4294967295" = true
}
initfs_features {
  ena = true
  nvme = true
}

# TODO: what about IPv6-only networks?
# maybe we only set it for <= 3.17, and leave it to dhcpcd?
ntp_server = 169.254.169.123

access.PUBLIC = true
regions.ALL = true

cloud_region_url = "https://{region}.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId={image_id}",
cloud_launch_url = "https://{region}.console.aws.amazon.com/ec2/home#launchAmi={image_id}"

WHEN {
  aarch64 {
    # new AWS aarch64 default...
    kernel_modules.gpio_pl061 = true
    initfs_features.gpio_pl061 = true
    WHEN {
      "3.14 3.13 3.12" {
        # ...but not supported for older versions
        kernel_modules.gpio_pl061 = false
        initfs_features.gpio_pl061 = false
      }
    }
  }
}
9
alpine-cloud-images/configs/cloud/azure.conf
Normal file
@@ -0,0 +1,9 @@
# vim: ts=2 et:
cloud_name = Microsoft Azure (alpha)
image_format = vhd

# start with 3.18
EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

# TODO: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/time-sync
ntp_server = ""
15
alpine-cloud-images/configs/cloud/gcp.conf
Normal file
@@ -0,0 +1,15 @@
# vim: ts=2 et:
cloud_name = Google Cloud Platform (alpha)
# TODO: https://cloud.google.com/compute/docs/import/importing-virtual-disks
# Mentions "VHD" but also mentions "..."; if that also includes QCOW2, then
# we should use that instead.  The "Manual Import" section on the sidebar
# has a "Manually import boot disks" subpage which also mentions importing
# compressed raw images...  We would prefer to avoid that if possible.
image_format = vhd

# start with 3.18
EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

# TODO: https://cloud.google.com/compute/docs/instances/configure-ntp
# (metadata.google.internal)
ntp_server = ""
8
alpine-cloud-images/configs/cloud/nocloud.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:
cloud_name = NoCloud
image_format = qcow2

# start with 3.18
EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

ntp_server = ""
8
alpine-cloud-images/configs/cloud/oci.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:
cloud_name = Oracle Cloud Infrastructure (alpha)
image_format = qcow2

# start with 3.18
EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

ntp_server = "169.254.169.254"
7
alpine-cloud-images/configs/firmware/bios.conf
Normal file
@@ -0,0 +1,7 @@
# vim: ts=2 et:
name = [bios]
firmware_name = BIOS

bootloader = extlinux
packages.syslinux = --no-scripts
qemu.firmware = null
18
alpine-cloud-images/configs/firmware/uefi.conf
Normal file
@@ -0,0 +1,18 @@
# vim: ts=2 et:
name = [uefi]
firmware_name = UEFI

bootloader = grub-efi
packages {
  grub-efi = --no-scripts
  dosfstools = true
}

WHEN {
  aarch64 {
    qemu.firmware = work/firmware/uefi-aarch64.bin
  }
  x86_64 {
    qemu.firmware = work/firmware/uefi-x86_64.bin
  }
}
1
alpine-cloud-images/configs/images.conf
Symbolic link
@@ -0,0 +1 @@
alpine.conf
5
alpine-cloud-images/configs/version/3.12.conf
Normal file
@@ -0,0 +1,5 @@
# vim: ts=2 et:

include required("base/1.conf")

# NOTE: EOL 2022-05-01
3
alpine-cloud-images/configs/version/3.13.conf
Normal file
@@ -0,0 +1,3 @@
# vim: ts=2 et:

include required("base/2.conf")
3
alpine-cloud-images/configs/version/3.14.conf
Normal file
@@ -0,0 +1,3 @@
# vim: ts=2 et:

include required("base/2.conf")
7
alpine-cloud-images/configs/version/3.15.conf
Normal file
@@ -0,0 +1,7 @@
# vim: ts=2 et:

include required("base/3.conf")

motd {
  sudo_deprecated = "NOTE: 'sudo' has been deprecated, please use 'doas' instead."
}
7
alpine-cloud-images/configs/version/3.16.conf
Normal file
@@ -0,0 +1,7 @@
# vim: ts=2 et:

include required("base/4.conf")

motd {
  sudo_removed = "NOTE: 'sudo' is no longer installed by default, please use 'doas' instead."
}
7
alpine-cloud-images/configs/version/3.17.conf
Normal file
@@ -0,0 +1,7 @@
# vim: ts=2 et:

include required("base/4.conf")

motd {
  sudo_removed = "NOTE: 'sudo' is not installed by default, please use 'doas' instead."
}
60
alpine-cloud-images/configs/version/base/1.conf
Normal file
@@ -0,0 +1,60 @@
# vim: ts=2 et:

repos {
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/main" = true
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/community" = true
  "https://dl-cdn.alpinelinux.org/alpine/v{version}/testing" = false
}

packages {
  alpine-base = true
  linux-virt = true
  alpine-mirrors = true
  chrony = true
  e2fsprogs = true
  openssh = true
  sudo = true
  tzdata = true
}

services {
  sysinit {
    devfs = true
    dmesg = true
    hwdrivers = true
    mdev = true
  }
  boot {
    acpid = true
    bootmisc = true
    hostname = true
    hwclock = true
    modules = true
    swap = true
    sysctl = true
    syslog = true
  }
  default {
    chronyd = true
    networking = true
    sshd = true
  }
  shutdown {
    killprocs = true
    mount-ro = true
    savecache = true
  }
}

kernel_modules {
  sd-mod = true
  usb-storage = true
  ext4 = true
}

kernel_options {
  "console=ttyS0,115200n8" = true
}

initfs_features {
}
8
alpine-cloud-images/configs/version/base/2.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:

include required("1.conf")

packages {
  # drop old alpine-mirrors
  alpine-mirrors = null
}
8
alpine-cloud-images/configs/version/base/3.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:

include required("2.conf")

packages {
  # doas will officially replace sudo in 3.16
  doas = true
}
8
alpine-cloud-images/configs/version/base/4.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:

include required("3.conf")

packages {
  # doas officially replaces sudo in 3.16
  sudo = false
}
8
alpine-cloud-images/configs/version/base/5.conf
Normal file
@@ -0,0 +1,8 @@
# vim: ts=2 et:

include required("4.conf")

packages {
  # start using dhcpcd for improved IPv6 experience
  dhcpcd = true
}
15
alpine-cloud-images/configs/version/edge.conf
Normal file
@@ -0,0 +1,15 @@
# vim: ts=2 et:

include required("base/5.conf")

motd {
  sudo_removed = "NOTE: 'sudo' is not installed by default, please use 'doas' instead."
}

# clear out inherited repos
repos = null
repos {
  "https://dl-cdn.alpinelinux.org/alpine/edge/main" = true
  "https://dl-cdn.alpinelinux.org/alpine/edge/community" = true
  "https://dl-cdn.alpinelinux.org/alpine/edge/testing" = true
}
215
alpine-cloud-images/gen_mksite_releases.py
Executable file
215
alpine-cloud-images/gen_mksite_releases.py
Executable file
@ -0,0 +1,215 @@
|
||||
#!/usr/bin/env python3
|
||||
# vim: ts=4 et:
|
||||
|
||||
# TODO: perhaps integrate into "./build release"
|
||||
|
||||
# Ensure we're using the Python virtual env with our installed dependencies
|
||||
import os
|
||||
import sys
|
||||
import textwrap
|
||||
|
||||
NOTE = textwrap.dedent("""
|
||||
This script's output provides a mustache-ready datasource to alpine-mksite
|
||||
(https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite) and should be
|
||||
run after the main 'build' script has published ALL images.
|
||||
STDOUT from this script should be saved as 'cloud/releases.yaml' in the
|
||||
above alpine-mksite repo.
|
||||
""")
|
||||
|
||||
sys.pycache_prefix = 'work/__pycache__'
|
||||
|
||||
if not os.path.exists('work'):
|
||||
print('FATAL: Work directory does not exist.', file=sys.stderr)
|
||||
print(NOTE, file=sys.stderr)
|
||||
exit(1)
|
||||
|
||||
# Re-execute using the right virtual environment, if necessary.
|
||||
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
|
||||
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
|
||||
print("Re-executing with work environment's Python...\n", file=sys.stderr)
|
||||
os.execv(venv_args[0], venv_args)
|
||||
|
||||
# We're now in the right Python environment
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
|
||||
from collections import defaultdict
|
||||
from ruamel.yaml import YAML
|
||||
|
||||
import clouds
|
||||
from image_config_manager import ImageConfigManager
|
||||
|
||||
|
||||
### Constants & Variables
|
||||
|
||||
LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'
|
||||
|
||||
|
||||
### Functions
|
||||
|
||||
# allows us to set values deep within an object that might not be fully defined
|
||||
def dictfactory():
|
||||
return defaultdict(dictfactory)
|
||||
|
||||
|
||||
# undo dictfactory() objects to normal objects
|
||||
def undictfactory(o):
|
||||
if isinstance(o, defaultdict):
|
||||
o = {k: undictfactory(v) for k, v in o.items()}
|
||||
return o
|
||||
|
||||
|
||||
### Command Line & Logging
|
||||
|
||||
parser = argparse.ArgumentParser(description=NOTE)
|
||||
parser.add_argument(
|
||||
'--use-broker', action='store_true',
|
||||
help='use the identity broker to get credentials')
|
||||
parser.add_argument('--debug', action='store_true', help='enable debug output')
|
||||
args = parser.parse_args()
|
||||
|
||||
log = logging.getLogger('gen_mksite_releases')
|
||||
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
|
||||
console = logging.StreamHandler(sys.stderr)
|
||||
console.setFormatter(logging.Formatter(LOGFORMAT))
|
||||
log.addHandler(console)
|
||||
log.debug(args)
|
||||
|
||||
# set up credential provider, if we're going to use it
|
||||
if args.use_broker:
|
||||
clouds.set_credential_provider()
|
||||
|
||||
# load build configs
|
||||
configs = ImageConfigManager(
|
||||
conf_path='work/configs/images.conf',
|
||||
yaml_path='work/images.yaml',
|
||||
log='gen_mksite_releases'
|
||||
)
|
||||
# make sure images.yaml is up-to-date with reality
|
||||
configs.refresh_state('final')
|
||||
|
||||
yaml = YAML()

filters = dictfactory()
versions = dictfactory()
data = {}

log.info('Transforming image data')
for i_key, i_cfg in configs.get().items():
    if not i_cfg.published:
        continue

    version = i_cfg.version
    if version == 'edge':
        continue

    image_name = i_cfg.image_name
    release = i_cfg.release
    arch = i_cfg.arch
    firmware = i_cfg.firmware
    bootstrap = i_cfg.bootstrap
    cloud = i_cfg.cloud

    if cloud not in filters['clouds']:
        filters['clouds'][cloud] = {
            'cloud': cloud,
            'cloud_name': i_cfg.cloud_name,
        }

    filters['regions'] = {}

    if arch not in filters['archs']:
        filters['archs'][arch] = {
            'arch': arch,
            'arch_name': i_cfg.arch_name,
        }

    if firmware not in filters['firmwares']:
        filters['firmwares'][firmware] = {
            'firmware': firmware,
            'firmware_name': i_cfg.firmware_name,
        }

    if bootstrap not in filters['bootstraps']:
        filters['bootstraps'][bootstrap] = {
            'bootstrap': bootstrap,
            'bootstrap_name': i_cfg.bootstrap_name,
        }

    if i_cfg.artifacts:
        for region, image_id in {r: i_cfg.artifacts[r] for r in sorted(i_cfg.artifacts)}.items():
            if region not in filters['regions']:
                filters['regions'][region] = {
                    'region': region,
                    'clouds': [cloud],
                }

            if cloud not in filters['regions'][region]['clouds']:
                filters['regions'][region]['clouds'].append(cloud)

            versions[version] |= {
                'version': version,
                'release': release,
                'end_of_life': i_cfg.end_of_life,
            }
            versions[version]['images'][image_name] |= {
                'image_name': image_name,
                'arch': arch,
                'firmware': firmware,
                'bootstrap': bootstrap,
                'published': i_cfg.published.split('T')[0],  # just the date
            }
            versions[version]['images'][image_name]['downloads'][cloud] |= {
                'cloud': cloud,
                'image_format': i_cfg.image_format,
                'image_url': i_cfg.download_url + '/' + (i_cfg.image_name)
            }
            versions[version]['images'][image_name]['regions'][region] |= {
                'cloud': cloud,
                'region': region,
                'region_url': i_cfg.region_url(region, image_id),
                'launch_url': i_cfg.launch_url(region, image_id),
            }

log.info('Making data mustache-compatible')

# convert filters to mustache-compatible format
data['filters'] = {}
for f in ['clouds', 'regions', 'archs', 'firmwares', 'bootstraps']:
    data['filters'][f] = [
        filters[f][k] for k in filters[f]   # order as they appear in work/images.yaml
    ]

for r in data['filters']['regions']:
    c = r.pop('clouds')
    r['clouds'] = [{'cloud': v} for v in c]

# convert versions to mustache-compatible format
data['versions'] = []
versions = undictfactory(versions)
for version in sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.split('.')]):
    images = versions[version].pop('images')
    i = []
    for image_name in images:   # order as they appear in work/images.yaml
        downloads = images[image_name].pop('downloads')
        d = []
        for download in downloads:
            d.append(downloads[download])

        images[image_name]['downloads'] = d

        regions = images[image_name].pop('regions')
        r = []
        for region in sorted(regions):
            r.append(regions[region])

        images[image_name]['regions'] = r
        i.append(images[image_name])

    versions[version]['images'] = i
    data['versions'].append(versions[version])

log.info('Dumping YAML')
yaml.dump(data, sys.stdout)
log.info('Done')
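The script above leans on `dictfactory()` and `undictfactory()` helpers defined earlier in the file (not shown in this hunk). A minimal sketch of the auto-vivification pattern their usage here implies — the exact upstream implementation may differ — is:

```python
from collections import defaultdict


def dictfactory():
    # nested defaultdict: a missing key yields another dictfactory(),
    # so versions[version]['images'][name] |= {...} needs no setup
    return defaultdict(dictfactory)


def undictfactory(o):
    # recursively convert defaultdicts back to plain dicts before dumping
    if isinstance(o, defaultdict):
        return {k: undictfactory(v) for k, v in o.items()}
    return o


versions = dictfactory()
versions['3.17']['images']['img-x86_64'] |= {'arch': 'x86_64'}
plain = undictfactory(versions)
```

Dumping `plain` instead of the raw `defaultdict` keeps the YAML free of Python-specific type tags.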
465
alpine-cloud-images/image_config.py
Normal file
@ -0,0 +1,465 @@
# vim: ts=4 et:

import hashlib
import mergedeep
import os
import pyhocon
import shutil

from copy import deepcopy
from datetime import datetime
from pathlib import Path

import clouds
from image_storage import ImageStorage, run
from image_tags import ImageTags


class ImageConfig():

    CONVERT_CMD = {
        'qcow2': ['ln', '-f'],
        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc', '-o', 'force_size=on'],
    }
    # these tags may-or-may-not exist at various times
    OPTIONAL_TAGS = [
        'built', 'uploaded', 'imported', 'import_id', 'import_region', 'published', 'released'
    ]
    STEPS = [
        'local', 'upload', 'import', 'publish', 'release'
    ]

    def __init__(self, config_key, obj={}, log=None, yaml=None):
        self._log = log
        self._yaml = yaml
        self._storage = None
        self.config_key = str(config_key)
        tags = obj.pop('tags', None)
        self.__dict__ |= self._deep_dict(obj)
        # ensure tag values are str() when loading
        if tags:
            self.tags = tags

    @classmethod
    def to_yaml(cls, representer, node):
        d = {}
        for k in node.__dict__:
            # don't serialize attributes starting with _
            if k.startswith('_'):
                continue

            d[k] = node.__getattribute__(k)

        return representer.represent_mapping('!ImageConfig', d)

    @property
    def v_version(self):
        return 'edge' if self.version == 'edge' else 'v' + self.version

    @property
    def local_dir(self):
        return Path('work/images') / self.cloud / self.image_key

    @property
    def local_image(self):
        return self.local_dir / ('image.qcow2')

    @property
    def image_name(self):
        return self.name.format(**self.__dict__)

    @property
    def image_description(self):
        return self.description.format(**self.__dict__)

    @property
    def image_file(self):
        return '.'.join([self.image_name, self.image_format])

    @property
    def image_path(self):
        return self.local_dir / self.image_file

    @property
    def metadata_file(self):
        return '.'.join([self.image_name, 'yaml'])

    def region_url(self, region, image_id):
        return self.cloud_region_url.format(region=region, image_id=image_id, **self.__dict__)

    def launch_url(self, region, image_id):
        return self.cloud_launch_url.format(region=region, image_id=image_id, **self.__dict__)

    @property
    def tags(self):
        # stuff that really ought to be there
        t = {
            'arch': self.arch,
            'bootstrap': self.bootstrap,
            'cloud': self.cloud,
            'description': self.image_description,
            'end_of_life': self.end_of_life,
            'firmware': self.firmware,
            'image_key': self.image_key,
            'name': self.image_name,
            'project': self.project,
            'release': self.release,
            'revision': self.revision,
            'version': self.version
        }
        # stuff that might not be there yet
        for k in self.OPTIONAL_TAGS:
            if self.__dict__.get(k, None):
                t[k] = self.__dict__[k]

        return ImageTags(t)

    # recursively convert a ConfigTree object to a dict object
    def _deep_dict(self, layer):
        obj = deepcopy(layer)
        if isinstance(layer, pyhocon.ConfigTree):
            obj = dict(obj)

        try:
            for key, value in layer.items():
                # some HOCON keys are quoted to preserve dots
                if '"' in key:
                    obj.pop(key)
                    key = key.strip('"')

                # version values were HOCON keys at one point, too
                if key == 'version' and '"' in value:
                    value = value.strip('"')

                obj[key] = self._deep_dict(value)
        except AttributeError:
            pass

        return obj

    def _merge(self, obj={}):
        mergedeep.merge(self.__dict__, self._deep_dict(obj), strategy=mergedeep.Strategy.ADDITIVE)

    def _get(self, attr, default=None):
        return self.__dict__.get(attr, default)

    def _pop(self, attr, default=None):
        return self.__dict__.pop(attr, default)

    # make data ready for Packer ingestion
    def _normalize(self):
        # stringify arrays
        self.name = '-'.join(self.name)
        self.description = ' '.join(self.description)
        self.repo_keys = ' '.join(self.repo_keys)
        self._resolve_motd()
        self._resolve_urls()
        self._stringify_repos()
        self._stringify_packages()
        self._stringify_services()
        self._stringify_dict_keys('kernel_modules', ',')
        self._stringify_dict_keys('kernel_options', ' ')
        self._stringify_dict_keys('initfs_features', ' ')

    def _resolve_motd(self):
        # merge release notes, as appropriate
        if 'release_notes' not in self.motd or not self.release_notes:
            self.motd.pop('release_notes', None)

        motd = {}
        for k, v in self.motd.items():
            if v is None:
                continue

            # join list values with newlines
            if type(v) is list:
                v = "\n".join(v)

            motd[k] = v

        self.motd = '\n\n'.join(motd.values()).format(**self.__dict__)

    def _resolve_urls(self):
        if 'storage_url' in self.__dict__:
            self.storage_url = self.storage_url.format(v_version=self.v_version, **self.__dict__)

        if 'download_url' in self.__dict__:
            self.download_url = self.download_url.format(v_version=self.v_version, **self.__dict__)

    def _stringify_repos(self):
        # stringify repos map
        #   <repo>: <tag>   # @<tag> <repo> enabled
        #   <repo>: false   # <repo> disabled (commented out)
        #   <repo>: true    # <repo> enabled
        #   <repo>: null    # skip <repo> entirely
        #   ...and interpolate {version}
        self.repos = "\n".join(filter(None, (
            f"@{v} {r}" if isinstance(v, str) else
            f"#{r}" if v is False else
            r if v is True else None
            for r, v in self.repos.items()
        ))).format(version=self.version)

    def _stringify_packages(self):
        # resolve/stringify packages map
        #   <pkg>: true                 # add <pkg>
        #   <pkg>: <tag>                # add <pkg>@<tag>
        #   <pkg>: --no-scripts         # add --no-scripts <pkg>
        #   <pkg>: --no-scripts <tag>   # add --no-scripts <pkg>@<tag>
        #   <pkg>: false                # del <pkg>
        #   <pkg>: null                 # skip explicit add/del <pkg>
        pkgs = {'add': '', 'del': '', 'noscripts': ''}
        for p, v in self.packages.items():
            k = 'add'
            if isinstance(v, str):
                if '--no-scripts' in v:
                    k = 'noscripts'
                    v = v.replace('--no-scripts', '')
                v = v.strip()
                if len(v):
                    p += f"@{v}"
            elif v is False:
                k = 'del'
            elif v is None:
                continue

            pkgs[k] = p if len(pkgs[k]) == 0 else pkgs[k] + ' ' + p

        self.packages = pkgs

    def _stringify_services(self):
        # stringify services map
        #   <level>:
        #     <svc>: true     # enable <svc> at <level>
        #     <svc>: false    # disable <svc> at <level>
        #     <svc>: null     # skip explicit en/disable <svc> at <level>
        self.services = {
            'enable': ' '.join(filter(lambda x: not x.endswith('='), (
                '{}={}'.format(lvl, ','.join(filter(None, (
                    s if v is True else None
                    for s, v in svcs.items()
                ))))
                for lvl, svcs in self.services.items()
            ))),
            'disable': ' '.join(filter(lambda x: not x.endswith('='), (
                '{}={}'.format(lvl, ','.join(filter(None, (
                    s if v is False else None
                    for s, v in svcs.items()
                ))))
                for lvl, svcs in self.services.items()
            )))
        }

    def _stringify_dict_keys(self, d, sep):
        self.__dict__[d] = sep.join(filter(None, (
            m if v is True else None
            for m, v in self.__dict__[d].items()
        )))

    def _is_step_or_earlier(self, s, step):
        log = self._log
        if step == 'state':
            return True

        if step not in self.STEPS:
            return False

        return self.STEPS.index(s) <= self.STEPS.index(step)

    # TODO: this needs to be sorted out for 'upload' and 'release' steps
    def refresh_state(self, step, revise=False):
        log = self._log
        actions = {}
        revision = 0
        step_state = step == 'state'
        step_rollback = step == 'rollback'
        undo = {}

        # enable initial set of possible actions based on specified step
        for s in self.STEPS:
            if self._is_step_or_earlier(s, step):
                actions[s] = True

        # pick up any updated image metadata
        self.load_metadata()

        # TODO: check storage and/or cloud - use this instead of remote_image
        # latest_revision = self.get_latest_revision()

        if (step_rollback or revise) and self.local_image.exists():
            undo['local'] = True

        if step_rollback:
            if self.local_image.exists():
                undo['local'] = True

            if not self.published or self.released:
                if self.uploaded:
                    undo['upload'] = True

                if self.imported:
                    undo['import'] = True

        # TODO: rename to 'remote_tags'?
        # if we load remote tags into state automatically, shouldn't that info already be in self?
        remote_image = clouds.get_latest_imported_tags(self)
        log.debug('\n%s', remote_image)

        if revise:
            if self.local_image.exists():
                # remove previously built local image artifacts
                log.warning('%s existing local image dir %s',
                    'Would remove' if step_state else 'Removing',
                    self.local_dir)
                if not step_state:
                    shutil.rmtree(self.local_dir)

            if remote_image and remote_image.get('published', None):
                log.warning('%s image revision for %s',
                    'Would bump' if step_state else 'Bumping',
                    self.image_key)
                revision = int(remote_image.revision) + 1

            elif remote_image and remote_image.get('imported', None):
                # remove existing imported (but unpublished) image
                log.warning('%s unpublished remote image %s',
                    'Would remove' if step_state else 'Removing',
                    remote_image.import_id)
                if not step_state:
                    clouds.delete_image(self, remote_image.import_id)

                remote_image = None

        elif remote_image:
            if remote_image.get('imported', None):
                # already imported, don't build/upload/import again
                log.debug('%s - already imported', self.image_key)
                actions.pop('local', None)
                actions.pop('upload', None)
                actions.pop('import', None)

            if remote_image.get('published', None):
                # NOTE: re-publishing can update perms or push to new regions
                log.debug('%s - already published', self.image_key)

        if self.local_image.exists():
            # local image's already built, don't rebuild
            log.debug('%s - already locally built', self.image_key)
            actions.pop('local', None)

        else:
            self.built = None

        # merge remote_image data into image state
        if remote_image:
            self.__dict__ |= dict(remote_image)

        else:
            self.__dict__ |= {
                'revision': revision,
                'uploaded': None,
                'imported': None,
                'import_id': None,
                'import_region': None,
                'published': None,
                'artifacts': None,
                'released': None,
            }

        # remove remaining actions not possible based on specified step
        for s in self.STEPS:
            if not self._is_step_or_earlier(s, step):
                actions.pop(s, None)

        self.actions = list(actions)
        log.info('%s/%s = %s', self.cloud, self.image_name, self.actions)

        self.state_updated = datetime.utcnow().isoformat()

    @property
    def storage(self):
        if self._storage is None:
            self._storage = ImageStorage(self.local_dir, self.storage_url, log=self._log)

        return self._storage

    def _save_checksum(self, file):
        self._log.info("Calculating checksum for '%s'", file)
        sha256_hash = hashlib.sha256()
        sha512_hash = hashlib.sha512()
        with open(file, 'rb') as f:
            for block in iter(lambda: f.read(4096), b''):
                sha256_hash.update(block)
                sha512_hash.update(block)

        with open(str(file) + '.sha256', 'w') as f:
            print(sha256_hash.hexdigest(), file=f)

        with open(str(file) + '.sha512', 'w') as f:
            print(sha512_hash.hexdigest(), file=f)

    # convert local QCOW2 to format appropriate for a cloud
    def convert_image(self):
        self._log.info('Converting %s to %s', self.local_image, self.image_path)
        run(
            self.CONVERT_CMD[self.image_format] + [self.local_image, self.image_path],
            log=self._log, errmsg='Unable to convert %s to %s',
            errvals=[self.local_image, self.image_path]
        )
        self._save_checksum(self.image_path)
        self.built = datetime.utcnow().isoformat()

    def upload_image(self):
        self.storage.store(
            self.image_file,
            self.image_file + '.sha256',
            self.image_file + '.sha512'
        )
        self.uploaded = datetime.utcnow().isoformat()

    def save_metadata(self, action):
        os.makedirs(self.local_dir, exist_ok=True)
        self._log.info('Saving image metadata')
        # TODO: save metadata updated timestamp as metadata?
        # TODO: def self.metadata to return what we consider metadata?
        metadata = dict(self.tags)
        self.metadata_updated = datetime.utcnow().isoformat()
        metadata |= {
            'artifacts': self._get('artifacts', None),
            'metadata_updated': self.metadata_updated
        }
        metadata_path = self.local_dir / self.metadata_file
        self._yaml.dump(metadata, metadata_path)
        self._save_checksum(metadata_path)
        if action != 'local' and self.storage:
            self.storage.store(
                self.metadata_file,
                self.metadata_file + '.sha256',
                self.metadata_file + '.sha512'
            )

    def load_metadata(self):
        # TODO: what if we have fresh configs, but the image is already uploaded/imported?
        #   we'll need to get revision first somehow
        if 'revision' not in self.__dict__:
            return

        # TODO: revision = '*' for now - or only if unknown?

        # get a list of local matching <name>-r*.yaml?
        metadata_path = self.local_dir / self.metadata_file
        if metadata_path.exists():
            self._log.info('Loading image metadata from %s', metadata_path)
            self.__dict__ |= self._yaml.load(metadata_path).items()

        # get a list of storage matching <name>-r*.yaml
        # else:
        #   retrieve metadata (and image?) from storage_url

        # else:
        #   retrieve metadata from imported image

        # if there's no stored metadata, we are in transition,
        #   get a list of imported images matching <name>-r*.yaml
178
alpine-cloud-images/image_config_manager.py
Normal file
@ -0,0 +1,178 @@
# vim: ts=4 et:

import itertools
import logging
import pyhocon

from copy import deepcopy
from datetime import datetime
from pathlib import Path
from ruamel.yaml import YAML

from image_config import ImageConfig


class ImageConfigManager():

    def __init__(self, conf_path, yaml_path, log=__name__, alpine=None):
        self.conf_path = Path(conf_path)
        self.yaml_path = Path(yaml_path)
        self.log = logging.getLogger(log)
        self.alpine = alpine

        self.now = datetime.utcnow()
        self._configs = {}

        self.yaml = YAML()
        self.yaml.register_class(ImageConfig)
        self.yaml.explicit_start = True
        # hide !ImageConfig tag from Packer
        self.yaml.representer.org_represent_mapping = self.yaml.representer.represent_mapping
        self.yaml.representer.represent_mapping = self._strip_yaml_tag_type

        # load resolved YAML, if exists
        if self.yaml_path.exists():
            self._load_yaml()
        else:
            self._resolve()

    def get(self, key=None):
        if not key:
            return self._configs

        return self._configs[key]

    # load already-resolved YAML configs, restoring ImageConfig objects
    def _load_yaml(self):
        self.log.info('Loading existing %s', self.yaml_path)
        for key, config in self.yaml.load(self.yaml_path).items():
            self._configs[key] = ImageConfig(key, config, log=self.log, yaml=self.yaml)

    # save resolved configs to YAML
    def _save_yaml(self):
        self.log.info('Saving %s', self.yaml_path)
        self.yaml.dump(self._configs, self.yaml_path)

    # hide !ImageConfig tag from Packer
    def _strip_yaml_tag_type(self, tag, mapping, flow_style=None):
        if tag == '!ImageConfig':
            tag = u'tag:yaml.org,2002:map'

        return self.yaml.representer.org_represent_mapping(tag, mapping, flow_style=flow_style)

    # resolve from HOCON configs
    def _resolve(self):
        self.log.info('Generating configs.yaml in work environment')
        cfg = pyhocon.ConfigFactory.parse_file(self.conf_path)
        # set version releases
        for v, vcfg in cfg.Dimensions.version.items():
            # version keys are quoted to protect dots
            self._set_version_release(v.strip('"'), vcfg)

        dimensions = list(cfg.Dimensions.keys())
        self.log.debug('dimensions: %s', dimensions)

        for dim_keys in (itertools.product(*cfg['Dimensions'].values())):
            config_key = '-'.join(dim_keys).replace('"', '')

            # dict of dimension -> dimension_key
            dim_map = dict(zip(dimensions, dim_keys))

            # replace version with release, and make image_key from that
            release = cfg.Dimensions.version[dim_map['version']].release
            (rel_map := dim_map.copy())['version'] = release
            image_key = '-'.join(rel_map.values())

            image_config = ImageConfig(
                config_key,
                {
                    'image_key': image_key,
                    'release': release
                } | dim_map,
                log=self.log,
                yaml=self.yaml
            )

            # merge in the Default config
            image_config._merge(cfg.Default)
            skip = False
            # merge in each dimension key's configs
            for dim, dim_key in dim_map.items():
                dim_cfg = deepcopy(cfg.Dimensions[dim][dim_key])

                image_config._merge(dim_cfg)

                # now that we're done with ConfigTree/dim_cfg, remove " from dim_keys
                dim_keys = set(k.replace('"', '') for k in dim_keys)

                # WHEN blocks inside WHEN blocks are considered "and" operations
                while (when := image_config._pop('WHEN', None)):
                    for when_keys, when_conf in when.items():
                        # WHEN keys with spaces are considered "or" operations
                        if len(set(when_keys.split(' ')) & dim_keys) > 0:
                            image_config._merge(when_conf)

                exclude = image_config._pop('EXCLUDE', None)
                if exclude and set(exclude) & set(dim_keys):
                    self.log.debug('%s SKIPPED, %s excludes %s', config_key, dim_key, exclude)
                    skip = True
                    break

                if eol := image_config._get('end_of_life', None):
                    if self.now > datetime.fromisoformat(eol):
                        self.log.warning('%s SKIPPED, %s end_of_life %s', config_key, dim_key, eol)
                        skip = True
                        break

            if skip is True:
                continue

            # merge in the Mandatory configs at the end
            image_config._merge(cfg.Mandatory)

            # clean stuff up
            image_config._normalize()
            image_config.qemu['iso_url'] = self.alpine.virt_iso_url(arch=image_config.arch)

            # we've resolved everything, add tags attribute to config
            self._configs[config_key] = image_config

        self._save_yaml()

    # set current version release
    def _set_version_release(self, v, c):
        info = self.alpine.version_info(v)
        c.put('release', info['release'])
        c.put('end_of_life', info['end_of_life'])
        c.put('release_notes', info['notes'])

        # release is also appended to name & description arrays
        c.put('name', [c.release])
        c.put('description', [c.release])

    # update current config status
    def refresh_state(self, step, only=[], skip=[], revise=False):
        self.log.info('Refreshing State')
        has_actions = False
        for ic in self._configs.values():
            # clear away any previous actions
            if hasattr(ic, 'actions'):
                delattr(ic, 'actions')

            dim_keys = set(ic.config_key.split('-'))
            if only and len(set(only) & dim_keys) != len(only):
                self.log.debug("%s SKIPPED, doesn't match --only", ic.config_key)
                continue

            if skip and len(set(skip) & dim_keys) > 0:
                self.log.debug('%s SKIPPED, matches --skip', ic.config_key)
                continue

            ic.refresh_state(step, revise)
            if not has_actions and len(ic.actions):
                has_actions = True

        # re-save with updated actions
        self._save_yaml()
        return has_actions
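`_resolve` above enumerates one image config per element of the Cartesian product of all dimension keys. In miniature, with made-up dimension values standing in for `cfg.Dimensions`, the derivation of `config_key` and `dim_map` looks like:

```python
import itertools

# hypothetical dimensions, analogous to cfg.Dimensions in _resolve
dimensions = {
    'version': ['3.17', '3.16'],
    'arch': ['x86_64', 'aarch64'],
    'cloud': ['aws'],
}

# one config key per combination of dimension values
config_keys = [
    '-'.join(dim_keys)
    for dim_keys in itertools.product(*dimensions.values())
]
print(config_keys)

# each combination also yields a dimension -> dimension_key map, as dim_map does above
dim_map = dict(zip(dimensions, ('3.17', 'x86_64', 'aws')))
```

With two versions, two arches, and one cloud this produces four config keys, in the same order `itertools.product` iterates them.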
183
alpine-cloud-images/image_storage.py
Normal file
@ -0,0 +1,183 @@
# vim: ts=4 et:

import shutil
import os

from glob import glob
from pathlib import Path
from subprocess import Popen, PIPE
from urllib.parse import urlparse

from image_tags import DictObj


def run(cmd, log, errmsg=None, errvals=[]):
    # ensure command and error values are lists of strings
    cmd = [str(c) for c in cmd]
    errvals = [str(ev) for ev in errvals]

    log.debug('COMMAND: %s', ' '.join(cmd))
    p = Popen(cmd, stdout=PIPE, stdin=PIPE, encoding='utf8')
    out, err = p.communicate()
    if p.returncode:
        if errmsg:
            log.error(errmsg, *errvals)

        log.error('COMMAND: %s', ' '.join(cmd))
        log.error('EXIT: %d', p.returncode)
        log.error('STDOUT:\n%s', out)
        log.error('STDERR:\n%s', err)
        raise RuntimeError

    return out, err


class ImageStorage():

    def __init__(self, local, storage_url, log):
        self.log = log
        self.local = local
        self.url = storage_url.removesuffix('/')
        url = urlparse(self.url)
        if url.scheme not in ['', 'file', 'ssh']:
            self.log.error('Storage with "%s" scheme is unsupported', url.scheme)
            raise RuntimeError

        if url.scheme in ['', 'file']:
            self.scheme = 'file'
            self.remote = Path(url.netloc + url.path).expanduser()

        else:
            self.scheme = 'ssh'
            self.host = url.hostname
            self.remote = Path(url.path[1:])    # drop leading / -- use // for absolute path
            self.ssh = DictObj({
                'port': ['-p', url.port] if url.port else [],
                'user': ['-l', url.username] if url.username else [],
            })
            self.scp = DictObj({
                'port': ['-P', url.port] if url.port else [],
                'user': url.username + '@' if url.username else '',
            })

    def store(self, *files):
        log = self.log
        if not files:
            log.debug('No files to store')
            return

        src = self.local
        dest = self.remote
        if self.scheme == 'file':
            dest.mkdir(parents=True, exist_ok=True)
            for file in files:
                log.info('Storing %s', dest / file)
                shutil.copy2(src / file, dest / file)

            return

        url = self.url
        host = self.host
        ssh = self.ssh
        scp = self.scp
        run(
            ['ssh'] + ssh.port + ssh.user + [host, 'mkdir', '-p', dest],
            log=log, errmsg='Unable to ensure existence of %s', errvals=[url]
        )
        src_files = []
        for file in files:
            log.info('Storing %s', url + '/' + file)
            src_files.append(src / file)

        run(
            ['scp'] + scp.port + src_files + [scp.user + ':'.join([host, str(dest)])],
            log=log, errmsg='Failed to store files'
        )

    def retrieve(self, *files):
        log = self.log
        if not files:
            log.debug('No files to retrieve')
            return

        src = self.remote
        dest = self.local
        dest.mkdir(parents=True, exist_ok=True)
        if self.scheme == 'file':
            for file in files:
                log.info('Retrieving %s', src / file)
                shutil.copy2(src / file, dest / file)

            return

        url = self.url
        host = self.host
        scp = self.scp
        src_files = []
        for file in files:
            log.info('Retrieving %s', url + '/' + file)
            src_files.append(scp.user + ':'.join([host, str(src / file)]))

        run(
            ['scp'] + scp.port + src_files + [dest],
            log=log, errmsg='Failed to retrieve files'
        )

    # TODO: optional files=[]?
    def list(self, match=None):
        log = self.log
        path = self.remote
        if not match:
            match = '*'

        files = []
        if self.scheme == 'file':
            path.mkdir(parents=True, exist_ok=True)
            log.info('Listing of %s files in %s', match, path)
            files = sorted(glob(str(path / match)), key=os.path.getmtime, reverse=True)

        else:
            url = self.url
            host = self.host
            ssh = self.ssh
            log.info('Listing %s files at %s', match, url)
            run(
                ['ssh'] + ssh.port + ssh.user + [host, 'mkdir', '-p', path],
                log=log, errmsg='Unable to create path'
            )
            out, _ = run(
                ['ssh'] + ssh.port + ssh.user + [host, 'ls', '-1drt', path / match],
                log=log, errmsg='Failed to list files'
            )
            files = out.splitlines()

        return [os.path.basename(f) for f in files]

    def remove(self, files):
        log = self.log
        if not files:
            log.debug('No files to remove')
            return

        dest = self.remote
        if self.scheme == 'file':
            for file in files:
                path = dest / file
                log.info('Removing %s', path)
                if path.exists():
                    path.unlink()

            return

        url = self.url
        host = self.host
        ssh = self.ssh
        dest_files = []
        for file in files:
            log.info('Removing %s', url + '/' + file)
            dest_files.append(dest / file)

        run(
            ['ssh'] + ssh.port + ssh.user + [host, 'rm', '-f'] + dest_files,
            log=log, errmsg='Failed to remove files'
        )
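For illustration, here is how an `ssh://` storage URL decomposes into the ssh/scp arguments that `ImageStorage.__init__` above builds. The host and path are hypothetical; note that `run()` stringifies the integer port before exec, so keeping `url.port` as an `int` here is harmless:

```python
from pathlib import Path
from urllib.parse import urlparse

# mirrors how ImageStorage.__init__ derives its ssh/scp pieces
url = urlparse('ssh://user@example.com:2222/backups/images')

host = url.hostname
remote = Path(url.path[1:])     # relative to the login dir; '//...' would be absolute
ssh_args = (['-p', url.port] if url.port else []) \
    + (['-l', url.username] if url.username else [])
scp_user = url.username + '@' if url.username else ''

print(host, remote, ssh_args, scp_user)
```

A bare path or `file://` URL would instead take the first branch and resolve to a local `Path` with `~` expansion.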
32
alpine-cloud-images/image_tags.py
Normal file
@ -0,0 +1,32 @@
# vim: ts=4 et:

class DictObj(dict):

    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        del self[key]


class ImageTags(DictObj):

    def __init__(self, d={}, from_list=None, key_name='Key', value_name='Value'):
        for key, value in d.items():
            self.__setattr__(key, value)

        if from_list:
            self.from_list(from_list, key_name, value_name)

    def __setattr__(self, key, value):
        self[key] = str(value)

    def as_list(self, key_name='Key', value_name='Value'):
        return [{key_name: k, value_name: v} for k, v in self.items()]

    def from_list(self, list=[], key_name='Key', value_name='Value'):
        for tag in list:
            self.__setattr__(tag[key_name], tag[value_name])
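Exercised standalone (the two classes are duplicated here verbatim so the snippet is self-contained), `ImageTags` coerces every value to a string on assignment and round-trips to the Key/Value list form used for cloud image tags:

```python
class DictObj(dict):
    # dict with attribute-style access
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        del self[key]


class ImageTags(DictObj):

    def __init__(self, d={}, from_list=None, key_name='Key', value_name='Value'):
        for key, value in d.items():
            self.__setattr__(key, value)
        if from_list:
            self.from_list(from_list, key_name, value_name)

    def __setattr__(self, key, value):
        self[key] = str(value)      # all tag values become strings

    def as_list(self, key_name='Key', value_name='Value'):
        return [{key_name: k, value_name: v} for k, v in self.items()]

    def from_list(self, list=[], key_name='Key', value_name='Value'):
        for tag in list:
            self.__setattr__(tag[key_name], tag[value_name])


tags = ImageTags({'revision': 3, 'arch': 'x86_64'})
print(tags.revision)    # integer revision was coerced to a string
print(tags.as_list())
```

The `as_list`/`from_list` pair makes it easy to hand tags to (and read them back from) APIs that expect `[{'Key': ..., 'Value': ...}]` structures.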
@ -0,0 +1,43 @@
# vim: ts=2 et:

# Overlay for testing alpine-cloud-images

# start with the production alpine config
include required("alpine.conf")

# override specific things...

project = alpine-cloud-images__test

Default {
  # unset before resetting
  name = null
  name = [ test ]
  description = null
  description = [ Alpine Test ]
}

Dimensions {
  cloud {
    # add a machine type dimension
    machine {
      vm    { include required("machine/vm.conf") }
      metal { include required("machine/metal.conf") }
    }
    # just test in these regions
    aws.regions {
      us-west-2 = true
      us-east-1 = true
    }
    # adapters need to be written
    #oci    { include required("testing/oci.conf") }
    #gcp    { include required("testing/gcp.conf") }
    #azure  { include required("testing/azure.conf") }
    #generic
    #nocloud
  }
}

# test in private, and only in regions specified above
Mandatory.access.PUBLIC = false
Mandatory.regions.ALL = false
1
alpine-cloud-images/overlays/testing/configs/images.conf
Symbolic link
@ -0,0 +1 @@
alpine-testing.conf
@ -0,0 +1,9 @@
# bare metal

name = ["metal"]
machine_name = "Bare Metal"

packages.linux-virt = null
packages.linux-lts = true

# TODO: other kernel_modules, kernel_options, or initfs_features?
@ -0,0 +1,4 @@
#name = [vm]   # don't append anything to the name
machine_name = "Virtual"

# all image defaults are for virtual machines
@ -0,0 +1,4 @@
# vim: ts=2 et:
builder = qemu

# TBD
42
alpine-cloud-images/scripts/cleanup
Normal file
@ -0,0 +1,42 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

export \
    TARGET=/mnt


die() {
    printf '\033[1;7;31m FATAL: %s \033[0m\n' "$@" >&2  # bold reversed red
    exit 1
}
einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

cleanup() {
    # Sweep cruft out of the image that doesn't need to ship or will be
    # re-generated when the image boots
    rm -f \
        "$TARGET/var/cache/apk/"* \
        "$TARGET/etc/resolv.conf" \
        "$TARGET/root/.ash_history" \
        "$TARGET/etc/"*-

    # unmount extra EFI mount
    if [ "$FIRMWARE" = uefi ]; then
        umount "$TARGET/boot/efi"
    fi

    umount \
        "$TARGET/dev" \
        "$TARGET/proc" \
        "$TARGET/sys"

    umount "$TARGET"
}

einfo "Cleaning up and unmounting image volume..."
cleanup
einfo "Done!"
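The `"$TARGET/etc/"*-` glob in `cleanup()` above removes the backup copies that some setup tools leave behind with a trailing `-` (e.g. `passwd-`), while leaving the live files alone. A quick sketch of what the glob matches, run against a throwaway directory instead of a real image mount:

```shell
#!/bin/sh -eu
# Demo: files ending in '-' (backup copies) match the "*-" glob.
TARGET=$(mktemp -d)
mkdir -p "$TARGET/etc"
touch "$TARGET/etc/passwd" "$TARGET/etc/passwd-" "$TARGET/etc/group-"

rm -f "$TARGET/etc/"*-    # same pattern as cleanup()

ls "$TARGET/etc"          # -> passwd
```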
262
alpine-cloud-images/scripts/setup
Executable file
@ -0,0 +1,262 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

export \
    DEVICE=/dev/vda \
    TARGET=/mnt \
    SETUP=/tmp/setup.d


die() {
    printf '\033[1;7;31m FATAL: %s \033[0m\n' "$@" >&2  # bold reversed red
    exit 1
}
einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

# set up the builder's environment
setup_builder() {
    einfo "Setting up Builder Instance"
    setup-apkrepos -1  # main repo via dl-cdn
    # TODO? also uncomment community repo?
    # Always use latest versions within the release, security patches etc.
    apk upgrade --no-cache --available
    apk --no-cache add \
        e2fsprogs \
        dosfstools \
        gettext \
        lsblk \
        parted
}

make_filesystem() {
    einfo "Making the Filesystem"
    root_dev=$DEVICE

    # make sure we're using a blank block device
    lsblk -P --fs "$DEVICE" >/dev/null 2>&1 || \
        die "'$DEVICE' is not a valid block device"
    if lsblk -P --fs "$DEVICE" | grep -vq 'FSTYPE=""'; then
        die "Block device '$DEVICE' is not blank"
    fi

    if [ "$FIRMWARE" = uefi ]; then
        # EFI partition isn't optimally aligned, but is rarely used after boot
        parted "$DEVICE" -- \
            mklabel gpt \
            mkpart EFI fat32 512KiB 1MiB \
            mkpart / ext4 1MiB 100% \
            set 1 esp on \
            unit MiB print

        root_dev="${DEVICE}2"
        mkfs.fat -n EFI "${DEVICE}1"
    fi

    mkfs.ext4 -O ^64bit -L / "$root_dev"
    mkdir -p "$TARGET"
    mount -t ext4 "$root_dev" "$TARGET"

    if [ "$FIRMWARE" = uefi ]; then
        mkdir -p "$TARGET/boot/efi"
        mount -t vfat "${DEVICE}1" "$TARGET/boot/efi"
    fi
}

install_base() {
    einfo "Installing Alpine Base"
    mkdir -p "$TARGET/etc/apk"
    echo "$REPOS" > "$TARGET/etc/apk/repositories"
    cp -a /etc/apk/keys "$TARGET/etc/apk"

    # shellcheck disable=SC2086
    for key in $REPO_KEYS; do
        wget -q $key -P "$TARGET/etc/apk/keys"
    done

    # shellcheck disable=SC2086
    apk --root "$TARGET" --initdb --no-cache add $PACKAGES_ADD
    # shellcheck disable=SC2086
    [ -z "$PACKAGES_NOSCRIPTS" ] || \
        apk --root "$TARGET" --no-cache --no-scripts add $PACKAGES_NOSCRIPTS
    # shellcheck disable=SC2086
    [ -z "$PACKAGES_DEL" ] || \
        apk --root "$TARGET" --no-cache del $PACKAGES_DEL
}

setup_chroot() {
    mount -t proc none "$TARGET/proc"
    mount --bind /dev "$TARGET/dev"
    mount --bind /sys "$TARGET/sys"

    # Needed for bootstrap, will be removed in the cleanup stage.
    install -Dm644 /etc/resolv.conf "$TARGET/etc/resolv.conf"
}

install_bootloader() {
    einfo "Installing Bootloader"

    # create initfs

    # shellcheck disable=SC2046
    kernel=$(basename $(find "$TARGET/lib/modules/"* -maxdepth 0))

    # ensure features can be found by mkinitfs
    for FEATURE in $INITFS_FEATURES; do
        # already taken care of?
        [ -f "$TARGET/etc/mkinitfs/features.d/$FEATURE.modules" ] || \
            [ -f "$TARGET/etc/mkinitfs/features.d/$FEATURE.files" ] && continue
        # find the kernel module directory
        module=$(chroot "$TARGET" /sbin/modinfo -k "$kernel" -n "$FEATURE")
        [ -z "$module" ] && die "initfs_feature '$FEATURE' kernel module not found"
        # replace everything after .ko with a *
        echo "$module" | cut -d/ -f5- | sed -e 's/\.ko.*/.ko*/' \
            > "$TARGET/etc/mkinitfs/features.d/$FEATURE.modules"
    done

    # TODO? this appends INITFS_FEATURES, we may want to allow removal someday?
    sed -Ei "s/^features=\"([^\"]+)\"/features=\"\1 $INITFS_FEATURES\"/" \
        "$TARGET/etc/mkinitfs/mkinitfs.conf"

    chroot "$TARGET" /sbin/mkinitfs "$kernel"

    if [ "$FIRMWARE" = uefi ]; then
        install_grub_efi
    else
        install_extlinux
    fi
}

install_extlinux() {
    # Use disk labels instead of UUID or device paths so that this works across
    # instance families. UUID works for many instances but breaks on the NVME
    # ones because EBS volumes are hidden behind NVME devices.
    #
    # Shorten timeout (1/10s), eliminating delays for instance launches.
    #
    # ttyS0 is for EC2 Console "Get system log" and "EC2 Serial Console"
    # features, whereas tty0 is for "Get Instance screenshot" feature. Enabling
    # the port early in extlinux gives the most complete output in the log.
    #
    # TODO: review for other clouds -- this may need to be cloud-specific.
    sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
        -e "s|^[# ]*(default_kernel_opts)=.*|\1=\"$KERNEL_OPTIONS\"|" \
        -e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
        -e "s|^[# ]*(modules)=.*|\1=$KERNEL_MODULES|" \
        -e "s|^[# ]*(default)=.*|\1=virt|" \
        -e "s|^[# ]*(timeout)=.*|\1=1|" \
        "$TARGET/etc/update-extlinux.conf"

    chroot "$TARGET" /sbin/extlinux --install /boot
    # TODO: is this really necessary? can we set all this stuff during --install?
    chroot "$TARGET" /sbin/update-extlinux --warn-only
}

install_grub_efi() {
    [ -d "/sys/firmware/efi" ] || die "/sys/firmware/efi does not exist"

    case "$ARCH" in
        x86_64)  grub_target=x86_64-efi ; fwa=x64 ;;
        aarch64) grub_target=arm64-efi ; fwa=aa64 ;;
        *) die "ARCH=$ARCH is currently unsupported" ;;
    esac

    # disable nvram so grub doesn't call efibootmgr
    chroot "$TARGET" /usr/sbin/grub-install --target="$grub_target" --efi-directory=/boot/efi \
        --bootloader-id=alpine --boot-directory=/boot --no-nvram

    # fallback mode
    install -D "$TARGET/boot/efi/EFI/alpine/grub$fwa.efi" "$TARGET/boot/efi/EFI/boot/boot$fwa.efi"

    # install default grub config
    envsubst < "$SETUP/grub.template" > "$SETUP/grub"
    install -o root -g root -Dm644 -t "$TARGET/etc/default" \
        "$SETUP/grub"

    # generate/install new config
    chroot "$TARGET" grub-mkconfig -o /boot/grub/grub.cfg
}

configure_system() {
    einfo "Configuring System"

    # default network configuration
    install -o root -g root -Dm644 -t "$TARGET/etc/network" "$SETUP/interfaces"

    # configure NTP server, if specified
    [ -n "$NTP_SERVER" ] && \
        sed -e 's/^pool /server /' -e "s/pool.ntp.org/$NTP_SERVER/g" \
            -i "$TARGET/etc/chrony/chrony.conf"

    # setup fstab
    install -o root -g root -Dm644 -t "$TARGET/etc" "$SETUP/fstab"
    # if we're using an EFI bootloader, add extra line for EFI partition
    if [ "$FIRMWARE" = uefi ]; then
        cat "$SETUP/fstab.grub-efi" >> "$TARGET/etc/fstab"
    fi

    # Disable getty for physical ttys, enable getty for serial ttyS0.
    sed -Ei -e '/^tty[0-9]/s/^/#/' -e '/^#ttyS0:/s/^#//' "$TARGET/etc/inittab"

    # setup sudo and/or doas
    if grep -q '^sudo$' "$TARGET/etc/apk/world"; then
        echo '%wheel ALL=(ALL) NOPASSWD: ALL' > "$TARGET/etc/sudoers.d/wheel"
    fi
    if grep -q '^doas$' "$TARGET/etc/apk/world"; then
        echo 'permit nopass :wheel' > "$TARGET/etc/doas.d/wheel.conf"
    fi

    # explicitly lock the root account
    chroot "$TARGET" /bin/sh -c "/bin/echo 'root:*' | /usr/sbin/chpasswd -e"
    chroot "$TARGET" /usr/bin/passwd -l root

    # set up image user
    user="${IMAGE_LOGIN:-alpine}"
    chroot "$TARGET" /usr/sbin/addgroup "$user"
    chroot "$TARGET" /usr/sbin/adduser -h "/home/$user" -s /bin/sh -G "$user" -D "$user"
    chroot "$TARGET" /usr/sbin/addgroup "$user" wheel
    chroot "$TARGET" /bin/sh -c "echo '$user:*' | /usr/sbin/chpasswd -e"

    # modify PS1s in /etc/profile to add user
    sed -Ei \
        -e "s/(^PS1=')(\\$\\{HOSTNAME%)/\\1\\$\\USER@\\2/" \
        -e "s/( PS1=')(\\\\h:)/\\1\\\\u@\\2/" \
        -e "s/( PS1=')(%m:)/\\1%n@\\2/" \
        "$TARGET"/etc/profile

    # write /etc/motd
    echo "$MOTD" > "$TARGET"/etc/motd

    setup_services
}

# shellcheck disable=SC2046
setup_services() {
    for lvl_svcs in $SERVICES_ENABLE; do
        rc add $(echo "$lvl_svcs" | tr '=,' ' ')
    done
    for lvl_svcs in $SERVICES_DISABLE; do
        rc del $(echo "$lvl_svcs" | tr '=,' ' ')
    done
}

rc() {
    op="$1"        # add or del
    runlevel="$2"  # runlevel name
    shift 2
    services="$*"  # names of services

    for svc in $services; do
        chroot "$TARGET" rc-update "$op" "$svc" "$runlevel"
    done
}

setup_builder
make_filesystem
install_base
setup_chroot
install_bootloader
configure_system
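`setup_services` above expects `$SERVICES_ENABLE` and `$SERVICES_DISABLE` to hold space-separated `runlevel=svc1,svc2` tokens; `tr '=,' ' '` flattens each token into the `op runlevel svc...` argument list that `rc` consumes. A standalone sketch of that parsing, with a stub `rc` that records what would run instead of calling `rc-update` in a chroot (the runlevels and services here are made-up example values):

```shell
#!/bin/sh -eu
# Stub rc(): same signature as in the setup script, but just echoes.
rc() {
    op="$1"; runlevel="$2"; shift 2
    for svc in "$@"; do
        echo "rc-update $op $svc $runlevel"
    done
}

SERVICES_ENABLE="boot=acpid default=chronyd,sshd"   # hypothetical value

# shellcheck disable=SC2046
for lvl_svcs in $SERVICES_ENABLE; do
    rc add $(echo "$lvl_svcs" | tr '=,' ' ')
done
# prints:
#   rc-update add acpid boot
#   rc-update add chronyd default
#   rc-update add sshd default
```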
48
alpine-cloud-images/scripts/setup-cloudinit
Executable file
@ -0,0 +1,48 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

TARGET=/mnt

einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

einfo "Installing cloud-init bootstrap components..."

# This adds the init scripts at the correct boot phases
chroot "$TARGET" /sbin/setup-cloud-init

# cloud-init locks our user by default, which means alpine can't log in via
# SSH. This seems like a bug in cloud-init that should be fixed, but we can
# hack around it for now here.
if [ -f "$TARGET"/etc/cloud/cloud.cfg ]; then
    sed -i '/lock_passwd:/s/True/False/' "$TARGET"/etc/cloud/cloud.cfg
fi

# configure the image for a particular cloud datasource
case "$CLOUD" in
    aws)
        DATASOURCE="Ec2"
        ;;
    nocloud)
        DATASOURCE="NoCloud"
        ;;
    azure)
        DATASOURCE="Azure"
        ;;
    gcp)
        DATASOURCE="GCE"
        ;;
    oci)
        DATASOURCE="Oracle"
        ;;
    *)
        echo "Unsupported Cloud '$CLOUD'" >&2
        exit 1
        ;;
esac

printf '\n\n# Cloud-Init will use default configuration for this DataSource\n' >> "$TARGET"/etc/cloud/cloud.cfg
printf 'datasource_list: ["%s"]\n' "$DATASOURCE" >> "$TARGET"/etc/cloud/cloud.cfg
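The `lock_passwd` workaround above is a plain sed substitution: it only rewrites `True` to `False` on lines that already contain `lock_passwd:`, leaving the rest of the YAML untouched. Sketched against a stand-in config file (the file content here is invented, and GNU/busybox `sed -i` semantics are assumed):

```shell
#!/bin/sh -eu
# Minimal stand-in for $TARGET/etc/cloud/cloud.cfg
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
system_info:
  default_user:
    name: alpine
    lock_passwd: True
EOF

# same substitution as setup-cloudinit
sed -i '/lock_passwd:/s/True/False/' "$cfg"

grep 'lock_passwd' "$cfg"    # -> lock_passwd: False
```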
21
alpine-cloud-images/scripts/setup-tiny
Executable file
@ -0,0 +1,21 @@
#!/bin/sh -eu
# vim: ts=4 et:

[ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

TARGET=/mnt

einfo() {
    printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
}

if [ "$VERSION" = "3.12" ]; then
    # tiny-cloud-network requires ifupdown-ng, not in 3.12
    einfo "Configuring Tiny EC2 Bootstrap..."
    echo "EC2_USER=$IMAGE_LOGIN" > "$TARGET"/etc/conf.d/tiny-ec2-bootstrap
else
    einfo "Configuring Tiny Cloud..."
    sed -i.bak -Ee "s/^#?CLOUD_USER=.*/CLOUD_USER=$IMAGE_LOGIN/" \
        "$TARGET"/etc/conf.d/tiny-cloud
    rm "$TARGET"/etc/conf.d/tiny-cloud.bak
fi
2
alpine-cloud-images/scripts/setup.d/fstab
Normal file
@ -0,0 +1,2 @@
# <fs>      <mountpoint>    <type>  <opts>              <dump/pass>
LABEL=/     /               ext4    defaults,noatime    1 1
1
alpine-cloud-images/scripts/setup.d/fstab.grub-efi
Normal file
@ -0,0 +1 @@
LABEL=EFI   /boot/efi   vfat    defaults,noatime,uid=0,gid=0,umask=077  0 0
5
alpine-cloud-images/scripts/setup.d/grub.template
Normal file
@ -0,0 +1,5 @@
GRUB_CMDLINE_LINUX_DEFAULT="modules=$KERNEL_MODULES $KERNEL_OPTIONS"
GRUB_DISABLE_RECOVERY=true
GRUB_DISABLE_SUBMENU=y
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
GRUB_TIMEOUT=0
7
alpine-cloud-images/scripts/setup.d/interfaces
Normal file
@ -0,0 +1,7 @@
# default alpine-cloud-images network configuration

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
@ -0,0 +1,44 @@
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:CopySnapshot",
                "ec2:Describe*",
                "ec2:ModifySnapshotAttribute",
                "ec2:RegisterImage",
                "kms:CreateGrant",
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:Encrypt",
                "kms:GenerateDataKey*",
                "kms:ReEncrypt*",
                "license-manager:GetLicenseConfiguration",
                "license-manager:ListLicenseSpecificationsForResource",
                "license-manager:UpdateLicenseSpecificationsForResource"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::alpine-cloud-images.*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::alpine-cloud-images.*/*"
        }
    ]
}
17
alpine-cloud-images/support/aws/iam_role_vmimport_trust.json
Normal file
@ -0,0 +1,17 @@
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "vmie.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:Externalid": "vmimport"
                }
            }
        }
    ]
}