Compare commits

5 Commits: 22cc3a226b ... a037ca3d1c

| Author | SHA1 | Date |
|---|---|---|
| Stefan Reimer | a037ca3d1c | |
| Stefan Reimer | c336c5109a | |
| Stefan Reimer | da6fb3afd1 | |
| Stefan Reimer | 6152828797 | |
| Stefan Reimer | 469804206a | |
@@ -1,7 +1,8 @@
-*~
-*.bak
-*.swp
+**/*~
+**/*.bak
+**/*.swp
 .DS_Store
 .vscode/
 /work/
 releases*yaml
+/*.yaml
@@ -154,6 +154,12 @@ For the official Alpine Linux cloud images, this is set to
 When building custom images, you **MUST** override **AT LEAST** this setting to
 avoid image import and publishing collisions.
 
+### `userhost` string
+
+This is the remote _user_@_host_ that is used for storing state, uploading
+files, and releasing official images.  Currently used by `storage_url` and
+`release_cmd`.
+
 ### `name` array
 
 The ultimate contents of this array contribute to the overall naming of the
@@ -193,10 +199,24 @@ Directories (under `work/scripts/`) that contain additional data that the
 `scripts` will need.  Packer will copy these to the VM responsible for setting
 up the variant image.
 
-### `size` string
+### `disk_size` array
 
-The size of the image disk, by default we use `1G` (1 GiB).  This disk may (or
-may not) be further partitioned, based on other factors.
+The sum of this array is the size of the image disk, specified in MiB; this
+allows different dimension variants to "bump up" the size of the image if
+extra space is needed.
 
 ### `image_format` string
 
+The format/extension of the disk image, i.e. `qcow2`, `vhd`, or `raw`.
+
+### `image_format_opts` string
+
+Some formats have additional options; currently `vhd/force-size` and
+`vhd/fixed_force-size` are defined.
+
+### `image_compress` string
+
+***TODO***
+
+### `login` string
+
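The `disk_size` semantics this hunk documents — an array whose MiB values are summed, letting dimension variants bump up the size — can be sketched in a few lines of Python. The contribution values below are taken from other hunks in this diff (116 base, 32 for aarch64, 64 for cloudinit); the dict-based model is a hypothetical illustration, not the project's actual config loader:

```python
# Hypothetical model of the `disk_size` array semantics described above:
# each dimension variant contributes MiB values, and the final image disk
# size is their sum.
disk_size_contributions = {
    "default": [116],              # base value (example from this diff)
    "arch/aarch64": [32],          # dimension "bump up" examples
    "bootstrap/cloudinit": [64],
}

def total_disk_size_mib(contributions):
    # the image disk size is the sum of every contributed value, in MiB
    return sum(v for values in contributions.values() for v in values)

print(total_disk_size_mib(disk_size_contributions))  # 212
```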
@@ -312,3 +332,23 @@ Currently, only the **aws** cloud module supports this.
 
 List of additional repository keys to trust during the package installation phase.
 This allows pulling in custom apk packages by simply specifying the repository name in the packages block.
+
+### `storage_url` string
+
+This is a URL that defines where the persistent state about images is stored,
+from `upload` through `release` steps (and beyond).  This allows one `build`
+session to pick up where another left off.  Currently, `ssh://` and `file://`
+URLs are supported.
+
+### `download_url` string
+
+This string is used for building download URLs for officially released images
+on the https://alpinelinux.org/cloud web page.
+
+### `signing_cmd` string
+
+Command template to cryptographically sign files.
+
+### `release_cmd` string
+
+Command template to complete the release of an image.
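A `storage_url` of the `ssh://` form described above can be split into the user, host, and path pieces a build session needs using the standard library; the URL here is a made-up example, and the real project's parsing may differ:

```python
from urllib.parse import urlparse

# Split a hypothetical ssh:// storage_url into user, host, and path.
url = "ssh://user@dev.example.org/public_html/alpine-cloud-images/3.19/aws/x86_64"
parts = urlparse(url)

print(parts.scheme)    # ssh
print(parts.username)  # user
print(parts.hostname)  # dev.example.org
print(parts.path)      # /public_html/alpine-cloud-images/3.19/aws/x86_64
```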
@@ -1,4 +1,4 @@
-Copyright (c) 2017-2022 Jake Buchholz Göktürk, Michael Crute
+Copyright (c) 2017-2024 Jake Buchholz Göktürk, Michael Crute
 
 Permission is hereby granted, free of charge, to any person obtaining a copy of
 this software and associated documentation files (the "Software"), to deal in
@@ -11,16 +11,17 @@ own customized images.
 To get started with official pre-built Alpine Linux cloud images, visit
 https://alpinelinux.org/cloud.  Currently, we build official images for the
 following cloud platforms...
-* AWS
+* Amazon Web Services (AWS)
 * Microsoft Azure
 * GCP (Google Cloud Platform)
 * OCI (Oracle Cloud Infrastructure)
 * NoCloud
 
 ...we are working on also publishing official images to other major cloud
 providers.
-Each image's name contains the Alpine version release, architecture, firmware,
-bootstrap, and image revision; a YAML metadata file containing these details
-and more is downloadable.
+
+Each published image's name contains the Alpine version release, architecture,
+firmware, bootstrap, and image revision.  These details (and more) are also
+tagged on the images...
 
-| Tag | Description / Values |
+| Key | Description / Values |
 |-----|----------------------|
 | name | `alpine-`_`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-r`_`revision`_ |
 | project | `https://alpinelinux.org/cloud` |
@@ -32,15 +33,20 @@ tagged on the images...
 | bootstrap | initial bootstrap system (`tiny` = Tiny Cloud) |
 | cloud | provider short name (`aws`) |
 | revision | image revision number |
 | built | image build timestamp |
+| uploaded | image storage timestamp |
 | imported | image import timestamp |
 | import_id | imported image id |
 | import_region | imported image region |
+| signed | image signing timestamp |
 | published | image publication timestamp |
 | released | image release timestamp _(won't be set until second publish)_ |
 | description | image description |
 
-Although AWS does not allow cross-account filtering by tags, the image name can
-still be used to filter images.  For example, to get a list of available Alpine
-3.x aarch64 images in AWS eu-west-2...
+Published AWS images are also tagged with this data, but other AWS accounts
+can't read these tags.  However, the image name can still be used to filter
+images to find what you're looking for.  For example, to get a list of
+available Alpine 3.x aarch64 images in AWS eu-west-2...
 ```
 aws ec2 describe-images \
   --region eu-west-2 \
@@ -61,28 +67,43 @@ To get just the most recent matching image, use...
 
 The build system consists of a number of components:
 
-* the primary `build` script
+* the primary `build` script, and other related libraries...
+  * `clouds/` - specific cloud provider plugins
+  * `alpine.py` - for getting the latest Alpine information
+  * `image_config_manager.py` - manages the collection of image configs
+  * `image_config.py` - individual image config functionality
+  * `image_storage.py` - persistent image/metadata storage
+  * `image_tags.py` - classes for working with image tags
+
 * the `configs/` directory, defining the set of images to be built
+
 * the `scripts/` directory, containing scripts and related data used to set up
   image contents during provisioning
-* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
-  of images
+
+* the Packer `alpine.pkr.hcl`, which orchestrates the various build steps
+  from `local` and beyond.
+
 * the `cloud_helper.py` script that Packer runs in order to do cloud-specific
-  import and publish operations
+  per-image operations, such as image format conversion, upload, publishing,
+  etc.
 
 ### Build Requirements
-* [Python](https://python.org) (3.9.7 is known to work)
-* [Packer](https://packer.io) (1.7.6 is known to work)
-* [QEMU](https://www.qemu.org) (6.1.0 is known to work)
-* cloud provider account(s)
+
+* [Python](https://python.org) (3.9.9 is known to work)
+* [Packer](https://packer.io) (1.9.4 is known to work)
+* [QEMU](https://www.qemu.org) (8.1.2 is known to work)
+* cloud provider account(s) _(for import/publish steps)_
 
 ### Cloud Credentials
 
-By default, the build system relies on the cloud providers' Python API
+Importing and publishing images relies on the cloud providers' Python API
 libraries to find and use the necessary credentials, usually via configuration
 under the user's home directory (i.e. `~/.aws/`, `~/.oci/`, etc.) or via
 environment variables (i.e. `AWS_...`, `OCI_...`, etc.)
 
+_Note that presently, importing and publishing to cloud providers is only
+supported for AWS images._
+
 The credentials' user/role needs sufficient permission to query, import, and
 publish images -- the exact details will vary from cloud to cloud.  _It is
 recommended that only the minimum required permissions are granted._
@@ -98,35 +119,36 @@ usage: build [-h] [--debug] [--clean] [--pad-uefi-bin-arch ARCH [ARCH ...]]
              [--custom DIR [DIR ...]] [--skip KEY [KEY ...]] [--only KEY [KEY ...]]
              [--revise] [--use-broker] [--no-color] [--parallel N]
              [--vars FILE [FILE ...]]
-             {configs,state,rollback,local,upload,import,publish,release}
+             {configs,state,rollback,local,upload,import,sign,publish,release}
 
 positional arguments:  (build up to and including this step)
   configs      resolve image build configuration
-  state        refresh current image build state
-  rollback     remove existing local/uploaded/imported images if un-published/released
+  state        report current build state of images
+  rollback     remove local/uploaded/imported images if not published or released
   local        build images locally
   upload       upload images and metadata to storage
 * import       import local images to cloud provider default region (*)
+  sign         cryptographically sign images
 * publish      set image permissions and publish to cloud regions (*)
   release      mark images as being officially released
 
 (*) may not apply to or be implemented for all cloud providers
 
 optional arguments:
   -h, --help   show this help message and exit
   --debug      enable debug output
   --clean      start with a clean work environment
   --pad-uefi-bin-arch ARCH [ARCH ...]
               pad out UEFI firmware to 64 MiB ('aarch64')
   --custom DIR [DIR ...]  overlay custom directory in work environment
   --skip KEY [KEY ...]    skip variants with dimension key(s)
   --only KEY [KEY ...]    only variants with dimension key(s)
-  --revise     remove existing local/uploaded/imported images if
-               un-published/released, or bump revision and rebuild
+  --revise     bump revision and rebuild if published or released
   --use-broker  use the identity broker to get credentials
   --no-color   turn off Packer color output
   --parallel N  build N images in parallel
   --vars FILE [FILE ...]  supply Packer with -vars-file(s) (default: [])
+  --disable STEP [STEP ...]  disable optional steps (default: [])
 ```
 
 The `build` script will automatically create a `work/` directory containing a
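The name-based image filtering mentioned earlier (AWS `describe-images` name filters accept shell-style wildcards) can be mimicked locally with `fnmatch`; the image names and pattern below are hypothetical examples following the documented naming scheme:

```python
import fnmatch

# Filter hypothetical image names the way an AWS name filter would,
# using shell-style wildcards.
names = [
    "alpine-3.19.0-aarch64-uefi-tiny-r0",
    "alpine-3.19.0-x86_64-bios-tiny-r0",
    "alpine-edge-aarch64-uefi-cloudinit-r0",
]
matches = fnmatch.filter(names, "alpine-3.*-aarch64-*")
print(matches)  # ['alpine-3.19.0-aarch64-uefi-tiny-r0']
```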
@@ -155,21 +177,20 @@ determines what actions need to be taken, and updates `work/images.yaml`.  A
 subset of image builds can be targeted by using the `--skip` and `--only`
 arguments.
 
-The `rollback` step, when used with `--revise` argument indicates that any
-_unpublished_ and _unreleased_ local, imported, or uploaded images should be
-removed and rebuilt.
+The `rollback` step will remove any imported, uploaded, or local images, but
+only if they are _unpublished_ and _unreleased_.
 
-As _published_ and _released_ images can't be removed, `--revise` can be used
-with `configs` or `state` to increment the _`revision`_ value to rebuild newly
-revised images.
+As _published_ and _released_ images can't be rolled back, `--revise` can be
+used to increment the _`revision`_ value to rebuild newly revised images.
 
-`local`, `upload`, `import`, `publish`, and `release` steps are orchestrated by
-Packer.  By default, each image will be processed serially; providing the
-`--parallel` argument with a value greater than 1 will parallelize operations.
-The degree to which you can parallelize `local` image builds will depend on the
-local build hardware -- as QEMU virtual machines are launched for each image
-being built.  Image `upload`, `import`, `publish`, and `release` steps are much
-more lightweight, and can support higher parallelism.
+`local`, `upload`, `import`, `publish`, `sign`, and `release` steps are
+orchestrated by Packer.  By default, each image will be processed serially;
+providing the `--parallel` argument with a value greater than 1 will
+parallelize operations.  The degree to which you can parallelize `local` image
+builds will depend on the local build hardware -- as QEMU virtual machines are
+launched for each image being built.  Image `upload`, `import`, `publish`,
+`sign`, and `release` steps are much more lightweight, and can support higher
+parallelism.
 
 The `local` step builds local images with QEMU, for those that are not already
 built locally or have already been imported.  Images are converted to formats
@@ -184,6 +205,9 @@ The `import` step imports the local images into the cloud providers' default
 regions, unless they've already been imported.  At this point the images are
 not available publicly, allowing for additional testing prior to publishing.
 
+The `sign` step will cryptographically sign the built image, using the command
+specified by the `signing_cmd` config value.
+
 The `publish` step copies the image from the default region to other regions,
 if they haven't already been copied there.  This step will always update
 image permissions, descriptions, tags, and deprecation date (if applicable)
@@ -193,7 +217,10 @@ in all regions where the image has been published.
 providers where this does not make sense (i.e. NoCloud) or for those which
 it has not yet been coded.
 
-The `release` step marks the images as being fully released.
+The `release` step simply marks the images as being fully released.  If there
+is a `release_cmd` specified, this is also executed, per image.  _(For the
+official Alpine releases, we have a `gen_mksite_release.py` script to convert
+the image data to a format that can be used by https://alpinelinux.org/cloud.)_
 
 ### The `cloud_helper.py` Script
 
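The `signing_cmd` and `release_cmd` settings this hunk refers to are command templates with `{file}`-style placeholders (the keybase example appears later in this diff). Assuming they are expanded with ordinary `str.format`-style substitution — an assumption, not something the diff confirms — the expansion looks like:

```python
# Expand a {file}-style command template; assumes str.format-style
# substitution, and the filename is a hypothetical example.
signing_cmd = "keybase pgp sign -d -i {file} -o {file}.asc"
cmd = signing_cmd.format(file="image.qcow2")
print(cmd)  # keybase pgp sign -d -i image.qcow2 -o image.qcow2.asc
```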
@@ -0,0 +1,16 @@
+* consider separating official Alpine Linux configuration into an overlay
+  to be applied via `--custom`.
+
+* add per-cloud documentation for importing images
+
+* figure out `image_compression`, especially for the weird case of GCP
+
+* clean up cloud modules now that `get_latest_imported_tags` isn't really
+  needed -- AWS publish_image still uses it to make sure the imported image
+  is actually there (and the right one), this can be made more specific.
+
+* do we still need to set `ntp_server` for AWS images, starting with 3.18.4?
+  _(or is this now handled via `dhcpcd`?)_
+
+* figure out rollback / `refresh_state()` for images that are already signed,
+  don't sign again unless directed to do so.
@@ -1,5 +1,14 @@
 # Alpine Cloud Images Packer Configuration
 
+packer {
+  required_plugins {
+    qemu = {
+      source  = "github.com/hashicorp/qemu"
+      version = "~> 1"
+    }
+  }
+}
+
 ### Variables
 
 # include debug output from provisioning/post-processing scripts
@@ -31,7 +40,7 @@ variable "qemu" {
 locals {
   # possible actions for the post-processor
   actions = [
-    "local", "upload", "import", "publish", "release"
+    "local", "upload", "import", "sign", "publish", "release"
   ]
 
   debug_arg = var.DEBUG == 0 ? "" : "--debug"
@@ -106,7 +115,7 @@ build {
 
     # results
     output_directory = "work/images/${B.value.cloud}/${B.value.image_key}"
-    disk_size        = B.value.size
+    disk_size        = B.value.disk_size
    format           = "qcow2"
    vm_name          = "image.qcow2"
  }
@@ -48,7 +48,8 @@ from image_config_manager import ImageConfigManager
 
 ### Constants & Variables
 
-STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'publish', 'release']
+STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'sign', 'publish', 'release']
+DISABLEABLE = ['import', 'sign', 'publish']
 LOGFORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
 WORK_CLEAN = {'bin', 'include', 'lib', 'pyvenv.cfg', '__pycache__'}
 WORK_OVERLAYS = ['configs', 'scripts']
@@ -62,6 +63,8 @@ UEFI_FIRMWARE = {
         'bin': 'usr/share/OVMF/OVMF.fd',
     }
 }
+PACKER_CACHE_DIR = 'work/packer_cache'
+PACKER_PLUGIN_PATH = 'work/packer_plugin'
 alpine = Alpine()
 
 
@@ -228,14 +231,15 @@ parser.add_argument(
     default=[], help='only variants with dimension key(s)')
 parser.add_argument(
     '--revise', action='store_true',
-    help='remove existing local/uploaded/imported image, or bump revision and '
-        ' rebuild if published or released')
+    help='bump revision and rebuild if published or released')
+# --revise is not needed after new revision is uploaded
 parser.add_argument(
     '--use-broker', action='store_true',
     help='use the identity broker to get credentials')
 # packer options
 parser.add_argument(
-    '--no-color', action='store_true', help='turn off Packer color output')
+    '--color', default=True, action=argparse.BooleanOptionalAction,
+    help='turn on/off Packer color output')
 parser.add_argument(
     '--parallel', metavar='N', type=int, default=1,
     help='build N images in parallel')
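The hunk above swaps the `store_true` `--no-color` flag for `argparse.BooleanOptionalAction` (Python 3.9+), which generates the `--no-` negation automatically; a minimal standalone demonstration:

```python
import argparse

# A --color flag that automatically gains a --no-color negation.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--color', default=True, action=argparse.BooleanOptionalAction,
    help='turn on/off color output')

print(parser.parse_args([]).color)              # True
print(parser.parse_args(['--no-color']).color)  # False
```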
@@ -245,6 +249,11 @@ parser.add_argument(
 # positional argument
 parser.add_argument(
     'step', choices=STEPS, help='build up to and including this step')
+# steps we may choose to not do
+parser.add_argument(
+    '--disable', metavar='STEP', nargs='+', action=remove_dupe_args(),
+    choices=DISABLEABLE, default=[], help='disable optional steps'
+)
 args = parser.parse_args()
 
 log = logging.getLogger('build')
@@ -256,9 +265,13 @@ console.setFormatter(logfmt)
 log.addHandler(console)
 log.debug(args)
 
-if args.step == 'rollback':
-    log.warning('"rollback" step enables --revise option')
-    args.revise = True
+if args.step == 'rollback' and args.revise:
+    log.error('"rollback" step does not support --revise option')
+    sys.exit(1)
+
+if 'import' in args.disable and 'publish' not in args.disable:
+    log.warning('--disable import also implicitly disables publish')
+    args.disable.append('publish')
 
 # set up credential provider, if we're going to use it
 if args.use_broker:
@@ -288,11 +301,11 @@ if args.step == 'configs':
 ### What needs doing?
 
 if not image_configs.refresh_state(
-        step=args.step, only=args.only, skip=args.skip, revise=args.revise):
+        args.step, args.disable, args.revise, args.only, args.skip):
     log.info('No pending actions to take at this time.')
     sys.exit(0)
 
-if args.step == 'state' or args.step == 'rollback':
+if args.step == 'state':
     sys.exit(0)
 
 # install firmware if missing
@@ -302,14 +315,32 @@ install_qemu_firmware()
 
 env = os.environ | {
     'TZ': 'UTC',
-    'PACKER_CACHE_DIR': 'work/packer_cache'
+    'PACKER_CACHE_DIR': PACKER_CACHE_DIR,
+    'PACKER_PLUGIN_PATH': PACKER_PLUGIN_PATH
 }
 
+if not os.path.exists(PACKER_PLUGIN_PATH):
+    packer_init_cmd = [ 'packer', 'init', '.' ]
+    log.info('Initializing Packer...')
+    log.debug(packer_init_cmd)
+    out = io.StringIO()
+    p = Popen(packer_init_cmd, stdout=PIPE, encoding='utf8', env=env)
+    while p.poll() is None:
+        text = p.stdout.readline()
+        out.write(text)
+        print(text, end="")
+
+    if p.returncode != 0:
+        log.critical('Packer Initialization Failure')
+        sys.exit(p.returncode)
+
+    log.info('Packer Initialized')
+
 packer_cmd = [
     'packer', 'build', '-timestamp-ui',
     '-parallel-builds', str(args.parallel)
 ]
-if args.no_color:
+if not args.color:
     packer_cmd.append('-color=false')
 
 if args.use_broker:
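The Packer-init block above streams the child's stdout line by line while also capturing it into a `StringIO`. The same pattern can be demonstrated with a trivial child process; a final `read()` is added here to drain anything still buffered when the child exits:

```python
import io
import sys
from subprocess import PIPE, Popen

# Stream and capture a child process's output simultaneously.
p = Popen([sys.executable, '-c', 'print("hello")'], stdout=PIPE, encoding='utf8')
out = io.StringIO()
while p.poll() is None:
    out.write(p.stdout.readline())
out.write(p.stdout.read())  # drain whatever remains after the child exits

print(out.getvalue().strip())  # hello
```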
@@ -339,9 +370,8 @@ if p.returncode != 0:
 log.info('Packer Completed')
 
-# update final state in work/images.yaml
+# TODO: do we need to do all of this or just save all the image_configs?
 image_configs.refresh_state(
-    step='final',
+    'final',
     only=args.only,
     skip=args.skip
 )
@@ -37,7 +37,7 @@ from image_config_manager import ImageConfigManager
 
 ### Constants & Variables
 
-ACTIONS = ['local', 'upload', 'import', 'publish', 'release']
+ACTIONS = ['local', 'upload', 'import', 'sign', 'publish', 'release']
 LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'
 
 
@@ -76,23 +76,29 @@ yaml.explicit_start = True
 
 for image_key in args.image_keys:
     image_config = configs.get(image_key)
+    image_config.load_local_metadata()  # if it exists
+
+    if args.action in ["import", "sign"] and not image_config.image_path.exists():
+        # if we don't have the image locally, retrieve it from storage
+        image_config.retrieve_image()
 
     if args.action == 'local':
         image_config.convert_image()
 
     elif args.action == 'upload':
-        if image_config.storage:
-            image_config.upload_image()
+        image_config.upload_image()
 
     elif args.action == 'import':
         clouds.import_image(image_config)
 
-    elif args.action == 'publish':
+    elif args.action == 'sign':
+        image_config.sign_image()
+
+    elif args.action == 'publish' and 'publish' in clouds.actions(image_config):
         clouds.publish_image(image_config)
 
     elif args.action == 'release':
-        pass
-        # TODO: image_config.release_image() - configurable steps to take on remote host
+        image_config.release_image()
 
     # save per-image metadata
     image_config.save_metadata(args.action)
@@ -31,7 +31,7 @@ def set_credential_provider(debug=False):
 
 ### forward to the correct adapter
 
-# TODO: latest_imported_tags(...)
+# TODO: deprecate/remove
 def get_latest_imported_tags(config):
     return ADAPTERS[config.cloud].get_latest_imported_tags(
         config.project,
@@ -49,3 +49,7 @@ def delete_image(config, image_id):
 
 def publish_image(config):
     return ADAPTERS[config.cloud].publish_image(config)
+
+# supported actions
+def actions(config):
+    return ADAPTERS[config.cloud].ACTIONS
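The `ACTIONS` lists added in the surrounding hunks let callers check whether a cloud adapter supports an operation before invoking it. A simplified sketch of that gating, with stand-in classes rather than the project's real adapters:

```python
# Base adapter supports no provider-side actions; concrete adapters
# override ACTIONS with what they implement.
class CloudAdapterInterface:
    ACTIONS = []

class AWSCloudAdapter(CloudAdapterInterface):
    ACTIONS = ['import', 'publish']

ADAPTERS = {'aws': AWSCloudAdapter(), 'nocloud': CloudAdapterInterface()}

def actions(cloud):
    # mirrors clouds.actions(config) in the diff, keyed by cloud name here
    return ADAPTERS[cloud].ACTIONS

print('publish' in actions('aws'))      # True
print('publish' in actions('nocloud'))  # False
```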
@@ -30,6 +30,10 @@ class AWSCloudAdapter(CloudAdapterInterface):
         'bios': 'legacy-bios',
         'uefi': 'uefi',
     }
+    ACTIONS = [
+        'import',
+        'publish',
+    ]
 
     @property
     def sdk(self):
@@ -96,6 +100,7 @@ class AWSCloudAdapter(CloudAdapterInterface):
         tags = ImageTags(from_list=i.tags)
         return DictObj({k: tags.get(k, None) for k in self.IMAGE_INFO})
 
+    # TODO: deprecate/remove
     # get the latest imported image's tags for a given build key
     def get_latest_imported_tags(self, project, image_key):
         images = self._get_images_with_tags(
@@ -110,10 +115,12 @@ class AWSCloudAdapter(CloudAdapterInterface):
 
     # import an image
     # NOTE: requires 'vmimport' role with read/write of <s3_bucket>.* and its objects
-    def import_image(self, ic):
-        log = logging.getLogger('import')
-        description = ic.image_description
+    def import_image(self, ic, log=None):
+        # if we try reimport from publish, we already have a log going
+        if not log:
+            log = logging.getLogger('import')
 
+        description = ic.image_description
         session = self.session()
         s3r = session.resource('s3')
         ec2c = session.client('ec2')
@@ -200,7 +207,7 @@ class AWSCloudAdapter(CloudAdapterInterface):
             }],
             Description=description,
             EnaSupport=True,
-            Name=ic.image_name,
+            Name=tags.name,
             RootDeviceName='/dev/xvda',
             SriovNetSupport='simple',
             VirtualizationType='hvm',
@@ -222,6 +229,10 @@ class AWSCloudAdapter(CloudAdapterInterface):
             tags.import_id = image_id
             tags.import_region = ec2c.meta.region_name
             image.create_tags(Tags=tags.as_list())
+            # update image config with import information
+            ic.imported = tags.imported
+            ic.import_id = tags.import_id
+            ic.import_region = tags.import_region
         except Exception:
             log.error('Unable to tag image:', exc_info=True)
             log.info('Removing image and snapshot')
@@ -229,9 +240,6 @@ class AWSCloudAdapter(CloudAdapterInterface):
             snapshot.delete()
             raise
 
-        # update ImageConfig with imported tag values, minus special AWS 'Name'
-        tags.pop('Name', None)
-        ic.__dict__ |= tags
-
     # delete an (unpublished) image
     def delete_image(self, image_id):
@@ -252,9 +260,18 @@ class AWSCloudAdapter(CloudAdapterInterface):
             ic.project,
             ic.image_key,
         )
-        if not source_image:
-            log.error('No source image for %s', ic.image_key)
-            raise RuntimeError('Missing source image')
+        # TODO: might be the wrong source image?
+        if not source_image or source_image.name != ic.tags.name:
+            log.warning('No source image for %s, reimporting', ic.tags.name)
+            # TODO: try importing it again?
+            self.import_image(ic, log)
+            source_image = self.get_latest_imported_tags(
+                ic.project,
+                ic.image_key,
+            )
+            if not source_image or source_image.name != ic.tags.name:
+                log.error('No source image for %s', ic.tags.name)
+                raise RuntimeError('Missing source image')
 
         source_id = source_image.import_id
         source_region = source_image.import_region
@@ -390,7 +407,9 @@ class AWSCloudAdapter(CloudAdapterInterface):
             time.sleep(copy_wait)
             copy_wait = 30
 
+        # update image config with published information
+        ic.artifacts = artifacts
         ic.published = datetime.utcnow().isoformat()
 
 
 def register(cloud, cred_provider=None):
@@ -2,6 +2,8 @@
 
 class CloudAdapterInterface:
 
+    ACTIONS = []
+
     def __init__(self, cloud, cred_provider=None):
         self._sdk = None
         self._sessions = {}
@@ -5,14 +5,15 @@
 # *AT LEAST* the 'project' setting with a unique identifier string value
 # via a "config overlay" to avoid image import and publishing collisions.
 
 project  = "https://alpinelinux.org/cloud"
+userhost = "tomalok@dev.alpinelinux.org"
 
 # all build configs start with these
 Default {
   project = ${project}
 
   # image name/description components
-  name        = [ alpine ]
+  name        = [ "{cloud}_alpine" ]
   description = [ Alpine Linux ]
 
   motd {
@@ -34,16 +35,19 @@ Default {
   scripts     = [ setup ]
   script_dirs = [ setup.d ]
 
-  size   = 1G
+  disk_size      = [116]
+  image_format   = qcow2
+  image_compress = bz2
 
   login  = alpine
 
-  image_format  = qcow2
-
-  # these paths are subject to change, as image downloads are developed
-  storage_url   = "ssh://tomalok@dev.alpinelinux.org/public_html/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
-  #storage_url  = "file://~jake/tmp/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
-  download_url  = "https://dev.alpinelinux.org/~tomalok/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"  # development
-  #download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/cloud/{cloud}/{arch}"
+  # storage_url contents are authoritative!
+  storage_url  = "ssh://"${userhost}"/public_html/alpine-cloud-images/{v_version}/{cloud}/{arch}"
+  # released images are available here
+  download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/releases/cloud"
+  signing_cmd  = "keybase pgp sign -d -i {file} -o {file}.asc"
+  release_cmd  = ssh ${userhost} "bin/release-image {v_version} {cloud} {arch} {base}"
 
   # image access
   access.PUBLIC = true
@@ -55,10 +59,10 @@ Default {
 # profile build matrix
 Dimensions {
   version {
+    "3.19" { include required("version/3.19.conf") }
     "3.18" { include required("version/3.18.conf") }
     "3.17" { include required("version/3.17.conf") }
     "3.16" { include required("version/3.16.conf") }
     "3.15" { include required("version/3.15.conf") }
-    "3.14" { include required("version/3.14.conf") }
     edge { include required("version/edge.conf") }
   }
   arch {
@@ -75,8 +79,8 @@ Dimensions {
   }
   cloud {
     aws { include required("cloud/aws.conf") }
+    # considered beta...
+    nocloud { include required("cloud/nocloud.conf") }
     # these are considered "alpha"
     azure { include required("cloud/azure.conf") }
     gcp { include required("cloud/gcp.conf") }
     oci { include required("cloud/oci.conf") }
@@ -94,12 +98,4 @@ Mandatory {
 
   # final provisioning script
   scripts = [ cleanup ]
-
-  # TODO: remove this after testing
-  #access.PUBLIC = false
-  #regions {
-  #  ALL = false
-  #  us-west-2 = true
-  #  us-east-1 = true
-  #}
 }
@@ -2,6 +2,8 @@
 name = [aarch64]
 arch_name = aarch64
 
+disk_size = [32]
+
 # aarch64 is UEFI only
 EXCLUDE = [bios]
 
@@ -13,3 +15,8 @@ qemu.args = [
   [-device, usb-ehci],
   [-device, usb-kbd],
 ]
+
+kernel_options {
+  "console=ttyS0,115200n8" = false
+  "console=ttyAMA0,115200n8" = true
+}
@@ -3,15 +3,31 @@ name = [cloudinit]
 bootstrap_name = cloud-init
 bootstrap_url = "https://cloud-init.io"
 
+disk_size = [64]
+
 # start cloudinit images with 3.15
 EXCLUDE = ["3.12", "3.13", "3.14"]
 
 packages {
   cloud-init = true
   dhclient = true  # officially supported, for now
+  dhcpcd = null  # unsupported, for now
   openssh-server-pam = true
   e2fsprogs-extra = true  # for resize2fs
 }
 
+WHEN.nocloud {
+  # fix for "failed to mount /dev/sr0 when looking for data"
+  # @see https://git.alpinelinux.org/aports/tree/community/cloud-init/README.Alpine
+  packages.mount = true
+  WHEN {
+    "3.15 3.16" {
+      packages.mount = null
+      packages.util-linux-misc = true
+    }
+  }
+}
+
 services.default.cloud-init-hotplugd = true
 
 scripts = [ setup-cloudinit ]
@@ -3,23 +3,22 @@ name = [tiny]
bootstrap_name = Tiny Cloud
bootstrap_url = "https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud"

services {
  sysinit.tiny-cloud-early = true
  default.tiny-cloud = true
  default.tiny-cloud-final = true
}

WHEN {
  "3.13 3.14 3.15 3.16 3.17" {
    # tiny-cloud < 3.0.0 doesn't have --setup option
    services.boot.tiny-cloud-early = true
    services.default.tiny-cloud = true
    services.default.tiny-cloud-final = true
  }
  aws {
    packages.tiny-cloud-aws = true
    WHEN {
      "3.12" {
        # tiny-cloud-network requires ifupdown-ng (unavailable in 3.12),
        # fall back to the old tiny-ec2-bootstrap package
        packages.tiny-cloud-aws = null
        services.sysinit.tiny-cloud-early = null
        services.boot.tiny-cloud-early = null
        services.default.tiny-cloud = null
        services.default.tiny-cloud-final = null
        # fall back to tiny-ec2-bootstrap instead
        packages.tiny-ec2-bootstrap = true
        services.default.tiny-ec2-bootstrap = true
      }

@@ -1,6 +1,7 @@
 # vim: ts=2 et:
 cloud_name = Amazon Web Services
 image_format = vhd
+image_format_opts = vhd/force-size

 kernel_modules {
   ena = true

@@ -21,6 +22,14 @@ ntp_server = 169.254.169.123
 access.PUBLIC = true
 regions.ALL = true

+# limit edge publishing
+WHEN.edge {
+  access.PUBLIC = false
+  regions.ALL = false
+  regions.us-west-2 = true
+  regions.us-east-1 = true
+}
+
 cloud_region_url = "https://{region}.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId={image_id}",
 cloud_launch_url = "https://{region}.console.aws.amazon.com/ec2/home#launchAmi={image_id}"

@@ -36,5 +45,10 @@ WHEN {
       initfs_features.gpio_pl061 = false
     }
   }
+  # AWS is weird, other aarch64 use ttyAMA0
+  kernel_options {
+    "console=ttyAMA0,115200n8" = false
+    "console=ttyS0,115200n8" = true
+  }
 }
}

@@ -1,6 +1,7 @@
 # vim: ts=2 et:
-cloud_name = Microsoft Azure (alpha)
+cloud_name = Microsoft Azure (beta)
 image_format = vhd
+image_format_opts = vhd/fixed_force-size

 # start with 3.18
 EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

@@ -1,11 +1,12 @@
 # vim: ts=2 et:
-cloud_name = Google Cloud Platform (alpha)
+cloud_name = Google Cloud Platform (beta)
 # TODO: https://cloud.google.com/compute/docs/import/importing-virtual-disks
 #   Mentions "VHD" but also mentions "..." if that also includes QCOW2, then
 #   we should use that instead.  The "Manual Import" section on the sidebar
 #   has a "Manually import boot disks" subpage which also mentions importing
 #   compressed raw images...  We would prefer to avoid that if possible.
-image_format = vhd
+image_format = raw
+image_compress = tar.gz

 # start with 3.18
 EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

@@ -1,5 +1,5 @@
 # vim: ts=2 et:
-cloud_name = NoCloud
+cloud_name = NoCloud (beta)
 image_format = qcow2

 # start with 3.18

@@ -1,5 +1,5 @@
 # vim: ts=2 et:
-cloud_name = Oracle Cloud Infrastructure (alpha)
+cloud_name = Oracle Cloud Infrastructure (beta)
 image_format = qcow2

 # start with 3.18

@@ -2,6 +2,8 @@
 name = [uefi]
 firmware_name = UEFI

+disk_size = [16]
+
 bootloader = grub-efi
 packages {
   grub-efi = --no-scripts

@@ -2,6 +2,8 @@

 include required("base/3.conf")

+end_of_life: 2023-12-05  # to fix openssl CVEs past original EOL
+
 motd {
   sudo_deprecated = "NOTE: 'sudo' has been deprecated, please use 'doas' instead."
 }

@@ -0,0 +1,7 @@
+# vim: ts=2 et:
+
+include required("base/5.conf")
+
+motd {
+  sudo_removed = "NOTE: 'sudo' is not installed by default, please use 'doas' instead."
+}

@@ -0,0 +1,7 @@
+# vim: ts=2 et:
+
+include required("base/5.conf")
+
+motd {
+  sudo_removed = "NOTE: 'sudo' is not installed by default, please use 'doas' instead."
+}

@@ -6,6 +6,8 @@ repos {
   "https://dl-cdn.alpinelinux.org/alpine/v{version}/testing" = false
 }

+repo_keys = []
+
 packages {
   alpine-base = true
   linux-virt = true

@@ -2,7 +2,5 @@

 include required("4.conf")

-packages {
-  # start using dhcpcd for improved IPv6 experience
-  dhcpcd = true
-}
+# start using dhcpcd for improved IPv6 experience
+packages.dhcpcd = true

@@ -11,7 +11,7 @@ import textwrap
 NOTE = textwrap.dedent("""
     This script's output provides a mustache-ready datasource to alpine-mksite
     (https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite) and should be
-    run after the main 'build' script has published ALL images.
+    run after the main 'build' script has released ALL images.
     STDOUT from this script should be saved as 'cloud/releases.yaml' in the
     above alpine-mksite repo.
     """)

@@ -87,7 +87,7 @@ configs = ImageConfigManager(
     log='gen_mksite_releases'
 )
 # make sure images.yaml is up-to-date with reality
-configs.refresh_state('final')
+configs.refresh_state('final', skip=['edge'])

 yaml = YAML()

@@ -97,19 +97,22 @@ data = {}

 log.info('Transforming image data')
 for i_key, i_cfg in configs.get().items():
-    if not i_cfg.published:
+    if not i_cfg.released:
         continue

+    released = i_cfg.uploaded.split('T')[0]
+
     version = i_cfg.version
     if version == 'edge':
         continue

     image_name = i_cfg.image_name
     release = i_cfg.release
     arch = i_cfg.arch
     firmware = i_cfg.firmware
     bootstrap = i_cfg.bootstrap
     cloud = i_cfg.cloud
+    # key on "variant" (but do not include cloud!)
+    variant = f"{release} {arch} {firmware} {bootstrap}"

     if cloud not in filters['clouds']:
         filters['clouds'][cloud] = {

@@ -117,8 +120,6 @@ for i_key, i_cfg in configs.get().items():
             'cloud_name': i_cfg.cloud_name,
         }

-    filters['regions'] = {}
-
     if arch not in filters['archs']:
         filters['archs'][arch] = {
             'arch': arch,

@@ -137,9 +138,33 @@ for i_key, i_cfg in configs.get().items():
             'bootstrap_name': i_cfg.bootstrap_name,
         }

+    if i_cfg.artifacts:
+        versions[version] |= {
+            'version': version,
+            'release': release,
+            'end_of_life': i_cfg.end_of_life,
+        }
+        versions[version]['images'][variant] |= {
+            'variant': variant,
+            'arch': arch,
+            'firmware': firmware,
+            'bootstrap': bootstrap,
+            #'released': i_cfg.released.split('T')[0],  # just the date
+            'released': released
+        }
+        versions[version]['images'][variant]['downloads'][cloud] |= {
+            'cloud': cloud,
+            'image_name': i_cfg.image_name,
+            'image_format': i_cfg.image_format,
+            'image_url': i_cfg.download_url + '/' + (i_cfg.image_name)
+        }
+
     # TODO: not all clouds will have artifacts
     if i_cfg._get('artifacts'):
         log.debug("ARTIFACTS: %s", i_cfg.artifacts)
         for region, image_id in {r: i_cfg.artifacts[r] for r in sorted(i_cfg.artifacts)}.items():
             log.debug("REGION: %s", region)
             if region not in filters['regions']:
                 log.debug("not in filters['region']")
                 filters['regions'][region] = {
                     'region': region,
                     'clouds': [cloud],

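The `|=` merges in the transform loop only work on a not-yet-built tree because `versions` comes from `gen_mksite_releases.py`'s recursive-defaultdict helper (`dictfactory`, defined in full elsewhere in this compare). A minimal sketch of that idiom, with an invented variant key for illustration:

```python
from collections import defaultdict

# recursive defaultdict: any missing key yields another dictfactory() level
def dictfactory():
    return defaultdict(dictfactory)

versions = dictfactory()
# the deep path springs into existence; |= merges into the (empty) leaf dict
versions['3.18']['images']['3.18.0 x86_64 uefi tiny'] |= {'arch': 'x86_64'}
versions['3.18']['images']['3.18.0 x86_64 uefi tiny'] |= {'firmware': 'uefi'}
```

Without the recursive defaultdict, every `versions[version]['images'][variant] |= ...` would need explicit `setdefault()` calls at each level first.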
@@ -148,24 +173,7 @@ for i_key, i_cfg in configs.get().items():
             if cloud not in filters['regions'][region]['clouds']:
                 filters['regions'][region]['clouds'].append(cloud)

-            versions[version] |= {
-                'version': version,
-                'release': release,
-                'end_of_life': i_cfg.end_of_life,
-            }
-            versions[version]['images'][image_name] |= {
-                'image_name': image_name,
-                'arch': arch,
-                'firmware': firmware,
-                'bootstrap': bootstrap,
-                'published': i_cfg.published.split('T')[0],  # just the date
-            }
-            versions[version]['images'][image_name]['downloads'][cloud] |= {
-                'cloud': cloud,
-                'image_format': i_cfg.image_format,
-                'image_url': i_cfg.download_url + '/' + (i_cfg.image_name)
-            }
-            versions[version]['images'][image_name]['regions'][region] |= {
+            versions[version]['images'][variant]['regions'][region] |= {
                 'cloud': cloud,
                 'region': region,
                 'region_url': i_cfg.region_url(region, image_id),

@@ -191,21 +199,21 @@ versions = undictfactory(versions)
 for version in sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.split('.')]):
     images = versions[version].pop('images')
     i = []
-    for image_name in images:  # order as they appear in work/images.yaml
-        downloads = images[image_name].pop('downloads')
+    for variant in images:  # order as they appear in work/images.yaml
+        downloads = images[variant].pop('downloads')
         d = []
         for download in downloads:
             d.append(downloads[download])

-        images[image_name]['downloads'] = d
+        images[variant]['downloads'] = d

-        regions = images[image_name].pop('regions')
+        regions = images[variant].pop('regions', [])
         r = []
         for region in sorted(regions):
             r.append(regions[region])

-        images[image_name]['regions'] = r
-        i.append(images[image_name])
+        images[variant]['regions'] = r
+        i.append(images[variant])

     versions[version]['images'] = i
     data['versions'].append(versions[version])

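This final loop flattens the keyed `downloads` and `regions` dicts into plain lists because the mustache templates consumed by alpine-mksite iterate arrays, not maps. A standalone sketch of the same transformation (the sample data is invented for illustration):

```python
# one image variant, keyed the way the transform loop builds it
images = {
    '3.18.0 x86_64 uefi tiny': {
        'variant': '3.18.0 x86_64 uefi tiny',
        'downloads': {'aws': {'cloud': 'aws'}},
        'regions': {'us-west-2': {'region': 'us-west-2'},
                    'us-east-1': {'region': 'us-east-1'}},
    },
}

i = []
for variant in images:
    # pop the dicts, re-attach them as lists (regions sorted by name)
    downloads = images[variant].pop('downloads')
    images[variant]['downloads'] = [downloads[d] for d in downloads]
    regions = images[variant].pop('regions', [])
    images[variant]['regions'] = [regions[r] for r in sorted(regions)]
    i.append(images[variant])
```

The `.pop('regions', [])` default matters: download-only clouds (no per-region artifacts) simply produce an empty `regions` list.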
@@ -0,0 +1,190 @@
#!/usr/bin/env python3
# vim: ts=4 et:

# NOTE: this is an experimental work-in-progress

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import textwrap

NOTE = textwrap.dedent("""
    Experimental: Outputs image cache YAML on STDOUT for use with prune-images.py
    """)

sys.pycache_prefix = 'work/__pycache__'

if not os.path.exists('work'):
    print('FATAL: Work directory does not exist.', file=sys.stderr)
    print(NOTE, file=sys.stderr)
    exit(1)

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment

import argparse
import logging
import re
import time
from collections import defaultdict
from ruamel.yaml import YAML

import clouds


### Constants & Variables

CLOUDS = ['aws']
LOGFORMAT = '%(asctime)s - %(levelname)s - %(message)s'

RE_ALPINE = re.compile(r'^(?:aws_)?alpine-')
RE_RELEASE = re.compile(r'-(edge|[\d\.]+)-')
RE_REVISION = re.compile(r'-r?(\d+)$')
RE_STUFF = re.compile(r'(edge|[\d+\.]+)(?:_rc(\d+))?-(.+)-r?(\d+)$')


### Functions

# allows us to set values deep within an object that might not be fully defined
def dictfactory():
    return defaultdict(dictfactory)


# undo dictfactory() objects to normal objects
def undictfactory(o):
    if isinstance(o, defaultdict):
        o = {k: undictfactory(v) for k, v in o.items()}
    return o


### Command Line & Logging

parser = argparse.ArgumentParser(description=NOTE)
parser.add_argument('--debug', action='store_true', help='enable debug output')
parser.add_argument('--cloud', choices=CLOUDS, required=True, help='cloud provider')
parser.add_argument('--region', help='specific region, instead of all regions')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
args = parser.parse_args()

log = logging.getLogger()
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
console = logging.StreamHandler()
logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
logfmt.converter = time.gmtime
console.setFormatter(logfmt)
log.addHandler(console)
log.debug(args)

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

# what region(s)?
regions = clouds.ADAPTERS[args.cloud].regions
if args.region:
    if args.region not in regions:
        log.error('invalid region: %s', args.region)
        exit(1)
    else:
        regions = [args.region]

filters = {
    'Owners': ['self'],
    'Filters': [
        {'Name': 'state', 'Values': ['available']},
    ]
}

data = dictfactory()
now = time.gmtime()

for region in sorted(regions):
    # TODO: make more generic if we need to do this for other clouds someday
    ec2r = clouds.ADAPTERS[args.cloud].session(region).resource('ec2')
    images = sorted(ec2r.images.filter(**filters), key=lambda k: k.creation_date)
    log.info(f'--- {region} : {len(images)} ---')
    version = release = revision = None

    for image in images:
        latest = data[region]['latest']     # shortcut

        # information about the image
        id = image.id
        name = image.name

        # only consider images named /^alpine-/
        if not RE_ALPINE.search(image.name):
            log.warning(f'IGNORING {region}\t{id}\t{name}')
            continue

        # parse image name for more information
        # NOTE: we can't rely on tags, because they may not have been set successfully
        m = RE_STUFF.search(name)
        if not m:
            log.error(f'!PARSE\t{region}\t{id}\t{name}')
            continue

        release = m.group(1)
        rc = m.group(2)
        version = '.'.join(release.split('.')[0:2])
        variant = m.group(3)
        revision = m.group(4)
        variant_key = '-'.join([version, variant])
        release_key = revision if release == 'edge' else '-'.join([release, revision])

        last_launched_attr = image.describe_attribute(Attribute='lastLaunchedTime')['LastLaunchedTime']
        last_launched = last_launched_attr.get('Value', 'Never')

        eol = None  # we don't know for sure, unless we have a deprecation time
        if image.deprecation_time:
            eol = time.strptime(image.deprecation_time, '%Y-%m-%dT%H:%M:%S.%fZ') < now

        # keep track of images
        data[region]['images'][id] = {
            'name': name,
            'release': release,
            'version': version,
            'variant': variant,
            'revision': revision,
            'variant_key': variant_key,
            'release_key': release_key,
            'created': image.creation_date,
            'launched': last_launched,
            'deprecated': image.deprecation_time,
            'rc': rc is not None,
            'eol': eol,
            'private': not image.public,
            'snapshot_id': image.block_device_mappings[0]['Ebs']['SnapshotId']
        }

        # keep track of the latest release_key per variant_key
        if variant_key not in latest or (release > latest[variant_key]['release']) or (release == latest[variant_key]['release'] and revision > latest[variant_key]['revision']):
            data[region]['latest'][variant_key] = {
                'release': release,
                'revision': revision,
                'release_key': release_key
            }

        log.info(f'{region}\t{not image.public}\t{eol}\t{last_launched.split("T")[0]}\t{name}')

# instantiate YAML
yaml = YAML()
yaml.explicit_start = True

# TODO? dump out to a file instead of STDOUT?
yaml.dump(undictfactory(data), sys.stdout)

total = 0
for region, rdata in sorted(data.items()):
    count = len(rdata['images'])
    log.info(f'{region} : {count} images')
    total += count

log.info(f'TOTAL : {total} images')

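The `RE_STUFF` pattern above does most of the script's parsing work, recovering release, rc, variant, and revision from an image name. A sketch with a sample name that follows the `alpine-<release>-<variant>-r<revision>` convention (the specific name below is illustrative):

```python
import re

# pattern copied from the script above
RE_STUFF = re.compile(r'(edge|[\d+\.]+)(?:_rc(\d+))?-(.+)-r?(\d+)$')

m = RE_STUFF.search('alpine-3.18.0-x86_64-uefi-cloudinit-r0')
release, rc, variant, revision = m.groups()

# derive the cache keys the same way the script does
version = '.'.join(release.split('.')[0:2])
variant_key = '-'.join([version, variant])
release_key = revision if release == 'edge' else '-'.join([release, revision])
```

The greedy `(.+)` backtracks just far enough for the trailing `-r?(\d+)$` to claim the final revision suffix, so embedded hyphens in the variant are preserved.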
@@ -1,6 +1,5 @@
 # vim: ts=4 et:

-import hashlib
 import mergedeep
 import os
 import pyhocon

@@ -19,15 +18,38 @@ class ImageConfig():

     CONVERT_CMD = {
         'qcow2': ['ln', '-f'],
-        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc', '-o', 'force_size=on'],
+        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc'],
+        'raw': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw'],
     }
+    CONVERT_OPTS = {
+        None: [],
+        'vhd/fixed_force-size': ['-o', 'subformat=fixed,force_size'],
+        'vhd/force-size': ['-o', 'force_size=on'],
+    }
+    COMPRESS_CMD = {
+        'bz2': ['bzip2', '-c']
+    }
+    DECOMPRESS_CMD = {
+        'bz2': ['bzip2', '-dc']
+    }
     # these tags may-or-may-not exist at various times
     OPTIONAL_TAGS = [
-        'built', 'uploaded', 'imported', 'import_id', 'import_region', 'published', 'released'
+        'built', 'uploaded', 'imported', 'import_id', 'import_region',
+        'signed', 'published', 'released'
     ]
     STEPS = [
-        'local', 'upload', 'import', 'publish', 'release'
+        'local', 'upload', 'import', 'sign', 'publish', 'release'
     ]
+    # we expect these to be available
+    DEFAULT_OBJ = {
+        'built': None,
+        'uploaded': None,
+        'imported': None,
+        'signed': None,
+        'published': None,
+        'released': None,
+        'artifacts': None,
+    }

     def __init__(self, config_key, obj={}, log=None, yaml=None):
         self._log = log

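With these tables, the final conversion command is plain list concatenation: `CONVERT_CMD[image_format] + CONVERT_OPTS[image_format_opts] + [src, dst]`. A sketch of what that produces for an AWS-style `vhd` image using `vhd/force-size` (the file paths are placeholders):

```python
# tables copied from the class constants above
CONVERT_CMD = {
    'qcow2': ['ln', '-f'],
    'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc'],
    'raw': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw'],
}
CONVERT_OPTS = {
    None: [],
    'vhd/fixed_force-size': ['-o', 'subformat=fixed,force_size'],
    'vhd/force-size': ['-o', 'force_size=on'],
}

# e.g. image_format = 'vhd', image_format_opts = 'vhd/force-size'
cmd = CONVERT_CMD['vhd'] + CONVERT_OPTS['vhd/force-size'] + ['image.qcow2', 'image.vhd']
```

Splitting the `-o` options out of `CONVERT_CMD` is what lets AWS (`vhd/force-size`) and Azure (`vhd/fixed_force-size`) share one `vhd` converter.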
@@ -35,7 +57,7 @@ class ImageConfig():
         self._storage = None
         self.config_key = str(config_key)
         tags = obj.pop('tags', None)
-        self.__dict__ |= self._deep_dict(obj)
+        self.__dict__ |= self.DEFAULT_OBJ | self._deep_dict(obj)
         # ensure tag values are str() when loading
         if tags:
             self.tags = tags

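Prepending `self.DEFAULT_OBJ` works because Python's dict union keeps the right-hand value on key collisions, so real state loaded from `obj` overrides the `None` placeholders while missing keys still end up defined. A minimal sketch with sample timestamps:

```python
DEFAULT_OBJ = {'built': None, 'uploaded': None, 'imported': None,
               'signed': None, 'published': None, 'released': None,
               'artifacts': None}

# a config that has already been built and uploaded (sample values)
obj = {'built': '2023-05-10T00:00:00', 'uploaded': '2023-05-10T01:00:00'}

state = DEFAULT_OBJ | obj   # right-hand side wins on collisions
```

This way later code can test `self.signed`, `self.released`, etc. without `AttributeError` guards.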
@@ -101,7 +123,7 @@ class ImageConfig():
             'end_of_life': self.end_of_life,
             'firmware': self.firmware,
             'image_key': self.image_key,
-            'name': self.image_name,
+            'name': self.image_name.replace(self.cloud + '_', '', 1),
             'project': self.project,
             'release': self.release,
             'revision': self.revision,

@@ -152,6 +174,7 @@ class ImageConfig():
         self.name = '-'.join(self.name)
         self.description = ' '.join(self.description)
         self.repo_keys = ' '.join(self.repo_keys)
+        self._resolve_disk_size()
         self._resolve_motd()
         self._resolve_urls()
         self._stringify_repos()

@@ -161,6 +184,9 @@ class ImageConfig():
         self._stringify_dict_keys('kernel_options', ' ')
         self._stringify_dict_keys('initfs_features', ' ')

+    def _resolve_disk_size(self):
+        self.disk_size = str(sum(self.disk_size)) + 'M'
+
     def _resolve_motd(self):
         # merge release notes, as appropriate
         if 'release_notes' not in self.motd or not self.release_notes:

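Since every dimension contributes entries to the `disk_size` array (e.g. `[16]` from the uefi firmware dimension, `[32]` from aarch64, `[64]` from cloudinit, per the config hunks earlier in this compare), `_resolve_disk_size()` reduces the accumulated array to a single MiB string for the image builder. A standalone sketch of the same reduction:

```python
def resolve_disk_size(disk_size):
    # sum of all dimension contributions, expressed in MiB
    return str(sum(disk_size)) + 'M'
```

So a dimension that needs extra space only has to append its own increment; it never has to know (or overwrite) the total.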
@@ -266,108 +292,80 @@ class ImageConfig():

        return self.STEPS.index(s) <= self.STEPS.index(step)

    def load_local_metadata(self):
        metadata_path = self.local_dir / self.metadata_file
        if metadata_path.exists():
            self._log.debug('Loading image metadata from %s', metadata_path)
            loaded = self._yaml.load(metadata_path)
            loaded.pop('name', None)    # don't overwrite 'name' format string!
            loaded.pop('Name', None)    # remove special AWS tag
            self.__dict__ |= loaded

    # TODO: this needs to be sorted out for 'upload' and 'release' steps
    def refresh_state(self, step, revise=False):
    def refresh_state(self, step, disable=[], revise=False):
        log = self._log
        actions = {}
        revision = 0
        step_state = step == 'state'
        step_rollback = step == 'rollback'
        undo = {}

        # enable initial set of possible actions based on specified step
        for s in self.STEPS:
            if self._is_step_or_earlier(s, step):
            if self._is_step_or_earlier(s, step) and s not in disable:
                actions[s] = True

        # pick up any updated image metadata
        self.load_metadata()
        # sets the latest revision metadata (from storage and local)
        self.load_metadata(step)

        # TODO: check storage and/or cloud - use this instead of remote_image
        # latest_revision = self.get_latest_revision()
        # if we're rolling back, figure out what we need to undo first
        if step == 'rollback':
            if self.released or self.published:
                undo['ROLLBACK_BLOCKED'] = True

            if (step_rollback or revise) and self.local_image.exists():
                undo['local'] = True
            else:
                if self.imported and 'import' in clouds.actions(self):
                    undo['import'] = True
                    self.imported = None

        if step_rollback:
            if self.local_image.exists():
                undo['local'] = True

            if not self.published or self.released:
                if self.uploaded:
                    undo['upload'] = True
                    self.uploaded = None

                if self.imported:
                    undo['import'] = True
        if self.built and self.local_dir.exists():
            undo['local'] = True
            self.built = None

        # TODO: rename to 'remote_tags'?
        # if we load remote tags into state automatically, shouldn't that info already be in self?
        remote_image = clouds.get_latest_imported_tags(self)
        log.debug('\n%s', remote_image)
        # handle --revise option, if necessary
        if revise and (self.published or self.released):
            # get rid of old metadata
            (self.local_dir / self.metadata_file).unlink()
            self.revision = int(self.revision) + 1
            self.__dict__ |= self.DEFAULT_OBJ
            self.__dict__.pop('import_id', None)
            self.__dict__.pop('import_region', None)

        if revise:
            if self.local_image.exists():
                # remove previously built local image artifacts
                log.warning('%s existing local image dir %s',
                    'Would remove' if step_state else 'Removing',
                    self.local_dir)
                if not step_state:
                    shutil.rmtree(self.local_dir)

        # do we already have it built locally?
        if self.image_path.exists():
            # then we should use its metadata
            self.load_local_metadata()

            if remote_image and remote_image.get('published', None):
                log.warning('%s image revision for %s',
                    'Would bump' if step_state else 'Bumping',
                    self.image_key)
                revision = int(remote_image.revision) + 1
            else:
                undo['local'] = True

            elif remote_image and remote_image.get('imported', None):
                # remove existing imported (but unpublished) image
                log.warning('%s unpublished remote image %s',
                    'Would remove' if step_state else 'Removing',
                    remote_image.import_id)
                if not step_state:
                    clouds.delete_image(self, remote_image.import_id)

                remote_image = None

            elif remote_image:
                if remote_image.get('imported', None):
                    # already imported, don't build/upload/import again
                    log.debug('%s - already imported', self.image_key)
                    actions.pop('local', None)
                    actions.pop('upload', None)
                    actions.pop('import', None)

                if remote_image.get('published', None):
                    # NOTE: re-publishing can update perms or push to new regions
                    log.debug('%s - already published', self.image_key)

        if self.local_image.exists():
            # local image's already built, don't rebuild
            log.debug('%s - already locally built', self.image_key)
        # after all that, let's figure out what's to be done!
        if self.built:
            actions.pop('local', None)

        else:
            self.built = None
        if self.uploaded:
            actions.pop('upload', None)

        # merge remote_image data into image state
        if remote_image:
            self.__dict__ |= dict(remote_image)
        if self.imported or 'import' not in clouds.actions(self):
            actions.pop('import', None)

        else:
            self.__dict__ |= {
                'revision': revision,
                'uploaded': None,
                'imported': None,
                'import_id': None,
                'import_region': None,
                'published': None,
                'artifacts': None,
                'released': None,
            }
        # NOTE: always publish (if cloud allows) to support new regions
        if 'publish' not in clouds.actions(self):
            actions.pop('publish', None)

        # don't re-publish again if we're targeting the release step
        elif step == 'release' and self.published:
            actions.pop('publish', None)

        # remove remaining actions not possible based on specified step
        for s in self.STEPS:

@@ -375,7 +373,26 @@ class ImageConfig():
             actions.pop(s, None)

         self.actions = list(actions)
-        log.info('%s/%s = %s', self.cloud, self.image_name, self.actions)
+        log.info('%s/%s = [%s]', self.cloud, self.image_name, ' '.join(self.actions))
+
+        if undo:
+            act = "Would undo" if step == 'state' else "Undoing"
+            log.warning('%s: [%s]', act, ' '.join(undo.keys()))
+
+            if step != 'state':
+                if 'import' in undo:
+                    log.warning('Deleting imported image: %s', self.import_id)
+                    clouds.delete_image(self, self.import_id)
+                    self.import_id = None
+                    self.import_region = None
+
+                if 'upload' in undo:
+                    log.warning('Removing uploaded image from storage')
+                    self.remove_image()
+
+                if 'local' in undo:
+                    log.warning('Removing local build directory')
+                    shutil.rmtree(self.local_dir)

         self.state_updated = datetime.utcnow().isoformat()

@@ -386,45 +403,87 @@ class ImageConfig():

         return self._storage

-    def _save_checksum(self, file):
-        self._log.info("Calculating checksum for '%s'", file)
-        sha256_hash = hashlib.sha256()
-        sha512_hash = hashlib.sha512()
-        with open(file, 'rb') as f:
-            for block in iter(lambda: f.read(4096), b''):
-                sha256_hash.update(block)
-                sha512_hash.update(block)
-
-        with open(str(file) + '.sha256', 'w') as f:
-            print(sha256_hash.hexdigest(), file=f)
-
-        with open(str(file) + '.sha512', 'w') as f:
-            print(sha512_hash.hexdigest(), file=f)
+    @property
+    def convert_opts(self):
+        if 'image_format_opts' in self.__dict__:
+            return self.CONVERT_OPTS[self.image_format_opts]
+
+        return []

     # convert local QCOW2 to format appropriate for a cloud
     def convert_image(self):
         self._log.info('Converting %s to %s', self.local_image, self.image_path)
         run(
-            self.CONVERT_CMD[self.image_format] + [self.local_image, self.image_path],
+            self.CONVERT_CMD[self.image_format] + self.convert_opts
+                + [self.local_image, self.image_path],
             log=self._log, errmsg='Unable to convert %s to %s',
             errvals=[self.local_image, self.image_path]
         )
-        self._save_checksum(self.image_path)
+        #self._save_checksum(self.image_path)
         self.built = datetime.utcnow().isoformat()

     def upload_image(self):
-        self.storage.store(
-            self.image_file,
-            self.image_file + '.sha256',
-            self.image_file + '.sha512'
-        )
+        # TODO: compress here?  upload that instead
+        self.storage.store(self.image_file, checksum=True)
         self.uploaded = datetime.utcnow().isoformat()

     def retrieve_image(self):
         self._log.info('Retrieving %s from storage', self.image_file)
-        self.storage.retrieve(self.image_file)
+        # TODO: try downloading compressed and decompressed?
+        self.storage.retrieve(self.image_file)  #, checksum=True)
+        # TODO: decompress compressed if exists

     def remove_image(self):
         self.storage.remove(
-            self.image_file + '*',
-            self.metadata_file + '*')
+            #self.image_file + '*',
+            #self.metadata_file + '*')
+            # TODO: self.image_compressed, .asc, .sha512
+            self.image_file,
+            self.image_file + '.asc',
+            self.image_file + '.sha512',
+            self.metadata_file,
+            self.metadata_file + '.sha512'
+        )

+    def sign_image(self):
+        log = self._log
+        if 'signing_cmd' not in self.__dict__:
+            log.warning("No 'signing_cmd' set, not signing image.")
+            return
+
+        # TODO: sign compressed file?
+        cmd = self.signing_cmd.format(file=self.image_path).split(' ')
+        log.info(f'Signing {self.image_file}...')
+        log.debug(cmd)
+        run(
+            cmd, log=log, errmsg='Unable to sign image: %s',
+            errvals=[self.image_file]
+        )
+        self.signed = datetime.utcnow().isoformat()
+        # TODO?: self.signed_by?  self.signed_fingerprint?
+        self.storage.store(self.image_file + '.asc')
+
+    def release_image(self):
+        log = self._log
+        if 'release_cmd' not in self.__dict__:
+            log.warning("No 'release_cmd' set, not releasing image.")
+            return
+
+        base = self.image_name
+        cmd = self.release_cmd.format(
+            **self.__dict__, v_version=self.v_version,
+            base=base
+        ).split(' ')
+        log.info(f'releasing {base}...')
+        run(
+            cmd, log=log, errmsg='Unable to release image: %s',
+            errvals=[self.image_name]
+        )
+        self.released = datetime.utcnow().isoformat()
+
     def save_metadata(self, action):
         os.makedirs(self.local_dir, exist_ok=True)
         self._log.info('Saving image metadata')
         # TODO: save metadata updated timestamp as metadata?
         # TODO: def self.metadata to return what we consider metadata?
         metadata = dict(self.tags)
         self.metadata_updated = datetime.utcnow().isoformat()
         metadata |= {

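`sign_image()` deliberately leaves the actual signer to configuration: `signing_cmd` is a plain format string with a `{file}` placeholder, split on spaces into an argv list for `run()`. A sketch with a hypothetical GPG detached-signature command (the real `signing_cmd` value is site-specific configuration, not fixed by this code):

```python
# hypothetical configured value; any command with a {file} placeholder works
signing_cmd = 'gpg --batch --detach-sign --armor {file}'

# substitute the image path, then split into an argv list
cmd = signing_cmd.format(file='work/images/alpine.vhd').split(' ')
```

One consequence of the naive `.split(' ')` is that a `{file}` path containing spaces would be broken across multiple argv entries.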
@ -433,33 +492,36 @@ class ImageConfig():
|
|||
}
|
||||
metadata_path = self.local_dir / self.metadata_file
|
||||
self._yaml.dump(metadata, metadata_path)
|
||||
self._save_checksum(metadata_path)
|
||||
if action != 'local' and self.storage:
|
||||
self.storage.store(
|
||||
self.metadata_file,
|
||||
self.metadata_file + '.sha256',
|
||||
self.metadata_file + '.sha512'
|
||||
)
|
||||
self.storage.store(self.metadata_file, checksum=True)
|
||||
|
||||
def load_metadata(self):
|
||||
# TODO: what if we have fresh configs, but the image is already uploaded/imported?
|
||||
# we'll need to get revision first somehow
|
||||
if 'revision' not in self.__dict__:
|
||||
return
|
||||
def load_metadata(self, step):
|
||||
new = True
|
||||
if step != 'final':
|
||||
# what's the latest uploaded revision?
|
||||
revision_glob = self.name.format(**(self.__dict__ | {'revision': '*'}))
|
||||
try:
|
||||
revision_yamls = self.storage.list(revision_glob + '.yaml', err_ok=True)
|
||||
new = not revision_yamls # empty list is still new
|
||||
|
||||
# TODO: revision = '*' for now - or only if unknown?
|
||||
except RuntimeError:
|
||||
pass
|
||||
|
||||
latest_revision = 0
|
||||
if not new:
|
||||
for y in revision_yamls:
|
||||
yr = int(y.rstrip('.yaml').rsplit('r', 1)[1])
|
||||
if yr > latest_revision:
|
||||
latest_revision = yr
|
||||
|
||||
self.revision = latest_revision
|
||||
|
||||
# get a list of local matching <name>-r*.yaml?
|
||||
metadata_path = self.local_dir / self.metadata_file
|
||||
if metadata_path.exists():
|
||||
self._log.info('Loading image metadata from %s', metadata_path)
|
||||
self.__dict__ |= self._yaml.load(metadata_path).items()
|
||||
if step != 'final' and not new and not metadata_path.exists():
|
||||
try:
|
||||
self.storage.retrieve(self.metadata_file)
|
||||
except RuntimeError as e:
|
||||
# TODO: don't we already log an error/warning?
|
||||
self._log.warning(f'Unable to retrieve from storage: {metadata_path}')
|
||||
|
||||
# get a list of storage matching <name>-r*.yaml
|
||||
#else:
|
||||
# retrieve metadata (and image?) from storage_url
|
||||
# else:
|
||||
# retrieve metadata from imported image
|
||||
|
||||
# if there's no stored metadata, we are in transition,
|
||||
# get a list of imported images matching <name>-r*.yaml
|
||||
self.load_local_metadata() # if it exists
|
||||
|
|
|
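The revised `load_metadata` above derives the current revision by scanning stored `<name>-r*.yaml` filenames and taking the digits after the final `r`. A standalone sketch of that parsing (the filenames are hypothetical; `removesuffix` is used here because `rstrip('.yaml')` strips a character set rather than a literal suffix):

```python
def latest_revision(yaml_names):
    # mirror of the scan in load_metadata(): highest N across <name>-r<N>.yaml
    latest = 0
    for y in yaml_names:
        n = int(y.removesuffix('.yaml').rsplit('r', 1)[1])
        latest = max(latest, n)
    return latest

# hypothetical stored metadata filenames
names = [
    'alpine-3.17.0-x86_64-bios-tiny-r0.yaml',
    'alpine-3.17.0-x86_64-bios-tiny-r12.yaml',
    'alpine-3.17.0-x86_64-bios-tiny-r3.yaml',
]
print(latest_revision(names))  # 12
```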
```diff
@@ -27,6 +27,7 @@ class ImageConfigManager():
         self.yaml = YAML()
         self.yaml.register_class(ImageConfig)
         self.yaml.explicit_start = True
+        self.yaml.width = 1000
         # hide !ImageConfig tag from Packer
         self.yaml.representer.org_represent_mapping = self.yaml.representer.represent_mapping
         self.yaml.representer.represent_mapping = self._strip_yaml_tag_type
 
@@ -144,15 +145,16 @@ class ImageConfigManager():
     def _set_version_release(self, v, c):
         info = self.alpine.version_info(v)
         c.put('release', info['release'])
-        c.put('end_of_life', info['end_of_life'])
+        c.put('release_notes', info['notes'])
+        if 'end_of_life' not in c:
+            c.put('end_of_life', info['end_of_life'])
 
         # release is also appended to name & description arrays
         c.put('name', [c.release])
         c.put('description', [c.release])
 
     # update current config status
-    def refresh_state(self, step, only=[], skip=[], revise=False):
+    def refresh_state(self, step, disable=[], revise=False, only=[], skip=[]):
         self.log.info('Refreshing State')
         has_actions = False
         for ic in self._configs.values():
 
@@ -169,7 +171,7 @@ class ImageConfigManager():
                 self.log.debug('%s SKIPPED, matches --skip', ic.config_key)
                 continue
 
-            ic.refresh_state(step, revise)
+            ic.refresh_state(step, disable, revise)
             if not has_actions and len(ic.actions):
                 has_actions = True
```
```diff
@@ -1,5 +1,7 @@
 # vim: ts=4 et:
 
+import copy
+import hashlib
 import shutil
 import os
 
@@ -11,22 +13,25 @@ from urllib.parse import urlparse
 from image_tags import DictObj
 
 
-def run(cmd, log, errmsg=None, errvals=[]):
+def run(cmd, log, errmsg=None, errvals=[], err_ok=False):
     # ensure command and error values are lists of strings
     cmd = [str(c) for c in cmd]
     errvals = [str(ev) for ev in errvals]
 
     log.debug('COMMAND: %s', ' '.join(cmd))
-    p = Popen(cmd, stdout=PIPE, stdin=PIPE, encoding='utf8')
+    p = Popen(cmd, stdout=PIPE, stdin=PIPE, stderr=PIPE, encoding='utf8')
     out, err = p.communicate()
     if p.returncode:
         if errmsg:
-            log.error(errmsg, *errvals)
+            if err_ok:
+                log.debug(errmsg, *errvals)
+            else:
+                log.error(errmsg, *errvals)
 
-        log.error('COMMAND: %s', ' '.join(cmd))
-        log.error('EXIT: %d', p.returncode)
-        log.error('STDOUT:\n%s', out)
-        log.error('STDERR:\n%s', err)
+        log.debug('EXIT: %d / COMMAND: %s', p.returncode, ' '.join(cmd))
+        log.debug('STDOUT:\n%s', out)
+        log.debug('STDERR:\n%s', err)
         raise RuntimeError
 
     return out, err
 
@@ -60,18 +65,45 @@ class ImageStorage():
             'user': url.username + '@' if url.username else '',
         })
 
-    def store(self, *files):
+    def _checksum(self, file, save=False):
+        log = self.log
+        src = self.local
+        log.debug("Calculating checksum for '%s'", file)
+        sha512_hash = hashlib.sha512()
+        with open(src / file, 'rb') as f:
+            for block in iter(lambda: f.read(4096), b''):
+                sha512_hash.update(block)
+
+        checksum = sha512_hash.hexdigest()
+        if save:
+            log.debug("Saving '%s'", file + '.sha512')
+            with open(str(src / file) + '.sha512', 'w') as f:
+                print(checksum, file=f)
+
+        return checksum
+
+    def store(self, *files, checksum=False):
         log = self.log
+        src = self.local
+        dest = self.remote
+
+        # take care of any globbing in file list
+        files = [Path(p).name for p in sum([glob(str(src / f)) for f in files], [])]
+
         if not files:
             log.debug('No files to store')
             return
 
-        src = self.local
-        dest = self.remote
+        if checksum:
+            log.info('Creating checksum(s) for %s', files)
+            for f in copy.copy(files):
+                self._checksum(f, save=True)
+                files.append(f + '.sha512')
+
+        log.info('Storing %s', files)
         if self.scheme == 'file':
             dest.mkdir(parents=True, exist_ok=True)
             for file in files:
-                log.info('Storing %s', dest / file)
                 shutil.copy2(src / file, dest / file)
 
             return
 
@@ -94,8 +126,9 @@ class ImageStorage():
             log=log, errmsg='Failed to store files'
         )
 
-    def retrieve(self, *files):
+    def retrieve(self, *files, checksum=False):
         log = self.log
+        # TODO? use list()
         if not files:
             log.debug('No files to retrieve')
             return
 
@@ -105,7 +138,7 @@ class ImageStorage():
         dest.mkdir(parents=True, exist_ok=True)
         if self.scheme == 'file':
             for file in files:
-                log.info('Retrieving %s', src / file)
+                log.debug('Retrieving %s', src / file)
                 shutil.copy2(src / file, dest / file)
 
             return
 
@@ -115,7 +148,7 @@ class ImageStorage():
         scp = self.scp
         src_files = []
         for file in files:
-            log.info('Retrieving %s', url + '/' + file)
+            log.debug('Retrieving %s', url + '/' + file)
             src_files.append(scp.user + ':'.join([host, str(src / file)]))
 
         run(
 
@@ -124,7 +157,7 @@ class ImageStorage():
         )
 
     # TODO: optional files=[]?
-    def list(self, match=None):
+    def list(self, match=None, err_ok=False):
         log = self.log
         path = self.remote
         if not match:
 
@@ -133,28 +166,29 @@ class ImageStorage():
         files = []
         if self.scheme == 'file':
             path.mkdir(parents=True, exist_ok=True)
-            log.info('Listing of %s files in %s', match, path)
+            log.debug('Listing of %s files in %s', match, path)
             files = sorted(glob(str(path / match)), key=os.path.getmtime, reverse=True)
 
         else:
             url = self.url
             host = self.host
             ssh = self.ssh
-            log.info('Listing %s files at %s', match, url)
+            log.debug('Listing %s files at %s', match, url)
             run(
                 ['ssh'] + ssh.port + ssh.user + [host, 'mkdir', '-p', path],
-                log=log, errmsg='Unable to create path'
+                log=log, errmsg='Unable to create path', err_ok=err_ok
             )
             out, _ = run(
                 ['ssh'] + ssh.port + ssh.user + [host, 'ls', '-1drt', path / match],
-                log=log, errmsg='Failed to list files'
+                log=log, errmsg='Failed to list files', err_ok=err_ok
             )
             files = out.splitlines()
 
         return [os.path.basename(f) for f in files]
 
-    def remove(self, files):
+    def remove(self, *files):
         log = self.log
+        # TODO? use list()
         if not files:
             log.debug('No files to remove')
             return
 
@@ -163,7 +197,7 @@ class ImageStorage():
         if self.scheme == 'file':
             for file in files:
                 path = dest / file
-                log.info('Removing %s', path)
+                log.debug('Removing %s', path)
                 if path.exists():
                     path.unlink()
 
@@ -174,7 +208,7 @@ class ImageStorage():
         ssh = self.ssh
         dest_files = []
         for file in files:
-            log.info('Removing %s', url + '/' + file)
+            log.debug('Removing %s', url + '/' + file)
            dest_files.append(dest / file)
 
         run(
```
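The new `_checksum` helper above hashes files in 4 KiB blocks, so even multi-gigabyte images never need to fit in memory. The same pattern in isolation (the temporary file and block size are only for illustration):

```python
import hashlib
import tempfile

def sha512_file(path, block_size=4096):
    h = hashlib.sha512()
    with open(path, 'rb') as f:
        # iter() with a b'' sentinel yields blocks until EOF
        for block in iter(lambda: f.read(block_size), b''):
            h.update(block)
    return h.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'hello world\n')

digest = sha512_file(tmp.name)
print(digest == hashlib.sha512(b'hello world\n').hexdigest())  # True
```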
@@ -0,0 +1,239 @@

```python
#!/usr/bin/env python3
# vim: ts=4 et:

# NOTE: this is an experimental work-in-progress

# Ensure we're using the Python virtual env with our installed dependencies
import os
import sys
import textwrap

NOTE = textwrap.dedent("""
    Experimental: Given an image cache YAML file, figure out what needs to be pruned.
    """)

sys.pycache_prefix = 'work/__pycache__'

if not os.path.exists('work'):
    print('FATAL: Work directory does not exist.', file=sys.stderr)
    print(NOTE, file=sys.stderr)
    exit(1)

# Re-execute using the right virtual environment, if necessary.
venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
    print("Re-executing with work environment's Python...\n", file=sys.stderr)
    os.execv(venv_args[0], venv_args)

# We're now in the right Python environment

import argparse
import logging
import re
import time
from collections import defaultdict
from ruamel.yaml import YAML
from pathlib import Path

import clouds


### Constants & Variables

ACTIONS = ['list', 'prune']
CLOUDS = ['aws']
SELECTIONS = ['keep-last', 'unused', 'ALL']
LOGFORMAT = '%(asctime)s - %(levelname)s - %(message)s'

RE_ALPINE = re.compile(r'^alpine-')
RE_RELEASE = re.compile(r'-(edge|[\d\.]+)-')
RE_REVISION = re.compile(r'-r?(\d+)$')
RE_STUFF = re.compile(r'(edge|[\d+\.]+)-(.+)-r?(\d+)$')

### Functions

# allows us to set values deep within an object that might not be fully defined
def dictfactory():
    return defaultdict(dictfactory)


# undo dictfactory() objects to normal objects
def undictfactory(o):
    if isinstance(o, defaultdict):
        o = {k: undictfactory(v) for k, v in o.items()}
    return o


### Command Line & Logging

parser = argparse.ArgumentParser(description=NOTE)
parser.add_argument('--debug', action='store_true', help='enable debug output')
parser.add_argument('--really', action='store_true', help='really prune images')
parser.add_argument('--cloud', choices=CLOUDS, required=True, help='cloud provider')
parser.add_argument('--region', help='specific region, instead of all regions')
# what to prune...
parser.add_argument('--bad-name', action='store_true')
parser.add_argument('--private', action='store_true')
parser.add_argument('--edge-eol', action='store_true')
parser.add_argument('--rc', action='store_true')
parser.add_argument('--eol-unused-not-latest', action='store_true')
parser.add_argument('--eol-not-latest', action='store_true')
parser.add_argument('--unused-not-latest', action='store_true')
parser.add_argument(
    '--use-broker', action='store_true',
    help='use the identity broker to get credentials')
parser.add_argument('cache_file')
args = parser.parse_args()

log = logging.getLogger()
log.setLevel(logging.DEBUG if args.debug else logging.INFO)
console = logging.StreamHandler()
logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
logfmt.converter = time.gmtime
console.setFormatter(logfmt)
log.addHandler(console)
log.debug(args)

# set up credential provider, if we're going to use it
if args.use_broker:
    clouds.set_credential_provider(debug=args.debug)

# what region(s)?
regions = clouds.ADAPTERS[args.cloud].regions
if args.region:
    if args.region not in regions:
        log.error('invalid region: %s', args.region)
        exit(1)
    else:
        regions = [args.region]

filters = {
    'Owners': ['self'],
    'Filters': [
        {'Name': 'state', 'Values': ['available']},
    ]
}

initial = dictfactory()
variants = dictfactory()
removes = dictfactory()
summary = dictfactory()
latest = {}
now = time.gmtime()

# load cache
yaml = YAML()
log.info(f'loading image cache from {args.cache_file}')
cache = yaml.load(Path(args.cache_file))
log.info('loaded image cache')


for region in sorted(regions):
    latest = cache[region]['latest']
    images = cache[region]['images']
    log.info(f'--- {region} : {len(images)} ---')

    for id, image in images.items():
        name = image['name']

        if args.bad_name and not name.startswith('alpine-'):
            log.info(f"{region}\tBAD_NAME\t{name}")
            removes[region][id] = image
            summary[region]['BAD_NAME'][id] = name
            continue

        if args.private and image['private']:
            log.info(f"{region}\tPRIVATE\t{name}")
            removes[region][id] = image
            summary[region]['PRIVATE'][id] = name
            continue

        if args.edge_eol and image['version'] == 'edge' and image['eol']:
            log.info(f"{region}\tEDGE-EOL\t{name}")
            removes[region][id] = image
            summary[region]['EDGE-EOL'][id] = name
            continue

        if args.rc and image['rc']:
            log.info(f"{region}\tRC\t{name}")
            removes[region][id] = image
            summary[region]['RC'][id] = name
            continue

        unused = image['launched'] == 'Never'
        release_key = image['release_key']
        variant_key = image['variant_key']
        if variant_key not in latest:
            log.warning(f"variant key '{variant_key}' not in latest, skipping.")
            summary[region]['__WTF__'][id] = name
            continue

        latest_release_key = latest[variant_key]['release_key']
        not_latest = release_key != latest_release_key

        if args.eol_unused_not_latest and image['eol'] and unused and not_latest:
            log.info(f"{region}\tEOL-UNUSED-NOT-LATEST\t{name}")
            removes[region][id] = image
            summary[region]['EOL-UNUSED-NOT-LATEST'][id] = name
            continue

        if args.eol_not_latest and image['eol'] and not_latest:
            log.info(f"{region}\tEOL-NOT-LATEST\t{name}")
            removes[region][id] = image
            summary[region]['EOL-NOT-LATEST'][id] = name
            continue

        if args.unused_not_latest and unused and not_latest:
            log.info(f"{region}\tUNUSED-NOT-LATEST\t{name}")
            removes[region][id] = image
            summary[region]['UNUSED-NOT-LATEST'][id] = name
            continue

        log.debug(f"{region}\t__KEPT__\t{name}")
        summary[region]['__KEPT__'][id] = name

totals = {}
log.info('SUMMARY')
for region, reasons in sorted(summary.items()):
    log.info(f"\t{region}")
    for reason, images in sorted(reasons.items()):
        count = len(images)
        log.info(f"\t\t{count}\t{reason}")
        if reason not in totals:
            totals[reason] = 0

        totals[reason] += count

log.info('TOTALS')
for reason, count in sorted(totals.items()):
    log.info(f"\t{count}\t{reason}")

if args.really:
    log.warning('Please confirm you wish to actually prune these images...')
    r = input("(yes/NO): ")
    print()
    if r.lower() != 'yes':
        args.really = False

if not args.really:
    log.warning("Not really pruning any images.")
    exit(0)

# do the pruning...

for region, images in sorted(removes.items()):
    ec2r = clouds.ADAPTERS[args.cloud].session(region).resource('ec2')
    for id, image in images.items():
        name = image['name']
        snapshot_id = image['snapshot_id']
        try:
            log.info(f'Deregistering: {region}/{id}: {name}')
            ec2r.Image(id).deregister()
            log.info(f"Deleting: {region}/{snapshot_id}: {name}")
            ec2r.Snapshot(snapshot_id).delete()

        except Exception as e:
            log.warning(f"Failed: {e}")
            pass

log.info('DONE')
```
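The `dictfactory`/`undictfactory` pair lets the prune script assign deeply nested keys such as `summary[region]['PRIVATE'][id]` without pre-creating the intermediate dicts, then convert the result back to plain dicts. In isolation:

```python
from collections import defaultdict

def dictfactory():
    # each missing key materializes another nested defaultdict
    return defaultdict(dictfactory)

def undictfactory(o):
    # recursively convert back to plain dicts (e.g. for dumping as YAML)
    if isinstance(o, defaultdict):
        o = {k: undictfactory(v) for k, v in o.items()}
    return o

summary = dictfactory()
summary['us-east-1']['PRIVATE']['ami-0123'] = 'some-image-name'  # no KeyError
plain = undictfactory(summary)
print(plain)  # {'us-east-1': {'PRIVATE': {'ami-0123': 'some-image-name'}}}
```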
```diff
@@ -34,6 +34,9 @@ cleanup() {
         "$TARGET/proc" \
         "$TARGET/sys"
 
+    einfo "*** Volume Usage ***"
+    du -sh "$TARGET"
+
     umount "$TARGET"
 }
```
```diff
@@ -3,6 +3,11 @@
 
 [ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x
 
+CONSOLE=ttyS0
+if [ "$ARCH" = "aarch64" ] && [ "$CLOUD" != "aws" ]; then
+    CONSOLE=ttyAMA0
+fi
+
 export \
     DEVICE=/dev/vda \
     TARGET=/mnt \
 
@@ -56,7 +61,10 @@ make_filesystem() {
         mkfs.fat -n EFI "${DEVICE}1"
     fi
 
-    mkfs.ext4 -O ^64bit -L / "$root_dev"
+    # before Alpine 3.18...
+    # - grub2 can't handle "metadata_csum_seed"
+    # - fsck can't handle "orphan_file"
+    mkfs.ext4 -O ^64bit,^metadata_csum_seed,^orphan_file -L / "$root_dev"
     mkdir -p "$TARGET"
     mount -t ext4 "$root_dev" "$TARGET"
 
@@ -137,14 +145,12 @@ install_extlinux() {
     #
     # Shorten timeout (1/10s), eliminating delays for instance launches.
     #
-    # ttyS0 is for EC2 Console "Get system log" and "EC2 Serial Console"
-    # features, whereas tty0 is for "Get Instance screenshot" feature. Enabling
-    # the port early in extlinux gives the most complete output in the log.
+    # Enabling console port early in extlinux gives the most complete output.
     #
     # TODO: review for other clouds -- this may need to be cloud-specific.
     sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
         -e "s|^[# ]*(default_kernel_opts)=.*|\1=\"$KERNEL_OPTIONS\"|" \
-        -e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
+        -e "s|^[# ]*(serial_port)=.*|\1=$CONSOLE|" \
         -e "s|^[# ]*(modules)=.*|\1=$KERNEL_MODULES|" \
         -e "s|^[# ]*(default)=.*|\1=virt|" \
         -e "s|^[# ]*(timeout)=.*|\1=1|" \
 
@@ -198,8 +204,9 @@ configure_system() {
         cat "$SETUP/fstab.grub-efi" >> "$TARGET/etc/fstab"
     fi
 
-    # Disable getty for physical ttys, enable getty for serial ttyS0.
-    sed -Ei -e '/^tty[0-9]/s/^/#/' -e '/^#ttyS0:/s/^#//' "$TARGET/etc/inittab"
+    # Disable getty for physical ttys, enable getty for serial console.
+    sed -Ei -e '/^tty[0-9]/s/^/#/' -e "s/ttyS0/$CONSOLE/g" \
+        -e "/^#$CONSOLE:/s/^#//" "$TARGET/etc/inittab"
 
     # setup sudo and/or doas
     if grep -q '^sudo$' "$TARGET/etc/apk/world"; then
```
```diff
@@ -9,13 +9,30 @@ einfo() {
     printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
 }
 
+greater_or_equal() {
+    return $(echo "$1 $2" | awk '{print ($1 < $2)}')
+}
+
 if [ "$VERSION" = "3.12" ]; then
     # tiny-cloud-network requires ifupdown-ng, not in 3.12
     einfo "Configuring Tiny EC2 Bootstrap..."
     echo "EC2_USER=$IMAGE_LOGIN" > /etc/conf.d/tiny-ec2-bootstrap
 else
     einfo "Configuring Tiny Cloud..."
-    sed -i.bak -Ee "s/^#?CLOUD_USER=.*/CLOUD_USER=$IMAGE_LOGIN/" \
-        "$TARGET"/etc/conf.d/tiny-cloud
-    rm "$TARGET"/etc/conf.d/tiny-cloud.bak
+
+    TC_CONF="$TARGET/etc/tiny-cloud.conf"
+    # tiny-cloud >= 3.0.0 moved configs, the following supports older versions
+    [ ! -f "$TC_CONF" ] && TC_CONF="$TARGET/etc/conf.d/tiny-cloud"
+
+    sed -i.bak -Ee "s/^#?CLOUD_USER=.*/CLOUD_USER=$IMAGE_LOGIN/" "$TC_CONF"
+    rm "$TC_CONF.bak"
+
+    # tiny-cloud >= 3.0.0 sets up init scripts with /sbin/tiny-cloud --setup
+    if [ -f "$TARGET/sbin/tiny-cloud" ]; then
+        chroot "$TARGET" /sbin/tiny-cloud --enable
+    elif greater_or_equal "$VERSION" 3.18; then
+        # 3.18 has tiny-cloud 3.0.0, and we didn't find what we expected
+        echo "Error: /sbin/tiny-cloud not found" >&2
+        exit 1
+    fi
 fi
```
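The `greater_or_equal` helper above compares versions numerically via awk, which works for the `3.18` check here but would rank `3.9` above `3.18` (as floats, 3.9 > 3.18). A sketch of a component-wise comparison that avoids that pitfall, assuming purely numeric versions (`edge` would need special-casing):

```python
def greater_or_equal(a, b):
    # split dotted versions into integer tuples and compare component-wise;
    # assumes numeric versions only ('edge' would need special-casing)
    to_tuple = lambda v: tuple(int(x) for x in v.split('.'))
    return to_tuple(a) >= to_tuple(b)

print(greater_or_equal('3.18', '3.18'))  # True
print(greater_or_equal('3.9', '3.18'))   # False (an awk float compare says True)
```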
```diff
@@ -2,4 +2,5 @@ GRUB_CMDLINE_LINUX_DEFAULT="modules=$KERNEL_MODULES $KERNEL_OPTIONS"
 GRUB_DISABLE_RECOVERY=true
 GRUB_DISABLE_SUBMENU=y
+GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
 GRUB_TERMINAL="serial console"
 GRUB_TIMEOUT=0
```