chore: sync upstream library

parent c336c5109a
commit a037ca3d1c

@@ -154,6 +154,12 @@ For the official Alpine Linux cloud images, this is set to
 When building custom images, you **MUST** override **AT LEAST** this setting to
 avoid image import and publishing collisions.
 
+### `userhost` string
+
+This is the remote _user_@_host_ that is used for storing state, uploading
+files, and releasing official images. Currently used by `storage_url` and
+`release_cmd`.
+
 ### `name` array
 
 The ultimate contents of this array contribute to the overall naming of the

@@ -193,10 +199,24 @@ Directories (under `work/scripts/`) that contain additional data that the
 `scripts` will need. Packer will copy these to the VM responsible for setting
 up the variant image.
 
-### `size` string
+### `disk_size` array
 
-The size of the image disk, by default we use `1G` (1 GiB). This disk may (or
-may not) be further partitioned, based on other factors.
+The sum of this array is the size of the image disk, specified in MiB; this
+allows different dimension variants to "bump up" the size of the image if
+extra space is needed.
 
 ### `image_format` string
 
+The format/extension of the disk image, i.e. `qcow2`, `vhd`, or `raw`.
+
+### `image_format_opts` string
+
+Some formats have additional options; currently `vhd/force-size` and
+`vhd/fixed_force-size` are defined.
+
+### `image_compress` string
+
+***TODO***
+
 ### `login` string

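As a sketch of how a `disk_size` array might resolve: each dimension contributes MiB values, and the final disk size is their sum rendered as a `<n>M` string (the specific values here come from this commit's config files; the variable names are illustrative, not part of the build system):

```python
# Hypothetical illustration: dimensions each contribute MiB values to
# disk_size, and the resolved disk size is their sum as an "<n>M" string.
default_mib = [116]     # Default config baseline
uefi_mib = [16]         # uefi firmware variant bump
cloudinit_mib = [64]    # cloud-init bootstrap variant bump

disk_size = default_mib + uefi_mib + cloudinit_mib
resolved = str(sum(disk_size)) + 'M'
print(resolved)  # → 196M
```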
@@ -312,3 +332,23 @@ Currently, only the **aws** cloud module supports this.
 
 List of additional repository keys to trust during the package installation phase.
 This allows pulling in custom apk packages by simply specifying the repository name in the packages block.
+
+### `storage_url` string
+
+This is a URL that defines where the persistent state about images is stored,
+from `upload` through `release` steps (and beyond). This allows one `build`
+session to pick up where another left off. Currently, `ssh://` and `file://`
+URLs are supported.
+
+### `download_url` string
+
+This string is used for building download URLs for officially released images
+on the https://alpinelinux.org/cloud web page.
+
+### `signing_cmd` string
+
+Command template to cryptographically sign files.
+
+### `release_cmd` string
+
+Command template to complete the release of an image.

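Since `storage_url` supports only `ssh://` and `file://` URLs, a minimal sketch of scheme-based dispatch might look like this (illustrative only; the build system's actual storage code is not part of this diff, and `storage_backend` is a hypothetical helper):

```python
from urllib.parse import urlparse

def storage_backend(url):
    # dispatch on the URL scheme; only ssh:// and file:// are supported
    scheme = urlparse(url).scheme
    if scheme not in ('ssh', 'file'):
        raise ValueError(f'unsupported storage scheme: {scheme}')
    return scheme

print(storage_backend('ssh://user@host/public_html/alpine-cloud-images'))  # → ssh
```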
@@ -1,4 +1,4 @@
-Copyright (c) 2017-2022 Jake Buchholz Göktürk, Michael Crute
+Copyright (c) 2017-2024 Jake Buchholz Göktürk, Michael Crute
 
 Permission is hereby granted, free of charge, to any person obtaining a copy of
 this software and associated documentation files (the "Software"), to deal in

@@ -11,16 +11,17 @@ own customized images.
 To get started with official pre-built Alpine Linux cloud images, visit
 https://alpinelinux.org/cloud. Currently, we build official images for the
 following cloud platforms...
-* AWS
+* Amazon Web Services (AWS)
+* Microsoft Azure
+* GCP (Google Cloud Platform)
+* OCI (Oracle Cloud Infrastructure)
+* NoCloud
 
 ...we are working on also publishing official images to other major cloud
 providers.
 
-Each image's name contains the Alpine version release, architecture, firmware,
-bootstrap, and image revision; a YAML metadata file containing these details
-and more is downloadable.
+Each published image's name contains the Alpine version release, architecture,
+firmware, bootstrap, and image revision. These details (and more) are also
+tagged on the images...
 
-| Tag | Description / Values |
+| Key | Description / Values |
 |-----|----------------------|
 | name | `alpine-`_`release`_`-`_`arch`_`-`_`firmware`_`-`_`bootstrap`_`-r`_`revision`_ |
 | project | `https://alpinelinux.org/cloud` |

@@ -37,13 +38,15 @@ tagged on the images...
 | imported | image import timestamp |
 | import_id | imported image id |
 | import_region | imported image region |
+| signed | image signing timestamp |
 | published | image publication timestamp |
 | released | image release timestamp _(won't be set until second publish)_ |
 | description | image description |
 
-Although AWS does not allow cross-account filtering by tags, the image name can
-still be used to filter images. For example, to get a list of available Alpine
-3.x aarch64 images in AWS eu-west-2...
+Published AWS images are also tagged with this data, but other AWS accounts
+can't read these tags. However, the image name can still be used to filter
+images to find what you're looking for. For example, to get a list of
+available Alpine 3.x aarch64 images in AWS eu-west-2...
 ```
 aws ec2 describe-images \
     --region eu-west-2 \

@@ -77,25 +80,30 @@ The build system consists of a number of components:
 * the `scripts/` directory, containing scripts and related data used to set up
   image contents during provisioning
 
-* the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
-  of images
+* the Packer `alpine.pkr.hcl`, which orchestrates the various build steps
+  from `local` and beyond.
 
 * the `cloud_helper.py` script that Packer runs in order to do cloud-specific
-  import and publish operations
+  per-image operations, such as image format conversion, upload, publishing,
+  etc.
 
 ### Build Requirements
-* [Python](https://python.org) (3.9.7 is known to work)
-* [Packer](https://packer.io) (1.7.6 is known to work)
-* [QEMU](https://www.qemu.org) (6.1.0 is known to work)
-* cloud provider account(s)
+* [Python](https://python.org) (3.9.9 is known to work)
+* [Packer](https://packer.io) (1.9.4 is known to work)
+* [QEMU](https://www.qemu.org) (8.1.2 is known to work)
+* cloud provider account(s) _(for import/publish steps)_
 
 ### Cloud Credentials
 
-By default, the build system relies on the cloud providers' Python API
+Importing and publishing images relies on the cloud providers' Python API
 libraries to find and use the necessary credentials, usually via configuration
 under the user's home directory (i.e. `~/.aws/`, `~/.oci/`, etc.) or via
 environment variables (i.e. `AWS_...`, `OCI_...`, etc.)
 
+_Note that presently, importing and publishing to cloud providers is only
+supported for AWS images._
+
 The credentials' user/role needs sufficient permission to query, import, and
 publish images -- the exact details will vary from cloud to cloud. _It is
 recommended that only the minimum required permissions are granted._

@@ -111,7 +119,7 @@ usage: build [-h] [--debug] [--clean] [--pad-uefi-bin-arch ARCH [ARCH ...]]
              [--custom DIR [DIR ...]] [--skip KEY [KEY ...]] [--only KEY [KEY ...]]
              [--revise] [--use-broker] [--no-color] [--parallel N]
              [--vars FILE [FILE ...]]
-             {configs,state,rollback,local,upload,import,publish,release}
+             {configs,state,rollback,local,upload,import,sign,publish,release}
 
 positional arguments:  (build up to and including this step)
   configs   resolve image build configuration

@@ -120,6 +128,7 @@ positional arguments:  (build up to and including this step)
   local     build images locally
   upload    upload images and metadata to storage
 * import    import local images to cloud provider default region (*)
+  sign      cryptographically sign images
 * publish   set image permissions and publish to cloud regions (*)
   release   mark images as being officially released

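The positional argument "builds up to and including" the named step, which can be sketched as a slice over the ordered step list (`steps_upto` is an illustrative helper, not part of the actual `build` script):

```python
# Ordered build steps, as listed in the usage text above.
STEPS = ['configs', 'state', 'rollback', 'local', 'upload',
         'import', 'sign', 'publish', 'release']

def steps_upto(step):
    # everything up to and including the requested step
    return STEPS[:STEPS.index(step) + 1]

print(steps_upto('upload'))
# → ['configs', 'state', 'rollback', 'local', 'upload']
```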
@@ -139,6 +148,7 @@ optional arguments:
   --no-color                turn off Packer color output
   --parallel N              build N images in parallel
   --vars FILE [FILE ...]    supply Packer with -vars-file(s) (default: [])
+  --disable STEP [STEP ...] disable optional steps (default: [])
 ```
 
 The `build` script will automatically create a `work/` directory containing a

@@ -173,13 +183,14 @@ only if they are _unpublished_ and _unreleased_.
 As _published_ and _released_ images can't be rolled back, `--revise` can be
 used to increment the _`revision`_ value to rebuild newly revised images.
 
-`local`, `upload`, `import`, `publish`, and `release` steps are orchestrated by
-Packer. By default, each image will be processed serially; providing the
-`--parallel` argument with a value greater than 1 will parallelize operations.
-The degree to which you can parallelze `local` image builds will depend on the
-local build hardware -- as QEMU virtual machines are launched for each image
-being built. Image `upload`, `import`, `publish`, and `release` steps are much
-more lightweight, and can support higher parallelism.
+`local`, `upload`, `import`, `publish`, `sign`, and `release` steps are
+orchestrated by Packer. By default, each image will be processed serially;
+providing the `--parallel` argument with a value greater than 1 will
+parallelize operations. The degree to which you can parallelize `local` image
+builds will depend on the local build hardware -- as QEMU virtual machines are
+launched for each image being built. Image `upload`, `import`, `publish`,
+`sign`, and `release` steps are much more lightweight, and can support higher
+parallelism.
 
 The `local` step builds local images with QEMU, for those that are not already
 built locally or have already been imported. Images are converted to formats

@@ -194,6 +205,9 @@ The `import` step imports the local images into the cloud providers' default
 regions, unless they've already been imported. At this point the images are
 not available publicly, allowing for additional testing prior to publishing.
 
+The `sign` step will cryptographically sign the built image, using the command
+specified by the `signing_cmd` config value.
+
 The `publish` step copies the image from the default region to other regions,
 if they haven't already been copied there. This step will always update
 image permissions, descriptions, tags, and deprecation date (if applicable)

@@ -203,7 +217,8 @@ in all regions where the image has been published.
 providers where this does not make sense (i.e. NoCloud) or for those which
 it has not yet been coded.
 
-The `release` step simply marks the images as being fully released. _(For the
+The `release` step simply marks the images as being fully released. If there
+is a `release_cmd` specified, this is also executed, per image. _(For the
 official Alpine releases, we have a `gen_mksite_release.py` script to convert
 the image data to a format that can be used by https://alpinelinux.org/cloud.)_

@@ -1,4 +1,16 @@
-* clean up cloud modules now that `get_latest_imported_tags` isn't needed
 * consider separating official Alpine Linux configuration into an overlay
   to be applied via `--custom`.
 
+* add per-cloud documentation for importing images
+
+* figure out `image_compression`, especially for the weird case of GCP
+
+* clean up cloud modules now that `get_latest_imported_tags` isn't really
+  needed -- AWS publish_image still uses it to make sure the imported image
+  is actually there (and the right one), this can be made more specific.
+
+* do we still need to set `ntp_server` for AWS images, starting with 3.18.4?
+  _(or is this now handled via `dhcpcd`?)_
+
+* figure out rollback / `refresh_state()` for images that are already signed,
+  don't sign again unless directed to do so.

@@ -1,5 +1,14 @@
 # Alpine Cloud Images Packer Configuration
 
+packer {
+  required_plugins {
+    qemu = {
+      source  = "github.com/hashicorp/qemu"
+      version = "~> 1"
+    }
+  }
+}
+
 ### Variables
 
 # include debug output from provisioning/post-processing scripts

@@ -31,7 +40,7 @@ variable "qemu" {
 locals {
   # possible actions for the post-processor
   actions = [
-    "local", "upload", "import", "publish", "release"
+    "local", "upload", "import", "sign", "publish", "release"
   ]
 
   debug_arg = var.DEBUG == 0 ? "" : "--debug"

@@ -106,7 +115,7 @@ build {
 
       # results
       output_directory = "work/images/${B.value.cloud}/${B.value.image_key}"
-      disk_size        = B.value.size
+      disk_size        = B.value.disk_size
       format           = "qcow2"
       vm_name          = "image.qcow2"
     }

@@ -48,7 +48,8 @@ from image_config_manager import ImageConfigManager
 
 ### Constants & Variables
 
-STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'publish', 'release']
+STEPS = ['configs', 'state', 'rollback', 'local', 'upload', 'import', 'sign', 'publish', 'release']
+DISABLEABLE = ['import', 'sign', 'publish']
 LOGFORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
 WORK_CLEAN = {'bin', 'include', 'lib', 'pyvenv.cfg', '__pycache__'}
 WORK_OVERLAYS = ['configs', 'scripts']

@@ -62,6 +63,8 @@ UEFI_FIRMWARE = {
         'bin': 'usr/share/OVMF/OVMF.fd',
     }
 }
+PACKER_CACHE_DIR = 'work/packer_cache'
+PACKER_PLUGIN_PATH = 'work/packer_plugin'
 alpine = Alpine()

@@ -235,7 +238,8 @@ parser.add_argument(
     help='use the identity broker to get credentials')
 # packer options
 parser.add_argument(
-    '--no-color', action='store_true', help='turn off Packer color output')
+    '--color', default=True, action=argparse.BooleanOptionalAction,
+    help='turn on/off Packer color output')
 parser.add_argument(
     '--parallel', metavar='N', type=int, default=1,
     help='build N images in parallel')

@@ -245,6 +249,11 @@ parser.add_argument(
 # positional argument
 parser.add_argument(
     'step', choices=STEPS, help='build up to and including this step')
+# steps we may choose to not do
+parser.add_argument(
+    '--disable', metavar='STEP', nargs='+', action=remove_dupe_args(),
+    choices=DISABLEABLE, default=[], help='disable optional steps'
+)
 args = parser.parse_args()
 
 log = logging.getLogger('build')

@@ -260,6 +269,10 @@ if args.step == 'rollback' and args.revise:
     log.error('"rollback" step does not support --revise option')
     sys.exit(1)
 
+if 'import' in args.disable and 'publish' not in args.disable:
+    log.warning('--disable import also implicitly disables publish')
+    args.disable.append('publish')
+
 # set up credential provider, if we're going to use it
 if args.use_broker:
     clouds.set_credential_provider(debug=args.debug)

@@ -288,7 +301,7 @@ if args.step == 'configs':
 ### What needs doing?
 
 if not image_configs.refresh_state(
-        step=args.step, only=args.only, skip=args.skip, revise=args.revise):
+        args.step, args.disable, args.revise, args.only, args.skip):
     log.info('No pending actions to take at this time.')
     sys.exit(0)

@@ -302,14 +315,32 @@ install_qemu_firmware()
 
 env = os.environ | {
     'TZ': 'UTC',
-    'PACKER_CACHE_DIR': 'work/packer_cache'
+    'PACKER_CACHE_DIR': PACKER_CACHE_DIR,
+    'PACKER_PLUGIN_PATH': PACKER_PLUGIN_PATH
 }
 
+if not os.path.exists(PACKER_PLUGIN_PATH):
+    packer_init_cmd = [ 'packer', 'init', '.' ]
+    log.info('Initializing Packer...')
+    log.debug(packer_init_cmd)
+    out = io.StringIO()
+    p = Popen(packer_init_cmd, stdout=PIPE, encoding='utf8', env=env)
+    while p.poll() is None:
+        text = p.stdout.readline()
+        out.write(text)
+        print(text, end="")
+
+    if p.returncode != 0:
+        log.critical('Packer Initialization Failure')
+        sys.exit(p.returncode)
+
+    log.info('Packer Initialized')
+
 packer_cmd = [
     'packer', 'build', '-timestamp-ui',
     '-parallel-builds', str(args.parallel)
 ]
-if args.no_color:
+if not args.color:
     packer_cmd.append('-color=false')
 
 if args.use_broker:

@@ -340,7 +371,7 @@ log.info('Packer Completed')
 
 # update final state in work/images.yaml
 image_configs.refresh_state(
-    step='final',
+    'final',
     only=args.only,
     skip=args.skip
 )

@@ -37,7 +37,7 @@ from image_config_manager import ImageConfigManager
 
 ### Constants & Variables
 
-ACTIONS = ['local', 'upload', 'import', 'publish', 'release']
+ACTIONS = ['local', 'upload', 'import', 'sign', 'publish', 'release']
 LOGFORMAT = '%(name)s - %(levelname)s - %(message)s'

@@ -78,19 +78,22 @@ for image_key in args.image_keys:
     image_config = configs.get(image_key)
     image_config.load_local_metadata()  # if it exists
 
+    if args.action in ["import", "sign"] and not image_config.image_path.exists():
+        # if we don't have the image locally, retrieve it from storage
+        image_config.retrieve_image()
+
     if args.action == 'local':
         image_config.convert_image()
 
     elif args.action == 'upload':
         image_config.upload_image()
 
-    elif args.action == 'import' and 'import' in clouds.actions(image_config):
-        # if we don't have the image locally, retrieve it from storage
-        if not image_config.image_path.exists():
-            image_config.retrieve_image()
-
+    elif args.action == 'import':
         clouds.import_image(image_config)
 
+    elif args.action == 'sign':
+        image_config.sign_image()
+
     elif args.action == 'publish' and 'publish' in clouds.actions(image_config):
         clouds.publish_image(image_config)

@@ -31,7 +31,7 @@ def set_credential_provider(debug=False):
 
 ### forward to the correct adapter
 
-# TODO: deprexcate/remove
+# TODO: deprecate/remove
 def get_latest_imported_tags(config):
     return ADAPTERS[config.cloud].get_latest_imported_tags(
         config.project,

@@ -115,10 +115,12 @@ class AWSCloudAdapter(CloudAdapterInterface):
 
     # import an image
     # NOTE: requires 'vmimport' role with read/write of <s3_bucket>.* and its objects
-    def import_image(self, ic):
+    def import_image(self, ic, log=None):
+        # if we try reimport from publish, we already have a log going
+        if not log:
+            log = logging.getLogger('import')
+        description = ic.image_description
 
-        description = ic.image_description
         session = self.session()
         s3r = session.resource('s3')
         ec2c = session.client('ec2')

@@ -205,7 +207,7 @@ class AWSCloudAdapter(CloudAdapterInterface):
             }],
             Description=description,
             EnaSupport=True,
-            Name=ic.image_name,
+            Name=tags.name,
             RootDeviceName='/dev/xvda',
             SriovNetSupport='simple',
             VirtualizationType='hvm',

@@ -258,9 +260,18 @@ class AWSCloudAdapter(CloudAdapterInterface):
             ic.project,
             ic.image_key,
         )
-        if not source_image:
-            log.error('No source image for %s', ic.image_key)
-            raise RuntimeError('Missing source imamge')
+        # TODO: might be the wrong source image?
+        if not source_image or source_image.name != ic.tags.name:
+            log.warning('No source image for %s, reimporting', ic.tags.name)
+            # TODO: try importing it again?
+            self.import_image(ic, log)
+            source_image = self.get_latest_imported_tags(
+                ic.project,
+                ic.image_key,
+            )
+            if not source_image or source_image.name != ic.tags.name:
+                log.error('No source image for %s', ic.tags.name)
+                raise RuntimeError('Missing source image')
 
         source_id = source_image.import_id
         source_region = source_image.import_region

@@ -6,13 +6,14 @@
 # via a "config overlay" to avoid image import and publishing collisions.
 
 project = "https://alpinelinux.org/cloud"
+userhost = "tomalok@dev.alpinelinux.org"
 
 # all build configs start with these
 Default {
   project = ${project}
 
   # image name/description components
-  name        = [ alpine ]
+  name        = [ "{cloud}_alpine" ]
   description = [ Alpine Linux ]
 
   motd {

@@ -34,16 +35,19 @@ Default {
   scripts     = [ setup ]
   script_dirs = [ setup.d ]
 
-  size = 1G
+  disk_size      = [116]
+  image_format   = qcow2
+  image_compress = bz2
 
   login = alpine
 
-  image_format = qcow2
-
-  # these paths are subject to change, as image downloads are developed
-  storage_url  = "ssh://tomalok@dev.alpinelinux.org/public_html/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
-  #storage_url  = "file://~jake/tmp/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"
-  download_url = "https://dev.alpinelinux.org/~tomalok/alpine-cloud-images/{v_version}/cloud/{cloud}/{arch}"  # development
-  #download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/cloud/{cloud}/{arch}"
+  # storage_url contents are authoritative!
+  storage_url  = "ssh://"${userhost}"/public_html/alpine-cloud-images/{v_version}/{cloud}/{arch}"
+  # released images are available here
+  download_url = "https://dl-cdn.alpinelinux.org/alpine/{v_version}/releases/cloud"
+  signing_cmd  = "keybase pgp sign -d -i {file} -o {file}.asc"
+  release_cmd  = ssh ${userhost} "bin/release-image {v_version} {cloud} {arch} {base}"
 
   # image access
   access.PUBLIC = true

@@ -55,10 +59,10 @@ Default {
 # profile build matrix
 Dimensions {
   version {
+    "3.19"  { include required("version/3.19.conf") }
     "3.18"  { include required("version/3.18.conf") }
     "3.17"  { include required("version/3.17.conf") }
     "3.16"  { include required("version/3.16.conf") }
-    "3.15"  { include required("version/3.15.conf") }
     edge    { include required("version/edge.conf") }
   }
   arch {

@@ -75,8 +79,8 @@ Dimensions {
   }
   cloud {
     aws     { include required("cloud/aws.conf") }
+    # considered beta...
+    nocloud { include required("cloud/nocloud.conf") }
     # these are considered "alpha"
     azure   { include required("cloud/azure.conf") }
     gcp     { include required("cloud/gcp.conf") }
     oci     { include required("cloud/oci.conf") }

@@ -2,6 +2,8 @@
 name      = [aarch64]
 arch_name = aarch64
 
+disk_size = [32]
+
 # aarch64 is UEFI only
 EXCLUDE = [bios]

@@ -13,3 +15,8 @@ qemu.args = [
   [-device, usb-ehci],
   [-device, usb-kbd],
 ]
+
+kernel_options {
+  "console=ttyS0,115200n8"   = false
+  "console=ttyAMA0,115200n8" = true
+}

@@ -3,6 +3,8 @@ name = [cloudinit]
 bootstrap_name = cloud-init
 bootstrap_url  = "https://cloud-init.io"
 
+disk_size = [64]
+
 # start cloudinit images with 3.15
 EXCLUDE = ["3.12", "3.13", "3.14"]

@@ -1,6 +1,7 @@
 # vim: ts=2 et:
 cloud_name   = Amazon Web Services
 image_format = vhd
+image_format_opts = vhd/force-size
 
 kernel_modules {
   ena = true

@@ -44,5 +45,10 @@ WHEN {
       initfs_features.gpio_pl061 = false
     }
   }
+  # AWS is weird, other aarch64 use ttyAMA0
+  kernel_options {
+    "console=ttyAMA0,115200n8" = false
+    "console=ttyS0,115200n8"   = true
+  }
 }
}

@@ -1,6 +1,7 @@
 # vim: ts=2 et:
-cloud_name   = Microsoft Azure (alpha)
+cloud_name   = Microsoft Azure (beta)
 image_format = vhd
+image_format_opts = vhd/fixed_force-size
 
 # start with 3.18
 EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

@@ -1,11 +1,12 @@
 # vim: ts=2 et:
-cloud_name   = Google Cloud Platform (alpha)
+cloud_name   = Google Cloud Platform (beta)
 # TODO: https://cloud.google.com/compute/docs/import/importing-virtual-disks
 #   Mentions "VHD" but also mentions "..." if that also includes QCOW2, then
 #   we should use that instead. The "Manual Import" section on the sidebar
 #   has a "Manually import boot disks" subpage which also mentions importing
 #   compressed raw images... We would prefer to avoid that if possible.
-image_format = vhd
+image_format   = raw
+image_compress = tar.gz
 
 # start with 3.18
 EXCLUDE = ["3.12", "3.13", "3.14", "3.15", "3.16", "3.17"]

@@ -1,5 +1,5 @@
 # vim: ts=2 et:
-cloud_name   = NoCloud
+cloud_name   = NoCloud (beta)
 image_format = qcow2
 
 # start with 3.18

@@ -1,5 +1,5 @@
 # vim: ts=2 et:
-cloud_name   = Oracle Cloud Infrastructure (alpha)
+cloud_name   = Oracle Cloud Infrastructure (beta)
 image_format = qcow2
 
 # start with 3.18

@@ -2,6 +2,8 @@
 name          = [uefi]
 firmware_name = UEFI
 
+disk_size = [16]
+
 bootloader = grub-efi
 packages {
   grub-efi = --no-scripts

@@ -2,6 +2,8 @@
 
 include required("base/3.conf")
 
+end_of_life: 2023-12-05  # to fix openssl CVEs past original EOL
+
 motd {
   sudo_deprecated = "NOTE: 'sudo' has been deprecated, please use 'doas' instead."
 }

alpine-cloud-images/configs/version/3.19.conf (new file, 7 lines)

@@ -0,0 +1,7 @@
+# vim: ts=2 et:
+
+include required("base/5.conf")
+
+motd {
+  sudo_removed = "NOTE: 'sudo' is not installed by default, please use 'doas' instead."
+}

@@ -11,7 +11,7 @@ import textwrap
 NOTE = textwrap.dedent("""
     This script's output provides a mustache-ready datasource to alpine-mksite
     (https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite) and should be
-    run after the main 'build' script has published ALL images.
+    run after the main 'build' script has released ALL images.
     STDOUT from this script should be saved as 'cloud/releases.yaml' in the
     above alpine-mksite repo.
     """)

@@ -87,7 +87,7 @@ configs = ImageConfigManager(
     log='gen_mksite_releases'
 )
 # make sure images.yaml is up-to-date with reality
-configs.refresh_state('final')
+configs.refresh_state('final', skip=['edge'])
 
 yaml = YAML()

@@ -97,19 +97,22 @@ data = {}
 
 log.info('Transforming image data')
 for i_key, i_cfg in configs.get().items():
-    if not i_cfg.published:
+    if not i_cfg.released:
         continue
 
+    released = i_cfg.uploaded.split('T')[0]
+
     version = i_cfg.version
     if version == 'edge':
         continue
 
-    image_name = i_cfg.image_name
     release = i_cfg.release
     arch = i_cfg.arch
     firmware = i_cfg.firmware
     bootstrap = i_cfg.bootstrap
     cloud = i_cfg.cloud
+    # key on "variant" (but do not include cloud!)
+    variant = f"{release} {arch} {firmware} {bootstrap}"
 
     if cloud not in filters['clouds']:
         filters['clouds'][cloud] = {

@@ -140,15 +143,17 @@ for i_key, i_cfg in configs.get().items():
             'release': release,
             'end_of_life': i_cfg.end_of_life,
         }
-    versions[version]['images'][image_name] |= {
-        'image_name': image_name,
+    versions[version]['images'][variant] |= {
+        'variant': variant,
         'arch': arch,
         'firmware': firmware,
         'bootstrap': bootstrap,
-        'published': i_cfg.published.split('T')[0],  # just the date
+        #'released': i_cfg.released.split('T')[0],  # just the date
+        'released': released
     }
-    versions[version]['images'][image_name]['downloads'][cloud] |= {
+    versions[version]['images'][variant]['downloads'][cloud] |= {
         'cloud': cloud,
         'image_name': i_cfg.image_name,
         'image_format': i_cfg.image_format,
         'image_url': i_cfg.download_url + '/' + (i_cfg.image_name)
     }

@@ -168,7 +173,7 @@ for i_key, i_cfg in configs.get().items():
         if cloud not in filters['regions'][region]['clouds']:
             filters['regions'][region]['clouds'].append(cloud)
 
-        versions[version]['images'][image_name]['regions'][region] |= {
+        versions[version]['images'][variant]['regions'][region] |= {
             'cloud': cloud,
             'region': region,
             'region_url': i_cfg.region_url(region, image_id),

@@ -194,21 +199,21 @@ versions = undictfactory(versions)
 for version in sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.split('.')]):
     images = versions[version].pop('images')
     i = []
-    for image_name in images:  # order as they appear in work/images.yaml
-        downloads = images[image_name].pop('downloads')
+    for variant in images:  # order as they appear in work/images.yaml
+        downloads = images[variant].pop('downloads')
         d = []
         for download in downloads:
             d.append(downloads[download])
 
-        images[image_name]['downloads'] = d
+        images[variant]['downloads'] = d
 
-        regions = images[image_name].pop('regions', [])
+        regions = images[variant].pop('regions', [])
         r = []
         for region in sorted(regions):
             r.append(regions[region])
 
-        images[image_name]['regions'] = r
-        i.append(images[image_name])
+        images[variant]['regions'] = r
+        i.append(images[variant])
 
     versions[version]['images'] = i
     data['versions'].append(versions[version])

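The sort key in the loop above splits each version string on `.` and compares the pieces numerically, so `3.9` correctly orders below `3.18`; note this assumes `edge` was filtered out earlier, since `int('edge')` would raise. A quick illustration:

```python
# Same key function as the loop above, applied to sample version strings.
versions = ['3.9', '3.18', '3.19', '3.16']
ordered = sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.split('.')])
print(ordered)  # → ['3.19', '3.18', '3.16', '3.9']
```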
@@ -42,7 +42,7 @@ import clouds
 CLOUDS = ['aws']
 LOGFORMAT = '%(asctime)s - %(levelname)s - %(message)s'
 
-RE_ALPINE = re.compile(r'^alpine-')
+RE_ALPINE = re.compile(r'^(?:aws_)?alpine-')
 RE_RELEASE = re.compile(r'-(edge|[\d\.]+)-')
 RE_REVISION = re.compile(r'-r?(\d+)$')
 RE_STUFF = re.compile(r'(edge|[\d+\.]+)(?:_rc(\d+))?-(.+)-r?(\d+)$')

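`RE_STUFF` captures the release, an optional rc number, the middle name components, and the revision once the `(?:aws_)?alpine-` prefix has been stripped. For example (the sample image name is hypothetical, following the naming scheme documented above):

```python
import re

# Same pattern as RE_STUFF in the diff above.
RE_STUFF = re.compile(r'(edge|[\d+\.]+)(?:_rc(\d+))?-(.+)-r?(\d+)$')

# Applied to an image name with the "alpine-" prefix already removed.
m = RE_STUFF.match('3.19.0-x86_64-uefi-cloudinit-r0')
print(m.groups())  # → ('3.19.0', None, 'x86_64-uefi-cloudinit', '0')
```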
@@ -142,6 +142,8 @@ for region in sorted(regions):
         last_launched_attr = image.describe_attribute(Attribute='lastLaunchedTime')['LastLaunchedTime']
         last_launched = last_launched_attr.get('Value', 'Never')
 
+        eol = None  # we don't know for sure, unless we have a deprecation time
+        if image.deprecation_time:
+            eol = time.strptime(image.deprecation_time, '%Y-%m-%dT%H:%M:%S.%fZ') < now
+
         # keep track of images

@@ -1,6 +1,5 @@
 # vim: ts=4 et:
-
 import hashlib
 import mergedeep
 import os
 import pyhocon

@@ -19,20 +18,34 @@ class ImageConfig():

     CONVERT_CMD = {
         'qcow2': ['ln', '-f'],
-        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc', '-o', 'force_size=on'],
+        'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc'],
         'raw': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw'],
     }
+    CONVERT_OPTS = {
+        None: [],
+        'vhd/fixed_force-size': ['-o', 'subformat=fixed,force_size'],
+        'vhd/force-size': ['-o', 'force_size=on'],
+    }
+    COMPRESS_CMD = {
+        'bz2': ['bzip2', '-c']
+    }
+    DECOMPRESS_CMD = {
+        'bz2': ['bzip2', '-dc']
+    }
     # these tags may-or-may-not exist at various times
     OPTIONAL_TAGS = [
-        'built', 'uploaded', 'imported', 'import_id', 'import_region', 'published', 'released'
+        'built', 'uploaded', 'imported', 'import_id', 'import_region',
+        'signed', 'published', 'released'
     ]
     STEPS = [
-        'local', 'upload', 'import', 'publish', 'release'
+        'local', 'upload', 'import', 'sign', 'publish', 'release'
     ]
     # we expect these to be available
     DEFAULT_OBJ = {
         'built': None,
         'uploaded': None,
         'imported': None,
+        'signed': None,
         'published': None,
         'released': None,
         'artifacts': None,

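The split of per-format options out of `CONVERT_CMD` into a `CONVERT_OPTS` table means the final command line is composed from both. A standalone sketch of that composition (paths are hypothetical; nothing here shells out to qemu-img):

```python
# Tables as introduced in the hunk above
CONVERT_CMD = {
    'qcow2': ['ln', '-f'],
    'vhd': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'vpc'],
    'raw': ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw'],
}
CONVERT_OPTS = {
    None: [],
    'vhd/fixed_force-size': ['-o', 'subformat=fixed,force_size'],
    'vhd/force-size': ['-o', 'force_size=on'],
}

def convert_cmdline(image_format, image_format_opts, src, dest):
    # base command + format-specific options + positional src/dest
    return CONVERT_CMD[image_format] + CONVERT_OPTS[image_format_opts] + [src, dest]

cmd = convert_cmdline('vhd', 'vhd/force-size', 'image.qcow2', 'image.vhd')
```
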
@@ -110,7 +123,7 @@ class ImageConfig():
             'end_of_life': self.end_of_life,
             'firmware': self.firmware,
             'image_key': self.image_key,
-            'name': self.image_name,
+            'name': self.image_name.replace(self.cloud + '_', '', 1),
             'project': self.project,
             'release': self.release,
             'revision': self.revision,

@@ -161,6 +174,7 @@ class ImageConfig():
         self.name = '-'.join(self.name)
         self.description = ' '.join(self.description)
         self.repo_keys = ' '.join(self.repo_keys)
+        self._resolve_disk_size()
         self._resolve_motd()
         self._resolve_urls()
         self._stringify_repos()

@@ -170,6 +184,9 @@ class ImageConfig():
         self._stringify_dict_keys('kernel_options', ' ')
         self._stringify_dict_keys('initfs_features', ' ')

+    def _resolve_disk_size(self):
+        self.disk_size = str(sum(self.disk_size)) + 'M'
+
     def _resolve_motd(self):
         # merge release notes, as apporpriate
         if 'release_notes' not in self.motd or not self.release_notes:

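`_resolve_disk_size()` collapses the `disk_size` array into a single MiB string, which is how dimension variants can "bump up" a base size. A minimal sketch:

```python
# Standalone version of the one-liner above: e.g. a 1 GiB base (1024 MiB)
# plus a hypothetical 512 MiB bump from another dimension variant.
def resolve_disk_size(disk_size):
    return str(sum(disk_size)) + 'M'

size = resolve_disk_size([1024, 512])
```
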
@@ -284,14 +301,14 @@ class ImageConfig():
         loaded.pop('Name', None)    # remove special AWS tag
         self.__dict__ |= loaded

-    def refresh_state(self, step, revise=False):
+    def refresh_state(self, step, disable=[], revise=False):
         log = self._log
         actions = {}
         undo = {}

         # enable initial set of possible actions based on specified step
         for s in self.STEPS:
-            if self._is_step_or_earlier(s, step):
+            if self._is_step_or_earlier(s, step) and s not in disable:
                 actions[s] = True

         # sets the latest revision metadata (from storage and local)

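The new `disable` parameter lets a step remain in the pipeline order while being skipped. A simplified sketch of the gating logic (using a plain index comparison in place of `_is_step_or_earlier`, which is an assumption about its behavior):

```python
STEPS = ['local', 'upload', 'import', 'sign', 'publish', 'release']

def enabled_actions(step, disable=[]):
    # every step up to and including the target is an action,
    # unless it was explicitly disabled (e.g. skipping 'sign')
    cutoff = STEPS.index(step)
    return {s: True for i, s in enumerate(STEPS) if i <= cutoff and s not in disable}

actions = enabled_actions('publish', disable=['sign'])
```
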
@@ -386,49 +403,82 @@ class ImageConfig():

         return self._storage

-    def _save_checksum(self, file):
-        self._log.info("Calculating checksum for '%s'", file)
-        sha512_hash = hashlib.sha512()
-        with open(file, 'rb') as f:
-            for block in iter(lambda: f.read(4096), b''):
-                sha512_hash.update(block)
-
-        with open(str(file) + '.sha512', 'w') as f:
-            print(sha512_hash.hexdigest(), file=f)
+    @property
+    def convert_opts(self):
+        if 'image_format_opts' in self.__dict__:
+            return self.CONVERT_OPTS[self.image_format_opts]
+
+        return []

     # convert local QCOW2 to format appropriate for a cloud
     def convert_image(self):
         self._log.info('Converting %s to %s', self.local_image, self.image_path)
         run(
-            self.CONVERT_CMD[self.image_format] + [self.local_image, self.image_path],
+            self.CONVERT_CMD[self.image_format] + self.convert_opts
+                + [self.local_image, self.image_path],
             log=self._log, errmsg='Unable to convert %s to %s',
             errvals=[self.local_image, self.image_path]
         )
-        self._save_checksum(self.image_path)
+        #self._save_checksum(self.image_path)
         self.built = datetime.utcnow().isoformat()

     def upload_image(self):
-        self.storage.store(
-            self.image_file,
-            self.image_file + '.sha512'
-        )
+        # TODO: compress here? upload that instead
+        self.storage.store(self.image_file, checksum=True)
         self.uploaded = datetime.utcnow().isoformat()

     def retrieve_image(self):
         self._log.info('Retrieving %s from storage', self.image_file)
-        self.storage.retrieve(
-            self.image_file
-        )
+        # TODO: try downloading compressed and decompressed?
+        self.storage.retrieve(self.image_file)  #, checksum=True)
+        # TODO: decompress compressed if exists

     def remove_image(self):
         self.storage.remove(
-            #self.image_file + '*',
-            #self.metadata_file + '*')
+            # TODO: self.image_compressed, .asc, .sha512
+            self.image_file,
+            self.image_file + '.asc',
+            self.image_file + '.sha512',
+            self.metadata_file,
+            self.metadata_file + '.sha512'
+        )
+
+    def sign_image(self):
+        log = self._log
+        if 'signing_cmd' not in self.__dict__:
+            log.warning("No 'signing_cmd' set, not signing image.")
+            return
+
+        # TODO: sign compressed file?
+        cmd = self.signing_cmd.format(file=self.image_path).split(' ')
+        log.info(f'Signing {self.image_file}...')
+        log.debug(cmd)
+        run(
+            cmd, log=log, errmsg='Unable to sign image: %s',
+            errvals=[self.image_file]
+        )
+        self.signed = datetime.utcnow().isoformat()
+        # TODO?: self.signed_by? self.signed_fingerprint?
+        self.storage.store(self.image_file + '.asc')
+
+    def release_image(self):
+        log = self._log
+        if 'release_cmd' not in self.__dict__:
+            log.warning("No 'release_cmd' set, not releasing image.")
+            return
+
+        base=self.image_name
+        cmd = self.release_cmd.format(
+            **self.__dict__, v_version=self.v_version,
+            base=base
+        ).split(' ')
+        log.info(f'releasing {base}...')
+        run(
+            cmd, log=log, errmsg='Unable to release image: %s',
+            errvals=[self.image_name]
+        )
+        self.released = datetime.utcnow().isoformat()

     def save_metadata(self, action):

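The new `sign_image()` expands a user-configured `signing_cmd` template with `str.format()` and splits it into an argv list for `run()`. A sketch of just that expansion (the gpg command and path below are illustrative assumptions, not values mandated by the code above):

```python
# Hypothetical signing_cmd as it might appear in configuration
signing_cmd = 'gpg --detach-sign --armor {file}'
image_path = 'work/images/alpine.qcow2'  # hypothetical path

# Same expansion the method performs; note a naive split(' ') means
# paths containing spaces would break the argv list.
cmd = signing_cmd.format(file=image_path).split(' ')
```
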
@@ -442,12 +492,8 @@ class ImageConfig():
         }
         metadata_path = self.local_dir / self.metadata_file
         self._yaml.dump(metadata, metadata_path)
-        self._save_checksum(metadata_path)
         if action != 'local' and self.storage:
-            self.storage.store(
-                self.metadata_file,
-                self.metadata_file + '.sha512'
-            )
+            self.storage.store(self.metadata_file, checksum=True)

     def load_metadata(self, step):
         new = True

@@ -27,6 +27,7 @@ class ImageConfigManager():
         self.yaml = YAML()
         self.yaml.register_class(ImageConfig)
         self.yaml.explicit_start = True
+        self.yaml.width = 1000
         # hide !ImageConfig tag from Packer
         self.yaml.representer.org_represent_mapping = self.yaml.representer.represent_mapping
         self.yaml.representer.represent_mapping = self._strip_yaml_tag_type

@@ -144,15 +145,16 @@ class ImageConfigManager():
     def _set_version_release(self, v, c):
         info = self.alpine.version_info(v)
         c.put('release', info['release'])
-        c.put('end_of_life', info['end_of_life'])
         c.put('release_notes', info['notes'])
+        if 'end_of_life' not in c:
+            c.put('end_of_life', info['end_of_life'])

         # release is also appended to name & description arrays
         c.put('name', [c.release])
         c.put('description', [c.release])

     # update current config status
-    def refresh_state(self, step, only=[], skip=[], revise=False):
+    def refresh_state(self, step, disable=[], revise=False, only=[], skip=[]):
         self.log.info('Refreshing State')
         has_actions = False
         for ic in self._configs.values():

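The guard around `end_of_life` keeps an explicitly configured value from being clobbered by the release data. A dict-based sketch of the same idea (sample values are made up):

```python
def set_version_release(c, info):
    # always refresh release and notes, but only fill in end_of_life
    # when the config didn't already pin one
    c['release'] = info['release']
    c['release_notes'] = info['notes']
    if 'end_of_life' not in c:
        c['end_of_life'] = info['end_of_life']
    return c

c = set_version_release(
    {'end_of_life': '2025-11-01'},   # explicitly configured, must survive
    {'release': '3.15.4', 'notes': '...', 'end_of_life': '2023-11-01'},
)
```
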
@@ -169,7 +171,7 @@ class ImageConfigManager():
                 self.log.debug('%s SKIPPED, matches --skip', ic.config_key)
                 continue

-            ic.refresh_state(step, revise)
+            ic.refresh_state(step, disable, revise)
             if not has_actions and len(ic.actions):
                 has_actions = True

@@ -1,5 +1,7 @@
 # vim: ts=4 et:

+import copy
+import hashlib
 import shutil
 import os

@@ -63,18 +65,45 @@ class ImageStorage():
             'user': url.username + '@' if url.username else '',
         })

-    def store(self, *files):
+    def _checksum(self, file, save=False):
         log = self.log
         src = self.local
+        log.debug("Calculating checksum for '%s'", file)
+        sha512_hash = hashlib.sha512()
+        with open(src / file, 'rb') as f:
+            for block in iter(lambda: f.read(4096), b''):
+                sha512_hash.update(block)
+
+        checksum = sha512_hash.hexdigest()
+        if save:
+            log.debug("Saving '%s'", file + '.sha512')
+            with open(str(src / file) + '.sha512', 'w') as f:
+                print(checksum, file=f)
+
+        return checksum
+
+    def store(self, *files, checksum=False):
+        log = self.log
+        src = self.local
+        dest = self.remote
+
+        # take care of any globbing in file list
+        files = [Path(p).name for p in sum([glob(str(src / f)) for f in files], [])]
+
         if not files:
             log.debug('No files to store')
             return

-        src = self.local
-        dest = self.remote
+        if checksum:
+            log.info('Creating checksum(s) for %s', files)
+            for f in copy.copy(files):
+                self._checksum(f, save=True)
+                files.append(f + '.sha512')
+
         log.info('Storing %s', files)
         if self.scheme == 'file':
             dest.mkdir(parents=True, exist_ok=True)
             for file in files:
-                log.info('Storing %s', dest / file)
                 shutil.copy2(src / file, dest / file)
             return

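The new `_checksum()` helper streams the file in 4 KiB blocks and optionally writes a `.sha512` sidecar next to it. A self-contained sketch of the same pattern (file name and contents are made up):

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path, save=False):
    # stream in 4 KiB blocks so large images don't need to fit in memory
    sha512_hash = hashlib.sha512()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(4096), b''):
            sha512_hash.update(block)
    digest = sha512_hash.hexdigest()
    if save:
        # sidecar file, matching the <file>.sha512 convention above
        Path(str(path) + '.sha512').write_text(digest + '\n')
    return digest

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / 'image.qcow2'
    p.write_bytes(b'not really an image')
    digest = checksum(p, save=True)
    sidecar = Path(str(p) + '.sha512').read_text().strip()
```
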
@@ -97,8 +126,9 @@ class ImageStorage():
             log=log, errmsg='Failed to store files'
         )

-    def retrieve(self, *files):
+    def retrieve(self, *files, checksum=False):
         log = self.log
+        # TODO? use list()
         if not files:
             log.debug('No files to retrieve')
             return

@@ -158,6 +188,7 @@ class ImageStorage():

     def remove(self, *files):
         log = self.log
+        # TODO? use list()
         if not files:
             log.debug('No files to remove')
             return

@@ -72,6 +72,7 @@ parser.add_argument('--really', action='store_true', help='really prune images')
 parser.add_argument('--cloud', choices=CLOUDS, required=True, help='cloud provider')
 parser.add_argument('--region', help='specific region, instead of all regions')
 # what to prune...
+parser.add_argument('--bad-name', action='store_true')
 parser.add_argument('--private', action='store_true')
 parser.add_argument('--edge-eol', action='store_true')
 parser.add_argument('--rc', action='store_true')

@@ -135,6 +136,12 @@ for region in sorted(regions):
     for id, image in images.items():
         name = image['name']

+        if args.bad_name and not name.startswith('alpine-'):
+            log.info(f"{region}\tBAD_NAME\t{name}")
+            removes[region][id] = image
+            summary[region]['BAD_NAME'][id] = name
+            continue
+
         if args.private and image['private']:
             log.info(f"{region}\tPRIVATE\t{name}")
             removes[region][id] = image

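The `--bad-name` pruner simply flags anything that doesn't follow the `alpine-` naming convention. A tiny sketch over hypothetical AMI names:

```python
# Hypothetical image names in a region
names = [
    'alpine-3.15.4-x86_64-bios-r0',
    'test-image-1',                  # would be flagged BAD_NAME
    'alpine-edge-aarch64-r3',
]
bad = [n for n in names if not n.startswith('alpine-')]
```
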
@@ -34,6 +34,9 @@ cleanup() {
         "$TARGET/proc" \
         "$TARGET/sys"

+    einfo "*** Volume Usage ***"
+    du -sh "$TARGET"
+
     umount "$TARGET"
 }

@@ -3,6 +3,11 @@

 [ -z "$DEBUG" ] || [ "$DEBUG" = 0 ] || set -x

+CONSOLE=ttyS0
+if [ "$ARCH" = "aarch64" ] && [ "$CLOUD" != "aws" ]; then
+    CONSOLE=ttyAMA0
+fi
+
 export \
     DEVICE=/dev/vda \
     TARGET=/mnt \

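The console selection above can be expressed as a small predicate: aarch64 images get `ttyAMA0`, except on AWS, which (per the hunk) stays on `ttyS0`. A Python sketch of the same logic:

```python
def pick_console(arch, cloud):
    # mirrors the shell conditional: aarch64 off-AWS uses ttyAMA0
    if arch == 'aarch64' and cloud != 'aws':
        return 'ttyAMA0'
    return 'ttyS0'
```
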
@@ -140,14 +145,12 @@ install_extlinux() {
     #
     # Shorten timeout (1/10s), eliminating delays for instance launches.
     #
-    # ttyS0 is for EC2 Console "Get system log" and "EC2 Serial Console"
-    # features, whereas tty0 is for "Get Instance screenshot" feature. Enabling
-    # the port early in extlinux gives the most complete output in the log.
+    # Enabling console port early in extlinux gives the most complete output.
     #
     # TODO: review for other clouds -- this may need to be cloud-specific.
     sed -Ei -e "s|^[# ]*(root)=.*|\1=LABEL=/|" \
         -e "s|^[# ]*(default_kernel_opts)=.*|\1=\"$KERNEL_OPTIONS\"|" \
-        -e "s|^[# ]*(serial_port)=.*|\1=ttyS0|" \
+        -e "s|^[# ]*(serial_port)=.*|\1=$CONSOLE|" \
         -e "s|^[# ]*(modules)=.*|\1=$KERNEL_MODULES|" \
         -e "s|^[# ]*(default)=.*|\1=virt|" \
         -e "s|^[# ]*(timeout)=.*|\1=1|" \

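The `serial_port` rewrite above is a standard sed in-place substitution. For readers more at home in Python, a `re.sub` analogue over sample config text (the config lines below are illustrative, not an actual extlinux.conf):

```python
import re

console = 'ttyAMA0'  # hypothetical value of $CONSOLE
conf = '# serial_port=ttyS0\ndefault=lts\ntimeout=50\n'

# (?m) makes ^/$ match per line, like sed addressing whole lines;
# [# ]* also strips a leading comment marker, uncommenting the setting
conf = re.sub(r'(?m)^[# ]*(serial_port)=.*$', rf'\1={console}', conf)
```
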
@@ -201,8 +204,9 @@ configure_system() {
         cat "$SETUP/fstab.grub-efi" >> "$TARGET/etc/fstab"
     fi

-    # Disable getty for physical ttys, enable getty for serial ttyS0.
-    sed -Ei -e '/^tty[0-9]/s/^/#/' -e '/^#ttyS0:/s/^#//' "$TARGET/etc/inittab"
+    # Disable getty for physical ttys, enable getty for serial console.
+    sed -Ei -e '/^tty[0-9]/s/^/#/' -e "s/ttyS0/$CONSOLE/g" \
+        -e "/^#$CONSOLE:/s/^#//" "$TARGET/etc/inittab"

     # setup sudo and/or doas
     if grep -q '^sudo$' "$TARGET/etc/apk/world"; then

@@ -2,4 +2,5 @@ GRUB_CMDLINE_LINUX_DEFAULT="modules=$KERNEL_MODULES $KERNEL_OPTIONS"
 GRUB_DISABLE_RECOVERY=true
 GRUB_DISABLE_SUBMENU=y
+GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
 GRUB_TERMINAL="serial console"
 GRUB_TIMEOUT=0