Squashed 'alpine-cloud-images/' changes from 064a526..ae2361d

ae2361d Make Storage Authoritative for State
91082fb various fixes pre-3.18.4

git-subtree-dir: alpine-cloud-images
git-subtree-split: ae2361d9becd3f0bf4c2d3510f4fb126fff3d8fa

This commit is contained in:
  parent 469804206a
  commit da6fb3afd1
.gitignore (vendored) | 7

@@ -1,7 +1,8 @@
-*~
-*.bak
-*.swp
+**/*~
+**/*.bak
+**/*.swp
 .DS_Store
 .vscode/
 /work/
 releases*yaml
+/*.yaml
README.md | 36

@@ -32,10 +32,13 @@ tagged on the images...
 | bootstrap | initial bootstrap system (`tiny` = Tiny Cloud) |
 | cloud | provider short name (`aws`) |
 | revision | image revision number |
 | built | image build timestamp |
+| uploaded | image storage timestamp |
 | imported | image import timestamp |
+| import_id | imported image id |
+| import_region | imported image region |
 | published | image publication timestamp |
 | released | image release timestamp _(won't be set until second publish)_ |
 | description | image description |
 
 Although AWS does not allow cross-account filtering by tags, the image name can
@@ -61,12 +64,22 @@ To get just the most recent matching image, use...
 
 The build system consists of a number of components:
 
-* the primary `build` script
+* the primary `build` script, and other related libararies...
+  * `clouds/` - specific cloud provider plugins
+  * `alpine.py` - for getting the latest Alpine information
+  * `image_config_manager.py` - manages collection of image configs
+  * `image_config.py` - individual image config functionality
+  * `image_storage.py` - persistent image/metadata storage
+  * `image_tags.py` - classes for working with image tags
+
 * the `configs/` directory, defining the set of images to be built
+
 * the `scripts/` directory, containing scripts and related data used to set up
   image contents during provisioning
+
 * the Packer `alpine.pkr.hcl`, which orchestrates build, import, and publishing
   of images
+
 * the `cloud_helper.py` script that Packer runs in order to do cloud-specific
   import and publish operations
@@ -102,8 +115,8 @@ usage: build [-h] [--debug] [--clean] [--pad-uefi-bin-arch ARCH [ARCH ...]]
 
 positional arguments:  (build up to and including this step)
   configs     resolve image build configuration
-  state       refresh current image build state
-  rollback    remove existing local/uploaded/imported images if un-published/released
+  state       report current build state of images
+  rollback    remove local/uploaded/imported images if not published or released
   local       build images locally
   upload      upload images and metadata to storage
 * import      import local images to cloud provider default region (*)
@@ -121,8 +134,7 @@ optional arguments:
   --custom DIR [DIR ...]  overlay custom directory in work environment
   --skip KEY [KEY ...]    skip variants with dimension key(s)
   --only KEY [KEY ...]    only variants with dimension key(s)
-  --revise                remove existing local/uploaded/imported images if
-                          un-published/released, or bump revision and rebuild
+  --revise                bump revision and rebuild if published or released
   --use-broker            use the identity broker to get credentials
   --no-color              turn off Packer color output
   --parallel N            build N images in parallel
@@ -155,13 +167,11 @@ determines what actions need to be taken, and updates `work/images.yaml`. A
 subset of image builds can be targeted by using the `--skip` and `--only`
 arguments.
 
-The `rollback` step, when used with `--revise` argument indicates that any
-_unpublished_ and _unreleased_ local, imported, or uploaded images should be
-removed and rebuilt.
+The `rollback` step will remove any imported, uploaded, or local images, but
+only if they are _unpublished_ and _unreleased_.
 
-As _published_ and _released_ images can't be removed, `--revise` can be used
-with `configs` or `state` to increment the _`revision`_ value to rebuild newly
-revised images.
+As _published_ and _released_ images can't be rolled back, `--revise` can be
+used to increment the _`revision`_ value to rebuild newly revised images.
 
 `local`, `upload`, `import`, `publish`, and `release` steps are orchestrated by
 Packer. By default, each image will be processed serially; providing the
@@ -193,7 +203,9 @@ in all regions where the image has been published.
 providers where this does not make sense (i.e. NoCloud) or for those which
 it has not yet been coded.
 
-The `release` step marks the images as being fully released.
+The `release` step simply marks the images as being fully released. _(For the
+offical Alpine releases, we have a `gen_mksite_release.py` script to convert
+the image data to a format that can be used by https://alpinelinux.org/cloud.)_
 
 ### The `cloud_helper.py` Script
 
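The cumulative step semantics in the README's usage text ("build up to and
including this step") boil down to index comparisons over an ordered list of
step names. A minimal sketch of that gating, assuming the step names shown
above (the real logic lives in `image_config.py`):

    # steps are ordered; a target step enables itself and everything before it
    STEPS = ['local', 'upload', 'import', 'publish', 'release']

    def enabled_steps(target):
        """Return the steps a given target step implies."""
        return STEPS[:STEPS.index(target) + 1]

    assert enabled_steps('import') == ['local', 'upload', 'import']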
TODO.md (new file) | 4

@@ -0,0 +1,4 @@
+* clean up cloud modules now that `get_latest_imported_tags` isn't needed
+
+* do we still need to set `ntp_server` for AWS images, starting with 3.18.4?
+  _(or is this now handled via `dhcpcd`?)_
build | 13

@@ -228,8 +228,8 @@ parser.add_argument(
     default=[], help='only variants with dimension key(s)')
 parser.add_argument(
     '--revise', action='store_true',
-    help='remove existing local/uploaded/imported image, or bump revision and '
-        ' rebuild if published or released')
+    help='bump revision and rebuild if published or released')
+# --revise is not needed after new revision is uploaded
 parser.add_argument(
     '--use-broker', action='store_true',
     help='use the identity broker to get credentials')
@@ -256,9 +256,9 @@ console.setFormatter(logfmt)
 log.addHandler(console)
 log.debug(args)
 
-if args.step == 'rollback':
-    log.warning('"rollback" step enables --revise option')
-    args.revise = True
+if args.step == 'rollback' and args.revise:
+    log.error('"rollback" step does not support --revise option')
+    sys.exit(1)
 
 # set up credential provider, if we're going to use it
 if args.use_broker:
@@ -292,7 +292,7 @@ if not image_configs.refresh_state(
     log.info('No pending actions to take at this time.')
     sys.exit(0)
 
-if args.step == 'state' or args.step == 'rollback':
+if args.step == 'state':
     sys.exit(0)
 
 # install firmware if missing
@@ -339,7 +339,6 @@ if p.returncode != 0:
 log.info('Packer Completed')
 
-# update final state in work/images.yaml
 # TODO: do we need to do all of this or just save all the image_configs?
 image_configs.refresh_state(
     step='final',
     only=args.only,
@@ -76,23 +76,26 @@ yaml.explicit_start = True
 
 for image_key in args.image_keys:
     image_config = configs.get(image_key)
+    image_config.load_local_metadata()    # if it exists
 
     if args.action == 'local':
         image_config.convert_image()
 
     elif args.action == 'upload':
         if image_config.storage:
             image_config.upload_image()
 
-    elif args.action == 'import':
+    elif args.action == 'import' and 'import' in clouds.actions(image_config):
+        # if we don't have the image locally, retrieve it from storage
+        if not image_config.image_path.exists():
+            image_config.retrieve_image()
+
         clouds.import_image(image_config)
 
-    elif args.action == 'publish':
+    elif args.action == 'publish' and 'publish' in clouds.actions(image_config):
        clouds.publish_image(image_config)
 
     elif args.action == 'release':
-        pass
-        # TODO: image_config.release_image() - configurable steps to take on remote host
+        image_config.release_image()
 
     # save per-image metadata
     image_config.save_metadata(args.action)
@@ -31,7 +31,7 @@ def set_credential_provider(debug=False):
 
 ### forward to the correct adapter
 
-# TODO: latest_imported_tags(...)
+# TODO: deprexcate/remove
 def get_latest_imported_tags(config):
     return ADAPTERS[config.cloud].get_latest_imported_tags(
         config.project,
@@ -49,3 +49,7 @@ def delete_image(config, image_id):
 
 def publish_image(config):
     return ADAPTERS[config.cloud].publish_image(config)
+
+# supported actions
+def actions(config):
+    return ADAPTERS[config.cloud].ACTIONS
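The new module-level `actions()` accessor is the capability check that lets
`cloud_helper.py` and `image_config.py` skip cloud-side steps a provider can't
perform. A minimal sketch of the pattern; the `NoCloudAdapter` class here is
invented for illustration (only the AWS adapter declares ACTIONS in this
commit), and the real adapters take constructor arguments omitted here:

    class CloudAdapterInterface:
        ACTIONS = []    # default: no cloud-side actions

    class AWSCloudAdapter(CloudAdapterInterface):
        ACTIONS = ['import', 'publish']

    class NoCloudAdapter(CloudAdapterInterface):
        ACTIONS = []    # images are only built and uploaded, never imported

    ADAPTERS = {'aws': AWSCloudAdapter(), 'nocloud': NoCloudAdapter()}

    def actions(config):
        return ADAPTERS[config.cloud].ACTIONS

    # callers then gate steps on the advertised capabilities, e.g.:
    #   if 'import' in actions(image_config): ...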
@@ -30,6 +30,10 @@ class AWSCloudAdapter(CloudAdapterInterface):
         'bios': 'legacy-bios',
         'uefi': 'uefi',
     }
+    ACTIONS = [
+        'import',
+        'publish',
+    ]
 
     @property
     def sdk(self):
@@ -96,6 +100,7 @@ class AWSCloudAdapter(CloudAdapterInterface):
             tags = ImageTags(from_list=i.tags)
         return DictObj({k: tags.get(k, None) for k in self.IMAGE_INFO})
 
+    # TODO: deprectate/remove
     # get the latest imported image's tags for a given build key
     def get_latest_imported_tags(self, project, image_key):
         images = self._get_images_with_tags(
@@ -222,6 +227,10 @@ class AWSCloudAdapter(CloudAdapterInterface):
             tags.import_id = image_id
             tags.import_region = ec2c.meta.region_name
             image.create_tags(Tags=tags.as_list())
+            # update image config with import information
+            ic.imported = tags.imported
+            ic.import_id = tags.import_id
+            ic.import_region = tags.import_region
         except Exception:
             log.error('Unable to tag image:', exc_info=True)
             log.info('Removing image and snapshot')
@@ -229,9 +238,6 @@ class AWSCloudAdapter(CloudAdapterInterface):
             snapshot.delete()
             raise
 
-        # update ImageConfig with imported tag values, minus special AWS 'Name'
-        tags.pop('Name', None)
-        ic.__dict__ |= tags
 
         # delete an (unpublished) image
         def delete_image(self, image_id):
@@ -390,7 +396,9 @@ class AWSCloudAdapter(CloudAdapterInterface):
                 time.sleep(copy_wait)
                 copy_wait = 30
 
+        # update image config with published information
+        ic.artifacts = artifacts
         ic.published = datetime.utcnow().isoformat()
 
 
 def register(cloud, cred_provider=None):
@@ -2,6 +2,8 @@
 
 class CloudAdapterInterface:
 
+    ACTIONS = []
+
     def __init__(self, cloud, cred_provider=None):
         self._sdk = None
         self._sessions = {}
@@ -55,7 +55,7 @@ Default {
   # profile build matrix
   Dimensions {
     version {
-      "3.18" { include required("version/3.17.conf") }
+      "3.18" { include required("version/3.18.conf") }
       "3.17" { include required("version/3.17.conf") }
       "3.16" { include required("version/3.16.conf") }
       "3.15" { include required("version/3.15.conf") }
@@ -94,12 +94,4 @@ Mandatory {
 
   # final provisioning script
   scripts = [ cleanup ]
-
-  # TODO: remove this after testing
-  #access.PUBLIC = false
-  #regions {
-  #  ALL = false
-  #  us-west-2 = true
-  #  us-east-1 = true
-  #}
 }
@@ -9,6 +9,7 @@ EXCLUDE = ["3.12", "3.13", "3.14"]
 packages {
   cloud-init = true
   dhclient = true   # offically supported, for now
+  dhcpcd = null     # unsupported, for now
   openssh-server-pam = true
   e2fsprogs-extra = true    # for resize2fs
 }
@@ -6,9 +6,9 @@ bootstrap_url = "https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud"
 WHEN {
   "3.13 3.14 3.15 3.16 3.17" {
     # tiny-cloud < 3.0.0 doesn't have --setup option
-    boot.tiny-cloud-early = true
-    default.tiny-cloud = true
-    default.tiny-cloud-final = true
+    services.boot.tiny-cloud-early = true
+    services.default.tiny-cloud = true
+    services.default.tiny-cloud-final = true
   }
   aws {
     packages.tiny-cloud-aws = true
@@ -16,7 +16,7 @@ WHEN {
   "3.12" {
     # fallback to the old tiny-ec2-bootstrap package
     packages.tiny-cloud-aws = null
-    services.sysinit.tiny-cloud-early = null
+    services.boot.tiny-cloud-early = null
     services.default.tiny-cloud = null
     services.default.tiny-cloud-final = null
     packages.tiny-ec2-bootstrap = true
@@ -21,6 +21,14 @@ ntp_server = 169.254.169.123
 access.PUBLIC = true
 regions.ALL = true
 
+# limit edge publishing
+WHEN.edge {
+  access.PUBLIC = false
+  regions.ALL = false
+  regions.us-west-2 = true
+  regions.us-east-1 = true
+}
+
 cloud_region_url = "https://{region}.console.aws.amazon.com/ec2/home#Images:visibility=public-images;imageId={image_id}",
 cloud_launch_url = "https://{region}.console.aws.amazon.com/ec2/home#launchAmi={image_id}"
 
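The `WHEN.edge` block above is a dimension-keyed overlay: when an image's
version dimension is `edge`, its values are merged over the defaults, so edge
images stay private and land only in the two listed regions. A minimal sketch
of that overlay merge using plain dicts, rather than the pyhocon config trees
the build actually resolves (illustration only):

    # minimal overlay merge: nested dicts, later values win
    def overlay(base, when):
        out = dict(base)
        for k, v in when.items():
            if isinstance(v, dict) and isinstance(out.get(k), dict):
                out[k] = overlay(out[k], v)
            else:
                out[k] = v
        return out

    default = {'access': {'PUBLIC': True}, 'regions': {'ALL': True}}
    edge = {'access': {'PUBLIC': False},
            'regions': {'ALL': False, 'us-west-2': True, 'us-east-1': True}}

    # an edge image ends up private, published only to the two pinned regions
    assert overlay(default, edge)['regions'] == {
        'ALL': False, 'us-west-2': True, 'us-east-1': True}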
@@ -2,7 +2,5 @@
 
 include required("4.conf")
 
-packages {
-  # start using dhcpcd for improved IPv6 experience
-  dhcpcd = true
-}
+packages.dhcpcd = true
+
@@ -117,8 +117,6 @@ for i_key, i_cfg in configs.get().items():
             'cloud_name': i_cfg.cloud_name,
         }
 
-        filters['regions'] = {}
-
     if arch not in filters['archs']:
         filters['archs'][arch] = {
             'arch': arch,
@@ -137,17 +135,6 @@ for i_key, i_cfg in configs.get().items():
             'bootstrap_name': i_cfg.bootstrap_name,
         }
 
-    if i_cfg.artifacts:
-        for region, image_id in {r: i_cfg.artifacts[r] for r in sorted(i_cfg.artifacts)}.items():
-            if region not in filters['regions']:
-                filters['regions'][region] = {
-                    'region': region,
-                    'clouds': [cloud],
-                }
-
-            if cloud not in filters['regions'][region]['clouds']:
-                filters['regions'][region]['clouds'].append(cloud)
-
     versions[version] |= {
         'version': version,
         'release': release,
@@ -165,6 +152,22 @@ for i_key, i_cfg in configs.get().items():
                 'image_format': i_cfg.image_format,
                 'image_url': i_cfg.download_url + '/' + (i_cfg.image_name)
             }
 
+    # TODO: not all clouds will have artifacts
+    if i_cfg._get('artifacts'):
+        log.debug("ARTIFACTS: %s", i_cfg.artifacts)
+        for region, image_id in {r: i_cfg.artifacts[r] for r in sorted(i_cfg.artifacts)}.items():
+            log.debug("REGION: %s", region)
+            if region not in filters['regions']:
+                log.debug("not in filters['region']")
+                filters['regions'][region] = {
+                    'region': region,
+                    'clouds': [cloud],
+                }
+
+            if cloud not in filters['regions'][region]['clouds']:
+                filters['regions'][region]['clouds'].append(cloud)
+
+            versions[version]['images'][image_name]['regions'][region] |= {
                 'cloud': cloud,
                 'region': region,
@@ -199,7 +202,7 @@ for version in sorted(versions, reverse=True, key=lambda s: [int(u) for u in s.s
 
     images[image_name]['downloads'] = d
 
-    regions = images[image_name].pop('regions')
+    regions = images[image_name].pop('regions', [])
     r = []
     for region in sorted(regions):
         r.append(regions[region])
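The `.pop('regions')` to `.pop('regions', [])` change is the actual fix in the
last hunk: an image from a cloud that never produced per-region artifacts has
no `regions` key, and the bare `pop` raised `KeyError`. A tiny sketch of the
difference (image name invented for illustration):

    image = {'name': 'alpine-3.18.4-x86_64-bios-tiny-r0'}    # no 'regions' key

    regions = image.pop('regions', [])   # -> [] ; safe when artifacts are absent
    # image.pop('regions')               # -> KeyError: 'regions'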
get-image-cache.py (new executable file) | 188

@@ -0,0 +1,188 @@
+#!/usr/bin/env python3
+# vim: ts=4 et:
+
+# NOTE: this is an experimental work-in-progress
+
+# Ensure we're using the Python virtual env with our installed dependencies
+import os
+import sys
+import textwrap
+
+NOTE = textwrap.dedent("""
+    Experimental: Outputs image cache YAML on STDOUT for use with prune-images.py
+    """)
+
+sys.pycache_prefix = 'work/__pycache__'
+
+if not os.path.exists('work'):
+    print('FATAL: Work directory does not exist.', file=sys.stderr)
+    print(NOTE, file=sys.stderr)
+    exit(1)
+
+# Re-execute using the right virtual environment, if necessary.
+venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
+if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
+    print("Re-executing with work environment's Python...\n", file=sys.stderr)
+    os.execv(venv_args[0], venv_args)
+
+# We're now in the right Python environment
+
+import argparse
+import logging
+import re
+import time
+from collections import defaultdict
+from ruamel.yaml import YAML
+
+import clouds
+
+
+### Constants & Variables
+
+CLOUDS = ['aws']
+LOGFORMAT = '%(asctime)s - %(levelname)s - %(message)s'
+
+RE_ALPINE = re.compile(r'^alpine-')
+RE_RELEASE = re.compile(r'-(edge|[\d\.]+)-')
+RE_REVISION = re.compile(r'-r?(\d+)$')
+RE_STUFF = re.compile(r'(edge|[\d+\.]+)(?:_rc(\d+))?-(.+)-r?(\d+)$')
+
+
+### Functions
+
+# allows us to set values deep within an object that might not be fully defined
+def dictfactory():
+    return defaultdict(dictfactory)
+
+
+# undo dictfactory() objects to normal objects
+def undictfactory(o):
+    if isinstance(o, defaultdict):
+        o = {k: undictfactory(v) for k, v in o.items()}
+    return o
+
+
+### Command Line & Logging
+
+parser = argparse.ArgumentParser(description=NOTE)
+parser.add_argument('--debug', action='store_true', help='enable debug output')
+parser.add_argument('--cloud', choices=CLOUDS, required=True, help='cloud provider')
+parser.add_argument('--region', help='specific region, instead of all regions')
+parser.add_argument(
+    '--use-broker', action='store_true',
+    help='use the identity broker to get credentials')
+args = parser.parse_args()
+
+log = logging.getLogger()
+log.setLevel(logging.DEBUG if args.debug else logging.INFO)
+console = logging.StreamHandler()
+logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
+logfmt.converter = time.gmtime
+console.setFormatter(logfmt)
+log.addHandler(console)
+log.debug(args)
+
+# set up credential provider, if we're going to use it
+if args.use_broker:
+    clouds.set_credential_provider(debug=args.debug)
+
+# what region(s)?
+regions = clouds.ADAPTERS[args.cloud].regions
+if args.region:
+    if args.region not in regions:
+        log.error('invalid region: %s', args.region)
+        exit(1)
+    else:
+        regions = [args.region]
+
+filters = {
+    'Owners': ['self'],
+    'Filters': [
+        {'Name': 'state', 'Values': ['available']},
+    ]
+}
+
+data = dictfactory()
+now = time.gmtime()
+
+for region in sorted(regions):
+    # TODO: make more generic if we need to do this for other clouds someday
+    ec2r = clouds.ADAPTERS[args.cloud].session(region).resource('ec2')
+    images = sorted(ec2r.images.filter(**filters), key=lambda k: k.creation_date)
+    log.info(f'--- {region} : {len(images)} ---')
+    version = release = revision = None
+
+    for image in images:
+        latest = data[region]['latest']     # shortcut
+
+        # information about the image
+        id = image.id
+        name = image.name
+
+        # only consider images named /^alpine-/
+        if not RE_ALPINE.search(image.name):
+            log.warning(f'IGNORING {region}\t{id}\t{name}')
+            continue
+
+        # parse image name for more information
+        # NOTE: we can't rely on tags, because they may not have been set successfully
+        m = RE_STUFF.search(name)
+        if not m:
+            log.error(f'!PARSE\t{region}\t{id}\t{name}')
+            continue
+
+        release = m.group(1)
+        rc = m.group(2)
+        version = '.'.join(release.split('.')[0:2])
+        variant = m.group(3)
+        revision = m.group(4)
+        variant_key = '-'.join([version, variant])
+        release_key = revision if release == 'edge' else '-'.join([release, revision])
+
+        last_launched_attr = image.describe_attribute(Attribute='lastLaunchedTime')['LastLaunchedTime']
+        last_launched = last_launched_attr.get('Value', 'Never')
+
+        eol = time.strptime(image.deprecation_time, '%Y-%m-%dT%H:%M:%S.%fZ') < now
+
+        # keep track of images
+        data[region]['images'][id] = {
+            'name': name,
+            'release': release,
+            'version': version,
+            'variant': variant,
+            'revision': revision,
+            'variant_key': variant_key,
+            'release_key': release_key,
+            'created': image.creation_date,
+            'launched': last_launched,
+            'deprecated': image.deprecation_time,
+            'rc': rc is not None,
+            'eol': eol,
+            'private': not image.public,
+            'snapshot_id': image.block_device_mappings[0]['Ebs']['SnapshotId']
+        }
+
+        # keep track of the latest release_key per variant_key
+        if variant_key not in latest or (release > latest[variant_key]['release']) or (release == latest[variant_key]['release'] and [revision > latest[variant_key]['revision']]):
+            data[region]['latest'][variant_key] = {
+                'release': release,
+                'revision': revision,
+                'release_key': release_key
+            }
+
+        log.info(f'{region}\t{not image.public}\t{eol}\t{last_launched.split("T")[0]}\t{name}')
+
+# instantiate YAML
+yaml = YAML()
+yaml.explicit_start = True
+
+# TODO? dump out to a file instead of STDOUT?
+yaml.dump(undictfactory(data), sys.stdout)
+
+total = 0
+for region, rdata in sorted(data.items()):
+    count = len(rdata['images'])
+    log.info(f'{region} : {count} images')
+    total += count
+
+log.info(f'TOTAL : {total} images')
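One thing worth flagging in the "latest release_key per variant_key" condition
above: the final clause is written as `[revision > latest[variant_key]['revision']]`,
a one-element list, which is always truthy, so any image whose release matches
the recorded latest overwrites it regardless of revision (and both values are
strings, so even a fixed comparison would be lexicographic). A hedged sketch of
what the tie-break presumably intends, keeping the original's string comparison
of releases but comparing revisions numerically (helper name invented):

    def is_newer(release, revision, current):
        """True if (release, revision) beats the recorded latest entry."""
        if current is None:
            return True
        if release != current['release']:
            return release > current['release']
        return int(revision) > int(current['revision'])

    # inside the loop above, roughly:
    #   if is_newer(release, revision, latest.get(variant_key)):
    #       latest[variant_key] = {'release': release, 'revision': revision,
    #                              'release_key': release_key}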
image_config.py | 236

@@ -28,6 +28,15 @@ class ImageConfig():
     STEPS = [
         'local', 'upload', 'import', 'publish', 'release'
     ]
+    # we expect these to be available
+    DEFAULT_OBJ = {
+        'built': None,
+        'uploaded': None,
+        'imported': None,
+        'published': None,
+        'released': None,
+        'artifacts': None,
+    }
 
     def __init__(self, config_key, obj={}, log=None, yaml=None):
         self._log = log
@@ -35,7 +44,7 @@ class ImageConfig():
         self._storage = None
         self.config_key = str(config_key)
         tags = obj.pop('tags', None)
-        self.__dict__ |= self._deep_dict(obj)
+        self.__dict__ |= self.DEFAULT_OBJ | self._deep_dict(obj)
         # ensure tag values are str() when loading
         if tags:
             self.tags = tags
@@ -266,14 +275,18 @@ class ImageConfig():
 
         return self.STEPS.index(s) <= self.STEPS.index(step)
 
+    def load_local_metadata(self):
+        metadata_path = self.local_dir / self.metadata_file
+        if metadata_path.exists():
+            self._log.debug('Loading image metadata from %s', metadata_path)
+            loaded = self._yaml.load(metadata_path)
+            loaded.pop('name', None)    # don't overwrite 'name' format string!
+            loaded.pop('Name', None)    # remove special AWS tag
+            self.__dict__ |= loaded
 
-    # TODO: this needs to be sorted out for 'upload' and 'release' steps
     def refresh_state(self, step, revise=False):
         log = self._log
         actions = {}
-        revision = 0
-        step_state = step == 'state'
-        step_rollback = step == 'rollback'
         undo = {}
 
         # enable initial set of possible actions based on specified step
@@ -281,93 +294,61 @@ class ImageConfig():
             if self._is_step_or_earlier(s, step):
                 actions[s] = True
 
-        # pick up any updated image metadata
-        self.load_metadata()
+        # sets the latest revision metadata (from storage and local)
+        self.load_metadata(step)
 
-        # TODO: check storage and/or cloud - use this instead of remote_image
-        # latest_revision = self.get_latest_revision()
+        # if we're rolling back, figure out what we need to undo first
+        if step == 'rollback':
+            if self.released or self.published:
+                undo['ROLLBACK_BLOCKED'] = True
 
-        if (step_rollback or revise) and self.local_image.exists():
-            undo['local'] = True
+            else:
+                if self.imported and 'import' in clouds.actions(self):
+                    undo['import'] = True
+                    self.imported = None
 
-
-        if step_rollback:
-            if self.local_image.exists():
-                undo['local'] = True
-
-            if not self.published or self.released:
                 if self.uploaded:
                     undo['upload'] = True
+                    self.uploaded = None
 
-                if self.imported:
-                    undo['import'] = True
-
-        # TODO: rename to 'remote_tags'?
-        # if we load remote tags into state automatically, shouldn't that info already be in self?
-        remote_image = clouds.get_latest_imported_tags(self)
-        log.debug('\n%s', remote_image)
-
-        if revise:
-            if self.local_image.exists():
-                # remove previously built local image artifacts
-                log.warning('%s existing local image dir %s',
-                    'Would remove' if step_state else 'Removing',
-                    self.local_dir)
-                if not step_state:
-                    shutil.rmtree(self.local_dir)
-
-            if remote_image and remote_image.get('published', None):
-                log.warning('%s image revision for %s',
-                    'Would bump' if step_state else 'Bumping',
-                    self.image_key)
-                revision = int(remote_image.revision) + 1
-
-            elif remote_image and remote_image.get('imported', None):
-                # remove existing imported (but unpublished) image
-                log.warning('%s unpublished remote image %s',
-                    'Would remove' if step_state else 'Removing',
-                    remote_image.import_id)
-                if not step_state:
-                    clouds.delete_image(self, remote_image.import_id)
-
-                remote_image = None
-
-        elif remote_image:
-            if remote_image.get('imported', None):
-                # already imported, don't build/upload/import again
-                log.debug('%s - already imported', self.image_key)
-                actions.pop('local', None)
-                actions.pop('upload', None)
-                actions.pop('import', None)
-
-            if remote_image.get('published', None):
-                # NOTE: re-publishing can update perms or push to new regions
-                log.debug('%s - already published', self.image_key)
-
-        if self.local_image.exists():
-            # local image's already built, don't rebuild
-            log.debug('%s - already locally built', self.image_key)
-            actions.pop('local', None)
 
+                if self.built and self.local_dir.exists():
+                    undo['local'] = True
+                    self.built = None
 
-        # merge remote_image data into image state
-        if remote_image:
-            self.__dict__ |= dict(remote_image)
+        # handle --revise option, if necessary
+        if revise and (self.published or self.released):
+            # get rid of old metadata
+            (self.local_dir / self.metadata_file).unlink()
+            self.revision = int(self.revision) + 1
+            self.__dict__ |= self.DEFAULT_OBJ
+            self.__dict__.pop('import_id', None)
+            self.__dict__.pop('import_region', None)
+
+        # do we already have it built locally?
+        if self.image_path.exists():
+            # then we should use its metadata
+            self.load_local_metadata()
 
-        else:
-            self.__dict__ |= {
-                'revision': revision,
-                'uploaded': None,
-                'imported': None,
-                'import_id': None,
-                'import_region': None,
-                'published': None,
-                'artifacts': None,
-                'released': None,
-            }
-            undo['local'] = True
 
+        # after all that, let's figure out what's to be done!
+        if self.built:
+            actions.pop('local', None)
+
+        if self.uploaded:
+            actions.pop('upload', None)
+
+        if self.imported or 'import' not in clouds.actions(self):
+            actions.pop('import', None)
+
+        # NOTE: always publish (if cloud allows) to support new regions
+        if 'publish' not in clouds.actions(self):
+            actions.pop('publish', None)
+
+        # don't re-publish again if we're targeting the release step
+        elif step == 'release' and self.published:
+            actions.pop('publish', None)
 
         # remove remaining actions not possible based on specified step
         for s in self.STEPS:
@@ -375,7 +356,26 @@ class ImageConfig():
                 actions.pop(s, None)
 
         self.actions = list(actions)
-        log.info('%s/%s = %s', self.cloud, self.image_name, self.actions)
+        log.info('%s/%s = [%s]', self.cloud, self.image_name, ' '.join(self.actions))
+
+        if undo:
+            act = "Would undo" if step == 'state' else "Undoing"
+            log.warning('%s: [%s]', act, ' '.join(undo.keys()))
+
+            if step != 'state':
+                if 'import' in undo:
+                    log.warning('Deleting imported image: %s', self.import_id)
+                    clouds.delete_image(self, self.import_id)
+                    self.import_id = None
+                    self.import_region = None
+
+                if 'upload' in undo:
+                    log.warning('Removing uploaded image from storage')
+                    self.remove_image()
+
+                if 'local' in undo:
+                    log.warning('Removing local build directory')
+                    shutil.rmtree(self.local_dir)
 
         self.state_updated = datetime.utcnow().isoformat()
 
@@ -388,16 +388,11 @@ class ImageConfig():
 
     def _save_checksum(self, file):
         self._log.info("Calculating checksum for '%s'", file)
-        sha256_hash = hashlib.sha256()
         sha512_hash = hashlib.sha512()
         with open(file, 'rb') as f:
             for block in iter(lambda: f.read(4096), b''):
-                sha256_hash.update(block)
                 sha512_hash.update(block)
 
-        with open(str(file) + '.sha256', 'w') as f:
-            print(sha256_hash.hexdigest(), file=f)
-
         with open(str(file) + '.sha512', 'w') as f:
             print(sha512_hash.hexdigest(), file=f)
 
@@ -415,16 +410,30 @@ class ImageConfig():
     def upload_image(self):
         self.storage.store(
             self.image_file,
-            self.image_file + '.sha256',
             self.image_file + '.sha512'
         )
+        self.uploaded = datetime.utcnow().isoformat()
+
+    def retrieve_image(self):
+        self._log.info('Retrieving %s from storage', self.image_file)
+        self.storage.retrieve(
+            self.image_file
+        )
+
+    def remove_image(self):
+        self.storage.remove(
+            self.image_file,
+            self.image_file + '.sha512',
+            self.metadata_file,
+            self.metadata_file + '.sha512'
+        )
+
+    def release_image(self):
+        self.released = datetime.utcnow().isoformat()
 
     def save_metadata(self, action):
         os.makedirs(self.local_dir, exist_ok=True)
         self._log.info('Saving image metadata')
         # TODO: save metadata updated timestamp as metadata?
         # TODO: def self.metadata to return what we consider metadata?
         metadata = dict(self.tags)
+        self.metadata_updated = datetime.utcnow().isoformat()
         metadata |= {
@@ -437,29 +446,36 @@ class ImageConfig():
         if action != 'local' and self.storage:
             self.storage.store(
                 self.metadata_file,
-                self.metadata_file + '.sha256',
                 self.metadata_file + '.sha512'
             )
 
-    def load_metadata(self):
-        # TODO: what if we have fresh configs, but the image is already uploaded/imported?
-        # we'll need to get revision first somehow
-        if 'revision' not in self.__dict__:
-            return
+    def load_metadata(self, step):
+        new = True
+        if step != 'final':
+            # what's the latest uploaded revision?
+            revision_glob = self.name.format(**(self.__dict__ | {'revision': '*'}))
+            try:
+                revision_yamls = self.storage.list(revision_glob + '.yaml', err_ok=True)
+                new = not revision_yamls    # empty list is still new
-
-        # TODO: revision = '*' for now - or only if unknown?
+            except RuntimeError:
+                pass
 
+            latest_revision = 0
+            if not new:
+                for y in revision_yamls:
+                    yr = int(y.rstrip('.yaml').rsplit('r', 1)[1])
+                    if yr > latest_revision:
+                        latest_revision = yr
-
+            self.revision = latest_revision
 
-        # get a list of local matching <name>-r*.yaml?
         metadata_path = self.local_dir / self.metadata_file
-        if metadata_path.exists():
-            self._log.info('Loading image metadata from %s', metadata_path)
-            self.__dict__ |= self._yaml.load(metadata_path).items()
+        if step != 'final' and not new and not metadata_path.exists():
+            try:
+                self.storage.retrieve(self.metadata_file)
+            except RuntimeError as e:
+                # TODO: don't we already log an error/warning?
+                self._log.warning(f'Unable to retrieve from storage: {metadata_path}')
 
-        # get a list of storage matching <name>-r*.yaml
-        #else:
-        #    retrieve metadata (and image?) from storage_url
-
-        # else:
-        #    retrieve metadata from imported image
-
+        # if there's no stored metadata, we are in transition,
+        # get a list of imported images matching <name>-r*.yaml
+        self.load_local_metadata()    # if it exists
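The new `load_metadata(step)` is where storage becomes authoritative: the
latest revision is recovered by listing stored `<name>-r*.yaml` metadata files
and parsing the revision number off the filename. A small sketch of just that
parsing step, with file names invented for illustration:

    def latest_stored_revision(yaml_names):
        """Pick the highest -r<N> revision out of stored metadata file names."""
        latest = 0
        for y in yaml_names:
            # 'alpine-3.18.4-x86_64-bios-tiny-r3.yaml' -> 3
            r = int(y.rstrip('.yaml').rsplit('r', 1)[1])
            latest = max(latest, r)
        return latest

    assert latest_stored_revision([
        'alpine-3.18.4-x86_64-bios-tiny-r0.yaml',
        'alpine-3.18.4-x86_64-bios-tiny-r3.yaml',
    ]) == 3

(Note that `rstrip('.yaml')` strips a trailing run of those characters rather
than the literal suffix, which happens to be safe here because revision
numbers end in a digit.)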
@@ -11,22 +11,25 @@ from urllib.parse import urlparse
 from image_tags import DictObj
 
 
-def run(cmd, log, errmsg=None, errvals=[]):
+def run(cmd, log, errmsg=None, errvals=[], err_ok=False):
     # ensure command and error values are lists of strings
     cmd = [str(c) for c in cmd]
     errvals = [str(ev) for ev in errvals]
 
     log.debug('COMMAND: %s', ' '.join(cmd))
-    p = Popen(cmd, stdout=PIPE, stdin=PIPE, encoding='utf8')
+    p = Popen(cmd, stdout=PIPE, stdin=PIPE, stderr=PIPE, encoding='utf8')
     out, err = p.communicate()
     if p.returncode:
-        if errmsg:
+        if err_ok:
+            log.debug(errmsg, *errvals)
+
+        else:
             log.error(errmsg, *errvals)
 
-        log.error('COMMAND: %s', ' '.join(cmd))
-        log.error('EXIT: %d', p.returncode)
-        log.error('STDOUT:\n%s', out)
-        log.error('STDERR:\n%s', err)
+        log.debug('EXIT: %d / COMMAND: %s', p.returncode, ' '.join(cmd))
+        log.debug('STDOUT:\n%s', out)
+        log.debug('STDERR:\n%s', err)
         raise RuntimeError
 
     return out, err
@@ -105,7 +108,7 @@ class ImageStorage():
         dest.mkdir(parents=True, exist_ok=True)
         if self.scheme == 'file':
             for file in files:
-                log.info('Retrieving %s', src / file)
+                log.debug('Retrieving %s', src / file)
                 shutil.copy2(src / file, dest / file)
 
             return
@@ -115,7 +118,7 @@ class ImageStorage():
         scp = self.scp
         src_files = []
         for file in files:
-            log.info('Retrieving %s', url + '/' + file)
+            log.debug('Retrieving %s', url + '/' + file)
             src_files.append(scp.user + ':'.join([host, str(src / file)]))
 
         run(
@@ -124,7 +127,7 @@ class ImageStorage():
         )
 
     # TODO: optional files=[]?
-    def list(self, match=None):
+    def list(self, match=None, err_ok=False):
         log = self.log
         path = self.remote
         if not match:
@@ -133,27 +136,27 @@ class ImageStorage():
         files = []
         if self.scheme == 'file':
             path.mkdir(parents=True, exist_ok=True)
-            log.info('Listing of %s files in %s', match, path)
+            log.debug('Listing of %s files in %s', match, path)
             files = sorted(glob(str(path / match)), key=os.path.getmtime, reverse=True)
 
         else:
             url = self.url
             host = self.host
             ssh = self.ssh
-            log.info('Listing %s files at %s', match, url)
+            log.debug('Listing %s files at %s', match, url)
             run(
                 ['ssh'] + ssh.port + ssh.user + [host, 'mkdir', '-p', path],
-                log=log, errmsg='Unable to create path'
+                log=log, errmsg='Unable to create path', err_ok=err_ok
             )
             out, _ = run(
                 ['ssh'] + ssh.port + ssh.user + [host, 'ls', '-1drt', path / match],
-                log=log, errmsg='Failed to list files'
+                log=log, errmsg='Failed to list files', err_ok=err_ok
             )
             files = out.splitlines()
 
         return [os.path.basename(f) for f in files]
 
-    def remove(self, files):
+    def remove(self, *files):
         log = self.log
         if not files:
             log.debug('No files to remove')
@@ -163,7 +166,7 @@ class ImageStorage():
         if self.scheme == 'file':
             for file in files:
                 path = dest / file
-                log.info('Removing %s', path)
+                log.debug('Removing %s', path)
                 if path.exists():
                     path.unlink()
 
@@ -174,7 +177,7 @@ class ImageStorage():
         ssh = self.ssh
         dest_files = []
         for file in files:
-            log.info('Removing %s', url + '/' + file)
+            log.debug('Removing %s', url + '/' + file)
             dest_files.append(dest / file)
 
         run(
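The new `err_ok` flag turns a failed command from an error-level event into a
debug-level one, which is what lets `load_metadata()` probe storage for
`<name>-r*.yaml` files that may legitimately not exist yet. Note that `run()`
still raises `RuntimeError` either way; `err_ok` only changes the log level.
A sketch of the intended calling pattern (path invented for illustration):

    # probing: a missing remote path is expected, so err_ok demotes the noise
    try:
        out, _ = run(['ls', '-1', '/tmp/no-such-dir'], log=log,
                     errmsg='nothing stored yet', err_ok=True)
    except RuntimeError:
        out = ''    # the caller decides what "not there" means

    # the same failure without err_ok logs at error level before raising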
prune-images.py (new executable file) | 232

@@ -0,0 +1,232 @@
+#!/usr/bin/env python3
+# vim: ts=4 et:
+
+# NOTE: this is an experimental work-in-progress
+
+# Ensure we're using the Python virtual env with our installed dependencies
+import os
+import sys
+import textwrap
+
+NOTE = textwrap.dedent("""
+    Experimental: Given an image cache YAML file, figure out what needs to be pruned.
+    """)
+
+sys.pycache_prefix = 'work/__pycache__'
+
+if not os.path.exists('work'):
+    print('FATAL: Work directory does not exist.', file=sys.stderr)
+    print(NOTE, file=sys.stderr)
+    exit(1)
+
+# Re-execute using the right virtual environment, if necessary.
+venv_args = [os.path.join('work', 'bin', 'python3')] + sys.argv
+if os.path.join(os.getcwd(), venv_args[0]) != sys.executable:
+    print("Re-executing with work environment's Python...\n", file=sys.stderr)
+    os.execv(venv_args[0], venv_args)
+
+# We're now in the right Python environment
+
+import argparse
+import logging
+import re
+import time
+from collections import defaultdict
+from ruamel.yaml import YAML
+from pathlib import Path
+
+import clouds
+
+
+### Constants & Variables
+
+ACTIONS = ['list', 'prune']
+CLOUDS = ['aws']
+SELECTIONS = ['keep-last', 'unused', 'ALL']
+LOGFORMAT = '%(asctime)s - %(levelname)s - %(message)s'
+
+RE_ALPINE = re.compile(r'^alpine-')
+RE_RELEASE = re.compile(r'-(edge|[\d\.]+)-')
+RE_REVISION = re.compile(r'-r?(\d+)$')
+RE_STUFF = re.compile(r'(edge|[\d+\.]+)-(.+)-r?(\d+)$')
+
+### Functions
+
+# allows us to set values deep within an object that might not be fully defined
+def dictfactory():
+    return defaultdict(dictfactory)
+
+
+# undo dictfactory() objects to normal objects
+def undictfactory(o):
+    if isinstance(o, defaultdict):
+        o = {k: undictfactory(v) for k, v in o.items()}
+    return o
+
+
+### Command Line & Logging
+
+parser = argparse.ArgumentParser(description=NOTE)
+parser.add_argument('--debug', action='store_true', help='enable debug output')
+parser.add_argument('--really', action='store_true', help='really prune images')
+parser.add_argument('--cloud', choices=CLOUDS, required=True, help='cloud provider')
+parser.add_argument('--region', help='specific region, instead of all regions')
+# what to prune...
+parser.add_argument('--private', action='store_true')
+parser.add_argument('--edge-eol', action='store_true')
+parser.add_argument('--rc', action='store_true')
+parser.add_argument('--eol-unused-not-latest', action='store_true')
+parser.add_argument('--eol-not-latest', action='store_true')
+parser.add_argument('--unused-not-latest', action='store_true')
+parser.add_argument(
+    '--use-broker', action='store_true',
+    help='use the identity broker to get credentials')
+parser.add_argument('cache_file')
+args = parser.parse_args()
+
+log = logging.getLogger()
+log.setLevel(logging.DEBUG if args.debug else logging.INFO)
+console = logging.StreamHandler()
+logfmt = logging.Formatter(LOGFORMAT, datefmt='%FT%TZ')
+logfmt.converter = time.gmtime
+console.setFormatter(logfmt)
+log.addHandler(console)
+log.debug(args)
+
+# set up credential provider, if we're going to use it
+if args.use_broker:
+    clouds.set_credential_provider(debug=args.debug)
+
+# what region(s)?
+regions = clouds.ADAPTERS[args.cloud].regions
+if args.region:
+    if args.region not in regions:
+        log.error('invalid region: %s', args.region)
+        exit(1)
+    else:
+        regions = [args.region]
+
+filters = {
+    'Owners': ['self'],
+    'Filters': [
+        {'Name': 'state', 'Values': ['available']},
+    ]
+}
+
+initial = dictfactory()
+variants = dictfactory()
+removes = dictfactory()
+summary = dictfactory()
+latest = {}
+now = time.gmtime()
+
+# load cache
+yaml = YAML()
+log.info(f'loading image cache from {args.cache_file}')
+cache = yaml.load(Path(args.cache_file))
+log.info(f'loaded image cache')
+
+
+for region in sorted(regions):
+    latest = cache[region]['latest']
+    images = cache[region]['images']
+    log.info(f'--- {region} : {len(images)} ---')
+
+    for id, image in images.items():
+        name = image['name']
+
+        if args.private and image['private']:
+            log.info(f"{region}\tPRIVATE\t{name}")
+            removes[region][id] = image
+            summary[region]['PRIVATE'][id] = name
+            continue
+
+        if args.edge_eol and image['version'] == 'edge' and image['eol']:
+            log.info(f"{region}\tEDGE-EOL\t{name}")
+            removes[region][id] = image
+            summary[region]['EDGE-EOL'][id] = name
+            continue
+
+        if args.rc and image['rc']:
+            log.info(f"{region}\tRC\t{name}")
+            removes[region][id] = image
+            summary[region]['RC'][id] = name
+            continue
+
+        unused = image['launched'] == 'Never'
+        release_key = image['release_key']
+        variant_key = image['variant_key']
+        if variant_key not in latest:
+            log.warning(f"variant key '{variant_key}' not in latest, skipping.")
+            summary[region]['__WTF__'][id] = name
+            continue
+
+        latest_release_key = latest[variant_key]['release_key']
+        not_latest = release_key != latest_release_key
+
+        if args.eol_unused_not_latest and image['eol'] and unused and not_latest:
+            log.info(f"{region}\tEOL-UNUSED-NOT-LATEST\t{name}")
+            removes[region][id] = image
+            summary[region]['EOL-UNUSED-NOT-LATEST'][id] = name
+            continue
+
+        if args.eol_not_latest and image['eol'] and not_latest:
+            log.info(f"{region}\tEOL-NOT-LATEST\t{name}")
+            removes[region][id] = image
+            summary[region]['EOL-NOT-LATEST'][id] = name
+            continue
+
+        if args.unused_not_latest and unused and not_latest:
+            log.info(f"{region}\tUNUSED-NOT-LATEST\t{name}")
+            removes[region][id] = image
+            summary[region]['UNUSED-NOT-LATEST'][id] = name
+            continue
+
+        log.debug(f"{region}\t__KEPT__\t{name}")
+        summary[region]['__KEPT__'][id] = name
+
+totals = {}
+log.info('SUMMARY')
+for region, reasons in sorted(summary.items()):
+    log.info(f"\t{region}")
+    for reason, images in sorted(reasons.items()):
+        count = len(images)
+        log.info(f"\t\t{count}\t{reason}")
+        if reason not in totals:
+            totals[reason] = 0
+
+        totals[reason] += count
+
+log.info('TOTALS')
+for reason, count in sorted(totals.items()):
+    log.info(f"\t{count}\t{reason}")
+
+if args.really:
+    log.warning('Please confirm you wish to actually prune these images...')
+    r = input("(yes/NO): ")
+    print()
+    if r.lower() != 'yes':
+        args.really = False
+
+if not args.really:
+    log.warning("Not really pruning any images.")
+    exit(0)
+
+# do the pruning...
+
+for region, images in sorted(removes.items()):
+    ec2r = clouds.ADAPTERS[args.cloud].session(region).resource('ec2')
+    for id, image in images.items():
+        name = image['name']
+        snapshot_id = image['snapshot_id']
+        try:
+            log.info(f'Deregistering: {region}/{id}: {name}')
+            ec2r.Image(id).deregister()
+            log.info(f"Deleting: {region}/{snapshot_id}: {name}")
+            ec2r.Snapshot(snapshot_id).delete()
+
+        except Exception as e:
+            log.warning(f"Failed: {e}")
+            pass
+
+log.info('DONE')
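As the NOTE strings say, the two experimental scripts are meant to be chained:
get-image-cache.py writes the cache YAML to STDOUT, and prune-images.py
consumes it. A hypothetical invocation (flags taken from the argparse
definitions above; the cache file name is invented):

    ./get-image-cache.py --cloud aws > cache.yaml
    ./prune-images.py --cloud aws --rc --edge-eol cache.yaml            # dry run
    ./prune-images.py --cloud aws --rc --edge-eol --really cache.yaml   # actually prune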
@@ -9,6 +9,10 @@ einfo() {
     printf '\n\033[1;7;36m> %s <\033[0m\n' "$@" >&2  # bold reversed cyan
 }
 
+greater_or_equal() {
+    return $(echo "$1 $2" | awk '{print ($1 < $2)}')
+}
+
 if [ "$VERSION" = "3.12" ]; then
     # tiny-cloud-network requires ifupdown-ng, not in 3.12
     einfo "Configuring Tiny EC2 Bootstrap..."
@@ -25,19 +29,8 @@ else
 
     # tiny-cloud >= 3.0.0 sets up init scripts with /sbin/tiny-cloud --setup
     if [ -f "$TARGET/sbin/tiny-cloud" ]; then
-        # fixed in tiny-cloud >3.0.1
-        #chroot "$TARGET" /sbin/tiny-cloud --enable
-        # logic directly implemented here, for now
-        echo -- "- removing tiny-cloud* from all runlevels"
-        rm -f "$TARGET"/etc/runlevels/*/tiny-cloud*
-        ln -s /etc/init.d/tiny-cloud-boot "$TARGET"/etc/runlevels/boot
-        echo -- "+ tiny-cloud-boot service added to boot runlevel"
-        for p in early main final; do
-            ln -s "/etc/init.d/tiny-cloud-$p" "$TARGET"/etc/runlevels/default
-            echo -- "+ tiny-cloud-$p service added to default runlevel"
-        done
-    # TODO: will need to update this for >3.18
-    elif [ "$VERSION" = "3.18" ]; then
+        chroot "$TARGET" /sbin/tiny-cloud --enable
+    elif greater_or_equal "$VERSION" 3.18; then
+        # 3.18 has tiny-cloud 3.0.0, and we didn't find what we expected
         echo "Error: /sbin/tiny-cloud not found" >&2
         exit 1
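One caveat on `greater_or_equal`: awk compares the two fields numerically when
both look like numbers, so as plain floats `3.9` ranks above `3.18`. That is
harmless for the `3.1x` releases this script currently sees, but a
component-wise comparison is the robust form; a sketch in Python for contrast:

    def greater_or_equal(a, b):
        """Compare dotted versions component-wise: '3.9' < '3.18'."""
        to_tuple = lambda v: tuple(int(x) for x in v.split('.'))
        return to_tuple(a) >= to_tuple(b)

    assert greater_or_equal('3.18', '3.9')      # float compare would say False
    assert not greater_or_equal('3.9', '3.18')  # float compare would say True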