Squashed '.ci/' changes from 38a9cda..227e39f

227e39f Allow custom GIT_TAG

git-subtree-dir: .ci
git-subtree-split: 227e39fd929165c37b33b3f891fa20bfc7ce22b1
Stefan Reimer 2023-08-16 12:04:32 +01:00
parent 391bbfe6d5
commit 60aa548d2a
816 changed files with 18 additions and 142958 deletions


@ -1,25 +0,0 @@
# ci-tools-lib
Various toolchain bits and pieces shared between projects
# Quickstart
Create a top-level Makefile:
```
REGISTRY := <your-registry>
IMAGE := <image_name>
REGION := <AWS region of your registry>
include .ci/podman.mk
```
Add subtree to your project:
```
git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
```
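To pull in later updates of the shared tooling, the matching `git subtree pull` would be:
```
git subtree pull --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
```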
## Jenkins
Shared groovy libraries
## Make
Common Makefile include

.gitignore

@ -1,11 +0,0 @@
# Vim
*.swp
.vscode
.DS_Store
.idea
**/*.tgz
# Breaks Helm V3 dependencies in Argo
Chart.lock
kubezero-repo.???


@ -1,2 +0,0 @@
# Only our own charts, see Makefile
charts/


@ -1,2 +0,0 @@
# template: "/tmp/doesntexist"
linkReferences: true


@ -1,46 +0,0 @@
# Changelog
## KubeZero - 2.18 ( Argoless )
### High level / Admin changes
- ArgoCD is now optional and NOT required or used during initial cluster bootstrap
- the bootstrap process now uses the same config and templates as the optional ArgoCD applications later on
- the bootstrap can now be restarted at any time and is considerably faster
- the top-level KubeZero config for the ArgoCD app-of-apps is now also maintained via the GitOps workflow; changes can be applied by a simple git push rather than manual scripts
### Calico
- version bump
### Cert-manager
- local issuers are now cluster issuers, allowing them to be used across namespaces
- all cert-manager resources moved into the cert-manager namespace
- version bump to 1.10
### Kiam
- set priority class to cluster essential
- certificates are now issued by the cluster issuer
### EBS / EFS
- version bump
### Istio
- istio operator removed, deployment migrated to helm, various cleanups
- version bump to 1.8
- all ingress resources are now in the dedicated new namespace istio-ingress (deployed via the separate kubezero chart istio-ingress)
- set priority class of ingress components to cluster essential
### Logging
- ES/Kibana version bump to 7.10
- ECK operator is now installed on demand in logging ns
- Custom event fields configurable via the new fluent-bit chart,
e.g. a cluster name can be added to each event, allowing easy filtering when multiple clusters stream events into a single central ES cluster
### ArgoCD
- version bump, new app-of-apps architecture
### Metrics
- version bump
- all servicemonitor resources are now in the same namespaces as the respective apps to avoid deployments across multiple namespaces
### upstream Kubernetes 1.18
https://sysdig.com/blog/whats-new-kubernetes-1-18/


@ -1,38 +0,0 @@
ARG ALPINE_VERSION=3.18
FROM docker.io/alpine:${ALPINE_VERSION}
ARG ALPINE_VERSION
ARG KUBE_VERSION=1.26
RUN cd /etc/apk/keys && \
wget "https://cdn.zero-downtime.net/alpine/stefan@zero-downtime.net-61bb6bfb.rsa.pub" && \
echo "@kubezero https://cdn.zero-downtime.net/alpine/v${ALPINE_VERSION}/kubezero" >> /etc/apk/repositories && \
echo "@edge-testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
echo "@edge-community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
apk upgrade -U -a --no-cache && \
apk --no-cache add \
jq \
yq \
diffutils \
bash \
python3 \
py3-yaml \
restic \
helm \
cri-tools@kubezero \
kubeadm@kubezero~=${KUBE_VERSION} \
kubectl@kubezero~=${KUBE_VERSION} \
etcdhelper@kubezero \
etcd-ctl@edge-testing
RUN helm repo add kubezero https://cdn.zero-downtime.net/charts && \
mkdir -p /var/lib/kubezero
ADD admin/kubezero.sh admin/libhelm.sh admin/migrate_argo_values.py /usr/bin/
ADD admin/libhelm.sh admin/pre-upgrade.sh /var/lib/kubezero/
ADD charts/kubeadm /charts/kubeadm
ADD charts/kubezero /charts/kubezero
ENTRYPOINT ["kubezero.sh"]
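For reference, a local build of this admin image might look like the following; the image name and tag are illustrative, not mandated by the repo:
```
podman build --build-arg ALPINE_VERSION=3.18 --build-arg KUBE_VERSION=1.26 -t kubezero-admin:latest .
```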


@ -1,651 +0,0 @@
GNU Affero General Public License
=================================
_Version 3, 19 November 2007_
_Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>_
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
## Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: **(1)** assert copyright on the software, and **(2)** offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
## TERMS AND CONDITIONS
### 0. Definitions
“This License” refers to version 3 of the GNU Affero General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this
License. Each licensee is addressed as “you”. “Licensees” and
“recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a “modified version” of the
earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based
on the Program.
To “propagate” a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices”
to the extent that it includes a convenient and prominently visible
feature that **(1)** displays an appropriate copyright notice, and **(2)**
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
### 1. Source Code
The “source code” for a work means the preferred form of the work
for making modifications to it. “Object code” means any non-source
form of a work.
A “Standard Interface” means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other
than the work as a whole, that **(a)** is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and **(b)** serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
“Major Component”, in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
### 2. Basic Permissions
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
### 4. Conveying Verbatim Copies
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
### 5. Conveying Modified Source Versions
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
* **a)** The work must carry prominent notices stating that you modified
it, and giving a relevant date.
* **b)** The work must carry prominent notices stating that it is
released under this License and any conditions added under section 7.
This requirement modifies the requirement in section 4 to
“keep intact all notices”.
* **c)** You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
* **d)** If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
“aggregate” if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
### 6. Conveying Non-Source Forms
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
* **a)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
* **b)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either **(1)** a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or **(2)** access to copy the
Corresponding Source from a network server at no charge.
* **c)** Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
* **d)** Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
* **e)** Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A “User Product” is either **(1)** a “consumer product”, which means any
tangible personal property which is normally used for personal, family,
or household purposes, or **(2)** anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, “normally used” refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
“Installation Information” for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
### 7. Additional Terms
“Additional permissions” are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
* **a)** Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
* **b)** Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
* **c)** Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
* **d)** Limiting the use for publicity purposes of names of licensors or
authors of the material; or
* **e)** Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
* **f)** Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered “further
restrictions” within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
### 8. Termination
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated **(a)**
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and **(b)** permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
### 9. Acceptance Not Required for Having Copies
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
### 11. Patents
A “contributor” is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, “control” includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To “grant” such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either **(1)** cause the Corresponding Source to be so
available, or **(2)** arrange to deprive yourself of the benefit of the
patent license for this particular work, or **(3)** arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. “Knowingly relying” means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is “discriminatory” if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license **(a)** in connection with copies of the covered work
conveyed by you (or copies made from those copies), or **(b)** primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
### 12. No Surrender of Others' Freedom
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
### 13. Remote Network Interaction; Use with the GNU General Public License
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
### 14. Revised Versions of this License
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License “or any later version” applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
### 15. Disclaimer of Warranty
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
### 16. Limitation of Liability
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
### 17. Interpretation of Sections 15 and 16
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
_END OF TERMS AND CONDITIONS_
## How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a “Source” link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a “copyright disclaimer” for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.


@ -1,23 +0,0 @@
REGISTRY := public.ecr.aws/zero-downtime
IMAGE := kubezero-admin
REGION := us-east-1
# Also tag with the major.minor version (e.g. v1.26)
MY_TAG = $(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
EXTRA_TAGS = $(shell echo $(MY_TAG) | awk -F '.' '{ print $$1 "." $$2 }')
include .ci/podman.mk
update-charts:
	./scripts/update_helm.sh

update-chart-docs:
	for c in charts/*; do \
		[[ $$c =~ "kubezero-lib" ]] && continue ; \
		[[ $$c =~ "kubeadm" ]] && continue ; \
		helm-docs -c $$c ; \
	done

publish-charts:
	./scripts/publish.sh
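As an illustration of the tag derivation above, assuming `git describe` reports `v1.26.7`:
```
$ echo v1.26.7 | awk -F '.' '{ print $1 "." $2 }'
v1.26
```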

README.md

@@ -1,106 +1,25 @@
-KubeZero - Zero Down Time Kubernetes platform
-========================
-KubeZero is a Kubernetes distribution providing an integrated container platform so you can focus on your applications.
-# Design philosophy
-- Cloud provider agnostic, bare-metal/self-hosted
-- Focus on security and simplicity over feature creep
-- No vendor lock-in; most components are optional and can easily be exchanged
-- Organic Open Source / open and permissive licenses over closed-source solutions
-- No premium services / subscriptions required
-- Staying up to date and contributing back to upstream projects, like alpine-cloud-images and others
-- Corgi approved :dog:
-# Architecture
-![aws_architecture](docs/aws_architecture.png)
-# Version / Support Matrix
-KubeZero releases track the same *minor* version of Kubernetes.
-Any 1.26.X-Y release of KubeZero supports any Kubernetes cluster 1.26.X.
-KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.
-```mermaid
-%%{init: {'theme':'dark'}}%%
-gantt
-    title KubeZero Support Timeline
-    dateFormat YYYY-MM-DD
-    section 1.24
-    beta      :124b, 2022-11-14, 2022-12-31
-    release   :after 124b, 2023-06-01
-    section 1.25
-    beta      :125b, 2023-03-01, 2023-03-31
-    release   :after 125b, 2023-08-01
-    section 1.26
-    beta      :126b, 2023-06-01, 2023-06-30
-    release   :after 126b, 2023-10-01
-```
-[Upstream release policy](https://kubernetes.io/releases/)
-# Components
-## OS
-- all nodes are based on Alpine V3.17
-- 2 GB encrypted root filesystem
-- no 3rd party dependencies at boot (other than container registries)
-- minimal attack surface
-- extremely small memory footprint / overhead
-## Container runtime
-- cri-o rather than Docker for improved security and performance
-## Control plane
-- all Kubernetes components compiled against Alpine OS using `buildmode=pie`
-- support for single node control plane for small clusters / test environments to reduce costs
-- access to control plane from within the VPC only by default (VPN access required for admin tasks)
-- controller nodes are used for various platform admin controllers / operators to reduce costs and noise on worker nodes
-## GitOps
-- cli / cmd line install
-- optional full ArgoCD support and integration
-- fuse device plugin support to build containers as part of a CI pipeline leveraging rootless podman build agents
-## AWS integrations
-- IAM roles for service accounts allowing each pod to assume individual IAM roles
-- access to metadata services is blocked for all workload containers on all nodes
-- all IAM roles are maintained via CloudBender automation
-- aws-node-termination handler integrated
-- support for spot instances per worker group incl. early draining etc.
-- support for [Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/) as part of [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
-## Network
-- Cilium using Geneve encapsulation, incl. increased MTU allowing flexible / more containers per worker node compared to e.g. AWS VPC CNI
-- Multus support for multiple network interfaces per pod, e.g. an additional AWS CNI
-- no restrictions on IP space / sizing from the underlying VPC architecture
-## Storage
-- flexible EBS support incl. zone awareness
-- EFS support via automated EFS provisioning for worker groups via CloudBender automation
-- local storage provider (OpenEBS LVM) for latency-sensitive high-performance workloads
-- CSI Snapshot controller and Gemini snapshot groups and retention
-## Ingress
-- AWS Network Loadbalancer and Istio Ingress controllers
-- no additional costs per exposed service
-- real client source IP available to workloads via HTTP header and access logs
-- ACME SSL certificate handling via cert-manager incl. renewal etc.
-- support for TCP services
-- optional rate limiting support
-- optional full service mesh
-## Metrics
-- Prometheus support for all components, incl. out-of-cluster EC2 instances (node_exporter)
-- automated service discovery allowing instant access to common workload metrics
-- pre-configured Grafana dashboards and alerts
-- Alertmanager events via SNSAlertHub to Slack, Google, Matrix, etc.
-## Logging
-- all container logs are enhanced with Kubernetes and AWS metadata to provide context for each message
-- flexible ElasticSearch setup, leveraging the ECK operator, for easy maintenance & minimal admin knowledge required, incl. automated backups to S3
-- Kibana allowing easy search and dashboards for all logs, incl. pre-configured index templates and index management
-- [fluentd-concenter](https://git.zero-downtime.net/ZeroDownTime/container-park/src/branch/master/fluentd-concenter) service providing queuing during high load as well as additional parsing options
-- lightweight fluent-bit agents on each node requiring minimal resources, forwarding logs securely via TLS to fluentd-concenter
+# ci-tools-lib
+Various toolchain bits and pieces shared between projects
+# Quickstart
+Create a top-level Makefile:
+```
+REGISTRY := <your-registry>
+IMAGE := <image_name>
+REGION := <AWS region of your registry>
+include .ci/podman.mk
+```
+Add subtree to your project:
+```
+git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
+```
+## Jenkins
+Shared groovy libraries
+## Make
+Common Makefile include


@ -1,12 +0,0 @@
# Cluster upgrade flow
## During 1.23 upgrade
- create the new kubezero-values CM, if it does not exist yet, by merging parts of the legacy /etc/kubernetes/kubeadm-values.yaml with any existing values from the kubezero ArgoCD app
# General flow
- No ArgoCD -> use the kubezero-values CM
- ArgoCD -> update the kubezero-values CM with the current values from the ArgoCD app values
- Apply any upgrades / migrations
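For illustration (the merge itself is handled by the upgrade tooling), creating the CM from a prepared values file could look like this, with `values.yaml` as a hypothetical local file:
```
kubectl create configmap kubezero-values -n kube-system --from-file=values.yaml
```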


@ -1,22 +0,0 @@
#!/bin/bash
#set -eEx
#set -o pipefail
set -x
#VERSION="latest"
KUBE_VERSION="v1.26.6"
WORKDIR=$(mktemp -p /tmp -d kubezero.XXX)
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
# shellcheck disable=SC1091
. "$SCRIPT_DIR"/libhelm.sh
CHARTS="$(dirname $SCRIPT_DIR)/charts"
get_kubezero_values
# Always use embedded kubezero chart
helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --kube-version $KUBE_VERSION --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
# CRDs first
_helm crds $1
_helm apply $1
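# Example usage, assuming the module to apply is passed as the first argument
# (the script's file name is not shown in this diff):
#   <script> argocd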


@ -1,386 +0,0 @@
#!/bin/bash -e
if [ -n "$DEBUG" ]; then
set -x
LOG="--v=5"
fi
# include helm lib
. /var/lib/kubezero/libhelm.sh
# Export vars to ease use in debug_shell etc
export WORKDIR=/tmp/kubezero
export HOSTFS=/host
export CHARTS=/charts
export KUBE_VERSION=$(kubeadm version -o json | jq -r .clientVersion.gitVersion)
export KUBE_VERSION_MINOR="v1.$(kubectl version -o json | jq .clientVersion.minor -r)"
export KUBECONFIG="${HOSTFS}/root/.kube/config"
# etcd
export ETCDCTL_API=3
export ETCDCTL_CACERT=${HOSTFS}/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=${HOSTFS}/etc/kubernetes/pki/apiserver-etcd-client.crt
export ETCDCTL_KEY=${HOSTFS}/etc/kubernetes/pki/apiserver-etcd-client.key
mkdir -p ${WORKDIR}
# Generic retry utility
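# Usage: retry <tries> <wait-seconds> <per-attempt-timeout> <command...>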
retry() {
local tries=$1
local waitfor=$2
local timeout=$3
shift 3
while true; do
type -tf $1 >/dev/null && { timeout $timeout $@ && return; } || { $@ && return; }
let tries=$tries-1
[ $tries -eq 0 ] && return 1
sleep $waitfor
done
}
_kubeadm() {
kubeadm $@ --config /etc/kubernetes/kubeadm.yaml --rootfs ${HOSTFS} $LOG
}
# Render cluster config
render_kubeadm() {
helm template $CHARTS/kubeadm --output-dir ${WORKDIR} -f ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml
# Assemble kubeadm config
cat /dev/null > ${HOSTFS}/etc/kubernetes/kubeadm.yaml
for f in Cluster Init Join KubeProxy Kubelet; do
# echo "---" >> /etc/kubernetes/kubeadm.yaml
cat ${WORKDIR}/kubeadm/templates/${f}Configuration.yaml >> ${HOSTFS}/etc/kubernetes/kubeadm.yaml
done
# "uncloak" the json patches after they got processed by helm
for s in apiserver controller-manager scheduler; do
yq eval '.json' ${WORKDIR}/kubeadm/templates/patches/kube-${s}1\+json.yaml > /tmp/_tmp.yaml && \
mv /tmp/_tmp.yaml ${WORKDIR}/kubeadm/templates/patches/kube-${s}1\+json.yaml
done
}
parse_kubezero() {
export CLUSTERNAME=$(yq eval '.global.clusterName // .clusterName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export HIGHAVAILABLE=$(yq eval '.global.highAvailable // .highAvailable // "false"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export ETCD_NODENAME=$(yq eval '.etcd.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export NODENAME=$(yq eval '.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export PROVIDER_ID=$(yq eval '.providerID // ""' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export AWS_IAM_AUTH=$(yq eval '.api.awsIamAuth.enabled // "false"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
# From here on, bail out on errors; delaying this allows debug_shell even in error cases
set -e
}
# Shared steps before calling kubeadm
pre_kubeadm() {
# update all apiserver addons first
cp -r ${WORKDIR}/kubeadm/templates/apiserver ${HOSTFS}/etc/kubernetes
# aws-iam-authenticator enabled ?
if [ "$AWS_IAM_AUTH" == "true" ]; then
# Initialize webhook
if [ ! -f ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.crt ]; then
${HOSTFS}/usr/bin/aws-iam-authenticator init -i ${CLUSTERNAME}
mv key.pem ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.key
mv cert.pem ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.crt
fi
# Patch the aws-iam-authenticator config with the actual cert.pem
yq eval -Mi ".clusters[0].cluster.certificate-authority-data = \"$(cat ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.crt| base64 -w0)\"" ${HOSTFS}/etc/kubernetes/apiserver/aws-iam-authenticator.yaml
fi
# copy patches to host to make --rootfs of kubeadm work
cp -r ${WORKDIR}/kubeadm/templates/patches /host/tmp/
}
# Shared steps after calling kubeadm
post_kubeadm() {
# KubeZero resources
for f in ${WORKDIR}/kubeadm/templates/resources/*.yaml; do
kubectl apply -f $f $LOG
done
# Patch the coreDNS addon; ideally we would prevent kubeadm from resetting coreDNS to its defaults
kubectl patch deployment coredns -n kube-system --patch-file ${WORKDIR}/kubeadm/templates/patches/coredns0.yaml $LOG
rm -rf /host/tmp/patches
}
kubeadm_upgrade() {
# pre upgrade hook
[ -f /var/lib/kubezero/pre-upgrade.sh ] && . /var/lib/kubezero/pre-upgrade.sh
render_kubeadm
pre_kubeadm
# Upgrade
_kubeadm upgrade apply -y --patches /tmp/patches
post_kubeadm
# If we have a re-certed kubectl config, install it for root
if [ -f ${HOSTFS}/etc/kubernetes/admin.conf ]; then
cp ${HOSTFS}/etc/kubernetes/admin.conf ${HOSTFS}/root/.kube/config
fi
# post upgrade hook
[ -f /var/lib/kubezero/post-upgrade.sh ] && . /var/lib/kubezero/post-upgrade.sh
# Cleanup after kubeadm on the host
rm -rf ${HOSTFS}/etc/kubernetes/tmp
echo "Successfully upgraded kubeadm control plane."
# TODO
# Send Notification currently done via CloudBender -> SNS -> Slack
# Better deploy https://github.com/opsgenie/kubernetes-event-exporter and set proper routes and labels on this Job
# Removed:
# - update oidc do we need that ?
}
control_plane_node() {
CMD=$1
render_kubeadm
# Ensure clean slate if bootstrap, restore PKI otherwise
if [[ "$CMD" =~ ^(bootstrap)$ ]]; then
rm -rf ${HOSTFS}/var/lib/etcd/member
else
# restore latest backup
retry 10 60 30 restic restore latest --no-lock -t / # --tag $KUBE_VERSION_MINOR
# Make last etcd snapshot available
cp ${WORKDIR}/etcd_snapshot ${HOSTFS}/etc/kubernetes
# Put PKI in place
cp -r ${WORKDIR}/pki ${HOSTFS}/etc/kubernetes
# Always use kubeadm kubectl config to never run into chicken egg with custom auth hooks
cp ${WORKDIR}/admin.conf ${HOSTFS}/root/.kube/config
# Only restore etcd data during "restore" and none exists already
if [[ "$CMD" =~ ^(restore)$ ]]; then
if [ ! -d ${HOSTFS}/var/lib/etcd/member ]; then
etcdctl snapshot restore ${HOSTFS}/etc/kubernetes/etcd_snapshot \
--name $ETCD_NODENAME \
--data-dir="${HOSTFS}/var/lib/etcd" \
--initial-cluster-token etcd-${CLUSTERNAME} \
--initial-advertise-peer-urls https://${ETCD_NODENAME}:2380 \
--initial-cluster $ETCD_NODENAME=https://${ETCD_NODENAME}:2380
fi
fi
fi
# Delete old node certs in case they are around
rm -f ${HOSTFS}/etc/kubernetes/pki/etcd/peer.* ${HOSTFS}/etc/kubernetes/pki/etcd/server.* ${HOSTFS}/etc/kubernetes/pki/etcd/healthcheck-client.* \
${HOSTFS}/etc/kubernetes/pki/apiserver* ${HOSTFS}/etc/kubernetes/pki/front-proxy-client.*
# Issue all certs first, needed for eg. aws-iam-authenticator setup
_kubeadm init phase certs all
pre_kubeadm
# Pull all images
_kubeadm config images pull
_kubeadm init phase preflight
_kubeadm init phase kubeconfig all
if [[ "$CMD" =~ ^(join)$ ]]; then
# Delete any former self in case forseti has not deleted it yet
kubectl delete node ${NODENAME} --wait=true || true
# Wait for all pods to be deleted, otherwise we end up with stale pods, e.g. kube-proxy
kubectl delete pods -n kube-system --field-selector spec.nodeName=${NODENAME}
# get current running etcd pods for etcdctl commands
while true; do
etcd_endpoints=$(kubectl get pods -n kube-system -l component=etcd -o yaml | \
yq eval '.items[].metadata.annotations."kubeadm.kubernetes.io/etcd.advertise-client-urls"' - | tr '\n' ',' | sed -e 's/,$//')
[[ $etcd_endpoints =~ ^https:// ]] && break
sleep 3
done
# see if we are a former member and remove our former self if so
MY_ID=$(etcdctl member list --endpoints=$etcd_endpoints | grep $ETCD_NODENAME | awk '{print $1}' | sed -e 's/,$//')
[ -n "$MY_ID" ] && retry 12 5 5 etcdctl member remove $MY_ID --endpoints=$etcd_endpoints
# flush etcd data directory as joining with previous storage seems flaky, especially during etcd version upgrades
rm -rf ${HOSTFS}/var/lib/etcd/member
# Announce new etcd member and capture ETCD_INITIAL_CLUSTER, retry needed in case another node joining causes temp quorum loss
ETCD_ENVS=$(retry 12 5 5 etcdctl member add $ETCD_NODENAME --peer-urls="https://${ETCD_NODENAME}:2380" --endpoints=$etcd_endpoints)
export $(echo "$ETCD_ENVS" | grep ETCD_INITIAL_CLUSTER= | sed -e 's/"//g')
# Patch kubeadm-values.yaml and re-render to get etcd manifest patched
yq eval -i '.etcd.state = "existing"
| .etcd.initialCluster = strenv(ETCD_INITIAL_CLUSTER)
' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml
render_kubeadm
fi
# Generate our custom etcd yaml
_kubeadm init phase etcd local
_kubeadm init phase control-plane all
_kubeadm init phase kubelet-start
cp ${HOSTFS}/etc/kubernetes/admin.conf ${HOSTFS}/root/.kube/config
# Wait for api to be online
echo "Waiting for Kubernetes API to be online ..."
retry 0 5 30 kubectl cluster-info --request-timeout 3 >/dev/null
# Update providerID as underlying VM changed during restore
if [[ "$CMD" =~ ^(restore)$ ]]; then
if [ -n "$PROVIDER_ID" ]; then
etcdhelper \
-cacert ${HOSTFS}/etc/kubernetes/pki/etcd/ca.crt \
-cert ${HOSTFS}/etc/kubernetes/pki/etcd/server.crt \
-key ${HOSTFS}/etc/kubernetes/pki/etcd/server.key \
-endpoint https://${ETCD_NODENAME}:2379 \
change-provider-id ${NODENAME} $PROVIDER_ID
fi
fi
if [[ "$CMD" =~ ^(bootstrap|restore)$ ]]; then
_kubeadm init phase upload-config all
_kubeadm init phase upload-certs --skip-certificate-key-print
# This sets up the ClusterRoleBindings to allow bootstrap nodes to create CSRs etc.
_kubeadm init phase bootstrap-token --skip-token-print
fi
_kubeadm init phase mark-control-plane
_kubeadm init phase kubelet-finalize all
if [[ "$CMD" =~ ^(bootstrap|restore)$ ]]; then
_kubeadm init phase addon all
fi
# Ensure aws-iam-authenticator secret is in place
if [ "$AWS_IAM_AUTH" == "true" ]; then
kubectl get secrets -n kube-system aws-iam-certs || \
kubectl create secret generic aws-iam-certs -n kube-system \
--from-file=key.pem=${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.key \
--from-file=cert.pem=${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.crt
# Store aws-iam-auth admin on SSM
yq eval -M ".clusters[0].cluster.certificate-authority-data = \"$(cat ${HOSTFS}/etc/kubernetes/pki/ca.crt | base64 -w0)\"" ${WORKDIR}/kubeadm/templates/admin-aws-iam.yaml > ${HOSTFS}/etc/kubernetes/admin-aws-iam.yaml
fi
post_kubeadm
echo "${1} cluster $CLUSTERNAME successfull."
}
apply_module() {
MODULES=$1
get_kubezero_values
# Always use embedded kubezero chart
helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
# CRDs first
for t in $MODULES; do
_helm crds $t
done
for t in $MODULES; do
_helm apply $t
done
echo "Applied KubeZero modules: $MODULES"
}
delete_module() {
MODULES=$1
get_kubezero_values
# Always use embedded kubezero chart
helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
for t in $MODULES; do
_helm delete $t
done
echo "Deleted KubeZero modules: $MODULES. Potential CRDs must be removed manually."
}
# backup etcd + /etc/kubernetes/pki
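# Expects the restic environment (e.g. RESTIC_REPOSITORY, RESTIC_PASSWORD) to be provided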
backup() {
# Display all ENVs; careful, this exposes the password!
[ -n "$DEBUG" ] && env
restic snapshots || restic init || exit 1
CV=$(kubectl version -o json | jq .serverVersion.minor -r)
let PCV=$CV-1
CLUSTER_VERSION="v1.$CV"
PREVIOUS_VERSION="v1.$PCV"
etcdctl --endpoints=https://${ETCD_NODENAME}:2379 snapshot save ${WORKDIR}/etcd_snapshot
# pki & cluster-admin access
cp -r ${HOSTFS}/etc/kubernetes/pki ${WORKDIR}
cp -r ${HOSTFS}/etc/kubernetes/admin.conf ${WORKDIR}
# Backup via restic
restic backup ${WORKDIR} -H $CLUSTERNAME --tag $CLUSTER_VERSION
echo "Backup complete."
# Remove backups from pre-previous versions
restic forget --keep-tag $CLUSTER_VERSION --keep-tag $PREVIOUS_VERSION --prune
# Regular retention
restic forget --keep-hourly 24 --keep-daily ${RESTIC_RETENTION:-7} --prune
# Defrag etcd backend
etcdctl --endpoints=https://${ETCD_NODENAME}:2379 defrag
}
debug_shell() {
echo "Entering debug shell"
printf "For manual etcdctl commands use:\n # export ETCDCTL_ENDPOINTS=$ETCD_NODENAME:2379\n"
/bin/bash
}
# First parse kubeadm-values.yaml
parse_kubezero
# Execute tasks
for t in $@; do
case "$t" in
kubeadm_upgrade) kubeadm_upgrade;;
bootstrap) control_plane_node bootstrap;;
join) control_plane_node join;;
restore) control_plane_node restore;;
apply_*) apply_module "${t##apply_}";;
delete_*) delete_module "${t##delete_}";;
backup) backup;;
debug_shell) debug_shell;;
*) echo "Unknown command: '$t'";;
esac
done
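# Example invocations, tasks are executed in the given order, e.g.:
#   kubezero.sh bootstrap
#   kubezero.sh apply_logging backup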


@ -1,184 +0,0 @@
#!/bin/bash
# Simulate well-known CRDs being available
API_VERSIONS="-a monitoring.coreos.com/v1 -a snapshot.storage.k8s.io/v1 -a policy/v1/PodDisruptionBudget"
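# (passed to helm template via --api-versions so charts gated on .Capabilities checks still render)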
# Waits for max 300s and retries
function wait_for() {
local TRIES=0
while true; do
eval " $@" && break
[ $TRIES -eq 100 ] && return 1
let TRIES=$TRIES+1
sleep 3
done
}
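# Example (a sketch): block until a CRD exists, giving up after ~300s:
#   wait_for kubectl get crd applications.argoproj.io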
function chart_location() {
echo "$1 --repo https://cdn.zero-downtime.net/charts"
}
function argo_used() {
kubectl get application kubezero -n argocd >/dev/null && rc=$? || rc=$?
return $rc
}
# get kubezero-values from ArgoCD if available or use in-cluster CM without Argo
function get_kubezero_values() {
argo_used && \
{ kubectl get application kubezero -n argocd -o yaml | yq .spec.source.helm.values > ${WORKDIR}/kubezero-values.yaml; } || \
{ kubectl get configmap -n kube-system kubezero-values -o yaml | yq '.data."values.yaml"' > ${WORKDIR}/kubezero-values.yaml ;}
}
function disable_argo() {
cat > _argoapp_patch.yaml <<EOF
spec:
syncWindows:
- kind: deny
schedule: '0 * * * *'
duration: 24h
namespaces:
- '*'
EOF
kubectl patch appproject kubezero -n argocd --patch-file _argoapp_patch.yaml --type=merge && rm _argoapp_patch.yaml
echo "Enabled service window for ArgoCD project kubezero"
}
function enable_argo() {
kubectl patch appproject kubezero -n argocd --type json -p='[{"op": "remove", "path": "/spec/syncWindows"}]' || true
echo "Removed service window for ArgoCD project kubezero"
}
function cntFailedPods() {
NS=$1
NR=$(kubectl get pods -n $NS --field-selector="status.phase!=Succeeded,status.phase!=Running" -o custom-columns="POD:metadata.name" -o json | jq '.items | length')
echo $NR
}
function waitSystemPodsRunning() {
while true; do
[ "$(cntFailedPods kube-system)" -eq 0 ] && break
sleep 3
done
}
function argo_app_synced() {
APP=$1
# Ensure we are synced otherwise bail out
status=$(kubectl get application $APP -n argocd -o yaml | yq .status.sync.status)
if [ "$status" != "Synced" ]; then
echo "ArgoCD Application $APP not 'Synced'!"
return 1
fi
return 0
}
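# Example (a sketch): bail out of an upgrade unless the app is synced:
#   argo_app_synced kubezero || exit 1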
# make sure namespace exists prior to calling helm as the create-namespace option doesn't work
function create_ns() {
local namespace=$1
if [ "$namespace" != "kube-system" ]; then
kubectl get ns $namespace || kubectl create ns $namespace
fi
}
# delete non kube-system ns
function delete_ns() {
local namespace=$1
[ "$namespace" != "kube-system" ] && kubectl delete ns $namespace
}
# Extract CRDs via helm calls and apply only the delta, i.e. the CRDs
function _crds() {
helm template $(chart_location $chart) -n $namespace --name-template $module $targetRevision --skip-crds --set ${module}.installCRDs=false -f $WORKDIR/values.yaml $API_VERSIONS --kube-version $KUBE_VERSION > $WORKDIR/helm-no-crds.yaml
helm template $(chart_location $chart) -n $namespace --name-template $module $targetRevision --include-crds --set ${module}.installCRDs=true -f $WORKDIR/values.yaml $API_VERSIONS --kube-version $KUBE_VERSION > $WORKDIR/helm-crds.yaml
diff -e $WORKDIR/helm-no-crds.yaml $WORKDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $WORKDIR/crds.yaml
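# diff -e emits an ed script; dropping its first line (the change command) and the
# trailing "." leaves just the added CRD manifests (assumes a single appended hunk)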
# Only apply if there are actually any crds
if [ -s $WORKDIR/crds.yaml ]; then
[ -n "$DEBUG" ] && cat $WORKDIR/crds.yaml
kubectl apply -f $WORKDIR/crds.yaml --server-side --force-conflicts
fi
}
# helm template | kubectl apply -f -
# confine to one namespace if possible
function render() {
helm template $(chart_location $chart) -n $namespace --name-template $module $targetRevision --skip-crds -f $WORKDIR/values.yaml $API_VERSIONS --kube-version $KUBE_VERSION $@ \
| python3 -c '
#!/usr/bin/python3
import yaml
import sys
for manifest in yaml.safe_load_all(sys.stdin):
if manifest:
if "metadata" in manifest and "namespace" not in manifest["metadata"]:
manifest["metadata"]["namespace"] = sys.argv[1]
print("---")
print(yaml.dump(manifest))' $namespace > $WORKDIR/helm.yaml
}
function _helm() {
local action=$1
local module=$2
# check if module is even enabled and return if not
[ ! -f $WORKDIR/kubezero/templates/${module}.yaml ] && { echo "Module $module disabled. No-op."; return 0; }
local chart="$(yq eval '.spec.source.chart' $WORKDIR/kubezero/templates/${module}.yaml)"
local namespace="$(yq eval '.spec.destination.namespace' $WORKDIR/kubezero/templates/${module}.yaml)"
targetRevision=""
_version="$(yq eval '.spec.source.targetRevision' $WORKDIR/kubezero/templates/${module}.yaml)"
[ -n "$_version" ] && targetRevision="--version $_version"
yq eval '.spec.source.helm.values' $WORKDIR/kubezero/templates/${module}.yaml > $WORKDIR/values.yaml
echo "using values to $action of module $module: "
cat $WORKDIR/values.yaml
if [ $action == "crds" ]; then
# Allow custom CRD handling
declare -F ${module}-crds && ${module}-crds || _crds
elif [ $action == "apply" ]; then
# namespace must exist prior to apply
create_ns $namespace
# Optional pre hook
declare -F ${module}-pre && ${module}-pre
render
kubectl $action -f $WORKDIR/helm.yaml --server-side --force-conflicts && rc=$? || rc=$?
# Try again without server-side, review with 1.26, required for cert-manager during 1.25
[ $rc -ne 0 ] && kubectl $action -f $WORKDIR/helm.yaml && rc=$? || rc=$?
# Optional post hook
declare -F ${module}-post && ${module}-post
elif [ $action == "delete" ]; then
render
kubectl $action -f $WORKDIR/helm.yaml && rc=$? || rc=$?
# Delete dedicated namespace if not kube-system
[ -n "$DELETE_NS" ] && delete_ns $namespace
fi
return 0
}
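# Composition sketch (module metadata comes from the rendered kubezero chart):
#   _helm crds cert-manager    # apply the CRD delta only
#   _helm apply cert-manager   # create ns, render, server-side apply
#   DELETE_NS=1 _helm delete cert-manager   # render, delete, drop the namespace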

View File

@ -1,80 +0,0 @@
#!/usr/bin/env python3
import sys
import yaml
def migrate(values):
"""Actual changes here"""
return values
def deleteKey(values, key):
"""Delete key from dictionary if exists"""
try:
values.pop(key)
except KeyError:
pass
return values
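# Example migration step (hypothetical key) to be called from migrate():
#   values = deleteKey(values, "kiam")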
class MyDumper(yaml.Dumper):
"""
Required to add additional indent for arrays to match yq behaviour to reduce noise in diffs
"""
def increase_indent(self, flow=False, indentless=False):
return super(MyDumper, self).increase_indent(flow, False)
def str_presenter(dumper, data):
if len(data.splitlines()) > 1: # check for multiline string
return dumper.represent_scalar("tag:yaml.org,2002:str", data, style="|")
return dumper.represent_scalar("tag:yaml.org,2002:str", data)
def rec_sort(d):
if isinstance(d, dict):
res = dict()
# Always have "enabled" first if present
if "enabled" in d.keys():
res["enabled"] = rec_sort(d["enabled"])
d.pop("enabled")
# next is "name" if present
if "name" in d.keys():
res["name"] = rec_sort(d["name"])
d.pop("name")
for k in sorted(d.keys()):
res[k] = rec_sort(d[k])
return res
if isinstance(d, list):
for idx, elem in enumerate(d):
d[idx] = rec_sort(elem)
return d
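# Example (a sketch): {"replicas": 3, "name": "x", "enabled": True}
# dumps with keys ordered: enabled, name, replicas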
yaml.add_representer(str, str_presenter)
# to use with safe_dump:
yaml.representer.SafeRepresenter.add_representer(str, str_presenter)
# Read values
values = yaml.safe_load(sys.stdin)
# Output new values
yaml.dump(
rec_sort(migrate(values)),
sys.stdout,
default_flow_style=False,
indent=2,
sort_keys=False,
Dumper=MyDumper,
)

View File

@ -1,21 +0,0 @@
#!/bin/bash
# get current values, preferring the Argo app over the ConfigMap
get_kubezero_values
# run the current config through migrate_argo_values.py
migrate_argo_values.py < "$WORKDIR"/kubezero-values.yaml > "$WORKDIR"/new-kubezero-values.yaml
# Update kubezero-values CM
kubectl get cm -n kube-system kubezero-values -o=yaml | \
yq e '.data."values.yaml" |= load_str("/tmp/kubezero/new-kubezero-values.yaml")' | \
kubectl replace -f -
# update argo app
kubectl get application kubezero -n argocd -o yaml | \
kubezero_chart_version=$(yq .version /charts/kubezero/Chart.yaml) \
yq '.spec.source.helm.values |= load_str("/tmp/kubezero/new-kubezero-values.yaml") | .spec.source.targetRevision = strenv(kubezero_chart_version)' | \
kubectl apply -f -
# finally remove annotation to allow argo to sync again
kubectl patch app kubezero -n argocd --type json -p='[{"op": "remove", "path": "/metadata/annotations"}]'

View File

@ -1,178 +0,0 @@
#!/bin/bash
set -eE
set -o pipefail
#VERSION="latest"
VERSION="v1.26"
ARGO_APP=${1:-/tmp/new-kubezero-argoapp.yaml}
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
# shellcheck disable=SC1091
. "$SCRIPT_DIR"/libhelm.sh
[ -n "$DEBUG" ] && set -x
all_nodes_upgrade() {
CMD="$1"
echo "Deploy all node upgrade daemonSet(busybox)"
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kubezero-all-nodes-upgrade
namespace: kube-system
labels:
app: kubezero-upgrade
spec:
selector:
matchLabels:
name: kubezero-all-nodes-upgrade
template:
metadata:
labels:
name: kubezero-all-nodes-upgrade
spec:
hostNetwork: true
hostIPC: true
hostPID: true
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
initContainers:
- name: node-upgrade
image: busybox
command: ["/bin/sh"]
args: ["-x", "-c", "$CMD" ]
volumeMounts:
- name: host
mountPath: /host
- name: hostproc
mountPath: /hostproc
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
containers:
- name: node-upgrade-wait
image: busybox
command: ["sleep", "3600"]
volumes:
- name: host
hostPath:
path: /
type: Directory
- name: hostproc
hostPath:
path: /proc
type: Directory
EOF
kubectl rollout status daemonset -n kube-system kubezero-all-nodes-upgrade --timeout 300s
kubectl delete ds kubezero-all-nodes-upgrade -n kube-system
}
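# Example (hypothetical command; runs once per node via the privileged initContainer):
#   all_nodes_upgrade "ls -l /host/etc/kubernetes"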
control_plane_upgrade() {
TASKS="$1"
echo "Deploy cluster admin task: $TASKS"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: kubezero-upgrade
namespace: kube-system
labels:
app: kubezero-upgrade
spec:
hostNetwork: true
hostIPC: true
hostPID: true
containers:
- name: kubezero-admin
image: public.ecr.aws/zero-downtime/kubezero-admin:${VERSION}
imagePullPolicy: Always
command: ["kubezero.sh"]
args: [$TASKS]
env:
- name: DEBUG
value: "$DEBUG"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: host
mountPath: /host
- name: workdir
mountPath: /tmp
securityContext:
capabilities:
add: ["SYS_CHROOT"]
volumes:
- name: host
hostPath:
path: /
type: Directory
- name: workdir
emptyDir: {}
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
restartPolicy: Never
EOF
kubectl wait pod kubezero-upgrade -n kube-system --timeout 120s --for=condition=initialized 2>/dev/null
while true; do
kubectl logs kubezero-upgrade -n kube-system -f 2>/dev/null && break
sleep 3
done
kubectl delete pod kubezero-upgrade -n kube-system
}
echo "Checking that all pods in kube-system are running ..."
waitSystemPodsRunning
argo_used && disable_argo
#all_nodes_upgrade ""
control_plane_upgrade kubeadm_upgrade
echo "Adjust kubezero values as needed:"
# shellcheck disable=SC2015
argo_used && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kube-system
control_plane_upgrade "apply_network, apply_addons, apply_storage"
echo "Checking that all pods in kube-system are running ..."
waitSystemPodsRunning
echo "Applying remaining KubeZero modules..."
control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_argocd"
# Trigger backup of upgraded cluster state
kubectl create job --from=cronjob/kubezero-backup kubezero-backup-$VERSION -n kube-system
while true; do
kubectl wait --for=condition=complete job/kubezero-backup-$VERSION -n kube-system 2>/dev/null && kubectl delete job kubezero-backup-$VERSION -n kube-system && break
sleep 1
done
# Final step is to commit the new argocd kubezero app
kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..) | .spec.source.helm.values |= (from_yaml | to_yaml)' > $ARGO_APP
echo "Please commit $ARGO_APP as the updated kubezero/application.yaml for your cluster."
echo "Then head over to ArgoCD for this cluster and sync all KubeZero modules to apply remaining upgrades."
echo "<Return> to continue and re-enable ArgoCD:"
read -r
argo_used && enable_argo

View File

@ -1,24 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
clamav.yaml

View File

@ -1,18 +0,0 @@
apiVersion: v2
name: clamav
description: Chart for deploying ClamAV (clamd) on Kubernetes as a StatefulSet
type: application
version: "0.2.0"
appVersion: "1.1.0"
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
- kubezero
- clamav
maintainers:
- name: Quarky9
dependencies:
- name: kubezero-lib
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
kubeVersion: ">= 1.25.0"

View File

@ -1,42 +0,0 @@
# clamav
![Version: 0.2.0](https://img.shields.io/badge/Version-0.2.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.1.0](https://img.shields.io/badge/AppVersion-1.1.0-informational?style=flat-square)
Chart for deploying ClamAV (clamd) on Kubernetes as a StatefulSet
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Quarky9 | | |
## Requirements
Kubernetes: `>= 1.25.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| clamav.freshclam.mirrors | string | `"database.clamav.net"` | A list of clamav mirrors to be used by the clamav service |
| clamav.image | string | `"clamav/clamav"` | The clamav docker image |
| clamav.limits.connectionQueueLength | int | `100` | Maximum length the queue of pending connections may grow to |
| clamav.limits.fileSize | int | `20` | The largest file size scanable by clamav, in MB |
| clamav.limits.maxThreads | int | `4` | Maximum number of threads running at the same time. |
| clamav.limits.scanSize | int | `100` | The largest scan size permitted in clamav, in MB |
| clamav.limits.sendBufTimeout | int | `500` | |
| clamav.replicaCount | int | `1` | |
| clamav.resources | object | `{"requests":{"cpu":"300m","memory":"2000M"}}` | The resource requests and limits for the clamav service |
| clamav.version | string | `"unstable"` | The clamav docker image version - defaults to .Chart.appVersion |
| fullnameOverride | string | `""` | override the full name of the clamav chart |
| nameOverride | string | `""` | override the name of the clamav chart |
| service.port | int | `3310` | The port to be used by the clamav service |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)

View File

@ -1,7 +0,0 @@
#!/bin/bash
release=clamav
namespace=clamav
helm template . --namespace $namespace --name-template $release > clamav.yaml
kubectl apply --namespace $namespace -f clamav.yaml

View File

@ -1,52 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kubezero-lib.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "kubezero-lib.labels" . | nindent 4 }}
data:
clamd.conf: |
LogTime yes
LogClean yes
LogSyslog no
LogVerbose no
LogFileMaxSize 0
LogFile /dev/stdout
DatabaseDirectory /var/lib/clamav
TCPSocket 3310
LocalSocket /run/clamav/clamd.sock
User clamav
ExitOnOOM yes
Foreground yes
MaxScanSize {{.Values.clamav.limits.scanSize}}M
MaxFileSize {{.Values.clamav.limits.fileSize}}M
# Close the connection when the data size limit is exceeded.
# The value should match your MTA's limit for a maximum attachment size.
# Default: 25M
StreamMaxLength {{.Values.clamav.limits.scanSize}}M
# Maximum length the queue of pending connections may grow to.
# Default: 200
MaxConnectionQueueLength {{.Values.clamav.limits.connectionQueueLength}}
# Maximum number of threads running at the same time.
# Default: 10
MaxThreads {{.Values.clamav.limits.maxThreads}}
# This option specifies how long to wait (in milliseconds) if the send buffer
# is full.
# Keep this value low to prevent clamd hanging.
#
# Default: 500
SendBufTimeout {{.Values.clamav.limits.sendBufTimeout}}
freshclam.conf: |
LogTime yes
LogVerbose yes
NotifyClamd /etc/clamav/clamd.conf
Checks 24
LogSyslog no
DatabaseOwner root
DatabaseMirror {{ .Values.clamav.freshclam.mirrors }}

View File

@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "kubezero-lib.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "kubezero-lib.labels" . | nindent 4 }}
spec:
ports:
- port: {{ .Values.service.port }}
targetPort: 3310
protocol: TCP
name: clamav
selector:
{{- include "kubezero-lib.selectorLabels" . | nindent 4 }}

View File

@ -1,78 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "kubezero-lib.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "kubezero-lib.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.clamav.replicaCount }}
selector:
matchLabels:
{{- include "kubezero-lib.selectorLabels" . | nindent 6 }}
serviceName: {{ include "kubezero-lib.fullname" . }}
template:
metadata:
labels:
{{- include "kubezero-lib.selectorLabels" . | nindent 8 }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
spec:
containers:
- name: clamav
image: "{{ .Values.clamav.image }}:{{ default .Chart.AppVersion .Values.clamav.version }}_base"
ports:
- containerPort: 3310
name: clamav
protocol: TCP
# Give clamav up to 300s to get CVDs in place etc.
startupProbe:
exec:
command:
- /usr/local/bin/clamdcheck.sh
failureThreshold: 30
periodSeconds: 10
livenessProbe:
exec:
command:
- /usr/local/bin/clamdcheck.sh
failureThreshold: 2
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 3
resources:
{{- toYaml .Values.clamav.resources | nindent 10 }}
volumeMounts:
- mountPath: /var/lib/clamav
name: signatures
- mountPath: /etc/clamav
name: config-volume
#securityContext:
# runAsNonRoot: true
volumes:
- name: config-volume
configMap:
name: {{ include "kubezero-lib.fullname" . }}
{{- with .Values.clamav.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.clamav.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.clamav.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumeClaimTemplates:
- metadata:
name: signatures
spec:
accessModes: [ "ReadWriteOnce" ]
{{- with .Values.clamav.storageClassName }}
storageClassName: {{ . }}
{{- end }}
resources:
requests:
storage: 2Gi

View File

@ -1,46 +0,0 @@
# Default values for clamav.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# nameOverride -- override the name of the clamav chart
nameOverride: ""
# fullnameOverride -- override the full name of the clamav chart
fullnameOverride: ""
service:
# service.port -- The port to be used by the clamav service
port: 3310
clamav:
# clamav.image -- The clamav docker image
image: clamav/clamav
# clamav.version -- The clamav docker image version - defaults to .Chart.appVersion
# version: "unstable"
replicaCount: 1
freshclam:
# clamav.freshclam.mirrors -- A list of clamav mirrors to be used by the clamav service
mirrors: database.clamav.net
limits:
# clamav.limits.fileSize -- The largest file size scanable by clamav, in MB
fileSize: 20
# clamav.limits.scanSize -- The largest scan size permitted in clamav, in MB
scanSize: 100
# clamav.limits.connectionQueueLength -- Maximum length the queue of pending connections may grow to
connectionQueueLength: 100
# clamav.limits.maxThreads -- Maximum number of threads running at the same time.
maxThreads: 4
# clamav.limits.sendBufTimeout -- How long to wait (in milliseconds) if the send buffer is full; keep low to avoid clamd hanging
sendBufTimeout: 500
resources:
# clamav.resources -- The resource requests and limits for the clamav service
requests:
cpu: 300m
memory: 2000M
#limits:
# cpu: 2
# memory: 4000M

View File

@ -1,2 +0,0 @@
*.md
*.md.gotmpl

View File

@ -1,14 +0,0 @@
apiVersion: v2
name: kubeadm
description: KubeZero Kubeadm cluster config
type: application
version: 1.26.7
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
- kubezero
- kubeadm
maintainers:
- name: Stefan Reimer
email: stefan@zero-downtime.net
kubeVersion: ">= 1.26.0"

View File

@ -1,57 +0,0 @@
# kubeadm
![Version: 1.26.7](https://img.shields.io/badge/Version-1.26.7-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Kubeadm cluster config
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Stefan Reimer | <stefan@zero-downtime.net> | |
## Requirements
Kubernetes: `>= 1.26.0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| api.apiAudiences | string | `"istio-ca"` | |
| api.awsIamAuth.enabled | bool | `false` | |
| api.awsIamAuth.kubeAdminRole | string | `"arn:aws:iam::000000000000:role/KubernetesNode"` | |
| api.awsIamAuth.workerNodeRole | string | `"arn:aws:iam::000000000000:role/KubernetesNode"` | |
| api.endpoint | string | `"kube-api.changeme.org:6443"` | |
| api.etcdServers | string | `"https://etcd:2379"` | |
| api.extraArgs | object | `{}` | |
| api.listenPort | int | `6443` | |
| api.oidcEndpoint | string | `""` | s3://${CFN[ConfigBucket]}/k8s/$CLUSTERNAME |
| api.serviceAccountIssuer | string | `""` | https://s3.${REGION}.amazonaws.com/${CFN[ConfigBucket]}/k8s/$CLUSTERNAME |
| domain | string | `"changeme.org"` | |
| etcd.extraArgs | object | `{}` | |
| etcd.nodeName | string | `"etcd"` | |
| etcd.state | string | `"new"` | |
| global.clusterName | string | `"pleasechangeme"` | |
| global.highAvailable | bool | `false` | |
| listenAddress | string | `"0.0.0.0"` | Needs to be set to primary node IP |
| nodeName | string | `"kubezero-node"` | set to $HOSTNAME |
| protectKernelDefaults | bool | `false` | |
| systemd | bool | `false` | Set to false for openrc, eg. on Gentoo or Alpine |
## Resources
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
- https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3
- https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3
- https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
- https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
- https://github.com/awslabs/amazon-eks-ami
### Etcd
- https://itnext.io/breaking-down-and-fixing-etcd-cluster-d81e35b9260d

View File

@ -1,31 +0,0 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
## Resources
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
- https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3
- https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3
- https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
- https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
- https://github.com/awslabs/amazon-eks-ami
### Etcd
- https://itnext.io/breaking-down-and-fixing-etcd-cluster-d81e35b9260d

View File

@ -1,159 +0,0 @@
#!/bin/bash
function createMasterAuditPolicy() {
path="templates/apiserver/audit-policy.yaml"
known_apis='
- group: "" # core
- group: "admissionregistration.k8s.io"
- group: "apiextensions.k8s.io"
- group: "apiregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "metrics.k8s.io"
- group: "networking.k8s.io"
- group: "node.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "scheduling.k8s.io"
- group: "storage.k8s.io"'
cat <<EOF >"${path}"
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# The following requests were manually identified as high-volume and low-risk,
# so drop them.
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core
resources: ["endpoints", "services", "services/status"]
- level: None
# Ingress controller reads 'configmaps/ingress-uid' through the unsecured port.
# TODO(#46983): Change this to the ingress controller service account.
users: ["system:unsecured"]
namespaces: ["kube-system"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["configmaps"]
- level: None
users: ["kubelet"] # legacy kubelet identity
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes", "nodes/status"]
- level: None
userGroups: ["system:nodes"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes", "nodes/status"]
- level: None
users:
- system:kube-controller-manager
- system:cloud-controller-manager
- system:kube-scheduler
- system:serviceaccount:kube-system:endpoint-controller
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: "" # core
resources: ["endpoints"]
- level: None
users: ["system:apiserver"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
- level: None
users: ["cluster-autoscaler"]
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: "" # core
resources: ["configmaps", "endpoints"]
# Don't log HPA fetching metrics.
- level: None
users:
- system:kube-controller-manager
- system:cloud-controller-manager
verbs: ["get", "list"]
resources:
- group: "metrics.k8s.io"
# Don't log these read-only URLs.
- level: None
nonResourceURLs:
- /healthz*
- /version
- /swagger*
- /readyz
# Don't log events requests because of performance impact.
- level: None
resources:
- group: "" # core
resources: ["events"]
# node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
- level: Request
users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"]
verbs: ["update","patch"]
resources:
- group: "" # core
resources: ["nodes/status", "pods/status"]
omitStages:
- "RequestReceived"
- level: Request
userGroups: ["system:nodes"]
verbs: ["update","patch"]
resources:
- group: "" # core
resources: ["nodes/status", "pods/status"]
omitStages:
- "RequestReceived"
# deletecollection calls can be large, don't log responses for expected namespace deletions
- level: Request
users: ["system:serviceaccount:kube-system:namespace-controller"]
verbs: ["deletecollection"]
omitStages:
- "RequestReceived"
# Secrets, ConfigMaps, TokenRequest and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
resources:
- group: "" # core
resources: ["secrets", "configmaps", "serviceaccounts/token"]
- group: authentication.k8s.io
resources: ["tokenreviews"]
omitStages:
- "RequestReceived"
# Get responses can be large; skip them.
- level: Request
verbs: ["get", "list", "watch"]
resources: ${known_apis}
omitStages:
- "RequestReceived"
# Default level for known APIs
- level: RequestResponse
resources: ${known_apis}
omitStages:
- "RequestReceived"
# Default level for all other requests.
- level: Metadata
omitStages:
- "RequestReceived"
EOF
}
createMasterAuditPolicy

View File

@ -1,95 +0,0 @@
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{ .Chart.Version }}
clusterName: {{ .Values.global.clusterName }}
#featureGates:
# NonGracefulFailover: true
controlPlaneEndpoint: {{ .Values.api.endpoint }}
networking:
podSubnet: 10.244.0.0/16
etcd:
local:
# imageTag: 3.5.5-0
extraArgs:
### DNS discovery
#discovery-srv: {{ .Values.domain }}
#discovery-srv-name: {{ .Values.global.clusterName }}
advertise-client-urls: https://{{ .Values.etcd.nodeName }}:2379
initial-advertise-peer-urls: https://{{ .Values.etcd.nodeName }}:2380
initial-cluster: {{ include "kubeadm.etcd.initialCluster" .Values.etcd | quote }}
initial-cluster-state: {{ .Values.etcd.state }}
initial-cluster-token: etcd-{{ .Values.global.clusterName }}
name: {{ .Values.etcd.nodeName }}
listen-peer-urls: https://{{ .Values.listenAddress }}:2380
listen-client-urls: https://{{ .Values.listenAddress }}:2379
listen-metrics-urls: http://0.0.0.0:2381
logger: zap
# log-level: "warn"
{{- with .Values.etcd.extraArgs }}
{{- toYaml . | nindent 6 }}
{{- end }}
serverCertSANs:
- "{{ .Values.etcd.nodeName }}"
- "{{ .Values.etcd.nodeName }}.{{ .Values.domain }}"
- "{{ .Values.domain }}"
peerCertSANs:
- "{{ .Values.etcd.nodeName }}"
- "{{ .Values.etcd.nodeName }}.{{ .Values.domain }}"
- "{{ .Values.domain }}"
controllerManager:
extraArgs:
profiling: "false"
terminated-pod-gc-threshold: "300"
leader-elect: {{ .Values.global.highAvailable | quote }}
logging-format: json
feature-gates: {{ include "kubeadm.featuregates" ( dict "return" "csv" ) | trimSuffix "," | quote }}
scheduler:
extraArgs:
profiling: "false"
leader-elect: {{ .Values.global.highAvailable | quote }}
logging-format: json
feature-gates: {{ include "kubeadm.featuregates" ( dict "return" "csv" ) | trimSuffix "," | quote }}
apiServer:
certSANs:
- {{ regexSplit ":" .Values.api.endpoint -1 | first }}
extraArgs:
etcd-servers: {{ .Values.api.etcdServers }}
profiling: "false"
audit-log-path: "/var/log/kubernetes/audit.log"
audit-policy-file: /etc/kubernetes/apiserver/audit-policy.yaml
audit-log-maxage: "7"
audit-log-maxsize: "100"
audit-log-maxbackup: "1"
audit-log-compress: "true"
{{- if .Values.api.falco.enabled }}
audit-webhook-config-file: /etc/kubernetes/apiserver/audit-webhook.yaml
{{- end }}
tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
admission-control-config-file: /etc/kubernetes/apiserver/admission-configuration.yaml
api-audiences: {{ .Values.api.apiAudiences }}
{{- if .Values.api.serviceAccountIssuer }}
service-account-issuer: "{{ .Values.api.serviceAccountIssuer }}"
service-account-jwks-uri: "{{ .Values.api.serviceAccountIssuer }}/openid/v1/jwks"
{{- end }}
{{- if .Values.api.awsIamAuth.enabled }}
authentication-token-webhook-config-file: /etc/kubernetes/apiserver/aws-iam-authenticator.yaml
{{- end }}
feature-gates: {{ include "kubeadm.featuregates" ( dict "return" "csv" ) | trimSuffix "," | quote }}
enable-admission-plugins: DenyServiceExternalIPs,NodeRestriction,EventRateLimit,ExtendedResourceToleration
# {{- if .Values.global.highAvailable }}
# goaway-chance: ".001"
# {{- end }}
logging-format: json
{{- with .Values.api.extraArgs }}
{{- toYaml . | nindent 4 }}
{{- end }}
extraVolumes:
- name: kubezero-apiserver
hostPath: /etc/kubernetes/apiserver
mountPath: /etc/kubernetes/apiserver
readOnly: true
pathType: DirectoryOrCreate
- name: audit-log
hostPath: /var/log/kubernetes
mountPath: /var/log/kubernetes
pathType: DirectoryOrCreate

View File

@ -1,24 +0,0 @@
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: {{ .Values.listenAddress }}
bindPort: {{ .Values.api.listenPort }}
patches:
directory: /tmp/patches
nodeRegistration:
criSocket: "unix:///var/run/crio/crio.sock"
ignorePreflightErrors:
- DirAvailable--var-lib-etcd
- DirAvailable--etc-kubernetes-manifests
- FileAvailable--etc-kubernetes-pki-ca.crt
- FileAvailable--etc-kubernetes-manifests-etcd.yaml
- Swap
- KubeletVersion
kubeletExtraArgs:
node-labels: {{ .Values.nodeLabels | quote }}
{{- with .Values.providerID }}
provider-id: {{ . }}
{{- end }}
{{- if ne .Values.listenAddress "0.0.0.0" }}
node-ip: {{ .Values.listenAddress }}
{{- end }}

View File

@ -1,6 +0,0 @@
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
criSocket: "unix:///var/run/crio/crio.sock"
patches:
directory: /tmp/patches

View File

@ -1,7 +0,0 @@
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy doesn't really support setting a dynamic bind-address via config; replaced by Cilium long-term anyway
metricsBindAddress: "0.0.0.0:10249"
# calico < 3.22.1 breaks starting with 1.23, see https://github.com/projectcalico/calico/issues/5011
# we are moving to Cilium anyway
mode: "iptables"

View File

@ -1,35 +0,0 @@
# https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: cgroupfs
logging:
format: json
hairpinMode: hairpin-veth
{{- if .Values.systemd }}
resolvConf: /run/systemd/resolve/resolv.conf
{{- end }}
protectKernelDefaults: {{ .Values.protectKernelDefaults }}
#eventRecordQPS: 0
# Breaks kubelet at boot time
# tlsCertFile: /var/lib/kubelet/pki/kubelet.crt
# tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet.key
tlsCipherSuites: [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256]
featureGates:
{{- include "kubeadm.featuregates" ( dict "return" "map" ) | nindent 2 }}
# Minimal unit is 40m per pod
podsPerCore: 25
# cpuCFSQuotaPeriod: 10ms
# Basic OS incl. crio
systemReserved:
memory: 96Mi
#ephemeral-storage: "1Gi"
# kubelet memory should be static as runc, conmon are added to each pod's cgroup
kubeReserved:
cpu: 70m
memory: 96Mi
# Let's use the settings below to reserve memory for system processes, as kubeReserved/systemReserved doesn't seem to play well with systemd
#evictionHard:
# memory.available: "484Mi"
imageGCLowThresholdPercent: 70
# kernelMemcgNotification: true

View File

@ -1,2 +0,0 @@
# aws-iam-authenticator
- https://github.com/kubernetes-sigs/aws-iam-authenticator

View File

@ -1,23 +0,0 @@
{{- /* Feature gates for all control plane components */ -}}
{{- define "kubeadm.featuregates" }}
{{- $gates := list "CustomCPUCFSQuotaPeriod" }}
{{- if eq .return "csv" }}
{{- range $key := $gates }}
{{- $key }}=true,
{{- end }}
{{- else }}
{{- range $key := $gates }}
{{ $key }}: true
{{- end }}
{{- end }}
{{- end }}
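{{- /*
Example (a sketch): include "kubeadm.featuregates" (dict "return" "csv") renders
"CustomCPUCFSQuotaPeriod=true," and callers trim the trailing comma, while
"map" renders one "<gate>: true" line per feature gate.
*/ -}}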
{{- /* Etcd default initial cluster */ -}}
{{- define "kubeadm.etcd.initialCluster" -}}
{{- if .initialCluster -}}
{{ .initialCluster }}
{{- else -}}
{{ .nodeName }}=https://{{ .nodeName }}:2380
{{- end -}}
{{- end -}}
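{{- /*
Example (a sketch): with the default values (nodeName "etcd", no initialCluster set)
this renders "etcd=https://etcd:2380"; a static multi-node cluster would instead set
.Values.etcd.initialCluster to a comma-separated list of name=https://host:2380 pairs.
*/ -}}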

View File

@ -1,27 +0,0 @@
{{- if .Values.api.awsIamAuth.enabled }}
apiVersion: v1
kind: Config
clusters:
- cluster:
server: https://{{ .Values.api.endpoint }}
name: {{ .Values.global.clusterName }}
contexts:
- context:
cluster: {{ .Values.global.clusterName }}
user: kubernetes-admin
name: kubernetes-admin@{{ .Values.global.clusterName }}
current-context: kubernetes-admin@{{ .Values.global.clusterName }}
preferences: {}
users:
- name: kubernetes-admin
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "{{ .Values.global.clusterName }}"
- "-r"
- "{{ .Values.api.awsIamAuth.kubeAdminRole }}"
{{- end }}

View File

@ -1,7 +0,0 @@
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
metadata:
name: kubezero-admissionconfiguration
plugins:
- name: EventRateLimit
path: /etc/kubernetes/apiserver/event-config.yaml

View File

@ -1,7 +0,0 @@
# Don't log anything, but keep an audit policy enabled
apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
name: kubezero-auditpolicy
rules:
- level: None

View File

@ -1,164 +0,0 @@
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# The following requests were manually identified as high-volume and low-risk,
# so drop them.
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core
resources: ["endpoints", "services", "services/status"]
- level: None
# Ingress controller reads 'configmaps/ingress-uid' through the unsecured port.
# TODO(#46983): Change this to the ingress controller service account.
users: ["system:unsecured"]
namespaces: ["kube-system"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["configmaps"]
- level: None
users: ["kubelet"] # legacy kubelet identity
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes", "nodes/status"]
- level: None
userGroups: ["system:nodes"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes", "nodes/status"]
- level: None
users:
- system:kube-controller-manager
- system:cloud-controller-manager
- system:kube-scheduler
- system:serviceaccount:kube-system:endpoint-controller
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: "" # core
resources: ["endpoints"]
- level: None
users: ["system:apiserver"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
- level: None
users: ["cluster-autoscaler"]
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: "" # core
resources: ["configmaps", "endpoints"]
# Don't log HPA fetching metrics.
- level: None
users:
- system:kube-controller-manager
- system:cloud-controller-manager
verbs: ["get", "list"]
resources:
- group: "metrics.k8s.io"
# Don't log these read-only URLs.
- level: None
nonResourceURLs:
- /healthz*
- /version
- /swagger*
# Don't log events requests because of performance impact.
- level: None
resources:
- group: "" # core
resources: ["events"]
# node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
- level: Request
users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"]
verbs: ["update","patch"]
resources:
- group: "" # core
resources: ["nodes/status", "pods/status"]
omitStages:
- "RequestReceived"
- level: Request
userGroups: ["system:nodes"]
verbs: ["update","patch"]
resources:
- group: "" # core
resources: ["nodes/status", "pods/status"]
omitStages:
- "RequestReceived"
# deletecollection calls can be large, don't log responses for expected namespace deletions
- level: Request
users: ["system:serviceaccount:kube-system:namespace-controller"]
verbs: ["deletecollection"]
omitStages:
- "RequestReceived"
# Secrets, ConfigMaps, TokenRequest and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
resources:
- group: "" # core
resources: ["secrets", "configmaps", "serviceaccounts/token"]
- group: authentication.k8s.io
resources: ["tokenreviews"]
omitStages:
- "RequestReceived"
# Get responses can be large; skip them.
- level: Request
verbs: ["get", "list", "watch"]
resources:
- group: "" # core
- group: "admissionregistration.k8s.io"
- group: "apiextensions.k8s.io"
- group: "apiregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "metrics.k8s.io"
- group: "networking.k8s.io"
- group: "node.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "scheduling.k8s.io"
- group: "storage.k8s.io"
omitStages:
- "RequestReceived"
# Default level for known APIs
- level: RequestResponse
resources:
- group: "" # core
- group: "admissionregistration.k8s.io"
- group: "apiextensions.k8s.io"
- group: "apiregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "metrics.k8s.io"
- group: "networking.k8s.io"
- group: "node.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "scheduling.k8s.io"
- group: "storage.k8s.io"
omitStages:
- "RequestReceived"
# Default level for all other requests.
- level: Metadata
omitStages:
- "RequestReceived"

View File

@ -1,14 +0,0 @@
apiVersion: v1
kind: Config
clusters:
- name: falco
cluster:
server: http://falco-control-plane-k8saudit-webhook:9765/k8s-audit
contexts:
- context:
cluster: falco
user: ""
name: default-context
current-context: default-context
preferences: {}
users: []

View File

@ -1,19 +0,0 @@
{{- if .Values.api.awsIamAuth.enabled }}
# clusters refers to the remote service.
clusters:
- name: aws-iam-authenticator
cluster:
certificate-authority-data: "replaced at runtime"
server: https://localhost:21362/authenticate
# users refers to the API Server's webhook configuration
# (we don't need to authenticate the API server).
users:
- name: apiserver
# kubeconfig files require a context. Provide one for the API Server.
current-context: webhook
contexts:
- name: webhook
context:
cluster: aws-iam-authenticator
user: apiserver
{{- end }}

View File

@ -1,13 +0,0 @@
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
metadata:
name: kubezero-eventratelimits
limits:
- type: Namespace
qps: 50
burst: 100
cacheSize: 20
- type: User
qps: 10
burst: 50
cacheSize: 20

View File

@ -1,17 +0,0 @@
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
- name: amazon-ecr-credential-helper
matchImages:
- "*.dkr.ecr.*.amazonaws.com"
- "*.dkr.ecr.*.amazonaws.cn"
- "*.dkr.ecr-fips.*.amazonaws.com"
- "*.dkr.ecr.us-iso-east-1.c2s.ic.gov"
- "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov"
defaultCacheDuration: "12h"
apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
args:
- get
#env:
# - name: AWS_PROFILE
# value: example_profile

View File

@ -1,14 +0,0 @@
spec:
replicas: {{ ternary 3 1 .Values.global.highAvailable }}
template:
spec:
containers:
- name: coredns
resources:
requests:
cpu: 100m
memory: 32Mi
limits:
memory: 128Mi
nodeSelector:
node-role.kubernetes.io/control-plane: ""

View File

@ -1,8 +0,0 @@
spec:
containers:
- name: etcd
resources:
requests:
cpu: 200m
memory: 192Mi
#ephemeral-storage: 1Gi

View File

@ -1,8 +0,0 @@
spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: kube-apiserver
resources:
requests:
cpu: 200m
memory: 1Gi

View File

@ -1,10 +0,0 @@
json:
- op: add
path: /spec/containers/0/command/-
value: --bind-address={{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/livenessProbe/httpGet/host
value: {{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/startupProbe/httpGet/host
value: {{ .Values.listenAddress }}

View File

@ -1,7 +0,0 @@
spec:
containers:
- name: kube-controller-manager
resources:
requests:
cpu: 100m
memory: 128Mi

View File

@ -1,10 +0,0 @@
json:
- op: add
path: /spec/containers/0/command/-
value: --bind-address={{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/livenessProbe/httpGet/host
value: {{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/startupProbe/httpGet/host
value: {{ .Values.listenAddress }}

View File

@ -1,7 +0,0 @@
spec:
containers:
- name: kube-scheduler
resources:
requests:
cpu: 100m
memory: 64Mi

View File

@ -1,10 +0,0 @@
json:
- op: add
path: /spec/containers/0/command/-
value: --bind-address={{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/livenessProbe/httpGet/host
value: {{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/startupProbe/httpGet/host
value: {{ .Values.listenAddress }}

View File

@ -1,8 +0,0 @@
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: crio
handler: crun
overhead:
podFixed:
memory: 4Mi

View File

@ -1,13 +0,0 @@
{{- if .Values.api.serviceAccountIssuer }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: oidc-public
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:service-account-issuer-discovery
subjects:
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -1,46 +0,0 @@
{{- if .Values.api.awsIamAuth.enabled }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: iamidentitymappings.iamauthenticator.k8s.aws
spec:
group: iamauthenticator.k8s.aws
scope: Cluster
names:
plural: iamidentitymappings
singular: iamidentitymapping
kind: IAMIdentityMapping
categories:
- all
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
required:
- arn
- username
properties:
arn:
type: string
username:
type: string
groups:
type: array
items:
type: string
status:
type: object
properties:
canonicalARN:
type: string
userID:
type: string
subresources:
status: {}
{{- end }}

View File

@ -1,153 +0,0 @@
{{- if .Values.api.awsIamAuth.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aws-iam-authenticator
rules:
- apiGroups:
- iamauthenticator.k8s.aws
resources:
- iamidentitymappings
verbs:
- get
- list
- watch
- apiGroups:
- iamauthenticator.k8s.aws
resources:
- iamidentitymappings/status
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- update
- patch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- aws-auth
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-iam-authenticator
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aws-iam-authenticator
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aws-iam-authenticator
subjects:
- kind: ServiceAccount
name: aws-iam-authenticator
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: kube-system
name: aws-iam-authenticator
labels:
k8s-app: aws-iam-authenticator
data:
config.yaml: |
clusterID: {{ .Values.global.clusterName }}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: kube-system
name: aws-iam-authenticator
labels:
k8s-app: aws-iam-authenticator
annotations:
seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
selector:
matchLabels:
k8s-app: aws-iam-authenticator
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
k8s-app: aws-iam-authenticator
spec:
priorityClassName: system-cluster-critical
# use the service account with access to the IAMIdentityMappings
serviceAccountName: aws-iam-authenticator
# run on the host network (don't depend on CNI)
hostNetwork: true
# run on each controller
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
containers:
- name: aws-iam-authenticator
image: public.ecr.aws/zero-downtime/aws-iam-authenticator:v0.6.10
args:
- server
- --backend-mode=CRD,MountedFile
- --config=/etc/aws-iam-authenticator/config.yaml
- --state-dir=/var/aws-iam-authenticator
- --kubeconfig-pregenerated=true
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
resources:
requests:
memory: 32Mi
cpu: 10m
limits:
memory: 64Mi
#cpu: 100m
volumeMounts:
- name: config
mountPath: /etc/aws-iam-authenticator/
- name: state
mountPath: /var/aws-iam-authenticator/
volumes:
- name: config
configMap:
name: aws-iam-authenticator
- name: state
secret:
secretName: aws-iam-certs
{{- end }}

View File

@ -1,23 +0,0 @@
{{- if .Values.api.awsIamAuth.enabled }}
apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
name: kubezero-worker-nodes
spec:
arn: {{ .Values.api.awsIamAuth.workerNodeRole }}
username: system:node:{{ "{{" }}EC2PrivateDNSName{{ "}}" }}
groups:
- system:bootstrappers:kubeadm:default-node-token
---
# Admin Role for remote access
apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
name: kubernetes-admin
spec:
arn: {{ .Values.api.awsIamAuth.kubeAdminRole }}
username: kubernetes-admin
groups:
- system:masters
{{- end }}

View File

@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations:
external-dns.alpha.kubernetes.io/hostname: {{ regexSplit ":" .Values.api.endpoint -1 | first }}
external-dns.alpha.kubernetes.io/ttl: "60"
name: kubezero-api
namespace: kube-system
spec:
type: ClusterIP
clusterIP: None
selector:
component: kube-apiserver
tier: control-plane

View File

@ -1,38 +0,0 @@
global:
clusterName: pleasechangeme
highAvailable: false
# -- set to $HOSTNAME
nodeName: kubezero-node
domain: changeme.org
# -- Needs to be set to primary node IP
listenAddress: 0.0.0.0
api:
endpoint: kube-api.changeme.org:6443
listenPort: 6443
etcdServers: "https://etcd:2379"
extraArgs: {}
# -- https://s3.${REGION}.amazonaws.com/${CFN[ConfigBucket]}/k8s/$CLUSTERNAME
serviceAccountIssuer: ""
# -- s3://${CFN[ConfigBucket]}/k8s/$CLUSTERNAME
oidcEndpoint: ""
apiAudiences: "istio-ca"
awsIamAuth:
enabled: false
workerNodeRole: "arn:aws:iam::000000000000:role/KubernetesNode"
kubeAdminRole: "arn:aws:iam::000000000000:role/KubernetesNode"
falco:
enabled: false
etcd:
nodeName: etcd
state: new
extraArgs: {}
# -- Set to false for openrc, eg. on Gentoo or Alpine
systemd: false
protectKernelDefaults: false

View File

@ -1,53 +0,0 @@
apiVersion: v2
name: kubezero-addons
description: KubeZero umbrella chart for various optional cluster addons
type: application
version: 0.8.0
appVersion: v1.26
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
- kubezero
- fuse-device-plugin
- neuron-device-plugin
- nvidia-device-plugin
- cluster-autoscaler
- sealed-secrets
- external-dns
- aws-node-termination-handler
- falco
maintainers:
- name: Stefan Reimer
email: stefan@zero-downtime.net
dependencies:
- name: external-dns
version: 1.12.2
repository: https://kubernetes-sigs.github.io/external-dns/
condition: external-dns.enabled
- name: cluster-autoscaler
version: 9.28.0
repository: https://kubernetes.github.io/autoscaler
condition: cluster-autoscaler.enabled
- name: nvidia-device-plugin
version: 0.14.0
# https://github.com/NVIDIA/k8s-device-plugin
repository: https://nvidia.github.io/k8s-device-plugin
condition: nvidia-device-plugin.enabled
- name: sealed-secrets
version: 2.8.1
repository: https://bitnami-labs.github.io/sealed-secrets
condition: sealed-secrets.enabled
- name: aws-node-termination-handler
version: 0.21.0
# repository: https://aws.github.io/eks-charts
condition: aws-node-termination-handler.enabled
- name: aws-eks-asg-rolling-update-handler
version: 1.3.0
# repository: https://twin.github.io/helm-charts
condition: aws-eks-asg-rolling-update-handler.enabled
- name: falco
version: 3.3.0
repository: https://falcosecurity.github.io/charts
condition: falco-control-plane.enabled
alias: falco-control-plane
kubeVersion: ">= 1.26.0"

View File

@ -1,166 +0,0 @@
# kubezero-addons
![Version: 0.8.0](https://img.shields.io/badge/Version-0.8.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.26](https://img.shields.io/badge/AppVersion-v1.26-informational?style=flat-square)
KubeZero umbrella chart for various optional cluster addons
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Stefan Reimer | <stefan@zero-downtime.net> | |
## Requirements
Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| | aws-eks-asg-rolling-update-handler | 1.3.0 |
| | aws-node-termination-handler | 0.21.0 |
| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.8.1 |
| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.12.2 |
| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.28.0 |
| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.14.0 |
# MetalLB
# device-plugins
## AWS Neuron
Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/) - [Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/)
## Nvidia
## Cluster AutoScaler
- https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| aws-eks-asg-rolling-update-handler.enabled | bool | `false` | |
| aws-eks-asg-rolling-update-handler.environmentVars[0].name | string | `"CLUSTER_NAME"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[0].value | string | `""` | |
| aws-eks-asg-rolling-update-handler.environmentVars[1].name | string | `"AWS_REGION"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[1].value | string | `"us-west-2"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[2].name | string | `"EXECUTION_INTERVAL"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[2].value | string | `"60"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[3].name | string | `"METRICS"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[3].value | string | `"true"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[4].name | string | `"EAGER_CORDONING"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[4].value | string | `"true"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[5].name | string | `"SLOW_MODE"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[5].value | string | `"true"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[6].name | string | `"AWS_ROLE_ARN"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[6].value | string | `""` | |
| aws-eks-asg-rolling-update-handler.environmentVars[7].name | string | `"AWS_WEB_IDENTITY_TOKEN_FILE"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[7].value | string | `"/var/run/secrets/sts.amazonaws.com/serviceaccount/token"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[8].name | string | `"AWS_STS_REGIONAL_ENDPOINTS"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[8].value | string | `"regional"` | |
| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.7.0"` | |
| aws-eks-asg-rolling-update-handler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| aws-eks-asg-rolling-update-handler.resources.limits.memory | string | `"128Mi"` | |
| aws-eks-asg-rolling-update-handler.resources.requests.cpu | string | `"10m"` | |
| aws-eks-asg-rolling-update-handler.resources.requests.memory | string | `"32Mi"` | |
| aws-eks-asg-rolling-update-handler.tolerations[0].effect | string | `"NoSchedule"` | |
| aws-eks-asg-rolling-update-handler.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| aws-node-termination-handler.deleteLocalData | bool | `true` | |
| aws-node-termination-handler.emitKubernetesEvents | bool | `true` | |
| aws-node-termination-handler.enableProbesServer | bool | `true` | |
| aws-node-termination-handler.enablePrometheusServer | bool | `false` | |
| aws-node-termination-handler.enableSpotInterruptionDraining | bool | `false` | |
| aws-node-termination-handler.enableSqsTerminationDraining | bool | `true` | |
| aws-node-termination-handler.enabled | bool | `false` | |
| aws-node-termination-handler.extraEnv[0] | object | `{"name":"AWS_ROLE_ARN","value":""}` | "arn:aws:iam::${AWS::AccountId}:role/${AWS::Region}.${ClusterName}.awsNth" |
| aws-node-termination-handler.extraEnv[1].name | string | `"AWS_WEB_IDENTITY_TOKEN_FILE"` | |
| aws-node-termination-handler.extraEnv[1].value | string | `"/var/run/secrets/sts.amazonaws.com/serviceaccount/token"` | |
| aws-node-termination-handler.extraEnv[2].name | string | `"AWS_STS_REGIONAL_ENDPOINTS"` | |
| aws-node-termination-handler.extraEnv[2].value | string | `"regional"` | |
| aws-node-termination-handler.fullnameOverride | string | `"aws-node-termination-handler"` | |
| aws-node-termination-handler.ignoreDaemonSets | bool | `true` | |
| aws-node-termination-handler.jsonLogging | bool | `true` | |
| aws-node-termination-handler.logFormatVersion | int | `2` | |
| aws-node-termination-handler.managedTag | string | `"aws-node-termination-handler/managed"` | "aws-node-termination-handler/${ClusterName}" |
| aws-node-termination-handler.metadataTries | int | `0` | |
| aws-node-termination-handler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| aws-node-termination-handler.podMonitor.create | bool | `false` | |
| aws-node-termination-handler.queueURL | string | `""` | https://sqs.${AWS::Region}.amazonaws.com/${AWS::AccountId}/${ClusterName}_Nth |
| aws-node-termination-handler.rbac.pspEnabled | bool | `false` | |
| aws-node-termination-handler.taintNode | bool | `true` | |
| aws-node-termination-handler.tolerations[0].effect | string | `"NoSchedule"` | |
| aws-node-termination-handler.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| aws-node-termination-handler.useProviderId | bool | `true` | |
| awsNeuron.enabled | bool | `false` | |
| awsNeuron.image.name | string | `"public.ecr.aws/neuron/neuron-device-plugin"` | |
| awsNeuron.image.tag | string | `"1.9.3.0"` | |
| cluster-autoscaler.autoDiscovery.clusterName | string | `""` | |
| cluster-autoscaler.awsRegion | string | `"us-west-2"` | |
| cluster-autoscaler.enabled | bool | `false` | |
| cluster-autoscaler.extraArgs.balance-similar-node-groups | bool | `true` | |
| cluster-autoscaler.extraArgs.ignore-taint | string | `"node.cilium.io/agent-not-ready"` | |
| cluster-autoscaler.extraArgs.scan-interval | string | `"30s"` | |
| cluster-autoscaler.extraArgs.skip-nodes-with-local-storage | bool | `false` | |
| cluster-autoscaler.image.tag | string | `"v1.25.1"` | |
| cluster-autoscaler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cluster-autoscaler.podDisruptionBudget | bool | `false` | |
| cluster-autoscaler.prometheusRule.enabled | bool | `false` | |
| cluster-autoscaler.prometheusRule.interval | string | `"30"` | |
| cluster-autoscaler.serviceMonitor.enabled | bool | `false` | |
| cluster-autoscaler.serviceMonitor.interval | string | `"30s"` | |
| cluster-autoscaler.tolerations[0].effect | string | `"NoSchedule"` | |
| cluster-autoscaler.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| clusterBackup.enabled | bool | `false` | |
| clusterBackup.extraEnv | list | `[]` | |
| clusterBackup.image.name | string | `"public.ecr.aws/zero-downtime/kubezero-admin"` | |
| clusterBackup.password | string | `""` | /etc/cloudbender/clusterBackup.passphrase |
| clusterBackup.repository | string | `""` | s3:https://s3.amazonaws.com/${CFN[ConfigBucket]}/k8s/${CLUSTERNAME}/clusterBackup |
| external-dns.enabled | bool | `false` | |
| external-dns.interval | string | `"3m"` | |
| external-dns.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| external-dns.provider | string | `"inmemory"` | |
| external-dns.sources[0] | string | `"service"` | |
| external-dns.tolerations[0].effect | string | `"NoSchedule"` | |
| external-dns.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| external-dns.triggerLoopOnEvent | bool | `true` | |
| forseti.aws.iamRoleArn | string | `""` | "arn:aws:iam::${AWS::AccountId}:role/${AWS::Region}.${ClusterName}.kubezeroForseti" |
| forseti.aws.region | string | `""` | |
| forseti.enabled | bool | `false` | |
| forseti.image.name | string | `"public.ecr.aws/zero-downtime/forseti"` | |
| forseti.image.tag | string | `"v0.1.2"` | |
| fuseDevicePlugin.enabled | bool | `false` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key | string | `"node.kubernetes.io/instance-type"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator | string | `"In"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0] | string | `"g5.xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[10] | string | `"g4dn.4xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[11] | string | `"g4dn.8xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[12] | string | `"g4dn.12xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[13] | string | `"g4dn.16xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[1] | string | `"g5.2xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[2] | string | `"g5.4xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[3] | string | `"g5.8xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[4] | string | `"g5.12xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[5] | string | `"g5.16xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[6] | string | `"g5.24xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[7] | string | `"g5.48xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[8] | string | `"g4dn.xlarge"` | |
| nvidia-device-plugin.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[9] | string | `"g4dn.2xlarge"` | |
| nvidia-device-plugin.enabled | bool | `false` | |
| nvidia-device-plugin.tolerations[0].effect | string | `"NoSchedule"` | |
| nvidia-device-plugin.tolerations[0].key | string | `"nvidia.com/gpu"` | |
| nvidia-device-plugin.tolerations[0].operator | string | `"Exists"` | |
| nvidia-device-plugin.tolerations[1].effect | string | `"NoSchedule"` | |
| nvidia-device-plugin.tolerations[1].key | string | `"kubezero-workergroup"` | |
| nvidia-device-plugin.tolerations[1].operator | string | `"Exists"` | |
| sealed-secrets.enabled | bool | `false` | |
| sealed-secrets.fullnameOverride | string | `"sealed-secrets-controller"` | |
| sealed-secrets.keyrenewperiod | string | `"0"` | |
| sealed-secrets.metrics.serviceMonitor.enabled | bool | `false` | |
| sealed-secrets.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| sealed-secrets.resources.limits.memory | string | `"128Mi"` | |
| sealed-secrets.resources.requests.cpu | string | `"10m"` | |
| sealed-secrets.resources.requests.memory | string | `"24Mi"` | |
| sealed-secrets.tolerations[0].effect | string | `"NoSchedule"` | |
| sealed-secrets.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
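For orientation, a minimal values override that switches the termination handler on in queue-processor mode might look like the sketch below; the account ID, region, cluster name and queue URL are placeholders that must match the resources provisioned in your account:

```yaml
aws-node-termination-handler:
  enabled: true
  queueURL: "https://sqs.eu-central-1.amazonaws.com/123456789012/mycluster_Nth"
  extraEnv:
    - name: AWS_ROLE_ARN
      value: "arn:aws:iam::123456789012:role/eu-central-1.mycluster.awsNth"
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: "regional"
```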

View File

@@ -1,28 +0,0 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
# MetalLB
# device-plugins
## AWS Neuron
Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/) - [Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/)
## Nvidia
## Cluster AutoScaler
- https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
{{ template "chart.valuesSection" . }}

View File

@@ -1,8 +0,0 @@
apiVersion: v2
description: Handles rolling upgrades for AWS ASGs for EKS by replacing outdated nodes
  with new nodes.
home: https://github.com/TwiN/aws-eks-asg-rolling-update-handler
maintainers:
- name: TwiN
name: aws-eks-asg-rolling-update-handler
version: 1.3.0

View File

@@ -1,13 +0,0 @@
# aws-eks-asg-rolling-update-handler
## Configuration
The following table lists the configurable parameters of the aws-eks-asg-rolling-update-handler chart and their default values.
| Parameters | Description | Required | Default |
|:-----------|:------------|:---------|:------------|
| environmentVars | Environment variables for the aws-eks-asg-rolling-update-handler container; available variables are listed [here](https://github.com/TwiN/aws-eks-asg-rolling-update-handler/blob/master/README.md#usage) | yes |`[{"name":"CLUSTER_NAME","value":"cluster-name"}]`|
| replicaCount | Number of aws-eks-asg-rolling-update-handler replicas | yes |`1` |
| image.repository | Image repository | yes | `twinproduction/aws-eks-asg-rolling-update-handler` |
| image.tag | Image tag | yes | `v1.7.0` |
| image.pullPolicy | Image pull policy | yes | `IfNotPresent` |
| resources | CPU/memory resource requests/limits | no | `{}` |
| podAnnotations | Annotations to add to the aws-eks-asg-rolling-update-handler pod configuration | no | `{}` |
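As a rough sketch (release name, namespace and chart path are assumptions), the chart can be installed standalone with the cluster name overridden via `--set`:

```shell
helm upgrade --install rolling-update-handler ./charts/aws-eks-asg-rolling-update-handler \
  --namespace kube-system \
  --set environmentVars[0].name=CLUSTER_NAME \
  --set environmentVars[0].value=my-eks-cluster
```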

View File

@@ -1,31 +0,0 @@
{{/*
Create a default app name.
*/}}
{{- define "aws-eks-asg-rolling-update-handler.name" -}}
{{- .Chart.Name -}}
{{- end -}}
{{/*
Create a default namespace.
*/}}
{{- define "aws-eks-asg-rolling-update-handler.namespace" -}}
{{- .Release.Namespace -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "aws-eks-asg-rolling-update-handler.labels" -}}
app.kubernetes.io/name: {{ include "aws-eks-asg-rolling-update-handler.name" . }}
{{- end -}}
{{/*
Create the name of the service account to use.
*/}}
{{- define "aws-eks-asg-rolling-update-handler.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "aws-eks-asg-rolling-update-handler.name" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

View File

@@ -1,15 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "aws-eks-asg-rolling-update-handler.name" . }}
labels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 4 }}
roleRef:
kind: ClusterRole
name: {{ template "aws-eks-asg-rolling-update-handler.name" . }}
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "aws-eks-asg-rolling-update-handler.serviceAccountName" . }}
namespace: {{ template "aws-eks-asg-rolling-update-handler.namespace" . }}

View File

@@ -1,41 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "aws-eks-asg-rolling-update-handler.name" . }}
labels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 4 }}
rules:
- apiGroups:
- "*"
resources:
- "*"
verbs:
- get
- list
- watch
- apiGroups:
- "*"
resources:
- nodes
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- "*"
resources:
- pods/eviction
verbs:
- get
- list
- create
- apiGroups:
- "*"
resources:
- pods
verbs:
- get
- list

View File

@@ -1,60 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "aws-eks-asg-rolling-update-handler.name" . }}
namespace: {{ template "aws-eks-asg-rolling-update-handler.namespace" . }}
labels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 6 }}
template:
metadata:
labels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 8 }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
automountServiceAccountToken: true
serviceAccountName: {{ template "aws-eks-asg-rolling-update-handler.serviceAccountName" . }}
restartPolicy: Always
dnsPolicy: Default
containers:
- name: {{ template "aws-eks-asg-rolling-update-handler.name" . }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- toYaml .Values.environmentVars | nindent 12 }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- name: aws-token
mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
readOnly: true
volumes:
- name: aws-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 86400
audience: "sts.amazonaws.com"
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -1,13 +0,0 @@
{{ if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "aws-eks-asg-rolling-update-handler.serviceAccountName" . }}
namespace: {{ template "aws-eks-asg-rolling-update-handler.namespace" . }}
labels:
{{ include "aws-eks-asg-rolling-update-handler.labels" . | indent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{ end }}

View File

@@ -1,39 +0,0 @@
replicaCount: 1
image:
repository: twinproduction/aws-eks-asg-rolling-update-handler
tag: v1.7.0
pullPolicy: IfNotPresent
#imagePullSecrets:
#- imagePullSecret
environmentVars:
- name: CLUSTER_NAME
value: "cluster-name" # REPLACE THIS WITH THE NAME OF YOUR EKS CLUSTER
#- name: AUTO_SCALING_GROUP_NAMES
# value: "asg-1,asg-2,asg-3" # REPLACE THESE VALUES FOR THE NAMES OF THE ASGs, if CLUSTER_NAME is provided, this is ignored
#- name: IGNORE_DAEMON_SETS
# value: "true"
#- name: DELETE_LOCAL_DATA
# value: "true"
#- name: AWS_REGION
# value: us-west-2
#- name: ENVIRONMENT
# value: ""
resources: {}
# limits:
# cpu: 0.3
# memory: 100Mi
# requests:
# cpu: 0.1
# memory: 50Mi
podAnnotations: {}
# prometheus.io/port: "8080"
# prometheus.io/scrape: "true"
serviceAccount:
create: true
#name: aws-eks-asg-rolling-update-handler
annotations: {}

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
example-values*.yaml

View File

@@ -1,25 +0,0 @@
apiVersion: v2
appVersion: 1.19.0
description: A Helm chart for the AWS Node Termination Handler.
home: https://github.com/aws/eks-charts
icon: https://raw.githubusercontent.com/aws/eks-charts/master/docs/logo/aws.png
keywords:
- aws
- eks
- ec2
- node-termination
- spot
kubeVersion: '>= 1.16-0'
maintainers:
- email: bwagner5@users.noreply.github.com
name: Brandon Wagner
url: https://github.com/bwagner5
- email: jillmon@users.noreply.github.com
name: Jillian Kuentz
url: https://github.com/jillmon
name: aws-node-termination-handler
sources:
- https://github.com/aws/aws-node-termination-handler/
- https://github.com/aws/eks-charts/
type: application
version: 0.21.0

View File

@@ -1,176 +0,0 @@
# AWS Node Termination Handler
AWS Node Termination Handler Helm chart for Kubernetes. For more information on this project see the project repo at [github.com/aws/aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler).
## Prerequisites
- _Kubernetes_ >= v1.16
## Installing the Chart
Before you can install the chart you will need to add the `eks` repo to [Helm](https://helm.sh/).
```shell
helm repo add eks https://aws.github.io/eks-charts/
```
After you've added the repo you can install the chart. The following command will install the chart with the release name `aws-node-termination-handler` and the default configuration into the `kube-system` namespace.
```shell
helm upgrade --install --namespace kube-system aws-node-termination-handler eks/aws-node-termination-handler
```
To install the chart on an EKS cluster where the AWS Node Termination Handler is already installed, you can run the following command.
```shell
helm upgrade --install --namespace kube-system aws-node-termination-handler eks/aws-node-termination-handler --recreate-pods --force
```
If you receive an error similar to the one below, simply rerun the above command.
> Error: release aws-node-termination-handler failed: <resource> "aws-node-termination-handler" already exists
To uninstall the `aws-node-termination-handler` chart installation from the `kube-system` namespace run the following command.
```shell
helm delete --namespace kube-system aws-node-termination-handler
```
## Configuration
The following tables list the configurable parameters of the chart and their default values. These values are split up into the [common configuration](#common-configuration) shared by all AWS Node Termination Handler modes, [queue configuration](#queue-processor-mode-configuration) used when AWS Node Termination Handler is in queue-processor mode, and [IMDS configuration](#imds-mode-configuration) used when AWS Node Termination Handler is in IMDS mode; for more information about the different modes see the project [README](https://github.com/aws/aws-node-termination-handler/blob/main/README.md).
### Common Configuration
The configuration in this table applies to all AWS Node Termination Handler modes.
| Parameter | Description | Default |
| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| `image.repository` | Image repository. | `public.ecr.aws/aws-ec2/aws-node-termination-handler` |
| `image.tag` | Image tag. | `v{{ .Chart.AppVersion}}` |
| `image.pullPolicy` | Image pull policy. | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets. | `[]` |
| `nameOverride` | Override the `name` of the chart. | `""` |
| `fullnameOverride` | Override the `fullname` of the chart. | `""` |
| `serviceAccount.create` | If `true`, create a new service account. | `true` |
| `serviceAccount.name` | Service account to be used. If not set and `serviceAccount.create` is `true`, a name is generated using the full name template. | `nil` |
| `serviceAccount.annotations` | Annotations to add to the service account. | `{}` |
| `rbac.create` | If `true`, create the RBAC resources. | `true` |
| `rbac.pspEnabled` | If `true`, create a pod security policy resource. Note: `PodSecurityPolicy`s will not be created when Kubernetes version is 1.25 or later. | `true` |
| `customLabels` | Labels to add to all resource metadata. | `{}` |
| `podLabels` | Labels to add to the pod. | `{}` |
| `podAnnotations` | Annotations to add to the pod. | `{}` |
| `podSecurityContext` | Security context for the pod. | _See values.yaml_ |
| `securityContext` | Security context for the _aws-node-termination-handler_ container. | _See values.yaml_ |
| `terminationGracePeriodSeconds` | The termination grace period for the pod. | `nil` |
| `resources` | Resource requests and limits for the _aws-node-termination-handler_ container. | `{}` |
| `nodeSelector`                     | Expressions to select a node by its labels for pod assignment. In IMDS mode this has a higher priority than `daemonsetNodeSelector` (for backwards compatibility) but shouldn't be used.                                                                                                                                                                                                  | `{}`                                                  |
| `affinity` | Affinity settings for pod assignment. In IMDS mode this has a higher priority than `daemonsetAffinity` (for backwards compatibility) but shouldn't be used. | `{}` |
| `tolerations` | Tolerations for pod assignment. In IMDS mode this has a higher priority than `daemonsetTolerations` (for backwards compatibility) but shouldn't be used. | `[]` |
| `extraEnv` | Additional environment variables for the _aws-node-termination-handler_ container. | `[]` |
| `probes` | The Kubernetes liveness probe configuration. | _See values.yaml_ |
| `logLevel`                         | Sets the log level (`info`, `debug`, or `error`)                                                                                                                                                                                                                                                                                                                                          | `info`                                                |
| `logFormatVersion` | Sets the log format version. Available versions: 1, 2. Version 1 refers to the format that has been used through v1.17.3. Version 2 offers more detail for the "event kind" and "reason", especially when operating in Queue Processor mode. | `1` |
| `jsonLogging` | If `true`, use JSON-formatted logs instead of human readable logs. | `false` |
| `enablePrometheusServer` | If `true`, start an http server exposing `/metrics` endpoint for _Prometheus_. | `false` |
| `prometheusServerPort` | Replaces the default HTTP port for exposing _Prometheus_ metrics. | `9092` |
| `dryRun` | If `true`, only log if a node would be drained. | `false` |
| `cordonOnly` | If `true`, nodes will be cordoned but not drained when an interruption event occurs. | `false` |
| `taintNode` | If `true`, nodes will be tainted when an interruption event occurs. Currently used taint keys are `aws-node-termination-handler/scheduled-maintenance`, `aws-node-termination-handler/spot-itn`, `aws-node-termination-handler/asg-lifecycle-termination` and `aws-node-termination-handler/rebalance-recommendation`. | `false` |
| `excludeFromLoadBalancers` | If `true`, nodes will be marked for exclusion from load balancers before they are cordoned. This applies the `node.kubernetes.io/exclude-from-external-load-balancers` label to enable the ServiceNodeExclusion feature gate. The label will not be modified or removed for nodes that already have it. | `false` |
| `deleteLocalData` | If `true`, continue even if there are pods using local data that will be deleted when the node is drained. | `true` |
| `ignoreDaemonSets` | If `true`, skip terminating daemon set managed pods. | `true` |
| `podTerminationGracePeriod` | The time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used, which defaults to 30 seconds if not specified for the pod. | `-1` |
| `nodeTerminationGracePeriod` | Period of time in seconds given to each node to terminate gracefully. Node draining will be scheduled based on this value to optimize the amount of compute time, but still safely drain the node before an event. | `120` |
| `emitKubernetesEvents` | If `true`, Kubernetes events will be emitted when interruption events are received and when actions are taken on Kubernetes nodes. In IMDS Processor mode a default set of annotations with all the node metadata gathered from IMDS will be attached to each event. More information [here](https://github.com/aws/aws-node-termination-handler/blob/main/docs/kubernetes_events.md). | `false` |
| `completeLifecycleActionDelaySeconds` | Pause after draining the node before completing the EC2 Autoscaling lifecycle action. This may be helpful if Pods on the node have Persistent Volume Claims. | `-1` |
| `kubernetesEventsExtraAnnotations` | A comma-separated list of `key=value` extra annotations to attach to all emitted Kubernetes events (e.g. `first=annotation,sample.annotation/number=two`).                                                                                                                                                                                                                                | `""`                                                  |
| `webhookURL` | Posts event data to URL upon instance interruption action. | `""` |
| `webhookURLSecretName` | Pass the webhook URL as a Secret using the key `webhookurl`. | `""` |
| `webhookHeaders` | Replace the default webhook headers (e.g. `{"Content-type":"application/json"}`). | `""` |
| `webhookProxy` | Uses the specified HTTP(S) proxy for sending webhook data. | `""` |
| `webhookTemplate` | Replaces the default webhook message template (e.g. `{"text":"[NTH][Instance Interruption] EventID: {{ .EventID }} - Kind: {{ .Kind }} - Instance: {{ .InstanceID }} - Node: {{ .NodeName }} - Description: {{ .Description }} - Start Time: {{ .StartTime }}"}`). | `""` |
| `webhookTemplateConfigMapName`     | Pass the webhook template file as a configmap.                                                                                                                                                                                                                                                                                                                                            | `""`                                                  |
| `webhookTemplateConfigMapKey` | Name of the Configmap key storing the template file. | `""` |
| `enableSqsTerminationDraining` | If `true`, this turns on queue-processor mode which drains nodes when an SQS termination event is received. | `false` |
### Queue-Processor Mode Configuration
The configuration in this table applies to AWS Node Termination Handler in queue-processor mode.
| Parameter | Description | Default |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
| `replicas` | The number of replicas in the deployment when using queue-processor mode (NOTE: increasing replicas may cause duplicate webhooks since pods are stateless). | `1` |
| `strategy` | Specify the update strategy for the deployment. | `{}` |
| `podDisruptionBudget` | Limit the disruption for controller pods, requires at least 2 controller replicas. | `{}` |
| `serviceMonitor.create` | If `true`, create a ServiceMonitor. This requires `enablePrometheusServer: true`. | `false` |
| `serviceMonitor.namespace` | Override ServiceMonitor _Helm_ release namespace. | `nil` |
| `serviceMonitor.labels` | Additional ServiceMonitor metadata labels. | `{}` |
| `serviceMonitor.interval` | _Prometheus_ scrape interval. | `30s` |
| `serviceMonitor.sampleLimit` | Number of scraped samples accepted. | `5000` |
| `priorityClassName` | Name of the PriorityClass to use for the Deployment. | `system-cluster-critical` |
| `awsRegion` | If specified, use the AWS region for AWS API calls, else NTH will try to find the region through the `AWS_REGION` environment variable, IMDS, or the specified queue URL. | `""` |
| `queueURL` | Listens for messages on the specified SQS queue URL. | `""` |
| `workers` | The maximum amount of parallel event processors to handle concurrent events. | `10` |
| `checkTagBeforeDraining` | If `true`, check that the instance is tagged with the `managedTag` before draining the node. | `true` |
| `managedTag` | The node tag to check if `checkTagBeforeDraining` is `true`. | `aws-node-termination-handler/managed` |
| `checkASGTagBeforeDraining`  | [DEPRECATED](Use `checkTagBeforeDraining` instead) If `true`, check that the instance is tagged with the `managedAsgTag` before draining the node. If `false`, disables calls to the ASG API.                                                                | `true`                                 |
| `managedAsgTag`              | [DEPRECATED](Use `managedTag` instead) The node tag to check if `checkASGTagBeforeDraining` is `true`.                                                                                                                                                       | `aws-node-termination-handler/managed` |
| `useProviderId` | If `true`, fetch node name through Kubernetes node spec ProviderID instead of AWS event PrivateDnsHostname. | `false` |
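Putting the queue-processor parameters together, a hedged values sketch for this mode could look as follows; the region, queue URL and managed tag are assumptions that must match your SQS and tagging setup:

```yaml
enableSqsTerminationDraining: true
awsRegion: eu-central-1  # assumption; omit to resolve via AWS_REGION, IMDS or the queue URL
queueURL: "https://sqs.eu-central-1.amazonaws.com/123456789012/mycluster_Nth"
checkTagBeforeDraining: true
managedTag: "aws-node-termination-handler/mycluster"
useProviderId: true
replicas: 1
```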
### IMDS Mode Configuration
The configuration in this table applies to AWS Node Termination Handler in IMDS mode.
| Parameter | Description | Default |
| -------------------------------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------|
| `targetNodeOs`                   | Space-separated list of node OSes to target (e.g. `"linux"`, `"windows"`, `"linux windows"`). Windows support is **EXPERIMENTAL**.                                                                                                                              | `"linux"`              |
| `linuxPodLabels` | Labels to add to each Linux pod. | `{}` |
| `windowsPodLabels` | Labels to add to each Windows pod. | `{}` |
| `linuxPodAnnotations` | Annotations to add to each Linux pod. | `{}` |
| `windowsPodAnnotations` | Annotations to add to each Windows pod. | `{}` |
| `updateStrategy`                 | Update strategy for all DaemonSets.                                                                                                                                                                                                                             | _See values.yaml_      |
| `daemonsetPriorityClassName` | Name of the PriorityClass to use for all DaemonSets. | `system-node-critical` |
| `podMonitor.create` | If `true`, create a PodMonitor. This requires `enablePrometheusServer: true`. | `false` |
| `podMonitor.namespace` | Override PodMonitor _Helm_ release namespace. | `nil` |
| `podMonitor.labels`              | Additional PodMonitor metadata labels.                                                                                                                                                                                                                          | `{}`                   |
| `podMonitor.interval` | _Prometheus_ scrape interval. | `30s` |
| `podMonitor.sampleLimit` | Number of scraped samples accepted. | `5000` |
| `useHostNetwork`                 | If `true`, enables `hostNetwork` for the Linux DaemonSet. NOTE: setting this to `false` may cause issues accessing IMDSv2 if your account is not configured with an IP hop count of 2; see [Metrics Endpoint Considerations](#metrics-endpoint-considerations). | `true`                 |
| `dnsPolicy` | If specified, this overrides `linuxDnsPolicy` and `windowsDnsPolicy` with a single policy. | `""` |
| `dnsConfig` | If specified, this sets the dnsConfig: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config | `{}` |
| `linuxDnsPolicy` | DNS policy for the Linux DaemonSet. | `""` |
| `windowsDnsPolicy` | DNS policy for the Windows DaemonSet. | `""` |
| `daemonsetNodeSelector`          | Expressions to select a node by its labels for DaemonSet pod assignment. For backwards compatibility the `nodeSelector` value has priority over this but shouldn't be used.                                                                                     | `{}`                   |
| `linuxNodeSelector` | Override `daemonsetNodeSelector` for the Linux DaemonSet. | `{}` |
| `windowsNodeSelector` | Override `daemonsetNodeSelector` for the Windows DaemonSet. | `{}` |
| `daemonsetAffinity` | Affinity settings for DaemonSet pod assignment. For backwards compatibility the `affinity` has priority over this but shouldn't be used. | `{}` |
| `linuxAffinity` | Override `daemonsetAffinity` for the Linux DaemonSet. | `{}` |
| `windowsAffinity` | Override `daemonsetAffinity` for the Windows DaemonSet. | `{}` |
| `daemonsetTolerations` | Tolerations for DaemonSet pod assignment. For backwards compatibility the `tolerations` has priority over this but shouldn't be used. | `[]` |
| `linuxTolerations` | Override `daemonsetTolerations` for the Linux DaemonSet. | `[]` |
| `windowsTolerations`             | Override `daemonsetTolerations` for the Windows DaemonSet.                                                                                                                                                                                                      | `[]`                   |
| `enableProbesServer` | If `true`, start an http server exposing `/healthz` endpoint for probes. | `false` |
| `metadataTries` | The number of times to try requesting metadata. | `3` |
| `enableSpotInterruptionDraining` | If `true`, drain nodes when the spot interruption termination notice is received. Only used in IMDS mode. | `true` |
| `enableScheduledEventDraining` | If `true`, drain nodes before the maintenance window starts for an EC2 instance scheduled event. Only used in IMDS mode. | `true` |
| `enableRebalanceMonitoring` | If `true`, cordon nodes when the rebalance recommendation notice is received. If you'd like to drain the node in addition to cordoning, then also set `enableRebalanceDraining`. Only used in IMDS mode. | `false` |
| `enableRebalanceDraining` | If `true`, drain nodes when the rebalance recommendation notice is received. Only used in IMDS mode. | `false` |
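For comparison, a minimal IMDS-mode sketch, assuming Linux-only nodes and the default host networking:

```yaml
enableSqsTerminationDraining: false
targetNodeOs: "linux"
enableSpotInterruptionDraining: true
enableScheduledEventDraining: true
enableRebalanceMonitoring: false
daemonsetTolerations:
  - operator: Exists  # run the DaemonSet on every node, including tainted ones
```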
### Testing Configuration
The configuration in this table applies to AWS Node Termination Handler testing and is **NOT RECOMMENDED** FOR PRODUCTION DEPLOYMENTS.
| Parameter | Description | Default |
| --------------------- | --------------------------------------------------------------------------------- | -------------- |
| `awsEndpoint` | (Used for testing) If specified, use the provided AWS endpoint to make API calls. | `""` |
| `awsSecretAccessKey` | (Used for testing) Pass-thru environment variable. | `nil` |
| `awsAccessKeyID` | (Used for testing) Pass-thru environment variable. | `nil` |
| `instanceMetadataURL` | (Used for testing) If specified, use the provided metadata URL. | `""` |
| `procUptimeFile` | (Used for Testing) Specify the uptime file. | `/proc/uptime` |
## Metrics Endpoint Considerations
AWS Node Termination Handler in IMDS mode runs as a DaemonSet with `useHostNetwork: true` by default. If the Prometheus server is enabled with `enablePrometheusServer: true`, nothing else will be able to bind to the configured port (by default `prometheusServerPort: 9092`) in the root network namespace. You will therefore need a firewall or security group configured on the nodes to block access to the `/metrics` endpoint.
You can switch NTH in IMDS mode to run with `useHostNetwork: false`, but you will need to make sure that IMDSv1 is enabled, or increase the IMDSv2 IP hop count to 2 (see the [IMDSv2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html)).
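If you opt for `useHostNetwork: false` with IMDSv2, the hop limit can be raised per instance; the following is a sketch using the standard AWS CLI call, with a placeholder instance ID:

```shell
# Allow IMDSv2 responses to traverse one extra network hop (the pod network namespace)
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 2
```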

View File

@@ -1,8 +0,0 @@
***********************************************************************
* AWS Node Termination Handler *
***********************************************************************
Chart version: {{ .Chart.Version }}
App version: {{ .Chart.AppVersion }}
Image tag: {{ include "aws-node-termination-handler.image" . }}
Mode : {{ if .Values.enableSqsTerminationDraining }}Queue Processor{{ else }}IMDS{{ end }}
***********************************************************************

View File

@@ -1,124 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "aws-node-termination-handler.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "aws-node-termination-handler.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Equivalent to "aws-node-termination-handler.fullname" except that "-win" indicator is appended to the end.
Name will not exceed 63 characters.
*/}}
{{- define "aws-node-termination-handler.fullnameWindows" -}}
{{- include "aws-node-termination-handler.fullname" . | trunc 59 | trimSuffix "-" | printf "%s-win" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "aws-node-termination-handler.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "aws-node-termination-handler.labels" -}}
{{ include "aws-node-termination-handler.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "aws-node-termination-handler.chart" . }}
{{- with .Values.customLabels }}
{{ toYaml . }}
{{- end }}
{{- end -}}
{{/*
Deployment labels
*/}}
{{- define "aws-node-termination-handler.labelsDeployment" -}}
{{ include "aws-node-termination-handler.labels" . }}
app.kubernetes.io/component: deployment
{{- end -}}
{{/*
Daemonset labels
*/}}
{{- define "aws-node-termination-handler.labelsDaemonset" -}}
{{ include "aws-node-termination-handler.labels" . }}
app.kubernetes.io/component: daemonset
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "aws-node-termination-handler.selectorLabels" -}}
app.kubernetes.io/name: {{ include "aws-node-termination-handler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Selector labels for the deployment
*/}}
{{- define "aws-node-termination-handler.selectorLabelsDeployment" -}}
{{ include "aws-node-termination-handler.selectorLabels" . }}
app.kubernetes.io/component: deployment
{{- end -}}
{{/*
Selector labels for the daemonset
*/}}
{{- define "aws-node-termination-handler.selectorLabelsDaemonset" -}}
{{ include "aws-node-termination-handler.selectorLabels" . }}
app.kubernetes.io/component: daemonset
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "aws-node-termination-handler.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "aws-node-termination-handler.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
The image to use
*/}}
{{- define "aws-node-termination-handler.image" -}}
{{- printf "%s:%s" .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }}
{{- end }}
{{/* Get PodDisruptionBudget API Version */}}
{{- define "aws-node-termination-handler.pdb.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">= 1.21-0" .Capabilities.KubeVersion.Version) -}}
{{- print "policy/v1" -}}
{{- else -}}
{{- print "policy/v1beta1" -}}
{{- end -}}
{{- end -}}
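To see how these helpers resolve for a given set of values, rendering the chart locally is usually enough; a sketch, assuming the chart is checked out at the path below:

```shell
# Render the templates without installing; inspect the computed names, labels and image
helm template my-nth ./charts/aws-node-termination-handler \
  --set fullnameOverride=aws-node-termination-handler
```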

View File

@@ -1,52 +0,0 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- patch
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- get
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
- apiGroups:
- extensions
resources:
- daemonsets
verbs:
- get
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- get
{{- if .Values.emitKubernetesEvents }}
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
{{- end }}
{{- end -}}

View File

@@ -1,16 +0,0 @@
{{- if .Values.rbac.create -}}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "aws-node-termination-handler.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "aws-node-termination-handler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}

View File

@@ -1,210 +0,0 @@
{{- if and (not .Values.enableSqsTerminationDraining) (lower .Values.targetNodeOs | contains "linux") -}}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labelsDaemonset" . | nindent 4 }}
spec:
{{- with .Values.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDaemonset" . | nindent 6 }}
kubernetes.io/os: linux
template:
metadata:
labels:
{{- include "aws-node-termination-handler.selectorLabelsDaemonset" . | nindent 8 }}
kubernetes.io/os: linux
k8s-app: aws-node-termination-handler
{{- with (mergeOverwrite (dict) .Values.podLabels .Values.linuxPodLabels) }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if or .Values.podAnnotations .Values.linuxPodAnnotations }}
annotations:
{{- toYaml (mergeOverwrite (dict) .Values.podAnnotations .Values.linuxPodAnnotations) | nindent 8 }}
{{- end }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "aws-node-termination-handler.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.daemonsetPriorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ . }}
{{- end }}
hostNetwork: {{ .Values.useHostNetwork }}
dnsPolicy: {{ default .Values.linuxDnsPolicy .Values.dnsPolicy }}
{{- with .Values.dnsConfig }}
dnsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: aws-node-termination-handler
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
image: {{ include "aws-node-termination-handler.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ENABLE_PROBES_SERVER
value: {{ .Values.enableProbesServer | quote }}
- name: PROBES_SERVER_PORT
value: {{ .Values.probes.httpGet.port | quote }}
- name: PROBES_SERVER_ENDPOINT
value: {{ .Values.probes.httpGet.path | quote }}
- name: LOG_LEVEL
value: {{ .Values.logLevel | quote }}
- name: JSON_LOGGING
value: {{ .Values.jsonLogging | quote }}
- name: LOG_FORMAT_VERSION
value: {{ .Values.logFormatVersion | quote }}
- name: ENABLE_PROMETHEUS_SERVER
value: {{ .Values.enablePrometheusServer | quote }}
- name: PROMETHEUS_SERVER_PORT
value: {{ .Values.prometheusServerPort | quote }}
{{- with .Values.instanceMetadataURL }}
- name: INSTANCE_METADATA_URL
value: {{ . | quote }}
{{- end }}
- name: METADATA_TRIES
value: {{ .Values.metadataTries | quote }}
- name: DRY_RUN
value: {{ .Values.dryRun | quote }}
- name: CORDON_ONLY
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA
value: {{ .Values.deleteLocalData | quote }}
- name: IGNORE_DAEMON_SETS
value: {{ .Values.ignoreDaemonSets | quote }}
- name: POD_TERMINATION_GRACE_PERIOD
value: {{ .Values.podTerminationGracePeriod | quote }}
- name: NODE_TERMINATION_GRACE_PERIOD
value: {{ .Values.nodeTerminationGracePeriod | quote }}
- name: EMIT_KUBERNETES_EVENTS
value: {{ .Values.emitKubernetesEvents | quote }}
{{- with .Values.kubernetesEventsExtraAnnotations }}
- name: KUBERNETES_EVENTS_EXTRA_ANNOTATIONS
value: {{ . | quote }}
{{- end }}
{{- if or .Values.webhookURL .Values.webhookURLSecretName }}
- name: WEBHOOK_URL
{{- if .Values.webhookURLSecretName }}
valueFrom:
secretKeyRef:
name: {{ .Values.webhookURLSecretName }}
key: webhookurl
{{- else }}
value: {{ .Values.webhookURL | quote }}
{{- end }}
{{- end }}
{{- with .Values.webhookHeaders }}
- name: WEBHOOK_HEADERS
value: {{ . | quote }}
{{- end }}
{{- with .Values.webhookProxy }}
- name: WEBHOOK_PROXY
value: {{ . | quote }}
{{- end }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: WEBHOOK_TEMPLATE_FILE
value: {{ print "/config/" .Values.webhookTemplateConfigMapKey | quote }}
{{- else if .Values.webhookTemplate }}
- name: WEBHOOK_TEMPLATE
value: {{ .Values.webhookTemplate | quote }}
{{- end }}
- name: ENABLE_SPOT_INTERRUPTION_DRAINING
value: {{ .Values.enableSpotInterruptionDraining | quote }}
- name: ENABLE_SCHEDULED_EVENT_DRAINING
value: {{ .Values.enableScheduledEventDraining | quote }}
- name: ENABLE_REBALANCE_MONITORING
value: {{ .Values.enableRebalanceMonitoring | quote }}
- name: ENABLE_REBALANCE_DRAINING
value: {{ .Values.enableRebalanceDraining | quote }}
- name: ENABLE_SQS_TERMINATION_DRAINING
value: "false"
- name: UPTIME_FROM_FILE
value: {{ .Values.procUptimeFile | quote }}
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if or .Values.enablePrometheusServer .Values.enableProbesServer }}
ports:
{{- if .Values.enableProbesServer }}
- name: liveness-probe
protocol: TCP
containerPort: {{ .Values.probes.httpGet.port }}
{{- end }}
{{- if .Values.enablePrometheusServer }}
- name: http-metrics
protocol: TCP
containerPort: {{ .Values.prometheusServerPort }}
{{- end }}
{{- end }}
{{- if .Values.enableProbesServer }}
livenessProbe:
{{- toYaml .Values.probes | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- name: uptime
mountPath: {{ .Values.procUptimeFile }}
readOnly: true
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
mountPath: /config/
{{- end }}
volumes:
- name: uptime
hostPath:
path: {{ .Values.procUptimeFile | default "/proc/uptime" }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
configMap:
name: {{ .Values.webhookTemplateConfigMapName }}
{{- end }}
nodeSelector:
kubernetes.io/os: linux
{{- with default .Values.daemonsetNodeSelector (default .Values.nodeSelector .Values.linuxNodeSelector) }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if or .Values.daemonsetAffinity (or .Values.affinity .Values.linuxAffinity) }}
affinity:
{{- toYaml (default .Values.daemonsetAffinity (default .Values.affinity .Values.linuxAffinity)) | nindent 8 }}
{{- end }}
{{- if or .Values.daemonsetTolerations (or .Values.tolerations .Values.linuxTolerations) }}
tolerations:
{{- toYaml (default .Values.daemonsetTolerations (default .Values.tolerations .Values.linuxTolerations )) | nindent 8 }}
{{- end }}
{{- end -}}

View File

@@ -1,204 +0,0 @@
{{- if and (not .Values.enableSqsTerminationDraining) (lower .Values.targetNodeOs | contains "windows") -}}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "aws-node-termination-handler.fullnameWindows" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labelsDaemonset" . | nindent 4 }}
spec:
{{- with .Values.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDaemonset" . | nindent 6 }}
kubernetes.io/os: windows
template:
metadata:
labels:
{{- include "aws-node-termination-handler.selectorLabelsDaemonset" . | nindent 8 }}
kubernetes.io/os: windows
k8s-app: aws-node-termination-handler
{{- with (mergeOverwrite (dict) .Values.podLabels .Values.windowsPodLabels) }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if or .Values.podAnnotations .Values.windowsPodAnnotations }}
annotations:
{{- toYaml (mergeOverwrite (dict) .Values.podAnnotations .Values.windowsPodAnnotations) | nindent 8 }}
{{- end }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "aws-node-termination-handler.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.daemonsetPriorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ . }}
{{- end }}
hostNetwork: false
dnsPolicy: {{ default .Values.windowsDnsPolicy .Values.dnsPolicy }}
{{- with .Values.dnsConfig }}
dnsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: aws-node-termination-handler
{{- with unset .Values.securityContext "runAsUser" }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
image: {{ include "aws-node-termination-handler.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ENABLE_PROBES_SERVER
value: {{ .Values.enableProbesServer | quote }}
- name: PROBES_SERVER_PORT
value: {{ .Values.probes.httpGet.port | quote }}
- name: PROBES_SERVER_ENDPOINT
value: {{ .Values.probes.httpGet.path | quote }}
- name: LOG_LEVEL
value: {{ .Values.logLevel | quote }}
- name: JSON_LOGGING
value: {{ .Values.jsonLogging | quote }}
- name: LOG_FORMAT_VERSION
value: {{ .Values.logFormatVersion | quote }}
- name: ENABLE_PROMETHEUS_SERVER
value: {{ .Values.enablePrometheusServer | quote }}
- name: PROMETHEUS_SERVER_PORT
value: {{ .Values.prometheusServerPort | quote }}
{{- with .Values.instanceMetadataURL }}
- name: INSTANCE_METADATA_URL
value: {{ . | quote }}
{{- end }}
- name: METADATA_TRIES
value: {{ .Values.metadataTries | quote }}
- name: DRY_RUN
value: {{ .Values.dryRun | quote }}
- name: CORDON_ONLY
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA
value: {{ .Values.deleteLocalData | quote }}
- name: IGNORE_DAEMON_SETS
value: {{ .Values.ignoreDaemonSets | quote }}
- name: POD_TERMINATION_GRACE_PERIOD
value: {{ .Values.podTerminationGracePeriod | quote }}
- name: NODE_TERMINATION_GRACE_PERIOD
value: {{ .Values.nodeTerminationGracePeriod | quote }}
- name: EMIT_KUBERNETES_EVENTS
value: {{ .Values.emitKubernetesEvents | quote }}
{{- with .Values.kubernetesEventsExtraAnnotations }}
- name: KUBERNETES_EVENTS_EXTRA_ANNOTATIONS
value: {{ . | quote }}
{{- end }}
{{- if or .Values.webhookURL .Values.webhookURLSecretName }}
- name: WEBHOOK_URL
{{- if .Values.webhookURLSecretName }}
valueFrom:
secretKeyRef:
name: {{ .Values.webhookURLSecretName }}
key: webhookurl
{{- else }}
value: {{ .Values.webhookURL | quote }}
{{- end }}
{{- end }}
{{- with .Values.webhookHeaders }}
- name: WEBHOOK_HEADERS
value: {{ . | quote }}
{{- end }}
{{- with .Values.webhookProxy }}
- name: WEBHOOK_PROXY
value: {{ . | quote }}
{{- end }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: WEBHOOK_TEMPLATE_FILE
value: {{ print "/config/" .Values.webhookTemplateConfigMapKey | quote }}
{{- else if .Values.webhookTemplate }}
- name: WEBHOOK_TEMPLATE
value: {{ .Values.webhookTemplate | quote }}
{{- end }}
- name: ENABLE_SPOT_INTERRUPTION_DRAINING
value: {{ .Values.enableSpotInterruptionDraining | quote }}
- name: ENABLE_SCHEDULED_EVENT_DRAINING
value: {{ .Values.enableScheduledEventDraining | quote }}
- name: ENABLE_REBALANCE_MONITORING
value: {{ .Values.enableRebalanceMonitoring | quote }}
- name: ENABLE_REBALANCE_DRAINING
value: {{ .Values.enableRebalanceDraining | quote }}
- name: ENABLE_SQS_TERMINATION_DRAINING
value: "false"
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if or .Values.enablePrometheusServer .Values.enableProbesServer }}
ports:
{{- if .Values.enableProbesServer }}
- name: liveness-probe
protocol: TCP
containerPort: {{ .Values.probes.httpGet.port }}
hostPort: {{ .Values.probes.httpGet.port }}
{{- end }}
{{- if .Values.enablePrometheusServer }}
- name: http-metrics
protocol: TCP
containerPort: {{ .Values.prometheusServerPort }}
hostPort: {{ .Values.prometheusServerPort }}
{{- end }}
{{- end }}
{{- if .Values.enableProbesServer }}
livenessProbe:
{{- toYaml .Values.probes | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
volumeMounts:
- name: webhook-template
mountPath: /config/
{{- end }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
volumes:
- name: webhook-template
configMap:
name: {{ .Values.webhookTemplateConfigMapName }}
{{- end }}
nodeSelector:
kubernetes.io/os: windows
{{- with default .Values.daemonsetNodeSelector (default .Values.nodeSelector .Values.windowsNodeSelector) }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if or .Values.daemonsetAffinity (or .Values.affinity .Values.windowsAffinity) }}
affinity:
{{- toYaml (default .Values.daemonsetAffinity (default .Values.affinity .Values.windowsAffinity )) | nindent 8 }}
{{- end }}
{{- if or .Values.daemonsetTolerations (or .Values.tolerations .Values.windowsTolerations) }}
tolerations:
{{- toYaml (default .Values.daemonsetTolerations (default .Values.tolerations .Values.windowsTolerations )) | nindent 8 }}
{{- end }}
{{- end -}}

View File

@@ -1,221 +0,0 @@
{{- if .Values.enableSqsTerminationDraining }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labelsDeployment" . | nindent 4 }}
spec:
replicas: {{ .Values.replicas }}
{{- with .Values.strategy }}
strategy:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDeployment" . | nindent 6 }}
template:
metadata:
labels:
{{- include "aws-node-termination-handler.selectorLabelsDeployment" . | nindent 8 }}
k8s-app: aws-node-termination-handler
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "aws-node-termination-handler.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ . }}
{{- end }}
{{- with .Values.dnsConfig }}
dnsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: aws-node-termination-handler
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
image: {{ include "aws-node-termination-handler.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ENABLE_PROBES_SERVER
value: "true"
- name: PROBES_SERVER_PORT
value: {{ .Values.probes.httpGet.port | quote }}
- name: PROBES_SERVER_ENDPOINT
value: {{ .Values.probes.httpGet.path | quote }}
- name: LOG_LEVEL
value: {{ .Values.logLevel | quote }}
- name: JSON_LOGGING
value: {{ .Values.jsonLogging | quote }}
- name: LOG_FORMAT_VERSION
value: {{ .Values.logFormatVersion | quote }}
- name: ENABLE_PROMETHEUS_SERVER
value: {{ .Values.enablePrometheusServer | quote }}
- name: PROMETHEUS_SERVER_PORT
value: {{ .Values.prometheusServerPort | quote }}
# [DEPRECATED] Use CHECK_TAG_BEFORE_DRAINING instead
- name: CHECK_ASG_TAG_BEFORE_DRAINING
value: {{ .Values.checkASGTagBeforeDraining | quote }}
- name: CHECK_TAG_BEFORE_DRAINING
value: {{ .Values.checkTagBeforeDraining | quote }}
# [DEPRECATED] Use MANAGED_TAG instead
- name: MANAGED_ASG_TAG
value: {{ .Values.managedAsgTag | quote }}
- name: MANAGED_TAG
value: {{ .Values.managedTag | quote }}
- name: USE_PROVIDER_ID
value: {{ .Values.useProviderId | quote }}
- name: DRY_RUN
value: {{ .Values.dryRun | quote }}
- name: CORDON_ONLY
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA
value: {{ .Values.deleteLocalData | quote }}
- name: IGNORE_DAEMON_SETS
value: {{ .Values.ignoreDaemonSets | quote }}
- name: POD_TERMINATION_GRACE_PERIOD
value: {{ .Values.podTerminationGracePeriod | quote }}
- name: NODE_TERMINATION_GRACE_PERIOD
value: {{ .Values.nodeTerminationGracePeriod | quote }}
- name: EMIT_KUBERNETES_EVENTS
value: {{ .Values.emitKubernetesEvents | quote }}
- name: COMPLETE_LIFECYCLE_ACTION_DELAY_SECONDS
value: {{ .Values.completeLifecycleActionDelaySeconds | quote }}
{{- with .Values.kubernetesEventsExtraAnnotations }}
- name: KUBERNETES_EVENTS_EXTRA_ANNOTATIONS
value: {{ . | quote }}
{{- end }}
{{- if or .Values.webhookURL .Values.webhookURLSecretName }}
- name: WEBHOOK_URL
{{- if .Values.webhookURLSecretName }}
valueFrom:
secretKeyRef:
name: {{ .Values.webhookURLSecretName }}
key: webhookurl
{{- else }}
value: {{ .Values.webhookURL | quote }}
{{- end }}
{{- end }}
{{- with .Values.webhookHeaders }}
- name: WEBHOOK_HEADERS
value: {{ . | quote }}
{{- end }}
{{- with .Values.webhookProxy }}
- name: WEBHOOK_PROXY
value: {{ . | quote }}
{{- end }}
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: WEBHOOK_TEMPLATE_FILE
value: {{ print "/config/" .Values.webhookTemplateConfigMapKey | quote }}
{{- else if .Values.webhookTemplate }}
- name: WEBHOOK_TEMPLATE
value: {{ .Values.webhookTemplate | quote }}
{{- end }}
- name: ENABLE_SQS_TERMINATION_DRAINING
value: "true"
{{- with .Values.awsRegion }}
- name: AWS_REGION
value: {{ . | quote }}
{{- end }}
{{- with .Values.awsEndpoint }}
- name: AWS_ENDPOINT
value: {{ . | quote }}
{{- end }}
{{- if and .Values.awsAccessKeyID .Values.awsSecretAccessKey }}
- name: AWS_ACCESS_KEY_ID
value: {{ .Values.awsAccessKeyID | quote }}
- name: AWS_SECRET_ACCESS_KEY
value: {{ .Values.awsSecretAccessKey | quote }}
{{- end }}
- name: QUEUE_URL
value: {{ .Values.queueURL | quote }}
- name: WORKERS
value: {{ .Values.workers | quote }}
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: liveness-probe
protocol: TCP
containerPort: {{ .Values.probes.httpGet.port }}
{{- if .Values.enablePrometheusServer }}
- name: http-metrics
protocol: TCP
containerPort: {{ .Values.prometheusServerPort }}
{{- end }}
livenessProbe:
{{- toYaml .Values.probes | nindent 12 }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- name: aws-token
mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
readOnly: true
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
mountPath: /config/
{{- end }}
volumes:
- name: aws-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 86400
audience: "sts.amazonaws.com"
{{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
configMap:
name: {{ .Values.webhookTemplateConfigMapName }}
{{- end }}
nodeSelector:
kubernetes.io/os: linux
{{- with .Values.nodeSelector }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@ -1,14 +0,0 @@
{{- if and .Values.enableSqsTerminationDraining (and .Values.podDisruptionBudget (gt (int .Values.replicas) 1)) }}
apiVersion: {{ include "aws-node-termination-handler.pdb.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDeployment" . | nindent 6 }}
{{- toYaml .Values.podDisruptionBudget | nindent 2 }}
{{- end }}


@ -1,29 +0,0 @@
{{- if and (not .Values.enableSqsTerminationDraining) (and .Values.enablePrometheusServer .Values.podMonitor.create) -}}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}
namespace: {{ default .Release.Namespace .Values.podMonitor.namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
{{- with .Values.podMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
jobLabel: app.kubernetes.io/name
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
podMetricsEndpoints:
- port: http-metrics
path: /metrics
{{- with .Values.podMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.podMonitor.sampleLimit }}
sampleLimit: {{ . }}
{{- end }}
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDaemonset" . | nindent 6 }}
{{- end -}}


@ -1,70 +0,0 @@
{{- if and (.Values.rbac.pspEnabled) (semverCompare "<1.25-0" .Capabilities.KubeVersion.GitVersion) }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: {{ .Values.useHostNetwork }}
hostPID: false
{{- if and (and (not .Values.enableSqsTerminationDraining) .Values.useHostNetwork ) (or .Values.enablePrometheusServer .Values.enableProbesServer) }}
hostPorts:
{{- if .Values.enablePrometheusServer }}
- min: {{ .Values.prometheusServerPort }}
max: {{ .Values.prometheusServerPort }}
{{- end }}
{{- if .Values.enableProbesServer }}
- min: {{ .Values.probes.httpGet.port }}
max: {{ .Values.probes.httpGet.port }}
{{- end }}
{{- end }}
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "aws-node-termination-handler.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "aws-node-termination-handler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@ -1,18 +0,0 @@
{{- if and .Values.enableSqsTerminationDraining .Values.enablePrometheusServer -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labelsDeployment" . | nindent 4 }}
spec:
type: ClusterIP
selector:
{{- include "aws-node-termination-handler.selectorLabelsDeployment" . | nindent 4 }}
ports:
- name: http-metrics
port: {{ .Values.prometheusServerPort }}
targetPort: http-metrics
protocol: TCP
{{- end -}}


@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "aws-node-termination-handler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@ -1,29 +0,0 @@
{{- if and .Values.enableSqsTerminationDraining (and .Values.enablePrometheusServer .Values.serviceMonitor.create) -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
namespace: {{ default .Release.Namespace .Values.serviceMonitor.namespace }}
labels:
{{- include "aws-node-termination-handler.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
jobLabel: app.kubernetes.io/name
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
endpoints:
- port: http-metrics
path: /metrics
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.sampleLimit }}
sampleLimit: {{ . }}
{{- end }}
selector:
matchLabels:
{{- include "aws-node-termination-handler.selectorLabelsDeployment" . | nindent 6 }}
{{- end -}}


@ -1,295 +0,0 @@
# Default values for aws-node-termination-handler.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: public.ecr.aws/aws-ec2/aws-node-termination-handler
# Overrides the image tag whose default is {{ printf "v%s" .Chart.AppVersion }}
tag: ""
pullPolicy: IfNotPresent
pullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
  # The name of the service account to use. If name is not set and create is true, a name is generated using the fullname template
name:
annotations: {}
# eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
rbac:
# Specifies whether RBAC resources should be created
create: true
# Specifies if PodSecurityPolicy resources should be created. PodSecurityPolicy will not be created when Kubernetes version is 1.25 or later.
pspEnabled: true
customLabels: {}
podLabels: {}
podAnnotations: {}
podSecurityContext:
fsGroup: 1000
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
allowPrivilegeEscalation: false
runAsUser: 1000
runAsGroup: 1000
terminationGracePeriodSeconds:
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
# Extra environment variables
extraEnv: []
# Liveness probe settings
probes:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
# Set the log level
logLevel: info
# Set the log format version
logFormatVersion: 1
# Log messages in JSON format
jsonLogging: false
enablePrometheusServer: false
prometheusServerPort: 9092
# dryRun tells node-termination-handler to only log calls to the Kubernetes control plane
dryRun: false
# Cordon but do not drain nodes upon spot interruption termination notice.
cordonOnly: false
# Taint node upon spot interruption termination notice.
taintNode: false
# Exclude node from load balancer before cordoning via the ServiceNodeExclusion feature gate.
excludeFromLoadBalancers: false
# deleteLocalData tells kubectl to continue even if there are pods using
# emptyDir (local data that will be deleted when the node is drained).
deleteLocalData: true
# ignoreDaemonSets causes kubectl to skip Daemon Set managed pods.
ignoreDaemonSets: true
# podTerminationGracePeriod is the time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used.
podTerminationGracePeriod: -1
# nodeTerminationGracePeriod specifies the period of time in seconds given to each NODE to terminate gracefully. Node draining will be scheduled based on this value to optimize the amount of compute time, but still safely drain the node before an event.
nodeTerminationGracePeriod: 120
# emitKubernetesEvents If true, Kubernetes events will be emitted when interruption events are received and when actions are taken on Kubernetes nodes. In IMDS Processor mode a default set of annotations with all the node metadata gathered from IMDS will be attached to each event
emitKubernetesEvents: false
# completeLifecycleActionDelaySeconds will pause for the configured duration after draining the node before completing the EC2 Autoscaling lifecycle action. This may be helpful if Pods on the node have Persistent Volume Claims.
completeLifecycleActionDelaySeconds: -1
# kubernetesEventsExtraAnnotations A comma-separated list of key=value extra annotations to attach to all emitted Kubernetes events
# Example: "first=annotation,sample.annotation/number=two"
kubernetesEventsExtraAnnotations: ""
# webhookURL if specified, posts event data to URL upon instance interruption action.
webhookURL: ""
# Webhook URL will be fetched from the secret store using the given name.
webhookURLSecretName: ""
# webhookHeaders if specified, replaces the default webhook headers.
webhookHeaders: ""
# webhookProxy if specified, uses this HTTP(S) proxy configuration.
webhookProxy: ""
# webhookTemplate if specified, replaces the default webhook message template.
webhookTemplate: ""
# Webhook template file will be fetched from the given ConfigMap name;
# if specified, it replaces the default webhook message with the content of the template file
webhookTemplateConfigMapName: ""
# template file name stored in configmap
webhookTemplateConfigMapKey: ""
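# Example (a sketch; the Secret and ConfigMap names below are placeholders): source
# the webhook URL from a Secret (the chart reads the key "webhookurl") and the
# message template from a ConfigMap mounted under /config/:
# webhookURLSecretName: "nth-webhook-url"
# webhookTemplateConfigMapName: "nth-webhook-template"
# webhookTemplateConfigMapKey: "webhook-template.json"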
# enableSqsTerminationDraining If true, this turns on queue-processor mode which drains nodes when an SQS termination event is received
enableSqsTerminationDraining: false
# ---------------------------------------------------------------------------------------------------------------------
# Queue Processor Mode
# ---------------------------------------------------------------------------------------------------------------------
# The number of replicas in the NTH deployment when using queue-processor mode (NOTE: increasing this may cause duplicate webhooks since NTH pods are stateless)
replicas: 1
# Specify the update strategy for the deployment
strategy: {}
# podDisruptionBudget specifies the disruption budget for the controller pods.
# The disruption budget is only rendered when replicas is greater than 1
podDisruptionBudget: {}
# maxUnavailable: 1
serviceMonitor:
# Specifies whether ServiceMonitor should be created
# this needs enableSqsTerminationDraining: true
# and enablePrometheusServer: true
create: false
# Specifies whether the ServiceMonitor should be created in a different namespace than
# the Helm release
namespace:
# Additional labels to add to the metadata
labels: {}
# The Prometheus scrape interval
interval: 30s
# The number of scraped samples that will be accepted
sampleLimit: 5000
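# Example (a sketch): scraping metrics with the Prometheus operator requires
# queue-processor mode plus the embedded metrics server:
# enableSqsTerminationDraining: true
# enablePrometheusServer: true
# serviceMonitor:
#   create: true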
priorityClassName: system-cluster-critical
# If specified, use the AWS region for AWS API calls
awsRegion: ""
# Listens for messages on the specified SQS queue URL
queueURL: ""
# The maximum number of parallel event processors to handle concurrent events
workers: 10
# [DEPRECATED] Use checkTagBeforeDraining instead
checkASGTagBeforeDraining: true
# If true, check that the instance is tagged with "aws-node-termination-handler/managed" as the key before draining the node
checkTagBeforeDraining: true
# [DEPRECATED] Use managedTag instead
managedAsgTag: "aws-node-termination-handler/managed"
# The tag to ensure is on a node if checkTagBeforeDraining is true
managedTag: "aws-node-termination-handler/managed"
# If true, fetch node name through Kubernetes node spec ProviderID instead of AWS event PrivateDnsHostname.
useProviderId: false
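# Example (a sketch; region, account ID and queue name are placeholders): a minimal
# queue-processor setup draining nodes from a dedicated SQS queue:
# enableSqsTerminationDraining: true
# awsRegion: "eu-central-1"
# queueURL: "https://sqs.eu-central-1.amazonaws.com/123456789012/nth"
# checkTagBeforeDraining: true
# workers: 10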
# ---------------------------------------------------------------------------------------------------------------------
# IMDS Mode
# ---------------------------------------------------------------------------------------------------------------------
# Create node OS specific daemonset(s). (e.g. "linux", "windows", "linux windows")
targetNodeOs: linux
linuxPodLabels: {}
windowsPodLabels: {}
linuxPodAnnotations: {}
windowsPodAnnotations: {}
# K8s DaemonSet update strategy.
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
daemonsetPriorityClassName: system-node-critical
podMonitor:
# Specifies whether PodMonitor should be created
# this needs enableSqsTerminationDraining: false
# and enablePrometheusServer: true
create: false
# Specifies whether the PodMonitor should be created in a different namespace than
# the Helm release
namespace:
# Additional labels to add to the metadata
labels: {}
# The Prometheus scrape interval
interval: 30s
# The number of scraped samples that will be accepted
sampleLimit: 5000
# Determines if NTH uses host networking for Linux when running the DaemonSet (only IMDS mode; queue-processor never runs with host networking)
# If you have disabled IMDSv1 and are relying on IMDSv2, you'll need to increase the IP hop count to 2 before switching this to false
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html
useHostNetwork: true
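# Example (a sketch; the instance ID is a placeholder): raising the IMDSv2 hop limit
# so pods can still reach IMDS once host networking is disabled:
# aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 \
#   --http-tokens required --http-put-response-hop-limit 2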
# Daemonset DNS policy
dnsPolicy: ""
dnsConfig: {}
linuxDnsPolicy: ClusterFirstWithHostNet
windowsDnsPolicy: ClusterFirst
daemonsetNodeSelector: {}
linuxNodeSelector: {}
windowsNodeSelector: {}
daemonsetAffinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "eks.amazonaws.com/compute-type"
operator: NotIn
values:
- fargate
linuxAffinity: {}
windowsAffinity: {}
daemonsetTolerations:
- operator: Exists
linuxTolerations: []
windowsTolerations: []
# If true, run the HTTP probes (liveness) server.
enableProbesServer: false
# Total number of times to try making the metadata request before failing.
metadataTries: 3
# enableSpotInterruptionDraining If false, do not drain nodes when the spot interruption termination notice is received. Only used in IMDS mode.
enableSpotInterruptionDraining: true
# enableScheduledEventDraining If false, do not drain nodes before the maintenance window starts for an EC2 instance scheduled event. Only used in IMDS mode.
enableScheduledEventDraining: true
# enableRebalanceMonitoring If true, cordon nodes when the rebalance recommendation notice is received. Only used in IMDS mode.
enableRebalanceMonitoring: false
# enableRebalanceDraining If true, drain nodes when the rebalance recommendation notice is received. Only used in IMDS mode.
enableRebalanceDraining: false
# ---------------------------------------------------------------------------------------------------------------------
# Testing
# ---------------------------------------------------------------------------------------------------------------------
# (TESTING USE): If specified, use the provided AWS endpoint to make API calls.
awsEndpoint: ""
# (TESTING USE): These should only be used for testing w/ localstack!
awsAccessKeyID:
awsSecretAccessKey:
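# Example (a sketch; assumes localstack's default edge endpoint and dummy credentials):
# awsEndpoint: "http://localhost:4566"
# awsAccessKeyID: "test"
# awsSecretAccessKey: "test"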
# (TESTING USE): Override the default metadata URL (default: http://169.254.169.254:80)
instanceMetadataURL: ""
# (TESTING USE): Mount path for uptime file
procUptimeFile: /proc/uptime


@ -1,29 +0,0 @@
diff -tuNr charts/aws-node-termination-handler.orig/templates/deployment.yaml charts/aws-node-termination-handler/templates/deployment.yaml
--- charts/aws-node-termination-handler.orig/templates/deployment.yaml 2022-01-26 18:01:36.123482217 +0100
+++ charts/aws-node-termination-handler/templates/deployment.yaml 2022-01-26 18:08:21.464304621 +0100
@@ -175,13 +175,23 @@
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
- {{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
volumeMounts:
+ - name: aws-token
+ mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
+ readOnly: true
+ {{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
mountPath: /config/
{{- end }}
- {{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
volumes:
+ - name: aws-token
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: token
+ expirationSeconds: 86400
+ audience: "sts.amazonaws.com"
+ {{- if and .Values.webhookTemplateConfigMapName .Values.webhookTemplateConfigMapKey }}
- name: webhook-template
configMap:
name: {{ .Values.webhookTemplateConfigMapName }}


@ -1,30 +0,0 @@
diff -tuNr charts/aws-eks-asg-rolling-update-handler.orig/templates/deployment.yaml charts/aws-eks-asg-rolling-update-handler/templates/deployment.yaml
--- charts/aws-eks-asg-rolling-update-handler.orig/templates/deployment.yaml 2023-04-12 15:49:08.744242462 +0000
+++ charts/aws-eks-asg-rolling-update-handler/templates/deployment.yaml 2023-04-12 15:55:44.399489809 +0000
@@ -34,6 +34,26 @@
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
+ volumeMounts:
+ - name: aws-token
+ mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
+ readOnly: true
+ volumes:
+ - name: aws-token
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: token
+ expirationSeconds: 86400
+ audience: "sts.amazonaws.com"
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}


@ -1,61 +0,0 @@
{{- if .Values.clusterBackup.enabled }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: kubezero-backup
namespace: kube-system
spec:
schedule: "0 * * * *"
concurrencyPolicy: "Replace"
jobTemplate:
spec:
backoffLimit: 1
activeDeadlineSeconds: 300
ttlSecondsAfterFinished: 3600
template:
spec:
containers:
- name: kubezero-admin
image: "{{ .Values.clusterBackup.image.name }}:{{ default .Chart.AppVersion .Values.clusterBackup.image.tag }}"
imagePullPolicy: Always
command: ["kubezero.sh"]
args:
- backup
volumeMounts:
- name: host
mountPath: /host
- name: workdir
mountPath: /tmp
env:
- name: DEBUG
value: ""
- name: RESTIC_REPOSITORY
valueFrom:
secretKeyRef:
name: kubezero-backup-restic
key: repository
- name: RESTIC_PASSWORD
valueFrom:
secretKeyRef:
name: kubezero-backup-restic
key: password
{{- with .Values.clusterBackup.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
#securityContext:
# readOnlyRootFilesystem: true
hostNetwork: true
volumes:
- name: host
hostPath:
path: /
type: Directory
- name: workdir
emptyDir: {}
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
restartPolicy: Never
{{- end }}


@ -1,11 +0,0 @@
{{- if and .Values.clusterBackup.enabled .Values.clusterBackup.repository .Values.clusterBackup.password }}
apiVersion: v1
kind: Secret
metadata:
name: kubezero-backup-restic
namespace: kube-system
type: Opaque
data:
repository: {{ default "" .Values.clusterBackup.repository | b64enc }}
password: {{ default "" .Values.clusterBackup.password | b64enc }}
{{- end }}


@ -1,70 +0,0 @@
{{- if .Values.awsNeuron.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: neuron-device-plugin
namespace: kube-system
spec:
selector:
matchLabels:
name: neuron-device-plugin-ds
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
name: neuron-device-plugin-ds
spec:
serviceAccount: neuron-device-plugin
tolerations:
- key: aws.amazon.com/neuron
operator: Exists
effect: NoSchedule
- key: kubezero-workergroup
effect: NoSchedule
operator: Exists
# Mark this pod as a critical add-on; when enabled, the critical add-on
# scheduler reserves resources for critical add-on pods so that they can
# be rescheduled after a failure.
# See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
priorityClassName: "system-node-critical"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "node.kubernetes.io/instance-type"
operator: In
values:
- inf1.xlarge
- inf1.2xlarge
- inf1.6xlarge
- inf1.24xlarge
containers:
- image: "{{ .Values.awsNeuron.image.name }}:{{ .Values.awsNeuron.image.tag }}"
imagePullPolicy: IfNotPresent
name: neuron-device-plugin
env:
- name: KUBECONFIG
value: /etc/kubernetes/kubelet.conf
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: infa-map
mountPath: /run
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
- name: infa-map
hostPath:
path: /run
{{- end }}


@ -1,59 +0,0 @@
{{- if .Values.awsNeuron.enabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: neuron-device-plugin
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- update
- patch
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: neuron-device-plugin
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: neuron-device-plugin
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: neuron-device-plugin
subjects:
- kind: ServiceAccount
name: neuron-device-plugin
namespace: kube-system
{{- end }}


@ -1,32 +0,0 @@
{{- if .Values.fuseDevicePlugin.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fuse-device-plugin
namespace: kube-system
spec:
selector:
matchLabels:
name: fuse-device-plugin
template:
metadata:
labels:
name: fuse-device-plugin
spec:
hostNetwork: true
containers:
- image: public.ecr.aws/zero-downtime/fuse-device-plugin:v1.1.0
# imagePullPolicy: Always
name: fuse-device-plugin
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
{{- end }}


@ -1,83 +0,0 @@
{{- if .Values.forseti.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubezero-forseti
namespace: kube-system
labels:
app: kubezero-forseti
spec:
replicas: 1
selector:
matchLabels:
app: kubezero-forseti
template:
metadata:
labels:
app: kubezero-forseti
spec:
containers:
- name: kubezero-forseti
image: "{{ .Values.forseti.image.name }}:{{ .Values.forseti.image.tag }}"
imagePullPolicy: Always
args:
- --health-probe-bind-address=:8081
- --metrics-bind-address=:8080
#- --zap-log-level=2
#- --dry-run
#- --leader-elect
command:
- /forseti
env:
- name: AWS_REGION
value: "{{ .Values.forseti.aws.region }}"
- name: AWS_ROLE_ARN
value: "{{ .Values.forseti.aws.iamRoleArn }}"
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/sts.amazonaws.com/serviceaccount/token
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
securityContext:
allowPrivilegeEscalation: false
volumeMounts:
- mountPath: /var/run/secrets/sts.amazonaws.com/serviceaccount/
name: aws-token
readOnly: true
securityContext:
runAsNonRoot: true
serviceAccountName: kubezero-forseti
terminationGracePeriodSeconds: 10
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
volumes:
- name: aws-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: sts.amazonaws.com
expirationSeconds: 86400
path: token
{{- end }}


@ -1,104 +0,0 @@
{{- if .Values.forseti.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubezero-forseti
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubezero-forseti-manager
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- nodes/finalizers
verbs:
- update
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- get
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: forseti-leader-election
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubezero-forseti-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubezero-forseti-manager
subjects:
- kind: ServiceAccount
name: kubezero-forseti
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: forseti-leader-election
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: forseti-leader-election
subjects:
- kind: ServiceAccount
name: kubezero-forseti
namespace: kube-system
{{- end }}


@ -1,16 +0,0 @@
{{- if .Values.forseti.enabled }}
apiVersion: v1
kind: Service
metadata:
labels:
app: kubezero-forseti
name: forseti-metrics-service
namespace: kube-system
spec:
ports:
- name: http
port: 8080
protocol: TCP
selector:
app: kubezero-forseti
{{- end }}


@ -1,19 +0,0 @@
#!/bin/bash
set -ex

helm repo update

# Read the pinned upstream chart versions from Chart.yaml
NTH_VERSION=$(yq eval '.dependencies[] | select(.name=="aws-node-termination-handler") | .version' Chart.yaml)
RUH_VERSION=$(yq eval '.dependencies[] | select(.name=="aws-eks-asg-rolling-update-handler") | .version' Chart.yaml)

# Re-vendor aws-node-termination-handler and re-apply our local patch
rm -rf charts/aws-node-termination-handler
helm pull eks/aws-node-termination-handler --untar --untardir charts --version "$NTH_VERSION"
# diff -tuNr charts/aws-node-termination-handler.orig charts/aws-node-termination-handler > nth.patch
patch -p0 -i nth.patch --no-backup-if-mismatch

# Re-vendor aws-eks-asg-rolling-update-handler and re-apply our local patch
rm -rf charts/aws-eks-asg-rolling-update-handler
helm pull twin/aws-eks-asg-rolling-update-handler --untar --untardir charts --version "$RUH_VERSION"
patch -p0 -i ruh.patch --no-backup-if-mismatch

helm dep update
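
# To regenerate the second patch after editing the vendored chart (a sketch, assuming
# a pristine copy is kept under charts/aws-eks-asg-rolling-update-handler.orig):
# diff -tuNr charts/aws-eks-asg-rolling-update-handler.orig charts/aws-eks-asg-rolling-update-handler > ruh.patch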


@ -1,325 +0,0 @@
clusterBackup:
enabled: false
image:
name: public.ecr.aws/zero-downtime/kubezero-admin
# tag: v1.22.8
# -- s3:https://s3.amazonaws.com/${CFN[ConfigBucket]}/k8s/${CLUSTERNAME}/clusterBackup
repository: ""
# -- /etc/cloudbender/clusterBackup.passphrase
password: ""
extraEnv: []
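  # Example (a sketch; the bucket and passphrase are placeholders): hourly backups
  # into an S3-backed restic repository:
  # enabled: true
  # repository: "s3:https://s3.amazonaws.com/my-config-bucket/k8s/my-cluster/clusterBackup"
  # password: "my-restic-passphrase"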
forseti:
enabled: false
image:
name: public.ecr.aws/zero-downtime/forseti
tag: v0.1.2
aws:
region: ""
# -- "arn:aws:iam::${AWS::AccountId}:role/${AWS::Region}.${ClusterName}.kubezeroForseti"
iamRoleArn: ""
sealed-secrets:
enabled: false
# ensure kubeseal default values match
fullnameOverride: sealed-secrets-controller
# Disable auto keyrotation for now
keyrenewperiod: "0"
resources:
requests:
cpu: 10m
memory: 24Mi
limits:
memory: 128Mi
metrics:
serviceMonitor:
enabled: false
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
aws-eks-asg-rolling-update-handler:
enabled: false
image:
tag: v1.7.0
environmentVars:
- name: CLUSTER_NAME
value: ""
- name: AWS_REGION
value: us-west-2
- name: EXECUTION_INTERVAL
value: "60"
- name: METRICS
value: "true"
- name: EAGER_CORDONING
value: "true"
# Only disable if all services have PDBs across AZs
- name: SLOW_MODE
value: "true"
- name: AWS_ROLE_ARN
value: ""
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
- name: AWS_STS_REGIONAL_ENDPOINTS
value: "regional"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 128Mi
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
aws-node-termination-handler:
enabled: false
fullnameOverride: "aws-node-termination-handler"
checkASGTagBeforeDraining: false
# -- "zdt:kubezero:nth:${ClusterName}"
managedTag: "zdt:kubezero:nth:${ClusterName}"
useProviderId: true
enableSqsTerminationDraining: true
  # otherwise pods fail trying to reach IMDS
enableSpotInterruptionDraining: false
enableProbesServer: true
deleteLocalData: true
ignoreDaemonSets: true
taintNode: true
emitKubernetesEvents: true
# -- https://sqs.${AWS::Region}.amazonaws.com/${AWS::AccountId}/${ClusterName}_Nth
queueURL: ""
metadataTries: 0
extraEnv:
# -- "arn:aws:iam::${AWS::AccountId}:role/${AWS::Region}.${ClusterName}.awsNth"
- name: AWS_ROLE_ARN
value: ""
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
- name: AWS_STS_REGIONAL_ENDPOINTS
value: "regional"
enablePrometheusServer: false
podMonitor:
create: false
jsonLogging: true
logFormatVersion: 2
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
rbac:
pspEnabled: false
fuseDevicePlugin:
enabled: false
awsNeuron:
enabled: false
image:
name: public.ecr.aws/neuron/neuron-device-plugin
tag: 1.9.3.0
nvidia-device-plugin:
enabled: false
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
- key: kubezero-workergroup
effect: NoSchedule
operator: Exists
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "node.kubernetes.io/instance-type"
operator: In
values:
- g5.xlarge
- g5.2xlarge
- g5.4xlarge
- g5.8xlarge
- g5.12xlarge
- g5.16xlarge
- g5.24xlarge
- g5.48xlarge
- g4dn.xlarge
- g4dn.2xlarge
- g4dn.4xlarge
- g4dn.8xlarge
- g4dn.12xlarge
- g4dn.16xlarge
cluster-autoscaler:
enabled: false
image:
tag: v1.25.1
autoDiscovery:
clusterName: ""
awsRegion: "us-west-2"
serviceMonitor:
enabled: false
interval: 30s
prometheusRule:
enabled: false
interval: "30"
# Disable pdb for now
podDisruptionBudget: false
extraArgs:
scan-interval: 30s
skip-nodes-with-local-storage: false
balance-similar-node-groups: true
ignore-taint: "node.cilium.io/agent-not-ready"
#securityContext:
# runAsNonRoot: true
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
# On AWS enable Projected Service Accounts to assume IAM role
#extraEnv:
# AWS_ROLE_ARN: <IamArn>
# AWS_WEB_IDENTITY_TOKEN_FILE: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
# AWS_STS_REGIONAL_ENDPOINTS: "regional"
#extraVolumes:
#- name: aws-token
# projected:
# sources:
# - serviceAccountToken:
# path: token
# expirationSeconds: 86400
# audience: "sts.amazonaws.com"
#extraVolumeMounts:
#- name: aws-token
# mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
# readOnly: true
external-dns:
enabled: false
interval: 3m
triggerLoopOnEvent: true
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
#logLevel: debug
sources:
- service
#- istio-gateway
provider: inmemory
falco-control-plane:
enabled: false
fullnameOverride: falco-control-plane
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
enabled: false
# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
enabled: false
nodeSelector:
node-role.kubernetes.io/control-plane: ""
  # -- Deploy Falco as a deployment. One instance of Falco is enough; the number of replicas is configurable anyway.
controller:
kind: deployment
deployment:
# -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
# For more info check the section on Plugins in the README.md file.
replicas: 1
falcoctl:
artifact:
install:
# -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
enabled: true
follow:
      # -- Enable the sidecar container. We do not support it yet for plugins; it is used only for rules feeds such as the k8saudit-rules.
enabled: true
config:
artifact:
install:
          # -- Do not resolve the dependencies for artifacts. This defaults to true, but for our use case we disable it.
resolveDeps: false
# -- List of artifacts to be installed by the falcoctl init container.
          # Only rulesfiles; we do not recommend plugins for security reasons since they are executable objects.
refs: [k8saudit-rules:0.6]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
          # Only rulesfiles; we do not recommend plugins for security reasons since they are executable objects.
refs: [k8saudit-rules:0.6]
services:
- name: k8saudit-webhook
ports:
- port: 9765 # See plugin open_params
protocol: TCP
falco:
rules_file:
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d
plugins:
- name: k8saudit
library_path: libk8saudit.so
init_config:
maxEventBytes: 1048576
# sslCertificate: /etc/falco/falco.pem
open_params: "http://:9765/k8s-audit"
- name: json
library_path: libjson.so
init_config: ""
# Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
load_plugins: [k8saudit, json]
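  # Example (a sketch; the service DNS name depends on the falco chart's naming and the
  # install namespace, both assumptions here): the kube-apiserver delivers audit events
  # to the k8saudit plugin via --audit-webhook-config-file pointing at a kubeconfig like:
  # apiVersion: v1
  # kind: Config
  # clusters:
  # - name: falco
  #   cluster:
  #     server: "http://falco-control-plane-k8saudit-webhook.kube-system.svc:9765/k8s-audit"
  # contexts:
  # - name: default
  #   context:
  #     cluster: falco
  # current-context: default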

Some files were not shown because too many files have changed in this diff.