feat: First version of KubeZero NATS module

Stefan Reimer 2021-04-22 11:59:18 +02:00
parent 4fdf4a10b0
commit 92d7a56004
20 changed files with 2169 additions and 1 deletion

View File

@ -0,0 +1,17 @@
apiVersion: v2
name: kubezero-nats
description: KubeZero umbrella chart for NATS
type: application
version: 0.1.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
  - kubezero
  - nats
maintainers:
  - name: Quarky9
dependencies:
  - name: nats
    version: 0.8.3
    #repository: https://nats-io.github.io/k8s/helm/charts/
kubeVersion: ">= 1.18.0"

View File

@ -0,0 +1,24 @@
# kubezero-nats
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for NATS
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Quarky9 | | |
## Requirements
Kubernetes: `>= 1.18.0`
| Repository | Name | Version |
|------------|------|---------|
| | nats | 0.8.3 |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.5.0](https://github.com/norwoodj/helm-docs/releases/v1.5.0)

View File

@ -0,0 +1,19 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
## Resources
- https://grafana.com/grafana/dashboards/13707

View File

@ -0,0 +1,21 @@
apiVersion: v2
appVersion: "2.1.9"
description: A Helm chart for the NATS.io High Speed Cloud Native Distributed Communications Technology.
name: nats
keywords:
  - nats
  - messaging
  - cncf
version: 0.8.3
home: http://github.com/nats-io/k8s
maintainers:
  - name: Waldemar Quevedo
    github: https://github.com/wallyqs
    email: wally@nats.io
  - name: Colin Sullivan
    github: https://github.com/ColinSullivan1
    email: colin@nats.io
  - name: Jaime Piña
    github: https://github.com/variadico
    email: jaime@nats.io
icon: https://nats.io/img/nats-icon-color.png

View File

@ -0,0 +1,586 @@
# NATS Server
[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
## TL;DR;
```console
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
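Any of the settings described in the sections below can be collected into a values file and passed at install time; for instance (a sketch, with `my-values.yaml` as a hypothetical file holding your overrides):
```sh
helm install my-nats nats/nats -f my-values.yaml
```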
## Configuration
### Server Image
```yaml
nats:
  image: nats:2.1.7-alpine3.11
  pullPolicy: IfNotPresent
```
### Limits
```yaml
nats:
  # The number of connect attempts against discovered routes.
  connectRetries: 30

  # How many seconds should pass before sending a PING
  # to a client that has no activity.
  pingInterval:

  # Server settings.
  limits:
    maxConnections:
    maxSubscriptions:
    maxControlLine:
    maxPayload:
    writeDeadline:
    maxPending:
    maxPings:
    lameDuckDuration:

  # Number of seconds to wait for client connections to end after the pod termination is requested
  terminationGracePeriodSeconds: 60
```
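These keys map one-to-one onto NATS server settings (`max_connections`, `max_payload`, `write_deadline`, and so on) in the generated `nats.conf`. A sketch with purely illustrative values — the numbers here are assumptions, not recommendations:
```yaml
nats:
  limits:
    maxConnections: 100        # rendered as max_connections
    maxSubscriptions: 1000     # rendered as max_subscriptions
    maxPayload: 1048576        # bytes; rendered as max_payload
    writeDeadline: "2s"        # rendered quoted as write_deadline
    lameDuckDuration: "30s"    # rendered quoted as lame_duck_duration
```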
### Logging
*Note*: It is not recommended to enable trace or debug in production since enabling it will significantly degrade performance.
```yaml
nats:
  logging:
    debug:
    trace:
    logtime:
    connectErrorReports:
    reconnectErrorReports:
```
### TLS setup for client connections
You can find more on how to set up and troubleshoot TLS connections at:
https://docs.nats.io/nats-server/configuration/securing_nats/tls
```yaml
nats:
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
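The chart only mounts an existing Secret; it does not create one. Assuming the PEM files are already on disk, the referenced Secret could be created like this (a sketch; the file names must match the `ca`, `cert` and `key` values above):
```sh
kubectl create secret generic nats-client-tls \
  --from-file=ca.crt --from-file=tls.crt --from-file=tls.key
```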
## Clustering
If clustering is enabled, a 3-node cluster will be set up. More info at:
https://docs.nats.io/nats-server/configuration/clustering#nats-server-clustering
```yaml
cluster:
  enabled: true
  replicas: 3

  tls:
    secret:
      name: nats-server-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
Example:
```sh
$ helm install nats nats/nats --set cluster.enabled=true
```
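Assuming the release is named `nats`, the three server pods can then be listed via the instance label the chart applies:
```sh
kubectl get pods -l app.kubernetes.io/instance=nats
```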
## Leafnodes
Leafnode connections can be used to extend a cluster. More info at:
https://docs.nats.io/nats-server/configuration/leafnodes
```yaml
leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
```
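A remote can also reference a NATS credentials file stored in a Secret, which the chart mounts under `/etc/nats-creds` (a sketch; the Secret name and key here are assumptions):
```yaml
leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"
      credentials:
        secret:
          name: ngs-creds   # hypothetical Secret holding the .creds file
          key: ngs.creds
```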
## Setting up External Access
### Using HostPorts
If both external access and advertisements are enabled, an initializer container is used to gather the public IPs. This container requires enough RBAC permissions to look up the public IP of the node where it is running.
For example, to set up external access for a cluster and advertise the public IP to clients:
```yaml
nats:
  # Toggle whether to enable external access.
  # This binds a host port for clients, gateways and leafnodes.
  externalAccess: true

  # Toggle to disable client advertisements (connect_urls),
  # in case of running behind a load balancer (which is not recommended)
  # it might be required to disable advertisements.
  advertise: true

  # In case both external access and advertise are enabled
  # then a service account would be required to be able to
  # gather the public ip from a node.
  serviceAccount: "nats-server"
```
The service account named `nats-server` could have, for example, the following RBAC policy:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nats-server
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nats-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nats-server
subjects:
- kind: ServiceAccount
  name: nats-server
  namespace: default
```
The container image of the initializer can be customized via:
```yaml
bootconfig:
  image: connecteverything/nats-boot-config:0.5.2
  pullPolicy: IfNotPresent
```
### Using LoadBalancers
When using a load balancer for external access, it is recommended to set `noAdvertise: true` so that
the internal IPs of the NATS servers are not advertised to clients connecting through
the load balancer.
```yaml
nats:
  image: nats:alpine

cluster:
  enabled: true
  noAdvertise: true

leafnodes:
  enabled: true
  noAdvertise: true

natsbox:
  enabled: true
```
You could then use an L4-enabled load balancer to connect to NATS, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-lb
spec:
  type: LoadBalancer
  selector:
    app: nats
  ports:
    - protocol: TCP
      port: 4222
      targetPort: 4222
      name: nats
    - protocol: TCP
      port: 7422
      targetPort: 7422
      name: leafnodes
    - protocol: TCP
      port: 7522
      targetPort: 7522
      name: gateways
```
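Once the load balancer has an external address, clients can point straight at it (a sketch; `EXTERNAL-IP` is a placeholder for the address your provider assigns):
```sh
kubectl get svc nats-lb
nats-pub -s nats://EXTERNAL-IP:4222 test "hello"
```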
## Gateways
A super cluster can be formed by pointing to remote gateways.
You can find more about gateways in the NATS documentation:
https://docs.nats.io/nats-server/configuration/gateways
```yaml
gateway:
  enabled: false
  name: 'default'

  #############################
  #                           #
  #  List of remote gateways  #
  #                           #
  #############################
  # gateways:
  #   - name: other
  #     url: nats://my-gateway-url:7522

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"
```
## Auth setup
### Auth with a Memory Resolver
```yaml
auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##############################
    #                            #
    #  Memory resolver settings  #
    #                            #
    ##############################
    type: memory

    #
    # Use a configmap reference which will be mounted
    # into the container.
    #
    configMap:
      name: nats-accounts
      key: resolver.conf
```
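The referenced ConfigMap must exist before the install. Assuming you have generated a `resolver.conf` containing the account JWTs (for example with the `nsc` tooling), it could be created like this (a sketch):
```sh
kubectl create configmap nats-accounts --from-file=resolver.conf
```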
### Auth using an Account Server Resolver
```yaml
auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##########################
    #                        #
    #  URL resolver settings #
    #                        #
    ##########################
    type: URL
    url: "http://nats-account-server:9090/jwt/v1/accounts/"
```
## JetStream
### Setting up Memory and File Storage
```yaml
nats:
  image: synadia/nats-server:nightly

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: 2Gi

    fileStorage:
      enabled: true
      size: 1Gi
      storageDirectory: /data/
      storageClassName: default
```
### Using with an existing PersistentVolumeClaim
For example, given the following `PersistentVolumeClaim`:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nats-js-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```
You can start JetStream so that one pod is bound to it:
```yaml
nats:
  image: synadia/nats-server:nightly

  jetstream:
    enabled: true

    fileStorage:
      enabled: true
      storageDirectory: /data/
      existingClaim: nats-js-disk
      claimStorageSize: 3Gi
```
### Clustering example
```yaml
nats:
  image: synadia/nats-server:nightly

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "1Gi"
      storageDirectory: /data/
      storageClassName: default

cluster:
  enabled: true
  # Cluster name is required, by default will be release name.
  # name: "nats"
  replicas: 3
```
## Misc
### NATS Box
A lightweight container with NATS and NATS Streaming utilities that is deployed alongside the cluster to confirm the setup.
You can find the image at: https://github.com/nats-io/nats-box
```yaml
natsbox:
  enabled: true
  image: synadia/nats-box:latest
  pullPolicy: IfNotPresent
  # credentials:
  #   secret:
  #     name: nats-sys-creds
  #     key: sys.creds
```
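Assuming a release named `my-nats`, you can open a shell in the box and exercise the bundled tools, mirroring the chart's install notes:
```sh
kubectl exec -it deployment/my-nats-box -- /bin/sh -l
nats-sub test &
nats-pub test hi
```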
### Configuration Reload sidecar
The NATS config reloader image to use:
```yaml
reloader:
  enabled: true
  image: connecteverything/nats-server-config-reloader:0.6.0
  pullPolicy: IfNotPresent
```
### Prometheus Exporter sidecar
You can toggle whether to start the sidecar that can be used to feed metrics to Prometheus:
```yaml
exporter:
  enabled: true
  image: synadia/prometheus-nats-exporter:0.5.0
  pullPolicy: IfNotPresent
```
### Prometheus operator ServiceMonitor support
You can enable a Prometheus Operator ServiceMonitor:
```yaml
exporter:
  # You have to enable exporter first
  enabled: true
  serviceMonitor:
    enabled: true
    ## Specify the namespace where Prometheus Operator is running
    # namespace: monitoring
    # ...
```
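After the release is deployed, you can confirm the object was created (this assumes the Prometheus Operator CRDs are already installed in the cluster):
```sh
kubectl get servicemonitors
```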
### Pod Customizations
#### Security Context
```yaml
# Toggle whether to set up a Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true
```
#### Affinity
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>
`matchExpressions` must be configured according to your setup
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.kubernetes.io/purpose
              operator: In
              values:
                - nats
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nats
                - stan
        topologyKey: "kubernetes.io/hostname"
```
#### Service topology
[Service topology](https://kubernetes.io/docs/concepts/services-networking/service-topology/) is disabled by default, but can be enabled by setting `topologyKeys`. For example:
```yaml
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
```
#### CPU/Memory Resource Requests/Limits
Sets the pods' CPU/memory requests and limits:
```yaml
nats:
  resources:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 4
      memory: 6Gi
```
No resources are set by default.
#### Annotations
<https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations>
```yaml
podAnnotations:
  key1: "value1"
  key2: "value2"
```
### Name Overrides
You can change the name of the resources as needed with:
```yaml
nameOverride: "my-nats"
```
### Image Pull Secrets
```yaml
imagePullSecrets:
- name: myRegistry
```
Adds this to the StatefulSet:
```yaml
spec:
  imagePullSecrets:
    - name: myRegistry
```
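The referenced pull secret must already exist in the release namespace; for a private registry it could be created like this (a sketch with placeholder registry and credentials):
```sh
kubectl create secret docker-registry myRegistry \
  --docker-server=registry.example.com \
  --docker-username=myuser --docker-password=changeme
```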

View File

@ -0,0 +1,26 @@
{{- if or .Values.nats.logging.debug .Values.nats.logging.trace }}
*WARNING*: Keep in mind that running the server with
debug and/or trace enabled significantly affects the
performance of the server!
{{- end }}
You can find more information about running NATS on Kubernetes
in the NATS documentation website:
https://docs.nats.io/nats-on-kubernetes/nats-kubernetes
{{- if .Values.natsbox.enabled }}
NATS Box has been deployed into your cluster, you can
now use the NATS tools within the container as follows:

  kubectl exec -n {{ .Release.Namespace }} -it deployment/{{ template "nats.fullname" . }}-box -- /bin/sh -l

  nats-box:~# nats-sub test &
  nats-box:~# nats-pub test hi
  nats-box:~# nc {{ template "nats.fullname" . }} 4222
{{- end }}
Thanks for using NATS!

View File

@ -0,0 +1,95 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nats.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "nats.labels" -}}
helm.sh/chart: {{ include "nats.chart" . }}
{{ include "nats.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "nats.selectorLabels" -}}
app.kubernetes.io/name: {{ include "nats.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Return the proper NATS image name
*/}}
{{- define "nats.clusterAdvertise" -}}
{{- printf "$(POD_NAME).%s.$(POD_NAMESPACE).svc" (include "nats.fullname" . ) }}
{{- end }}
{{/*
Return the NATS cluster routes.
*/}}
{{- define "nats.clusterRoutes" -}}
{{- $name := default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- range $i, $e := until (.Values.cluster.replicas | int) -}}
{{- printf "nats://%s-%d.%s.%s.svc:6222," $name $i $name $.Release.Namespace -}}
{{- end -}}
{{- end }}
{{- define "nats.tlsConfig" -}}
tls {
{{- if .cert }}
cert_file: {{ .secretPath }}/{{ .secret.name }}/{{ .cert }}
{{- end }}
{{- if .key }}
key_file: {{ .secretPath }}/{{ .secret.name }}/{{ .key }}
{{- end }}
{{- if .ca }}
ca_file: {{ .secretPath }}/{{ .secret.name }}/{{ .ca }}
{{- end }}
{{- if .insecure }}
insecure: {{ .insecure }}
{{- end }}
{{- if .verify }}
verify: {{ .verify }}
{{- end }}
{{- if .verifyAndMap }}
verify_and_map: {{ .verifyAndMap }}
{{- end }}
{{- if .curvePreferences }}
curve_preferences: {{ .curvePreferences }}
{{- end }}
{{- if .timeout }}
timeout: {{ .timeout }}
{{- end }}
}
{{- end }}

View File

@ -0,0 +1,337 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "nats.fullname" . }}-config
  labels:
    {{- include "nats.labels" . | nindent 4 }}
data:
  nats.conf: |
    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    #  Monitoring #
    #             #
    ###############
    http: 8222
    server_name: $POD_NAME

    {{- if .Values.nats.tls }}
    #####################
    #                   #
    # TLS Configuration #
    #                   #
    #####################
    {{- with .Values.nats.tls }}
    {{- $nats_tls := merge (dict) . }}
    {{- $_ := set $nats_tls "secretPath" "/etc/nats-certs/clients" }}
    {{- include "nats.tlsConfig" $nats_tls | nindent 4}}
    {{- end }}
    {{- end }}

    {{- if .Values.nats.jetstream.enabled }}
    ###################################
    #                                 #
    #          NATS JetStream         #
    #                                 #
    ###################################
    jetstream {
      {{- if .Values.nats.jetstream.memStorage.enabled }}
      max_mem: {{ .Values.nats.jetstream.memStorage.size }}
      {{- end }}

      {{- if .Values.nats.jetstream.fileStorage.enabled }}
      store_dir: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}

      max_file:
      {{- if .Values.nats.jetstream.fileStorage.existingClaim }}
      {{- .Values.nats.jetstream.fileStorage.claimStorageSize }}
      {{- else }}
      {{- .Values.nats.jetstream.fileStorage.size }}
      {{- end }}
      {{- end }}
    }
    {{- end }}

    {{- if .Values.cluster.enabled }}
    ###################################
    #                                 #
    # NATS Full Mesh Clustering Setup #
    #                                 #
    ###################################
    cluster {
      port: 6222

      {{- if .Values.nats.jetstream.enabled }}
      {{- if .Values.cluster.name }}
      name: {{ .Values.cluster.name }}
      {{- else }}
      name: {{ template "nats.name" . }}
      {{- end }}
      {{- else }}
      {{- with .Values.cluster.name }}
      name: {{ . }}
      {{- end }}
      {{- end }}

      {{- with .Values.cluster.tls }}
      {{- $cluster_tls := merge (dict) . }}
      {{- $_ := set $cluster_tls "secretPath" "/etc/nats-certs/cluster" }}
      {{- include "nats.tlsConfig" $cluster_tls | nindent 6}}
      {{- end }}

      routes = [
        {{ include "nats.clusterRoutes" . }}
      ]
      cluster_advertise: $CLUSTER_ADVERTISE

      {{- with .Values.cluster.noAdvertise }}
      no_advertise: {{ . }}
      {{- end }}

      connect_retries: {{ .Values.nats.connectRetries }}
    }
    {{ end }}

    {{- if and .Values.nats.advertise .Values.nats.externalAccess }}
    include "advertise/client_advertise.conf"
    {{- end }}

    {{- if or .Values.leafnodes.enabled .Values.leafnodes.remotes }}
    #################
    #               #
    # NATS Leafnode #
    #               #
    #################
    leafnodes {
      {{- if .Values.leafnodes.enabled }}
      listen: "0.0.0.0:7422"
      {{- end }}

      {{ if and .Values.nats.advertise .Values.nats.externalAccess }}
      include "advertise/gateway_advertise.conf"
      {{ end }}

      {{- with .Values.leafnodes.noAdvertise }}
      no_advertise: {{ . }}
      {{- end }}

      {{- with .Values.leafnodes.tls }}
      {{- $leafnode_tls := merge (dict) . }}
      {{- $_ := set $leafnode_tls "secretPath" "/etc/nats-certs/leafnodes" }}
      {{- include "nats.tlsConfig" $leafnode_tls | nindent 6}}
      {{- end }}

      remotes: [
      {{- range .Values.leafnodes.remotes }}
      {
        {{- with .url }}
        url: {{ . }}
        {{- end }}
        {{- with .credentials }}
        credentials: "/etc/nats-creds/{{ .secret.name }}/{{ .secret.key }}"
        {{- end }}
      }
      {{- end }}
      ]
    }
    {{ end }}

    {{- if .Values.gateway.enabled }}
    #################
    #               #
    # NATS Gateways #
    #               #
    #################
    gateway {
      name: {{ .Values.gateway.name }}
      port: 7522

      {{ if and .Values.nats.advertise .Values.nats.externalAccess }}
      include "advertise/gateway_advertise.conf"
      {{ end }}

      {{- with .Values.gateway.tls }}
      {{- $gateway_tls := merge (dict) . }}
      {{- $_ := set $gateway_tls "secretPath" "/etc/nats-certs/gateways" }}
      {{- include "nats.tlsConfig" $gateway_tls | nindent 6}}
      {{- end }}

      # Gateways array here
      gateways: [
        {{- range .Values.gateway.gateways }}
        {
          {{- with .name }}
          name: {{ . }}
          {{- end }}
          {{- with .url }}
          url: {{ . | quote }}
          {{- end }}
          {{- with .urls }}
          urls: [{{ join "," . }}]
          {{- end }}
        },
        {{- end }}
      ]
    }
    {{ end }}

    {{- with .Values.nats.logging.debug }}
    debug: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.trace }}
    trace: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.logtime }}
    logtime: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.connectErrorReports }}
    connect_error_reports: {{ . }}
    {{- end }}

    {{- with .Values.nats.logging.reconnectErrorReports }}
    reconnect_error_reports: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxConnections }}
    max_connections: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxSubscriptions }}
    max_subscriptions: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxPending }}
    max_pending: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxControlLine }}
    max_control_line: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.maxPayload }}
    max_payload: {{ . }}
    {{- end }}

    {{- with .Values.nats.pingInterval }}
    ping_interval: {{ . }}
    {{- end }}
    {{- with .Values.nats.limits.maxPings }}
    ping_max: {{ . }}
    {{- end }}

    {{- with .Values.nats.limits.writeDeadline }}
    write_deadline: {{ . | quote }}
    {{- end }}

    {{- with .Values.nats.limits.lameDuckDuration }}
    lame_duck_duration: {{ . | quote }}
    {{- end }}
    {{- if .Values.websocket.enabled }}
    ##################
    #                #
    #    Websocket   #
    #                #
    ##################
    ws {
      port: {{ .Values.websocket.port }}
      {{- with .Values.websocket.tls }}
      {{ $secretName := .secret.name }}
      tls {
        {{- with .cert }}
        cert_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
        {{- with .key }}
        key_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
        {{- with .ca }}
        ca_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
        {{- end }}
      }
      {{- else }}
      no_tls: {{ .Values.websocket.noTLS }}
      {{- end }}
    }
    {{- end }}

    {{- if .Values.auth.enabled }}
    ##################
    #                #
    #  Authorization #
    #                #
    ##################
    {{- if .Values.auth.resolver }}
    {{- if eq .Values.auth.resolver.type "memory" }}
    resolver: MEMORY
    include "accounts/{{ .Values.auth.resolver.configMap.key }}"
    {{- end }}

    {{- if eq .Values.auth.resolver.type "full" }}
    {{- if .Values.auth.resolver.configMap }}
    include "accounts/{{ .Values.auth.resolver.configMap.key }}"
    {{- else }}
    {{- with .Values.auth.resolver }}
    operator: {{ .operator }}
    system_account: {{ .systemAccount }}
    {{- end }}

    resolver: {
      type: full
      {{- with .Values.auth.resolver }}
      dir: {{ .store.dir | quote }}
      allow_delete: {{ .allowDelete }}
      interval: {{ .interval | quote }}
      {{- end }}
    }
    {{- end }}
    {{- end }}

    {{- if .Values.auth.resolver.resolverPreload }}
    resolver_preload: {{ toRawJson .Values.auth.resolver.resolverPreload }}
    {{- end }}

    {{- if eq .Values.auth.resolver.type "URL" }}
    {{- with .Values.auth.resolver.url }}
    resolver: URL({{ . }})
    {{- end }}
    operator: /etc/nats-config/operator/{{ .Values.auth.operatorjwt.configMap.key }}
    {{- end }}
    {{- end }}

    {{- with .Values.auth.systemAccount }}
    system_account: {{ . }}
    {{- end }}

    {{- with .Values.auth.basic }}
    {{- with .noAuthUser }}
    no_auth_user: {{ . }}
    {{- end }}

    {{- with .users }}
    authorization {
      users: [
      {{- range . }}
      {{- toRawJson . | nindent 4 }},
      {{- end }}
      ]
    }
    {{- end }}

    {{- with .accounts }}
    accounts: {{- toRawJson . }}
    {{- end }}
    {{- end }}
    {{- end }}

View File

@ -0,0 +1,75 @@
{{- if .Values.natsbox.enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nats.fullname" . }}-box
  labels:
    app: {{ include "nats.fullname" . }}-box
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "nats.fullname" . }}-box
  template:
    metadata:
      labels:
        app: {{ include "nats.fullname" . }}-box
    spec:
      volumes:
      {{- if .Values.natsbox.credentials }}
      - name: nats-sys-creds
        secret:
          secretName: {{ .Values.natsbox.credentials.secret.name }}
      {{- end }}
      {{- with .Values.nats.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-clients-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      containers:
      - name: nats-box
        image: {{ .Values.natsbox.image }}
        imagePullPolicy: {{ .Values.natsbox.pullPolicy }}
        env:
        - name: NATS_URL
          value: {{ template "nats.fullname" . }}
        {{- if .Values.natsbox.credentials }}
        - name: USER_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        - name: USER2_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        {{- end }}
        {{- with .Values.nats.tls }}
        {{ $secretName := .secret.name }}
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp /etc/nats-certs/clients/{{ $secretName }}/* /usr/local/share/ca-certificates && update-ca-certificates
        {{- end }}
        command:
        - "tail"
        - "-f"
        - "/dev/null"
        volumeMounts:
        {{- if .Values.natsbox.credentials }}
        - name: nats-sys-creds
          mountPath: /etc/nats-config/creds
        {{- end }}
        {{- with .Values.nats.tls }}
        #######################
        #                     #
        #  TLS Volumes Mounts #
        #                     #
        #######################
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-clients-volume
          mountPath: /etc/nats-certs/clients/{{ $secretName }}
        {{- end }}
{{- end }}

View File

@ -0,0 +1,21 @@
{{- if .Values.podDisruptionBudget }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ include "nats.fullname" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
spec:
  {{- if .Values.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
  {{- end }}
  {{- if .Values.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@ -0,0 +1,31 @@
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.nats.serviceAccount }}
  namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.nats.serviceAccount }}
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Values.nats.serviceAccount }}-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.nats.serviceAccount }}
subjects:
- kind: ServiceAccount
  name: {{ .Values.nats.serviceAccount }}
  namespace: {{ .Release.Namespace }}
{{ end }}

View File

@ -0,0 +1,38 @@
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nats.fullname" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
  {{- if .Values.serviceAnnotations}}
  annotations:
  {{- range $key, $value := .Values.serviceAnnotations }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
  {{- end }}
spec:
  selector:
    {{- include "nats.selectorLabels" . | nindent 4 }}
  clusterIP: None
  {{- if .Values.topologyKeys }}
  topologyKeys:
    {{- .Values.topologyKeys | toYaml | nindent 4 }}
  {{- end }}
  ports:
  {{- if .Values.websocket.enabled }}
  - name: websocket
    port: {{ .Values.websocket.port }}
  {{- end }}
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
  - name: leafnodes
    port: 7422
  - name: gateways
    port: 7522

View File

@ -0,0 +1,40 @@
{{ if and .Values.exporter.enabled .Values.exporter.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ template "nats.fullname" . }}
  {{- if .Values.exporter.serviceMonitor.namespace }}
  namespace: {{ .Values.exporter.serviceMonitor.namespace }}
  {{- else }}
  namespace: {{ .Release.Namespace | quote }}
  {{- end }}
  {{- if .Values.exporter.serviceMonitor.labels }}
  labels:
  {{- range $key, $value := .Values.exporter.serviceMonitor.labels }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
  {{- end }}
  {{- if .Values.exporter.serviceMonitor.annotations }}
  annotations:
  {{- range $key, $value := .Values.exporter.serviceMonitor.annotations }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
  {{- end }}
spec:
  endpoints:
  - port: metrics
    {{- if .Values.exporter.serviceMonitor.path }}
    path: {{ .Values.exporter.serviceMonitor.path }}
    {{- end }}
    {{- if .Values.exporter.serviceMonitor.interval }}
    interval: {{ .Values.exporter.serviceMonitor.interval }}
    {{- end }}
    {{- if .Values.exporter.serviceMonitor.scrapeTimeout }}
    scrapeTimeout: {{ .Values.exporter.serviceMonitor.scrapeTimeout }}
    {{- end }}
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@ -0,0 +1,449 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "nats.fullname" . }}
  labels:
    {{- include "nats.labels" . | nindent 4 }}
  {{- if .Values.statefulSetAnnotations}}
  annotations:
  {{- range $key, $value := .Values.statefulSetAnnotations }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
  {{- end }}
spec:
  selector:
    matchLabels:
      {{- include "nats.selectorLabels" . | nindent 6 }}
  {{- if .Values.cluster.enabled }}
  replicas: {{ .Values.cluster.replicas }}
  {{- else }}
  replicas: 1
  {{- end }}
  serviceName: {{ include "nats.fullname" . }}
  template:
    metadata:
      {{- if or .Values.podAnnotations .Values.exporter.enabled }}
      annotations:
      {{- if .Values.exporter.enabled }}
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      {{- end }}
      {{- range $key, $value := .Values.podAnnotations }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
      {{- end }}
      labels:
        {{- include "nats.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.securityContext }}
      securityContext:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- tpl (toYaml .) $ | nindent 8 }}
      {{- end }}
      {{- if .Values.topologySpreadConstraints }}
      topologySpreadConstraints:
      {{- range .Values.topologySpreadConstraints }}
      {{- if and .maxSkew .topologyKey }}
      - maxSkew: {{ .maxSkew }}
        topologyKey: {{ .topologyKey }}
        {{- if .whenUnsatisfiable }}
        whenUnsatisfiable: {{ .whenUnsatisfiable }}
        {{- end }}
        labelSelector:
          matchLabels:
            {{- include "nats.selectorLabels" $ | nindent 12 }}
      {{- end}}
      {{- end }}
      {{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: {{ .Values.priorityClassName | quote }}
      {{- end }}

      # Common volumes for the containers.
      volumes:
      - name: config-volume
        configMap:
          name: {{ include "nats.fullname" . }}-config

      # Local volume shared with the reloader.
      - name: pid
        emptyDir: {}

      {{- if and .Values.auth.enabled .Values.auth.resolver }}
      {{- if .Values.auth.resolver.configMap }}
      - name: resolver-volume
        configMap:
          name: {{ .Values.auth.resolver.configMap.name }}
      {{- end }}

      {{- if eq .Values.auth.resolver.type "URL" }}
      - name: operator-jwt-volume
        configMap:
          name: {{ .Values.auth.operatorjwt.configMap.name }}
      {{- end }}
      {{- end }}

      {{- if and .Values.nats.externalAccess .Values.nats.advertise }}
      # Local volume shared with the advertise config initializer.
      - name: advertiseconfig
        emptyDir: {}
      {{- end }}

      {{- if and .Values.nats.jetstream.fileStorage.enabled .Values.nats.jetstream.fileStorage.existingClaim }}
      # Persistent volume for jetstream running with file storage option
      - name: {{ include "nats.fullname" . }}-js-pvc
        persistentVolumeClaim:
          claimName: {{ .Values.nats.jetstream.fileStorage.existingClaim | quote }}
      {{- end }}

      #################
      #               #
      #  TLS Volumes  #
      #               #
      #################
      {{- with .Values.nats.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-clients-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- with .Values.cluster.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-cluster-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- with .Values.leafnodes.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-leafnodes-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- with .Values.gateway.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-gateways-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- with .Values.websocket.tls }}
      {{ $secretName := .secret.name }}
      - name: {{ $secretName }}-ws-volume
        secret:
          secretName: {{ $secretName }}
      {{- end }}
      {{- if .Values.leafnodes.enabled }}
      #
      # Leafnode credential volumes
      #
      {{- range .Values.leafnodes.remotes }}
      {{- with .credentials }}
      - name: {{ .secret.name }}-volume
        secret:
          secretName: {{ .secret.name }}
      {{- end }}
      {{- end }}
      {{- end }}

      {{ if and .Values.nats.externalAccess .Values.nats.advertise }}
      # Assume that we only use the service account in case we want to
      # figure out what is the current external public IP from the server
      # in order to be able to advertise correctly.
      serviceAccountName: {{ .Values.nats.serviceAccount }}
      {{ end }}

      # Required to be able to HUP signal and apply config
      # reload to the server without restarting the pod.
      shareProcessNamespace: true

      {{- if and .Values.nats.externalAccess .Values.nats.advertise }}
      # Initializer container required to be able to lookup
      # the external ip on which this node is running.
      initContainers:
      - name: bootconfig
        command:
        - nats-pod-bootconfig
        - -f
        - /etc/nats-config/advertise/client_advertise.conf
        - -gf
        - /etc/nats-config/advertise/gateway_advertise.conf
        env:
        - name: KUBERNETES_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: {{ .Values.bootconfig.image }}
        imagePullPolicy: {{ .Values.bootconfig.pullPolicy }}
        volumeMounts:
        - mountPath: /etc/nats-config/advertise
          name: advertiseconfig
          subPath: advertise
      {{- end }}

      #################
      #               #
      #  NATS Server  #
      #               #
      #################
      terminationGracePeriodSeconds: {{ .Values.nats.terminationGracePeriodSeconds }}
      containers:
      - name: nats
        image: {{ .Values.nats.image }}
        imagePullPolicy: {{ .Values.nats.pullPolicy }}
        resources:
          {{- toYaml .Values.nats.resources | nindent 10 }}
        ports:
        - containerPort: 4222
          name: client
          {{- if .Values.nats.externalAccess }}
          hostPort: 4222
          {{- end }}
        - containerPort: 7422
          name: leafnodes
          {{- if .Values.nats.externalAccess }}
          hostPort: 7422
          {{- end }}
        - containerPort: 7522
          name: gateways
          {{- if .Values.nats.externalAccess }}
          hostPort: 7522
          {{- end }}
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        {{- if .Values.websocket.enabled }}
        - containerPort: {{ .Values.websocket.port }}
          name: websocket
          {{- if .Values.nats.externalAccess }}
          hostPort: {{ .Values.websocket.port }}
          {{- end }}
        {{- end }}

        command:
        - "nats-server"
        - "--config"
        - "/etc/nats-config/nats.conf"

        # Required to be able to define an environment variable
        # that refers to other environment variables. This env var
        # is later used as part of the configuration file.
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_ADVERTISE
          value: {{ include "nats.clusterAdvertise" . }}

        volumeMounts:
        - name: config-volume
          mountPath: /etc/nats-config
        - name: pid
          mountPath: /var/run/nats
        {{- if and .Values.nats.externalAccess .Values.nats.advertise }}
        - mountPath: /etc/nats-config/advertise
          name: advertiseconfig
          subPath: advertise
        {{- end }}

        {{- if and .Values.auth.enabled .Values.auth.resolver }}
        {{- if eq .Values.auth.resolver.type "memory" }}
        - name: resolver-volume
          mountPath: /etc/nats-config/accounts
        {{- end }}

        {{- if eq .Values.auth.resolver.type "full" }}
        {{- if .Values.auth.resolver.configMap }}
        - name: resolver-volume
          mountPath: /etc/nats-config/accounts
        {{- end }}
        {{- if and .Values.auth.resolver .Values.auth.resolver.store }}
        - name: nats-jwt-pvc
          mountPath: {{ .Values.auth.resolver.store.dir }}
        {{- end }}
        {{- end }}

        {{- if eq .Values.auth.resolver.type "URL" }}
        - name: operator-jwt-volume
          mountPath: /etc/nats-config/operator
        {{- end }}
        {{- end }}

        {{- if .Values.nats.jetstream.fileStorage.enabled }}
        - name: {{ include "nats.fullname" . }}-js-pvc
          mountPath: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
        {{- end }}

        #######################
        #                     #
        #  TLS Volumes Mounts #
        #                     #
        #######################
        {{- with .Values.nats.tls }}
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-clients-volume
          mountPath: /etc/nats-certs/clients/{{ $secretName }}
        {{- end }}
        {{- with .Values.cluster.tls }}
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-cluster-volume
          mountPath: /etc/nats-certs/cluster/{{ $secretName }}
        {{- end }}
        {{- with .Values.leafnodes.tls }}
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-leafnodes-volume
          mountPath: /etc/nats-certs/leafnodes/{{ $secretName }}
        {{- end }}
        {{- with .Values.gateway.tls }}
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-gateways-volume
          mountPath: /etc/nats-certs/gateways/{{ $secretName }}
        {{- end }}
        {{- with .Values.websocket.tls }}
        {{ $secretName := .secret.name }}
        - name: {{ $secretName }}-ws-volume
          mountPath: /etc/nats-certs/ws/{{ $secretName }}
        {{- end }}
        {{- if .Values.leafnodes.enabled }}
        #
        # Leafnode credential volumes
        #
        {{- range .Values.leafnodes.remotes }}
        {{- with .credentials }}
        - name: {{ .secret.name }}-volume
          mountPath: /etc/nats-creds/{{ .secret.name }}
        {{- end }}
        {{- end }}
        {{- end }}

        # Liveness/Readiness probes against the monitoring.
        #
        livenessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5

        # Gracefully stop NATS Server on pod deletion or image upgrade.
        #
        lifecycle:
          preStop:
            exec:
              # Using the alpine based NATS image, we add an extra sleep that is
              # the same amount as the terminationGracePeriodSeconds to allow
              # the NATS Server to gracefully terminate the client connections.
              #
              command:
              - "/bin/sh"
              - "-c"
              - "nats-server -sl=ldm=/var/run/nats/nats.pid && /bin/sleep {{ .Values.nats.terminationGracePeriodSeconds }}"

      #################################
      #                               #
      #  NATS Configuration Reloader  #
      #                               #
      #################################
      {{ if .Values.reloader.enabled }}
      - name: reloader
        image: {{ .Values.reloader.image }}
        imagePullPolicy: {{ .Values.reloader.pullPolicy }}
        command:
        - "nats-server-config-reloader"
        - "-pid"
        - "/var/run/nats/nats.pid"
        - "-config"
        - "/etc/nats-config/nats.conf"
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nats-config
        - name: pid
          mountPath: /var/run/nats
      {{ end }}

      ##############################
      #                            #
      #  NATS Prometheus Exporter  #
      #                            #
      ##############################
      {{ if .Values.exporter.enabled }}
      - name: metrics
        image: {{ .Values.exporter.image }}
        imagePullPolicy: {{ .Values.exporter.pullPolicy }}
        args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -prefix=nats
        - -use_internal_server_id
        - http://localhost:8222/
        ports:
        - containerPort: 7777
          name: metrics
      {{ end }}

  volumeClaimTemplates:
  {{- if eq .Values.auth.resolver.type "full" }}
  {{- if and .Values.auth.resolver .Values.auth.resolver.store }}
  #####################################
  #                                   #
  #   Account Server Embedded JWT     #
  #                                   #
  #####################################
  - metadata:
      name: nats-jwt-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: {{ .Values.auth.resolver.store.size }}
  {{- end }}
  {{- end }}

  {{- if and .Values.nats.jetstream.fileStorage.enabled (not .Values.nats.jetstream.fileStorage.existingClaim) }}
  #####################################
  #                                   #
  #  Jetstream New Persistent Volume  #
  #                                   #
  #####################################
  - metadata:
      name: {{ include "nats.fullname" . }}-js-pvc
      {{- if .Values.nats.jetstream.fileStorage.annotations }}
      annotations:
      {{- range $key, $value := .Values.nats.jetstream.fileStorage.annotations }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
      {{- end }}
    spec:
      accessModes:
      {{- range .Values.nats.jetstream.fileStorage.accessModes }}
      - {{ . | quote }}
      {{- end }}
      resources:
        requests:
          storage: {{ .Values.nats.jetstream.fileStorage.size }}
      storageClassName: {{ .Values.nats.jetstream.fileStorage.storageClassName | quote }}
  {{- end }}

View File

@ -0,0 +1,347 @@
###############################
#                             #
#  NATS Server Configuration  #
#                             #
###############################
nats:
  image: nats:2.1.9-alpine3.12
  pullPolicy: IfNotPresent

  # Toggle whether to enable external access.
  # This binds a host port for clients, gateways and leafnodes.
  externalAccess: false

  # Toggle to disable client advertisements (connect_urls),
  # in case of running behind a load balancer (which is not recommended)
  # it might be required to disable advertisements.
  advertise: true

  # In case both external access and advertise are enabled
  # then a service account would be required to be able to
  # gather the public ip from a node.
  serviceAccount: "nats-server"

  # The number of connect attempts against discovered routes.
  connectRetries: 30

  # How many seconds should pass before sending a PING
  # to a client that has no activity.
  pingInterval:

  resources: {}

  # Server settings.
  limits:
    maxConnections:
    maxSubscriptions:
    maxControlLine:
    maxPayload:
    writeDeadline:
    maxPending:
    maxPings:
    lameDuckDuration:

  terminationGracePeriodSeconds: 60

  logging:
    debug:
    trace:
    logtime:
    connectErrorReports:
    reconnectErrorReports:

  jetstream:
    enabled: false

    #############################
    #                           #
    #  Jetstream Memory Storage #
    #                           #
    #############################
    memStorage:
      enabled: true
      size: 1Gi

    ############################
    #                          #
    #  Jetstream File Storage  #
    #                          #
    ############################
    fileStorage:
      enabled: false
      storageDirectory: /data
      # Set for use with existing PVC
      # existingClaim: jetstream-pvc
      # claimStorageSize: 1Gi
      # Use below block to create new persistent volume
      # only used if existingClaim is not specified
      size: 1Gi
      storageClassName: default
      accessModes:
        - ReadWriteOnce
      annotations:
      # key: "value"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"

nameOverride: ""
imagePullSecrets: []

# Toggle whether to set up a Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext: {}
# securityContext:
#   fsGroup: 1000
#   runAsUser: 1000
#   runAsNonRoot: true

# Affinity for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

## Pod priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: null

# Service topology
# ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/
topologyKeys: []

# Pod Topology Spread Constraints
# ref https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# - maxSkew: 1
#   topologyKey: zone
#   whenUnsatisfiable: DoNotSchedule

# Annotations to add to the NATS pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"

## Define a Pod Disruption Budget for the stateful set
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
podDisruptionBudget: null
# minAvailable: 1
# maxUnavailable: 1

# Annotations to add to the NATS StatefulSet
statefulSetAnnotations: {}

# Annotations to add to the NATS Service
serviceAnnotations: {}

cluster:
  enabled: false
  replicas: 3
  noAdvertise: false

# Leafnode connections to extend a cluster:
#
# https://docs.nats.io/nats-server/configuration/leafnodes
#
leafnodes:
  enabled: false
  noAdvertise: false
  # remotes:
  #   - url: "tls://connect.ngs.global:7422"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"

# Gateway connections to create a super cluster
#
# https://docs.nats.io/nats-server/configuration/gateways
#
gateway:
  enabled: false
  name: 'default'

  #############################
  #                           #
  #  List of remote gateways  #
  #                           #
  #############################
  # gateways:
  #   - name: other
  #     url: nats://my-gateway-url:7522

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"

# In case of both external access and advertisements being
# enabled, an initializer container will be used to gather
# the public ips.
bootconfig:
  image: connecteverything/nats-boot-config:0.5.2
  pullPolicy: IfNotPresent

# NATS Box
#
# https://github.com/nats-io/nats-box
#
natsbox:
  enabled: true
  image: synadia/nats-box:0.4.0
  pullPolicy: IfNotPresent
  # credentials:
  #   secret:
  #     name: nats-sys-creds
  #     key: sys.creds

# The NATS config reloader image to use.
reloader:
  enabled: true
  image: connecteverything/nats-server-config-reloader:0.6.0
  pullPolicy: IfNotPresent

# Prometheus NATS Exporter configuration.
exporter:
  enabled: true
  image: synadia/prometheus-nats-exporter:0.5.0
  pullPolicy: IfNotPresent

  # Prometheus operator ServiceMonitor support. Exporter has to be enabled
  serviceMonitor:
    enabled: false
    ## Specify the namespace where Prometheus Operator is running
    ##
    # namespace: monitoring
    labels: {}
    annotations: {}
    path: /metrics
    # interval:
    # scrapeTimeout:

# Authentication setup
auth:
  enabled: false

  # basic:
  #   noAuthUser:
  #   # List of users that can connect with basic auth,
  #   # that belong to the global account.
  #   users:
  #   # List of accounts with users that can connect
  #   # using basic auth.
  #   accounts:

  # Reference to the Operator JWT.
  # operatorjwt:
  #   configMap:
  #     name: operator-jwt
  #     key: KO.jwt

  # Public key of the System Account
  # systemAccount:

  resolver:
    # Disables the resolver by default
    type: none

    ##########################################
    #                                        #
    #  Embedded NATS Account Server Resolver #
    #                                        #
    ##########################################
    # type: full

    # If the resolver type is 'full', delete when enabled will rename the jwt.
    allowDelete: false

    # Interval at which a nats-server with a nats based account resolver will compare
    # its state with one random nats based account resolver in the cluster and if needed,
    # exchange jwt and converge on the same set of jwt.
    interval: 2m

    # Operator JWT
    operator:

    # System Account Public NKEY
    systemAccount:

    # resolverPreload:
    #   <ACCOUNT>: <JWT>

    # Directory in which the account JWTs will be stored.
    store:
      dir: "/accounts/jwt"

      # Size of the account JWT storage.
      size: 1Gi

    ##############################
    #                            #
    #  Memory resolver settings  #
    #                            #
    ##############################
    # type: memory
    #
    # Use a configmap reference which will be mounted
    # into the container.
    #
    # configMap:
    #   name: nats-accounts
    #   key: resolver.conf

    ##########################
    #                        #
    #  URL resolver settings #
    #                        #
    ##########################
    # type: URL
    # url: "http://nats-account-server:9090/jwt/v1/accounts/"

websocket:
  enabled: false
  port: 443

charts/kubezero-nats/update.sh Executable file
View File

@ -0,0 +1,9 @@
#!/bin/bash
set -ex
# get the latest chart from git until the upstream Helm repo is fixed
rm -rf charts/nats && mkdir -p charts/nats
git clone --depth=1 https://github.com/nats-io/k8s.git
cp -r k8s/helm/charts/nats/* charts/nats/
rm -rf k8s

View File

@ -0,0 +1,21 @@
nats:
  nats:
    image: nats:2.2.1-alpine3.13
    advertise: false
    jetstream:
      enabled: true
      memStorage:
        enabled: true
        size: 128Mi

  natsbox:
    enabled: false

  exporter:
    serviceMonitor:
      enabled: true
      labels:
        release: metrics

View File

@ -1,6 +1,6 @@
{{- if not .Values.argo }}
{{- $artifacts := list "calico" "cert-manager" "kiam" "aws-node-termination-handler" "aws-ebs-csi-driver" "aws-efs-csi-driver" "local-volume-provisioner" "local-path-provisioner" "istio" "istio-ingress" "metrics" "logging" "argocd" "timecapsule" }}
{{- $artifacts := list "calico" "cert-manager" "kiam" "aws-node-termination-handler" "aws-ebs-csi-driver" "aws-efs-csi-driver" "local-volume-provisioner" "local-path-provisioner" "istio" "istio-ingress" "metrics" "logging" "argocd" "timecapsule" "nats" }}
{{- if .Values.global }}
global:

View File

@ -0,0 +1,8 @@
{{- define "nats-values" }}
{{- end }}
{{- define "nats-argo" }}
{{- end }}
{{ include "kubezero-app.app" . }}

View File

@ -69,3 +69,7 @@ argocd:
  enabled: false

argo: {}

nats:
  enabled: false
  namespace: nats