Server Setup
To deploy the knot server to a Nomad cluster, you’ll need storage space for the database or access to a MySQL server. In this guide, we’ll use file storage and the built-in BadgerDB.
In this example, we’ll use the domain `knot.internal` to access the knot web interface and `*.knot.internal` as the wildcard domain for websites within spaces. You can update these to match your Nomad cluster’s configuration.
Step 1: Generate the Configuration File and Encryption Key
Generate an example Nomad job file using the following command:

```shell
knot scaffold --nomad > knot.nomad
```
Generate an encryption key:

```shell
knot genkey
```
Edit the `knot.nomad` file to match your cluster’s configuration.
Step 2: Update the Job File
In the `knot.nomad` file:

- `datacenters`: Update this to the datacenter(s) where you want to deploy the knot server.

Within the `knot.toml` configuration section of the job file:

- `server.url`: Set this to the domain name used to access the knot server.
- `server.wildcard_domain`: Set this to the wildcard domain for websites running within spaces.
- `server.encrypt`: Replace this with the encryption key generated by `knot genkey`.
- `server.badgerdb.enabled`: Set this to `true` to enable BadgerDB.
- `server.nomad.token`: Set this to a token that allows knot to control jobs within the cluster.
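Taken together, the settings above form a fragment of `knot.toml` roughly like the following sketch. The domain values follow this guide’s `knot.internal` example; the key and token are placeholders you must replace with your own values:

```toml
[server]
url = "https://knot.internal"
wildcard_domain = "*.knot.internal"
# Placeholder: paste the output of `knot genkey` here
encrypt = "<output of knot genkey>"

[server.badgerdb]
enabled = true

[server.nomad]
# Placeholder: a Nomad ACL token with permission to manage jobs
token = "<Nomad ACL token>"
```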
Step 3: Configure Storage
Ensure storage is configured and mounted under `/data` so that the configuration database persists between restarts of the knot server.
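The example job below bind-mounts `/data` from the host using the Docker driver. As a sketch, bind mounts outside the allocation directory typically require volumes to be enabled in the Nomad client’s Docker plugin configuration; check your cluster’s security policy before enabling this:

```hcl
# Nomad client configuration (assumption: Docker task driver in use;
# enabling volumes allows tasks to bind-mount arbitrary host paths,
# which has security implications).
plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}
```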
Step 4: Add Tags for Ingress Controller
To expose the web interface to your ingress controller, add tags to the job file. The following example uses `urlprefix` tags for the Fabio load balancer:

```hcl
tags = [
  "urlprefix-knot.internal proto=https tlsskipverify=true",
  "urlprefix-*.knot.internal proto=https tlsskipverify=true"
]
```
Example Nomad Job File
Below is an example `knot.nomad` job file:

```hcl
job "knot-server" {
  datacenters = ["dc1"]

  update {
    max_parallel     = 1
    min_healthy_time = "30s"
    healthy_deadline = "1m"
    auto_revert      = true
  }

  group "knot-server" {
    count = 1

    network {
      port "knot_port" {
        to = 3000
      }
      port "knot_agent_port" {
        to = 3010
      }
    }

    task "knot-server" {
      driver = "docker"

      config {
        image = "paularlott/knot:latest"
        ports = ["knot_port", "knot_agent_port"]

        # Persist the BadgerDB database on the host
        mount {
          type   = "bind"
          source = "/data"
          target = "/data"
        }
      }

      env {
        KNOT_CONFIG = "/local/knot.toml"
      }

      template {
        data = <<EOF
[log]
level = "info"

[server]
listen = "0.0.0.0:3000"
listen_agent = "0.0.0.0:3010"
url = "https://knot.internal"
wildcard_domain = "*.knot.internal"
agent_endpoint = "srv+knot-server-agent.service.consul"
encrypt = "<Replace this using knot genkey>"

# MySQL server
[server.mysql]
database = "knot"
enabled = false
host = ""
password = ""
user = ""

# BadgerDB storage
[server.badgerdb]
enabled = true
path = "/data/"

[server.nomad]
addr = "http://nomad.service.consul:4646"
token = ""
EOF

        destination = "local/knot.toml"
      }

      resources {
        cpu    = 256
        memory = 512
      }

      # Knot Server Port
      service {
        name    = "${NOMAD_JOB_NAME}"
        port    = "knot_port"
        address = "${attr.unique.network.ip-address}"
        tags = [
          "urlprefix-knot.internal proto=https tlsskipverify=true",
          "urlprefix-*.knot.internal proto=https tlsskipverify=true"
        ]

        check {
          name            = "alive"
          type            = "http"
          protocol        = "https"
          tls_skip_verify = true
          path            = "/health"
          interval        = "10s"
          timeout         = "2s"
        }
      }

      # Knot Agent Port
      service {
        name    = "${NOMAD_JOB_NAME}-agent"
        port    = "knot_agent_port"
        address = "${attr.unique.network.ip-address}"

        check {
          name            = "alive"
          port            = "knot_port"
          type            = "http"
          protocol        = "https"
          tls_skip_verify = true
          path            = "/health"
          interval        = "10s"
          timeout         = "2s"
        }
      }
    }
  }
}
```
Step 5: Deploy the Job
Launch the job in the cluster using the following command:

```shell
nomad run knot.nomad
```