Integration Patterns
This page shows how Kernloom fits into real environments. Each pattern combines one or more of its three mechanisms (progressive enforcement, graph-based path control, and anomaly detection) to solve a specific protection problem.
All patterns share one property: Kernloom runs on the Linux host that already handles the traffic. No new appliance, no sidecar, no separate control plane.
Configuration is driven by PDPConfig YAML files (`--pdp-config`), which ship under `/opt/kernloom/attested/etc/pdp/`. Each node type has a bootstrap variant (blocking disabled, wide thresholds) and a production variant (tighter thresholds, enforcement active).
| Bootstrap | Production | Use for |
|---|---|---|
| `ziti-controller-bootstrap.yaml` | `ziti-controller.yaml` | Public Ziti controller |
| `ziti-router-bootstrap.yaml` | `ziti-router.yaml` | Public Ziti router |
| `web-server-bootstrap.yaml` | `web-server.yaml` | Public web server |
| `reverse-proxy-bootstrap.yaml` | `reverse-proxy.yaml` | Reverse proxy / WAF host |
| `idp-bootstrap.yaml` | `idp.yaml` | Identity provider |
| `database-bootstrap.yaml` | `database.yaml` | Database server |
| `api-server-bootstrap.yaml` | `api-server.yaml` | Internal API / app server |
| `nas-bootstrap.yaml` | `nas.yaml` | NAS / storage |
Public-facing profiles (`ziti-*`, `web-server`, `reverse-proxy`) have graph learning disabled: graph learning is not useful when clients are unknown internet IPs. Internal profiles (`idp`, `database`, `api-server`, `nas`) have graph learning enabled.
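The exact PDPConfig schema is documented in the IQ reference. To illustrate how a bootstrap variant and its production counterpart typically differ, here is a minimal hypothetical sketch; the key names and values are assumptions chosen for illustration, not the real schema:

# Hypothetical PDPConfig sketch -- key names and values are illustrative,
# not the shipped schema (see the IQ reference for the real keys)

# web-server-bootstrap.yaml: wide thresholds, blocking disabled
enforcement:
  blocking: false          # observe and log only
thresholds:
  syn_per_sec: 2000        # deliberately generous while autotune learns
  conn_per_sec: 500
graph:
  enabled: false           # public-facing profile: clients are unknown IPs

# web-server.yaml: the production variant tightens what bootstrap learned
# and flips enforcement.blocking to true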
Pattern 1: OpenZiti Router and Controller (public-facing)
The problem: OpenZiti infrastructure is publicly reachable. Routers handle high-throughput tunnel traffic; the Controller exposes enrollment and management APIs. Both attract scans, SYN floods, and connection abuse from the internet.
             Internet
                │
        ┌───────┴────────┐
        │                │
        ▼                ▼
 ┌─────────────┐  ┌─────────────┐
 │  Kernloom   │  │  Kernloom   │
 │  (Router    │  │ (Controller │
 │   host)     │  │   host)     │
 └──────┬──────┘  └──────┬──────┘
        │                │
 ┌──────┴──────┐  ┌──────┴──────┐
 │    Ziti     │  │    Ziti     │
 │   Router    │  │ Controller  │
 └─────────────┘  └─────────────┘
What Kernloom does:
- Router: `ziti-router` is tuned for sustained high-throughput tunnel traffic. Flood attacks are rate-limited without affecting legitimate overlay traffic.
- Controller: `ziti-controller` reacts faster to sustained connection patterns. The block gate protects against over-blocking legitimate clients behind NAT.
Graph learning is not used on these nodes: they face the open internet, and the graph would never converge with constantly changing client IPs.
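To make the tuning direction concrete, here is a minimal sketch of what distinguishes the router profile, reusing the same illustrative (not real) key names as above:

# ziti-router.yaml tuning direction -- illustrative key names and values
thresholds:
  pps_per_source: 80000    # sustained high-throughput tunnel traffic is normal
  syn_per_sec: 300         # floods are still rate-limited per source
graph:
  enabled: false           # open internet: the graph would never converge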
Lifecycle:
# On the Router host

# 1. Bootstrap: dry-run for 7-14 days while autotune learns real traffic
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-router-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2. Production: switch to the production profile, enable enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-router.yaml \
  --dry-run=false

# On the Controller host

# 1. Bootstrap
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-controller-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2. Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-controller.yaml \
  --dry-run=false
Result: Internet background radiation, SYN floods, and connection abuse are absorbed at the kernel level before the Ziti processes see them.
Pattern 2: NAS with known-access baseline
The problem: A NAS is accessed by a small, known set of clients: an admin through a jump host, and a group of users via SMB. Everything else should be impossible: no scanning, no unexpected access, no probing of management interfaces.
    Admin           Users (SMB)        Unknown
      │                  │                │
┌─────┴────────┐  ┌──────┴───────┐       │
│  Jump Host   │  │  SMB Users   │       │
│  10.0.1.10   │  │ 10.0.2.0/24  │       │
└─────┬────────┘  └──────┬───────┘       │
      │ :443             │ :445          │
      │                  │               ▼
      └────────┬─────────┘          BLOCK (graph)
               │
      ┌────────┴────────┐
      │    Kernloom     │
      │   (NAS host)    │
      │                 │
      │  frozen graph:  │
      │ ✓ JumpHost:443  │
      │ ✓ Users:445     │
      │ ✗ all else      │
      └────────┬────────┘
               │
           ┌───┴───┐
           │  NAS  │
           └───────┘
What Kernloom does:
The Graph Learner observes the NAS for a week during normal operations, recording exactly which sources connect on which paths. After review and freeze, only admin access to :443 and SMB to :445 remain allowed. Any other source or port is immediately blocked. Known sources behaving anomalously (scanning other ports) are caught by progressive enforcement.
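Conceptually, the frozen baseline is an explicit edge list with a default-deny tail. A hypothetical rendering of the NAS baseline follows; the real persisted format may differ, this shows the semantics only:

# Hypothetical rendering of the frozen NAS graph -- semantics only
edges:
  - src: 10.0.1.10         # jump host
    dst_port: 443
    state: frozen-allow
  - src: 10.0.2.0/24       # SMB users
    dst_port: 445
    state: frozen-allow
default: block             # any source/port pair not listed is dropped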
Lifecycle:
# 1. Bootstrap: dry-run + graph learning for 7-14 days
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2. Enable enforcement once the dry-run output looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3. Review and freeze the graph after 7-14 days
kliq graph edges --sort=state      # overview by state
kliq graph baselines --sort=obs    # EWMA stats per edge
kliq graph freeze --dry-run        # check readiness without writing
kliq graph freeze

# 4. Production: frozen enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5. Strongest posture: XDP allow-mode (only known tuples pass at kernel level)
klshield tuple-enforce allow
Result: The NAS is only reachable via the exact communication patterns it was designed for. Any deviation is blocked immediately.
Pattern 3: Identity provider and authentication endpoint
The problem: Login endpoints, OAuth/OIDC token endpoints, and enrollment APIs attract credential stuffing, password spraying, and connection floods. These are particularly sensitive: a single successful bypass has outsized impact, and the endpoints are often underprovisioned.
       Internet       Internal services
          │                   │
┌─────────┴──────────┐        │
│      Kernloom      │◄───────┘
│     (IdP host)     │
│                    │ ← progressive enforcement (internet)
│   frozen graph:    │   + graph-based path control
│ ✓ api-server:8080  │     (internal services only)
│ ✓ monitoring:443   │
│ ✗ unexpected       │
└─────────┬──────────┘
          │
┌─────────┴──────────┐
│  IdP / Auth Server │
│ OIDC · SAML · OAuth│
└────────────────────┘
What Kernloom does:
The `idp` profile prioritises SYN sensitivity over raw PPS tolerance: authentication abuse looks like sustained low-rate connection pressure, not a volumetric flood. Rate limits are tight.
The Graph Learner locks down which internal services are allowed to call the IdP at all. An unexpected internal service attempting token requests triggers an immediate signal. Internet-facing client IPs are too variable for graph learning, so only internal service-to-service edges are graph-enforced.
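As a sketch of that tuning emphasis, with illustrative key names and values rather than the shipped idp.yaml:

# Tuning emphasis of the idp profile -- illustrative keys and values
thresholds:
  syn_per_sec: 40          # tight: credential stuffing is low-rate, sustained
  conn_per_sec: 20         # connection pressure matters more than raw volume
  pps_per_source: 5000
graph:
  enabled: true            # enforced for internal service-to-service edges only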
Lifecycle:
# 1. Bootstrap: dry-run + graph learning for internal edges (7-14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2. Enable enforcement once the dry-run looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3. Freeze the graph after 7-14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4. Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce
Result: Credential stuffing campaigns are rate-limited after a few attempts. Unexpected internal services calling the IdP are caught immediately.
Pattern 4: East-west lateral movement in a VLAN
The problem: Nodes inside a VLAN or internal network can reach each other freely. If one workload is compromised, it can scan and probe the rest of the segment: the network layer allows it.
VLAN 10: internal segment
┌───────────┐   :5432 ✓    ┌───────────┐
│  Web App  │─────────────►│    DB     │
│  Node A   │              │  Node B   │
└───────────┘              └───────────┘
┌───────────┐  scan/flood  ┌───────────┐
│Compromised│─────────────►│  Node C   │ ← BLOCK
│  Node A   │   unknown    │ any host  │   immediately
└───────────┘    path      └───────────┘
What Kernloom does:
Run Kernloom on each host that needs protecting. During a learning period, the Graph Learner records every observed source-to-destination flow. Once the baseline is frozen, any source attempting a path that was never observed (port scans, connection attempts to new services, traffic from unexpected sources) is blocked immediately, before a single packet reaches the application.
Progressive enforcement runs in parallel: even on known paths, anomalous behaviour (SYN floods, high PPS, port probing) is caught and rate-limited independently.
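Sketched in config terms, the two mechanisms sit side by side on every protected host; the key names here are illustrative assumptions, not the real schema:

# Both mechanisms active in parallel -- illustrative key names
graph:
  mode: frozen-enforce     # unknown source-to-destination paths: immediate block
progressive:
  enabled: true            # known paths are still watched for anomalies
  thresholds:
    syn_per_sec: 200       # a SYN flood on an allowed path is rate-limited
    pps_per_source: 20000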
Lifecycle:
# 1. Bootstrap: dry-run + graph learning (7-14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2. Enable enforcement once the dry-run looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3. Freeze the graph after 7-14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4. Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5. Optional: XDP allow-mode for the strictest posture
klshield tuple-enforce allow
Pick the PDPConfig that matches your node type: `database-bootstrap.yaml` for DB nodes, `api-server-bootstrap.yaml` for application servers.
Result: Lateral movement is stopped at the kernel level. A compromised node cannot scan the segment or connect to services it has never talked to before.
Pattern 5: WAF shielded from DDoS overflow
The problem: A DDoS appliance in front of a WAF absorbs large-volume attacks, but it reacts slowly. Targeted, lower-volume floods (persistent SYN pressure, connection exhaustion, slow HTTP floods) still reach the WAF and can overwhelm it before the appliance reclassifies the traffic.
        Internet
            │
 ┌──────────┴──────────┐
 │   DDoS Appliance    │ ← handles volumetric floods
 │ (slow reaction to   │   but slow to react to
 │   targeted abuse)   │   targeted lower-volume attacks
 └──────────┬──────────┘
            │ ← some flood still passes through
 ┌──────────┴──────────┐
 │      Kernloom       │ ← catches what slips through:
 │     (WAF host)      │   per-source rate limits,
 │                     │   SYN pressure, connection abuse
 └──────────┬──────────┘
            │
 ┌──────────┴──────────┐
 │         WAF         │ ← protected from connection
 └─────────────────────┘   exhaustion
What Kernloom does:
Kernloom runs on the Linux host that runs the WAF, acting before the traffic reaches the WAF’s network stack. Sources sending excessive SYN packets, maintaining too many connections per second, or showing non-compliant behaviour are progressively rate-limited and eventually blocked, per source IP rather than as a blanket drop.
Graph learning is not used here: the node faces internet clients with constantly changing IPs.
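The per-source escalation described above can be pictured as a ladder. Stages and values here are illustrative, not the shipped reverse-proxy profile:

# Conceptual escalation ladder for progressive enforcement -- illustrative
escalation:
  - stage: observe         # per-source counters only, no action
  - stage: rate-limit      # thresholds tripped: throttle this source only
    limit_pps: 100
  - stage: block           # repeated strikes: temporary per-source block
    ttl: 10m               # other sources behind the same upstream are unaffected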
Lifecycle:
# 1. Bootstrap: dry-run for 7-14 days
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/reverse-proxy-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2. Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/reverse-proxy.yaml \
  --dry-run=false
Use `web-server-bootstrap.yaml` / `web-server.yaml` if Kernloom runs directly on the origin server rather than the WAF host.
Result: The WAF stays available under sustained targeted pressure. Per-source enforcement means a single abusive source is blocked without affecting others behind the same upstream.
Pattern 6: SSH bastion hardening
The problem: An SSH bastion or jump host is exposed to the internet. It attracts constant brute-force attempts, credential stuffing, and port scanning. Every failed attempt consumes server resources and pollutes logs.
        Internet
            │
 ┌──────────┴──────────┐
 │      Kernloom       │ ← low PPS tolerance,
 │   (bastion host)    │   high SYN sensitivity,
 │                     │   fast escalation to BLOCK
 └──────────┬──────────┘
            │
 ┌──────────┴──────────┐
 │     SSH Bastion     │ ← only sees legitimate
 │    / Jump Host      │   connection attempts
 └─────────────────────┘
What Kernloom does:
The `ssh-bastion` profile is tuned for this scenario: low trigger thresholds, fast escalation, and a short block TTL. A source that makes a few failed attempts quickly accumulates strikes and is blocked before it can run a meaningful brute-force campaign.
There is no dedicated PDPConfig file for SSH bastions yet. Use `--profile ssh-bastion` (legacy shorthand) or start from `web-server-bootstrap.yaml` and tighten the thresholds manually.
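A sketch of that tightening direction, with illustrative key names and values rather than a shipped file:

# Tightening web-server-bootstrap.yaml toward a bastion posture -- illustrative
enforcement:
  blocking: true
  block_ttl: 5m            # short TTL: scanner IPs recycle quickly anyway
thresholds:
  syn_per_sec: 10          # legitimate SSH clients open very few connections
  conn_per_sec: 5
  strikes_to_block: 3      # fast escalation to BLOCK
graph:
  enabled: false           # internet-facing: no graph learning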
Lifecycle:
# 1. Bootstrap: dry-run, whitelist known-good sources (office NAT, monitoring)
sudo /opt/kernloom/attested/kliq \
  --profile ssh-bastion \
  --dry-run=true --whitelist-learn=true

# 2. Production
sudo /opt/kernloom/attested/kliq \
  --profile ssh-bastion \
  --dry-run=false
Result: Brute-force attempts are stopped after a handful of connection attempts. SSH daemon logs are clean. Server resources are not consumed by scripted scanners.
Pattern 7: Multi-tenant edge node with tenant isolation
The problem: A shared Linux host runs services for multiple tenants or environments. Tenant A’s traffic should not be able to reach Tenant B’s processes.
        Internet
            │
 ┌──────────┴──────────┐
 │      Kernloom       │
 │    (shared host)    │
 │                     │
 │   graph baseline:   │
 │ ✓ Tenant A → :8080  │
 │ ✓ Tenant B → :9090  │
 │ ✗ A→B, B→A, cross   │
 └──────────┬──────────┘
            │
 ┌──────┐   │   ┌──────┐
 │ App  │   │   │ App  │
 │  A   │   │   │  B   │
 │:8080 │   │   │:9090 │
 └──────┘   │   └──────┘
What Kernloom does:
The Graph Learner observes which sources reach which ports. Once the baseline is frozen, cross-tenant traffic (a source belonging to Tenant A attempting to reach Tenant B’s port) triggers an immediate block. Progressive enforcement catches any tenant abusing shared host resources.
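A hypothetical rendering of the frozen tenant baseline; the tenant source ranges below are examples, and the persisted format may differ:

# Hypothetical frozen tenant baseline -- example ranges, semantics only
edges:
  - src: 203.0.113.0/24    # Tenant A's clients
    dst_port: 8080
    state: frozen-allow
  - src: 198.51.100.0/24   # Tenant B's clients
    dst_port: 9090
    state: frozen-allow
default: block             # A-to-B, B-to-A, and any cross-tenant tuple dropped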
Lifecycle:
# 1. Bootstrap: dry-run + graph learning (7-14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2. Enable enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3. Freeze the graph after 7-14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4. Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5. Optional: XDP allow-mode
klshield tuple-enforce allow
Result: Logical tenant isolation at the kernel level, without container orchestration or separate network namespaces.
Combining patterns
Most real deployments combine multiple patterns:
| Deployment | Patterns combined |
|---|---|
| OpenZiti with internal services | Pattern 1 (public ZT infrastructure) + Pattern 4 (east-west between internal nodes) |
| NAS in a corporate network | Pattern 2 (known-access baseline) + Pattern 6 (SSH for admin access) |
| Multi-tier web app | Pattern 5 (WAF shielding) + Pattern 3 (IdP protection) + Pattern 4 (DB isolation) |
| Edge device with management | Pattern 7 (multi-tenant) + Pattern 6 (SSH bastion for admin) |
The general rule: start every node with the bootstrap PDPConfig and `--dry-run=true`. Enable enforcement when autotune has stabilised. Layer the Graph Learner on top for internal nodes once you have stable progressive enforcement baselines.
See also
| Page | Covers |
|---|---|
| Getting started | Bootstrap lifecycle, graph freeze, and recovery in detail |
| IQ reference | PDPConfig profiles, feature profiles, all flags |
| Shield reference | Tuple enforcement commands and map capacity limits |
| Architecture | The two-scenario model: DoS prevention vs. microsegmentation |
Current limitation: multi-interface attach works, but kliq sees aggregated telemetry. klshield can now attach to multiple interfaces simultaneously. However, kliq reads telemetry from all attached interfaces combined; it has no way to tell which packet came from which interface. This means per-interface policy separation is not possible today: you cannot run graph learning only on `eth1` (internal) while doing DoS-only enforcement on `eth0` (public). Both interfaces feed into the same source counters and the same graph. Combined patterns that require different enforcement models per interface on the same host (e.g. public-facing DoS protection on one interface plus Pattern 4 internal microsegmentation on the other) are therefore not achievable on a single node today. The practical workaround: deploy Kernloom on the interface that matters most for the specific threat, or separate the roles across dedicated hosts.