Core Concepts
This page explains how Kernloom works — without going into configuration details. For installation, see Getting started. For deep-dive reference, see the individual component pages.
What Kernloom does
Kernloom sits between your network interface and your application stack. It watches incoming traffic, learns what normal looks like, and acts on traffic that deviates — before your application ever processes it.
It does not inspect payload content, decrypt TLS, or act at the application layer. It works at the network level: who is sending, how much, and whether the pattern matches what you have seen before.
The three building blocks
Shield — the enforcement layer
Shield runs inside the Linux kernel, attached directly to your network interface. It is the first thing incoming packets meet.
Shield can:
- allow specific address ranges (depending on policy, other traffic either still passes or must also match an allowed range)
- block specific IP addresses
- rate-limit a source that is sending too many packets
- record telemetry — packet counts, protocol breakdown, and drop counters per source
Once a decision is written into Shield, it takes effect at line rate — no userspace process is in the critical path. Shield itself does not make decisions. It only acts on what IQ tells it to do.
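This split of responsibilities can be sketched in a few lines of Python. This is an illustrative simulation only: Kernloom's real data path runs in the kernel, and the names (`Decision`, `decision_map`, `shield_verdict`) are not Kernloom's API.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    RATE_LIMIT = "rate_limit"
    BLOCK = "block"

# IQ writes decisions into this map; the kernel layer only reads it.
decision_map: dict[str, Decision] = {}

def shield_verdict(src_ip: str) -> Decision:
    """Per-packet lookup: no scoring, no policy logic, just act on the map."""
    return decision_map.get(src_ip, Decision.ALLOW)

decision_map["203.0.113.7"] = Decision.BLOCK
print(shield_verdict("203.0.113.7").value)   # block
print(shield_verdict("198.51.100.9").value)  # allow (no entry, default pass)
```

The important property is that the per-packet path is a plain map lookup; everything that requires judgement happens outside it.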
IQ — the decision engine
IQ runs in userspace. Every tick (default: 1 second), it reads Shield’s telemetry, scores each active source, and decides whether any enforcement should change.
IQ can move a source through four stages:
| Stage | Effect |
|---|---|
| Observe | Traffic passes normally. IQ is watching. |
| Rate-limit (soft) | Source is slowed. Legitimate traffic still gets through. |
| Rate-limit (hard) | Source is reduced to near-zero. Near-unusable for abusive patterns. |
| Block | Source is dropped entirely at the kernel level. |
Escalation is gradual and evidence-based. IQ requires sustained abnormal behaviour before escalating, and backs off automatically when behaviour improves.
IQ also learns your baseline over time. It adjusts its thresholds using your real traffic data so it becomes more accurate without manual tuning.
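One way to picture a tick is the sketch below: read per-source telemetry, score each source against a learned baseline, and let the baseline drift toward observed traffic so thresholds adapt on their own. The function names and the EWMA scoring are assumptions for illustration, not Kernloom's actual algorithm.

```python
def score(pps: float, baseline: float) -> float:
    """Severity grows with how far current volume exceeds the baseline."""
    return max(0.0, (pps - baseline) / max(baseline, 1.0))

def tick(telemetry: dict[str, float], baselines: dict[str, float],
         alpha: float = 0.1) -> dict[str, float]:
    """One tick: score every active source, then update its EWMA baseline."""
    scores = {}
    for src, pps in telemetry.items():
        base = baselines.get(src, pps)  # first sighting seeds the baseline
        scores[src] = score(pps, base)
        # Learning: the baseline drifts toward real traffic, so the same
        # absolute volume becomes less anomalous if it is sustained and normal.
        baselines[src] = (1 - alpha) * base + alpha * pps
    return scores

baselines: dict[str, float] = {}
tick({"10.0.0.1": 100.0}, baselines)          # quiet tick, severity 0.0
print(tick({"10.0.0.1": 1000.0}, baselines))  # 10x spike scores high
```

A single spike produces one high score; it takes several abnormal ticks in a row before enforcement changes, as described below.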
Graph Learner — Zero Trust path enforcement
The Graph Learner records which sources communicate with which destinations. Every observed flow becomes an edge in a graph.
Once the graph reflects your real communication patterns, you freeze it. After freeze, any source taking an unknown path is blocked immediately — bypassing the normal gradual escalation.
This gives you Zero Trust microsegmentation on the Linux host you already run, without a service mesh, sidecar, or separate control plane.
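The learn/freeze/enforce cycle can be sketched as follows. The class and method names are illustrative, not Kernloom's interface; the point is the behavioural contract: edges accumulate while learning, and after freeze an unknown edge is rejected outright.

```python
class FlowGraph:
    def __init__(self) -> None:
        self.edges: set[tuple[str, str]] = set()
        self.frozen = False

    def observe(self, src: str, dst: str) -> bool:
        """Record or check one flow. Returns True if the flow is allowed."""
        edge = (src, dst)
        if not self.frozen:
            self.edges.add(edge)  # learning: every observed flow becomes an edge
            return True
        # After freeze, an unknown path is blocked immediately --
        # no gradual escalation.
        return edge in self.edges

g = FlowGraph()
g.observe("web", "db")            # learned while unfrozen
g.frozen = True
print(g.observe("web", "db"))     # True: known path
print(g.observe("web", "cache"))  # False: unknown path after freeze
```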
Two independent protection layers
The Graph Learner and the progressive enforcement engine answer different questions and operate completely independently:
| | Graph Learner | Progressive Enforcement (IQ) |
|---|---|---|
| Watches | Communication paths — who talks to whom | Traffic behaviour — volume, SYN rate, port-scan patterns |
| Triggers on | A source using an unknown path after freeze | Elevated severity score sustained over multiple ticks |
| Action | Blocks the unknown path immediately | Escalates: observe → rate-limit → block, then auto-recovers |
A source that is “known” in the graph is not exempt from behaviour-based enforcement. If a known, approved node suddenly starts sending a SYN flood, scanning ports, or producing unusual traffic volume, IQ will detect the anomaly and escalate enforcement independently — regardless of what the graph says.
This is intentional. A compromised internal workload might deliberately stay on known communication paths to avoid graph detection, but its traffic behaviour will still look abnormal to IQ. The two layers complement each other: the graph catches unexpected paths, IQ catches unexpected behaviour.
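Conceptually, admitting traffic means passing two independent checks, neither of which consults the other. The sketch below is illustrative only; the threshold value and function signature are assumptions.

```python
def admit(src: str, dst: str, severity: float,
          known_paths: set[tuple[str, str]], frozen: bool,
          block_threshold: float = 5.0) -> bool:
    """Both layers must agree: the path must be known (after freeze)
    AND behaviour must stay below the block threshold."""
    path_ok = (not frozen) or (src, dst) in known_paths
    behaviour_ok = severity < block_threshold
    return path_ok and behaviour_ok

paths = {("app", "db")}
# A known, approved path with flood-like behaviour is still blocked:
print(admit("app", "db", severity=9.0, known_paths=paths, frozen=True))   # False
# A quiet source on an unknown path is blocked too:
print(admit("app", "cache", severity=0.0, known_paths=paths, frozen=True))  # False
```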
Progressive enforcement in detail
Escalation requires evidence. A brief traffic spike does not move a source to BLOCK. IQ looks for sustained, repeated signals.
OBSERVE → RATE_SOFT → RATE_HARD → BLOCK
↑ each stage has a minimum hold time ↑
Each stage has:
- Entry condition — a strike count threshold reached over multiple ticks
- Minimum hold time — how long IQ stays in a stage before considering a step down
- Exit condition — a streak of clean ticks required before stepping down
For sources that keep hitting the rate limiter while already rate-limited (non-compliance), IQ can escalate to BLOCK faster — this signals deliberate abuse rather than accidental bursts.
If the block gate defines severity and duration requirements, IQ holds a source at HARD until those requirements are met. This is a safety mechanism for NAT-heavy environments, where one abusive client behind a shared address could otherwise get many legitimate users blocked.
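The entry, hold, and exit conditions above form a small state machine, sketched here. The stage names mirror this page; the thresholds and counting scheme are assumptions for illustration, not Kernloom's actual defaults.

```python
STAGES = ["OBSERVE", "RATE_SOFT", "RATE_HARD", "BLOCK"]

class Enforcer:
    def __init__(self, strikes_to_escalate: int = 3,
                 min_hold: int = 5, clean_to_step_down: int = 10) -> None:
        self.stage = 0   # index into STAGES
        self.strikes = 0  # consecutive abnormal ticks
        self.clean = 0    # consecutive clean ticks
        self.held = 0     # ticks spent in the current stage
        self.strikes_to_escalate = strikes_to_escalate
        self.min_hold = min_hold
        self.clean_to_step_down = clean_to_step_down

    def tick(self, abnormal: bool) -> str:
        self.held += 1
        if abnormal:
            self.strikes += 1
            self.clean = 0
            # Entry condition: sustained strikes, not a single spike.
            if self.strikes >= self.strikes_to_escalate and self.stage < 3:
                self.stage += 1
                self.strikes = 0
                self.held = 0
        else:
            self.clean += 1
            self.strikes = 0
            # Exit condition: minimum hold time AND a streak of clean ticks.
            if (self.held >= self.min_hold
                    and self.clean >= self.clean_to_step_down
                    and self.stage > 0):
                self.stage -= 1
                self.clean = 0
                self.held = 0
        return STAGES[self.stage]
```

With small thresholds for demonstration, two abnormal ticks escalate a source to RATE_SOFT, and it steps back down only after the clean streak is satisfied; a single abnormal tick never moves it at all.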
Where Kernloom fits
Internet
│
▼
[ NIC ]
│
▼
[ Shield (kernel, line-rate) ] ←── IQ writes decisions into Shield's maps
│ IQ reads telemetry from Shield's maps
▼
[ Reverse proxy / WAF / application ]
Kernloom runs on any Linux host that sees the traffic first. It does not require a separate appliance, a cloud service, or changes to your application.
What Kernloom does not do
- No TLS decryption or payload inspection
- No application-layer (HTTP/DNS/etc.) awareness
- No replacement for a DDoS scrubbing service at multi-Gbit/s scale (see Benchmarks)
- No upstream router or firewall rule management
Coming: Kernloom Forge
Today, each Kernloom installation is configured locally. Kernloom Forge will be the central management layer — compile, sign, and push policies to every registered node in your fleet, with fleet-wide audit and anomaly detection.
Configuration files you write today (PolicyPack and PDPConfig YAML) are already forward-compatible with Forge.
See also
| Getting started | Install and go from dry-run to enforcement in minutes |
| Architecture | The PDP/PEP model, data flow, and two-scenario model |
| Integration Patterns | How these concepts apply to real node types |