Integration Patterns

This page shows how Kernloom fits into real environments. Each pattern combines one or more of its three mechanisms — progressive enforcement, graph-based path control, and anomaly detection — to solve a specific protection problem.

All patterns share one property: Kernloom runs on the Linux host that already handles the traffic. No new appliance, no sidecar, no separate control plane.

Configuration is driven by PDPConfig YAML files (--pdp-config), which ship under /opt/kernloom/attested/etc/pdp/. Each node type has a bootstrap variant (blocking disabled, wide thresholds) and a production variant (tighter, enforcement active).

Bootstrap                        Production             Use for
ziti-controller-bootstrap.yaml   ziti-controller.yaml   Public Ziti controller
ziti-router-bootstrap.yaml       ziti-router.yaml       Public Ziti router
web-server-bootstrap.yaml        web-server.yaml        Public web server
reverse-proxy-bootstrap.yaml     reverse-proxy.yaml     Reverse proxy / WAF host
idp-bootstrap.yaml               idp.yaml               Identity provider
database-bootstrap.yaml          database.yaml          Database server
api-server-bootstrap.yaml        api-server.yaml        Internal API / app server
nas-bootstrap.yaml               nas.yaml               NAS / storage

Public-facing profiles (ziti-*, web-server, reverse-proxy) have graph learning disabled: learning cannot converge when clients are unknown internet IPs. Internal profiles (idp, database, api-server, nas) have graph learning enabled.


Pattern 1 — OpenZiti Router and Controller (public-facing)

Public-Facing · Anomaly Detection · Rate Limiting · DDoS Resilience

The problem: OpenZiti infrastructure is publicly reachable. Routers handle high-throughput tunnel traffic; the Controller exposes enrollment and management APIs. Both attract scans, SYN floods, and connection abuse from the internet.

              Internet
                 │
         ┌───────┴────────┐
         │                │
         ▼                ▼
  ┌─────────────┐  ┌─────────────┐
  │   Kernloom  │  │   Kernloom  │
  │ (Router     │  │ (Controller │
  │  host)      │  │  host)      │
  └──────┬──────┘  └──────┬──────┘
         │                │
  ┌──────┴──────┐  ┌──────┴──────┐
  │    Ziti     │  │    Ziti     │
  │   Router    │  │ Controller  │
  └─────────────┘  └─────────────┘

What Kernloom does:

  • Router: ziti-router is tuned for sustained high-throughput tunnel traffic. Flood attacks are rate-limited without affecting legitimate overlay traffic.
  • Controller: ziti-controller reacts faster to sustained connection patterns. The block gate protects against over-blocking legitimate clients behind NAT.

Graph learning is not used on these nodes — they face the open internet and the graph would never converge with constantly changing client IPs.
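The per-source rate limiting these profiles rely on can be pictured as a token bucket kept per client IP. The following Python sketch is illustrative only (the class name, rates, and logic are assumptions made for explanation, not Kernloom's in-kernel implementation):

```python
from collections import defaultdict

class PerSourceLimiter:
    """Token bucket per source IP: steady refill, bounded burst."""

    def __init__(self, rate=100.0, burst=200.0):
        self.rate = rate      # tokens (packets) refilled per second
        self.burst = burst    # maximum bucket size
        self.buckets = defaultdict(lambda: [burst, 0.0])  # ip -> [tokens, last_ts]

    def allow(self, ip, now):
        tokens, last = self.buckets[ip]
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = [tokens - 1.0, now]
            return True           # within budget: pass the packet
        self.buckets[ip] = [tokens, now]
        return False              # over budget: drop, other sources unaffected

limiter = PerSourceLimiter(rate=10.0, burst=20.0)
# A flooding source exhausts its own bucket...
flood = [limiter.allow("203.0.113.9", t * 0.001) for t in range(100)]
# ...while a slow legitimate source is untouched.
ok = limiter.allow("198.51.100.7", 0.05)
print(sum(flood), ok)   # prints: 20 True
```

The key property mirrored from the text: enforcement is per source IP, so one abusive client drains only its own bucket.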

Lifecycle:

# On the Router host

# 1 — Bootstrap: dry-run for 7–14 days, autotune learns real traffic
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-router-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2 — Production: switch to production profile, enable enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-router.yaml \
  --dry-run=false


# On the Controller host

# 1 — Bootstrap
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-controller-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2 — Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/ziti-controller.yaml \
  --dry-run=false

Result: Internet background radiation, SYN floods, and connection abuse are absorbed at the kernel level before the Ziti processes see them.


Pattern 2 — NAS with known-access baseline

Microsegmentation · Zero Trust · Access Control · Internal

The problem: A NAS is accessed by a small, known set of clients: an admin through a jump host, and a group of users via SMB. Everything else should be impossible — no scanning, no unexpected access, no probing of management interfaces.

     Admin               Users (SMB)          Unknown
       │                     │                   │
┌──────┴──────┐       ┌──────┴──────┐            │
│  Jump Host  │       │  SMB Users  │            │
│  10.0.1.10  │       │  10.0.2.0/24│            │
└──────┬──────┘       └──────┬──────┘            │
       │ :443                │ :445              │
       │                     │                   ▼
       └──────────┬──────────┘              BLOCK (graph)
                  │
         ┌────────┴────────┐
         │    Kernloom     │
         │   (NAS host)    │
         │                 │
         │  frozen graph:  │
         │  ✓ JumpHost:443 │
         │  ✓ Users:445    │
         │  ✗ all else     │
         └────────┬────────┘
                  │
             ┌────┴────┐
             │   NAS   │
             └─────────┘

What Kernloom does:

The Graph Learner observes the NAS for a week during normal operations — recording exactly which sources connect on which paths. After review and freeze: admin access to :443, SMB to :445. Any other source or port is immediately blocked. Known sources behaving anomalously (scanning other ports) are caught by progressive enforcement.
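The observe-review-freeze flow can be modelled in a few lines. This is a conceptual sketch only (class and method names are invented here, not Kernloom's API):

```python
class GraphBaseline:
    """Learn (source, dst_port) edges during observation, then freeze."""

    def __init__(self):
        self.edges = set()
        self.frozen = False

    def observe(self, src, dst_port):
        if not self.frozen:
            self.edges.add((src, dst_port))    # learning: record the path

    def allowed(self, src, dst_port):
        if not self.frozen:
            return True                        # learn mode never blocks
        return (src, dst_port) in self.edges   # frozen: only known paths

g = GraphBaseline()
g.observe("10.0.1.10", 443)   # admin via the jump host
g.observe("10.0.2.21", 445)   # an SMB user
g.frozen = True               # stands in for `kliq graph freeze`

print(g.allowed("10.0.1.10", 443))   # True  - learned path
print(g.allowed("10.0.1.10", 22))    # False - known source, unlearned port
print(g.allowed("192.0.2.50", 445))  # False - unknown source
```

Note the asymmetry the text describes: learning never blocks, while a frozen graph blocks everything it has not seen.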

Lifecycle:

# 1 — Bootstrap: dry-run + graph learning for 7–14 days
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2 — Enable enforcement once dry-run output looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3 — Review and freeze graph after 7–14 days
kliq graph edges --sort=state     # overview by state
kliq graph baselines --sort=obs   # EWMA stats per edge
kliq graph freeze --dry-run       # check readiness without writing
kliq graph freeze

# 4 — Production: frozen enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/nas.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5 — Strongest posture: XDP allow-mode (only known tuples pass at kernel level)
klshield tuple-enforce allow

Result: The NAS is only reachable via the exact communication patterns it was designed for. Any deviation is blocked immediately.


Pattern 3 — Identity provider and authentication endpoint

Authentication · Public-Facing · Rate Limiting · Zero Trust

The problem: Login endpoints, OAuth/OIDC token endpoints, and enrollment APIs attract credential stuffing, password spraying, and connection floods. These are particularly sensitive: a single successful bypass has outsized impact, and the endpoints are often underprovisioned.

          Internet                Internal services
              │                         │
   ┌──────────┴───────────┐             │
   │    Kernloom          │◄────────────┘
   │  (IdP host)          │
   │                      │  progressive enforcement (internet)
   │  frozen graph:       │  + graph-based path control
   │  ✓ api-server:8080   │    (internal services only)
   │  ✓ monitoring:443    │
   │  ✗ unexpected        │
   └──────────┬───────────┘
              │
   ┌──────────┴───────────┐
   │  IdP / Auth Server   │
   │  OIDC · SAML · OAuth │
   └──────────────────────┘

What Kernloom does:

The idp profile prioritises SYN sensitivity over raw PPS tolerance — authentication abuse looks like sustained low-rate connection pressure, not a volumetric flood. Rate limits are tight.

The Graph Learner locks down which internal services are allowed to call the IdP at all. An unexpected internal service attempting token requests triggers an immediate signal. Internet-facing client IPs are too variable for graph learning — only internal service-to-service edges are graph-enforced.
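The split between graph-enforced internal edges and rate-limited internet clients can be sketched as a simple eligibility check. Treating "internal" as RFC 1918 private address space is an assumption made here for illustration; how Kernloom actually classifies sources is not specified in this page:

```python
import ipaddress

def graph_eligible(src_ip):
    """Only private (internal) sources feed the graph baseline;
    internet client IPs are too variable to ever converge."""
    return ipaddress.ip_address(src_ip).is_private

print(graph_eligible("10.0.3.15"))   # internal api-server: graph-enforced
print(graph_eligible("8.8.8.8"))     # internet client: rate limits only
```

An internet-facing source still passes through progressive enforcement; it is only excluded from the learned graph.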

Lifecycle:

# 1 — Bootstrap: dry-run + graph learning for internal edges (7–14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2 — Enable enforcement once dry-run looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3 — Freeze graph after 7–14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4 — Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/idp.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

Result: Credential stuffing campaigns are rate-limited after a few attempts. Unexpected internal services calling the IdP are caught immediately.


Pattern 4 — East-west lateral movement in a VLAN

Microsegmentation · Zero Trust · Internal · Anomaly Detection

The problem: Nodes inside a VLAN or internal network can reach each other freely. If one workload is compromised, it can scan and probe the rest of the segment — the network layer allows it.

                    VLAN 10 — internal segment

  ┌───────────┐  :5432 ✓    ┌───────────┐
  │  Web App  │────────────►│    DB     │
  │  Node A   │             │  Node B   │
  └───────────┘             └───────────┘

  ┌───────────┐  scan/flood ┌───────────┐
  │Compromised│────────────►│  Node C   │  ← BLOCK
  │  Node A   │  unknown    │ any host  │    immediately
  └───────────┘  path       └───────────┘

What Kernloom does:

Run Kernloom on each host that needs protecting. During a learning period, the Graph Learner records every observed source→destination flow. Once the baseline is frozen, any source attempting a path that was never observed — including port scans, connection attempts to new services, or traffic from unexpected sources — is blocked immediately, before a single packet reaches the application.

Progressive enforcement runs in parallel: even on known paths, anomalous behaviour (SYN floods, high PPS, port probing) is caught and rate-limited independently.
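Progressive enforcement can be pictured as a per-source strike counter that walks an escalation ladder. The state names and thresholds below are invented for illustration; Kernloom's actual stages and values are configured by the PDPConfig profiles:

```python
from collections import defaultdict

# Illustrative escalation ladder: minimum strikes -> state
LADDER = [(0, "observe"), (3, "rate-limit"), (6, "block")]

strikes = defaultdict(int)

def record_anomaly(src):
    """Each anomaly (SYN burst, port probe, PPS spike) adds a strike."""
    strikes[src] += 1

def state(src):
    current = "observe"
    for threshold, name in LADDER:
        if strikes[src] >= threshold:
            current = name
    return current

for _ in range(4):
    record_anomaly("10.0.5.23")   # a probing internal host
print(state("10.0.5.23"))         # escalated past observe
print(state("10.0.5.99"))         # quiet host stays in observe
```

The point mirrored from the text: escalation is independent of the graph, so even a source on a known path gets throttled when it misbehaves.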

Lifecycle:

# 1 — Bootstrap: dry-run + graph learning (7–14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2 — Enable enforcement once dry-run looks stable
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3 — Freeze graph after 7–14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4 — Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5 — Optional: XDP allow-mode for strictest posture
klshield tuple-enforce allow

Pick the PDPConfig that matches your node type: database-bootstrap.yaml for DB nodes, api-server-bootstrap.yaml for application servers.

Result: Lateral movement is stopped at the kernel level. A compromised node cannot scan the segment or connect to services it has never talked to before.


Pattern 5 — WAF shielded from DDoS overflow

DDoS Resilience · Public-Facing · Rate Limiting · Anomaly Detection

The problem: A DDoS appliance in front of a WAF absorbs large-volume attacks, but reacts with latency. Targeted, lower-volume floods — persistent SYN pressure, connection exhaustion, slow HTTP floods — still reach the WAF and can overwhelm it before the appliance reclassifies the traffic.

          Internet
              │
    ┌─────────┴──────────┐
    │   DDoS Appliance   │  ← handles volumetric floods
    │  (slow reaction to │    but slow to react to
    │   targeted abuse)  │    targeted lower-volume attacks
    └─────────┬──────────┘
              │  ← some flood still passes through
    ┌─────────┴──────────┐
    │     Kernloom       │  ← catches what slips through:
    │    (WAF host)      │    per-source rate limits,
    │                    │    SYN pressure, connection abuse
    └─────────┬──────────┘
              │
    ┌─────────┴──────────┐
    │        WAF         │  ← protected from connection
    └────────────────────┘    exhaustion

What Kernloom does:

Kernloom runs on the Linux host that runs the WAF and acts in the kernel, before traffic reaches the WAF’s network stack. Sources sending excessive SYN packets, opening too many connections per second, or showing non-compliant behaviour are progressively rate-limited and eventually blocked — per source IP, not as a blanket drop.

Graph learning is not used here — the node faces internet clients with constantly changing IPs.

Lifecycle:

# 1 — Bootstrap: dry-run for 7–14 days
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/reverse-proxy-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true

# 2 — Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/reverse-proxy.yaml \
  --dry-run=false

Use web-server-bootstrap.yaml / web-server.yaml if Kernloom runs directly on the origin server rather than the WAF host.

Result: The WAF stays available under sustained targeted pressure. Per-source enforcement means a single abusive source is blocked without affecting others behind the same upstream.


Pattern 6 — SSH bastion hardening

Access Control · Public-Facing · Rate Limiting · Brute Force

The problem: An SSH bastion or jump host is exposed to the internet. It attracts constant brute-force attempts, credential stuffing, and port scanning. Every failed attempt consumes server resources and pollutes logs.

          Internet
              │
   ┌──────────┴───────────┐
   │    Kernloom          │  ← low PPS tolerance,
   │  (bastion host)      │    high SYN sensitivity,
   │                      │    fast escalation to BLOCK
   └──────────┬───────────┘
              │
   ┌──────────┴───────────┐
   │    SSH Bastion       │  ← only sees legitimate
   │    / Jump Host       │    connection attempts
   └──────────────────────┘

What Kernloom does:

The ssh-bastion profile is tuned for this scenario: low trigger thresholds, fast escalation, and a short block TTL. A source that makes a few failed attempts quickly accumulates strikes and is blocked before it can run a meaningful brute-force campaign.

There is no dedicated PDPConfig file for SSH bastions yet. Use --profile ssh-bastion (legacy shorthand) or start from web-server-bootstrap.yaml and tighten the thresholds manually.
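The short block TTL mentioned above means a block expires on its own rather than persisting forever, which matters for legitimate clients behind a shared NAT address. A minimal sketch of the idea (the TTL value and the structure are illustrative, not Kernloom's implementation):

```python
import time

class TtlBlocklist:
    """Blocks expire after a TTL, so an address that once misbehaved
    is not punished forever."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.blocked_at = {}   # ip -> timestamp of the block

    def block(self, ip, now=None):
        self.blocked_at[ip] = time.time() if now is None else now

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        ts = self.blocked_at.get(ip)
        if ts is None:
            return False
        if now - ts >= self.ttl:
            del self.blocked_at[ip]   # TTL elapsed: source gets a clean slate
            return False
        return True

bl = TtlBlocklist(ttl=300.0)
bl.block("203.0.113.9", now=0.0)
print(bl.is_blocked("203.0.113.9", now=60.0))    # still inside the TTL
print(bl.is_blocked("203.0.113.9", now=301.0))   # expired, unblocked again
```

A repeat offender simply accumulates strikes again and is re-blocked; the TTL bounds the damage of a false positive.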

Lifecycle:

# 1 — Bootstrap: dry-run, whitelist known-good sources (office NAT, monitoring)
sudo /opt/kernloom/attested/kliq \
  --profile ssh-bastion \
  --dry-run=true --whitelist-learn=true

# 2 — Production
sudo /opt/kernloom/attested/kliq \
  --profile ssh-bastion \
  --dry-run=false

Result: Brute-force attempts are stopped after a handful of connection attempts. SSH daemon logs are clean. Server resources are not consumed by scripted scanners.


Pattern 7 — Multi-tenant edge node with tenant isolation

Microsegmentation · Zero Trust · Internal · Tenant Isolation

The problem: A shared Linux host runs services for multiple tenants or environments. Tenant A’s traffic should not be able to reach Tenant B’s processes.

          Internet
              │
   ┌──────────┴───────────┐
   │    Kernloom          │
   │  (shared host)       │
   │                      │
   │  graph baseline:     │
   │  ✓ Tenant A → :8080  │
   │  ✓ Tenant B → :9090  │
   │  ✗ A→B, B→A, cross   │
   └──────────┬───────────┘
              │
   ┌──────┐   │   ┌──────┐
   │ App  │   │   │ App  │
   │  A   │   │   │  B   │
   │:8080 │   │   │:9090 │
   └──────┘   │   └──────┘

What Kernloom does:

The Graph Learner observes which sources reach which ports. Once the baseline is frozen, cross-tenant traffic — a source for Tenant A attempting to reach Tenant B’s port — triggers an immediate block. Progressive enforcement catches any tenant abusing shared host resources.

Lifecycle:

# 1 — Bootstrap: dry-run + graph learning (7–14 days)
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=true --whitelist-learn=true \
  --graph --graph-mode=learn

# 2 — Enable enforcement
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server-bootstrap.yaml \
  --dry-run=false --graph --graph-mode=learn

# 3 — Freeze graph after 7–14 days
kliq graph freeze --dry-run
kliq graph freeze

# 4 — Production
sudo /opt/kernloom/attested/kliq \
  --pdp-config=/opt/kernloom/attested/etc/pdp/api-server.yaml \
  --dry-run=false --graph --graph-mode=frozen-enforce

# 5 — Optional: XDP allow-mode
klshield tuple-enforce allow

Result: Logical tenant isolation at the kernel level, without container orchestration or separate network namespaces.


Combining patterns

Most real deployments combine multiple patterns:

Deployment                        Patterns combined
OpenZiti with internal services   Pattern 1 (public ZT infrastructure) + Pattern 4 (east-west between internal nodes)
NAS in a corporate network        Pattern 2 (known-access baseline) + Pattern 6 (SSH for admin access)
Multi-tier web app                Pattern 5 (WAF shielding) + Pattern 3 (IdP protection) + Pattern 4 (DB isolation)
Edge device with management       Pattern 7 (multi-tenant) + Pattern 6 (SSH bastion for admin)

The general rule: start every node with the bootstrap PDPConfig and --dry-run=true. Enable enforcement when autotune has stabilised. Layer the Graph Learner on top for internal nodes once you have stable progressive enforcement baselines.


See also

Getting started — Bootstrap lifecycle, graph freeze, and recovery in detail
IQ reference — PDPConfig profiles, feature profiles, all flags
Shield reference — Tuple enforcement commands and map capacity limits
Architecture — The two-scenario model: DoS prevention vs. microsegmentation

Current limitation — multi-interface attach works, but kliq sees aggregated telemetry. klshield can now attach to multiple interfaces simultaneously. However, kliq reads telemetry from all attached interfaces combined — it has no way to tell which packet came from which interface. This means per-interface policy separation is not possible today: you cannot run graph learning only on eth1 (internal) while doing DoS-only enforcement on eth0 (public). Both interfaces feed the same source counters and the same graph. Combined patterns that require different enforcement models per interface on the same host (e.g. Pattern 1 public DoS protection + Pattern 4 internal microsegmentation) are therefore not achievable on a single node today. The practical workaround: deploy Kernloom on the interface that matters most for the specific threat, or separate the roles across dedicated hosts.
