Sovereign Home–Lab With VPS Ingress & Hosted DNS

This doctrine block describes a hardened, layered infrastructure where a home–lab runs private services behind a VPS ingress and a remote DNS/mail server, without exposing the home IP or SSH to the internet.

1. High‑level architecture

The design is built around three layers: a public DNS/mail & web layer, a public ingress/proxy layer, and a private home–lab compute layer. Each layer has a single responsibility and protects the next layer behind it.

    [ Internet Clients ]
            │
            ▼
    [ Remote DNS + Web + Mail Server ]   (Plesk on hosted root server)
        - Authoritative DNS for all domains
        - Hosts customer websites
        - Handles mail accounts and policies
        - Forwards specific HTTP(S) traffic to ingress node
            │
            ▼
    [ Ingress VPS ]
        - Single public entry for self-hosted services
        - Terminates TLS/SSL
        - Reverse proxy to home-lab services
        - Hides home IP completely
            │
            ▼
    [ Home-Lab Network ]
        - Multiple wired nodes (Cat8)
        - Services: media, photos, notes, snippets, etc.
        - SSH only reachable from LAN
        - No direct internet exposure

The guiding principle: only expose what must be public, and keep everything else on private networks with strictly controlled entry points.

2. Roles of each layer

2.1 DNS, mail, and web layer

This is a remote root server running a hosting control panel (e.g. Plesk) at a provider that includes basic DDoS protection. It functions as the public “identity” for all domains.

  • DNS authority: All domains use this server as their nameserver.
  • Web hosting: Customer websites are hosted here, isolated from the home–lab.
  • Mail hosting: Mailboxes and aliases are defined and managed centrally.
  • Traffic forwarding: Selected subdomains are proxied/forwarded to the ingress VPS.
  • IP restrictions: SSH and mail usage are locked down to specific trusted IPs only.
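As a rough sketch of how this split can be made explicit, the record layout can be written down as data and reviewed before it goes into the DNS panel. The Python snippet below is purely illustrative; all domain names and IP addresses are placeholders, not the real ones.

    # Illustrative only: hypothetical names and addresses stand in for the real zone.
    INGRESS_VPS_IP    = "198.51.100.20"   # assumed public IP of the ingress VPS
    HOSTING_SERVER_IP = "203.0.113.80"    # assumed public IP of the Plesk server

    records = {
        "example.org.":        HOSTING_SERVER_IP,  # customer web + mail stay on the hosting server
        "mail.example.org.":   HOSTING_SERVER_IP,
        "music.example.org.":  INGRESS_VPS_IP,     # self-hosted services point at the ingress VPS
        "photos.example.org.": INGRESS_VPS_IP,
        "notes.example.org.":  INGRESS_VPS_IP,
    }

    # Print zone-file style A records for review before updating the DNS panel.
    for name, address in records.items():
        print(f"{name}\t3600\tIN\tA\t{address}")

The point is simply that the boundary between “hosted here” and “forwarded to the ingress” stays explicit and reviewable.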

2.2 Ingress & reverse‑proxy layer

This is a low‑cost VPS acting as a shield and entry point for all self‑hosted services that actually live on the home–lab. It never exposes the home IP.

  • Single public IP: All public services live behind one well‑defined address.
  • TLS termination: Handles HTTPS certificates and redirects.
  • Reverse proxy: Routes incoming traffic to the appropriate service at home.
  • Noise absorber: Takes the hit from scans, bots, and low‑level attacks.
  • Simplicity: Lightweight, easy to rebuild, no privileged access to the home network.
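To make the ingress role concrete, here is a deliberately minimal reverse-proxy sketch in Python. A real deployment would use a dedicated proxy (nginx, Caddy, Traefik or similar) with proper TLS termination; the upstream address below is a hypothetical private address reachable from the VPS, since the exact tunnel mechanism is not part of this description.

    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    import urllib.request

    UPSTREAM = "http://10.0.0.2:8096"  # hypothetical home-lab service behind a private link

    class IngressProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Relay the requested path to the private upstream and return its reply.
            try:
                with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
                    body = upstream.read()
                    self.send_response(upstream.status)
                    self.send_header("Content-Type",
                                     upstream.headers.get("Content-Type", "application/octet-stream"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except OSError:
                # If the home-lab is unreachable, the VPS answers; the home IP is never revealed.
                self.send_error(502, "upstream service unreachable")

    if __name__ == "__main__":
        # In production this listener sits behind HTTPS terminated on the VPS.
        ThreadingHTTPServer(("0.0.0.0", 8080), IngressProxy).serve_forever()

Because the VPS only forwards traffic and holds no critical data, it can be rebuilt quickly without touching the home-lab.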

2.3 Private home–lab layer

The home–lab is a cluster of wired machines, each assigned a specific role (media, knowledge, compute, etc.). No home IP or SSH port is exposed to the internet.

  • Services: music streaming, photo management, snippet vault, note system, etc.
  • Networking: all key nodes are wired using Cat8 for stable, low‑latency links.
  • SSH: only reachable on the LAN; not forwarded, not exposed.
  • Flex nodes: additional laptops/hosts can be wired in when needed.
  • Load distribution: different machines handle different types of workloads.

3. Home–lab machine roles (generic pattern)

While the exact hostnames are omitted, the pattern is role‑based rather than hardware‑based. Machines are promoted into roles where their strengths matter and their weaknesses are irrelevant.

3.1 Knowledge node

A laptop with a solid CPU, upgraded RAM, and a fast NVMe drive, but with damaged input devices or a weak GPU, is ideal as a headless “knowledge engine”.

  • Runs: note system, snippet vault, personal knowledge base, lightweight databases.
  • Focus: CPU, I/O, stability; no GPU‑intensive workloads.
  • Advantages: the existing battery acts as a mini‑UPS, and damage to the keyboard/touchpad is irrelevant.

3.2 Heavy compute & media node

A more powerful workstation‑class machine is retired into server duty to handle indexing, media processing, and heavier container stacks.

  • Runs: photo server, large media indexer, CPU/GPU‑intensive tasks.
  • Storage: large, fast NVMe SSDs with dual‑boot options if needed.
  • Role: stay online, run hot, absorb heavy workflows so the main workstation stays free.

3.3 Daily workstation

A primary workstation handles interactive work, creative tools, and “live” sessions, while most long‑running or heavy tasks are offloaded to the backend nodes.

  • Runs: desktop workloads, creative applications, browser, communication tools.
  • Offloads: backups, intensive processing, long‑running jobs to backend servers.
  • Benefit: remains responsive and uncluttered by server responsibilities.

3.4 Reserve and GPU nodes

Older laptops with working GPUs and sufficient RAM are kept as cold reserves. They stay powered off most of the time to save electricity but can be wired in when needed.

  • Potential roles: GPU inference, transcoding, temporary compute node, emergency workstation.
  • Access: connected via USB–Ethernet dongles (1 Gbps) when activated.
  • Philosophy: powered off by default; only online when the ecosystem requires them.

4. Network design and cabling

The network is intentionally over‑provisioned on cabling and under‑provisioned on always‑on nodes to stay efficient and ready for scaling.

  • Wiring: Cat8 cables are used for all critical links, typically around 10 m per run.
  • Throughput: currently 1 Gbps at the endpoints, but cabling is ready for higher speeds later.
  • Ethernet adapters: laptops without native Ethernet use USB–Ethernet dongles (1 Gbps).
  • Wi‑Fi: considered legacy/slow and avoided for critical services; wired is the default.
  • Power discipline: only machines actively serving a role stay online; reserves remain powered off.

Strategy: invest once in quality cabling so that machines can be rearranged, promoted, or repurposed freely without ever being limited by the physical network.
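As a back-of-the-envelope check on why 1 Gbps endpoints are acceptable today while the Cat8 runs leave headroom, a quick calculation (with a made-up library size) looks like this:

    # Rough arithmetic only; the 500 GB figure is a hypothetical example.
    LIBRARY_GB = 500

    for gbps in (1, 10):                   # current NICs vs. a possible later upgrade
        seconds = LIBRARY_GB * 8 / gbps    # GB -> gigabits, divided by link rate in Gbps
        print(f"{gbps} Gbps: roughly {seconds / 60:.0f} minutes to move {LIBRARY_GB} GB")

Moving a full library is an hour-scale job at 1 Gbps and a minutes-scale job at 10 Gbps, which is exactly the kind of upgrade the cabling already allows for.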

5. Security posture

Security is achieved by minimizing exposure, not by layering dozens of reactive tools. Services are split into public and private planes, and remote access is restricted by design.

5.1 Public surface

  • Websites: hosted on the remote DNS/web/mail server with provider‑level DDoS protection.
  • Mail: accounts handled on the same server; sending/usage restricted to trusted IPs.
  • Self‑hosted services: exposed via the ingress VPS only (e.g. music, photos, notes).
  • Certificates: TLS managed centrally on the Plesk server and/or ingress VPS.
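Since TLS is managed centrally, it helps to verify occasionally that a certificate is not about to lapse. The sketch below asks a service behind the ingress for its certificate and reports the expiry date; the hostname is a placeholder.

    import socket
    import ssl
    from datetime import datetime, timezone

    HOST = "music.example.org"  # hypothetical self-hosted service behind the ingress VPS

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    # getpeercert() reports 'notAfter' as e.g. 'Jun  1 12:00:00 2026 GMT'.
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{HOST}: certificate valid until {expires:%Y-%m-%d} ({days_left} days left)")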

5.2 Restricted access

  • SSH (hosting server): bound to a single trusted IP; all other SSH traffic dropped.
  • Mail protocols: sending and possibly reading limited to trusted IPs to prevent abuse.
  • SSH (home–lab): only available on the LAN; no port forwarding from the internet.
  • Admin panels: accessible only from trusted locations or via the LAN.
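One way to express the “single trusted IP” rule is to generate the firewall commands from a short allowlist and review them before applying anything. The sketch below emits iptables commands; the address is a placeholder, and the actual setup may use a different firewall entirely.

    # Illustrative only: the trusted address is a placeholder, not the real one.
    TRUSTED_ADMIN_IP = "203.0.113.10"
    SSH_PORT = 22

    rules = [
        f"iptables -A INPUT -p tcp --dport {SSH_PORT} -s {TRUSTED_ADMIN_IP} -j ACCEPT",
        f"iptables -A INPUT -p tcp --dport {SSH_PORT} -j DROP",
    ]

    for rule in rules:
        print(rule)  # review first, then apply by hand or via configuration management

The same allow-then-drop pattern extends to the mail protocols that are limited to trusted IPs.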

5.3 Design trade‑offs

  • No remote SSH to home–lab: deliberate choice to reduce attack surface to nearly zero.
  • Service‑only exposure: users can reach applications, but not the underlying machines.
  • Simple ingress: ingress VPS is small, rebuildable, and doesn’t hold critical data.

The core idea: remote access to services is allowed; remote access to systems is not. The system is designed to be stable enough that remote SSH is a convenience, not a requirement.

6. Adaptation checklist

To adapt this pattern for another environment, the following checklist can be used as a starting point:

  • DNS: Move domain nameservers to a server you control (or a provider you trust).
  • Mail: Centralize mail hosting; lock SMTP/IMAP/POP to specific IP ranges where possible.
  • Ingress VPS: Deploy a small VPS to act as the single public entry and reverse proxy.
  • Home–lab isolation: Do not expose the home IP; route everything through the ingress VPS.
  • SSH policy: Restrict SSH to trusted IPs and/or to the LAN only.
  • Role‑based hardware: Assign machines based on strengths (CPU, RAM, GPU, storage, stability).
  • Cabling: Overbuild the physical network once so future expansion is trivial.
  • Power discipline: Keep reserve nodes powered off until the ecosystem actually needs them.
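A small sanity check ties several of these items together: every public service name should resolve to the ingress VPS and never to the home connection. The script below uses placeholder hostnames and addresses.

    import socket

    INGRESS_IP = "198.51.100.20"   # hypothetical address of the ingress VPS
    HOME_IP    = "192.0.2.55"      # hypothetical home WAN address that must stay hidden
    SERVICES   = ["music.example.org", "photos.example.org", "notes.example.org"]

    for name in SERVICES:
        try:
            resolved = socket.gethostbyname(name)
        except socket.gaierror as exc:
            print(f"{name}: lookup failed ({exc})")
            continue
        if resolved == HOME_IP:
            print(f"{name}: LEAK, resolves to the home IP; fix the DNS record")
        elif resolved == INGRESS_IP:
            print(f"{name}: ok, points at the ingress VPS")
        else:
            print(f"{name}: unexpected address {resolved}, please review")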