How I Secured My Linux Server with Firewall Rules

I started by working out who was likely to hit the server and why. Script kiddies scanning common ports. Automated bots probing SSH and web services. Targeted attackers after data or persistence. For a Linux admin, that is the level of threat worth planning for. I ignored the theatrical stuff and focused on the services I actually exposed: SSH, web, and a few management endpoints. That kept the scope tight and the rules simple.

I ran basic checks for open ports and old services. I checked service versions and looked for exposed admin interfaces. Where I could not prove a service was safe, I treated it as risky until access was tightened. That is not paranoia. It is practical risk reduction. The inventory from that pass became the basis for the firewall policy.
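Turning that pass into a written inventory is mostly log parsing. The sketch below feeds sample `ss -tulpn` output through awk to produce a "proto port process" list; the sample lines and field positions are assumptions about a typical `ss` version, and on a real host you would pipe `sudo ss -tulpn` in directly:

```shell
# Parse listening sockets into a "proto port process" inventory.
# The sample stands in for real output from: sudo ss -tulpn
ss_sample='tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=812,fd=3))
tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=950,fd=6))'

inventory=$(printf '%s\n' "$ss_sample" | awk '{
    split($5, a, ":")                 # local address:port
    match($0, /\("[^"]+"/)            # first process name inside users:(...)
    proc = substr($0, RSTART + 2, RLENGTH - 3)
    print $1, a[2], proc
}')
printf '%s\n' "$inventory"
```

Each line of the result (for example `tcp 22 sshd`) is one entry for the inventory that the firewall policy is built from.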

Common vectors were brute force on SSH, vulnerable web apps, and unauthenticated services on internal networks. I mapped inbound, outbound and east-west traffic. That showed which interfaces needed strict rules and which could be left alone. The end result was a smaller ruleset that blocked obvious abuse without getting in the way of normal traffic.

Baseline configuration

I checked the distro default. On modern Debian and Ubuntu systems nftables is usually the backend, sometimes fronted by UFW or firewalld; on minimal installs nothing may be active at all. I checked the active backend with commands like sudo nft list ruleset or sudo iptables -L -n -v. I never assumed the defaults were safe. I also saved the pre-change state so I could roll back if I broke something.
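Detecting the backend and snapshotting the pre-change state can be wrapped in one small function. This is a minimal sketch, assuming the usual tool names; the backup directory is parameterised, with /root as the assumed default:

```shell
# Detect the active firewall backend and snapshot its ruleset before changes.
snapshot_firewall() {
    backup_dir=${1:-/root}
    stamp=$(date +%Y%m%d-%H%M%S)
    if command -v nft >/dev/null 2>&1; then
        backend=nftables
        nft list ruleset > "$backup_dir/nftables-$stamp.conf" 2>/dev/null
    elif command -v iptables >/dev/null 2>&1; then
        backend=iptables
        iptables-save > "$backup_dir/iptables-$stamp.rules" 2>/dev/null
    else
        backend=none
    fi
    echo "$backend"
}
```

Run it with sudo before touching the rules; the echoed backend name tells you which restore command to rehearse later.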

I made sure the server had a static IP or a reserved DHCP lease. I used a separate unprivileged account for daily work and disabled root SSH login. I hardened SSH by moving it off port 22 where that made sense and restricting authentication to keys. Those steps reduce the damage a firewall mistake can do.
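The SSH side of that comes down to a few sshd_config lines. Port 2222 and the deploy user below are purely illustrative placeholders:

```
# /etc/ssh/sshd_config (fragment)
Port 2222                    # only if moving off 22 makes sense for you
PermitRootLogin no
PasswordAuthentication no    # keys only
AllowUsers deploy            # placeholder for your unprivileged admin user
```

Validate the file with sudo sshd -t and keep an existing session open while restarting the service, so a typo cannot lock you out.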

I documented interfaces, public and private networks, and any VPN endpoints. If the server sat behind a cloud provider load balancer I noted the health check ports. That network map told me which ports to expose and which to keep behind tighter filters. It also showed where to place logging and monitoring collectors.

Hardening steps

I used a default-deny posture. My nftables base was: allow established connections, allow loopback, allow specific service ports, drop everything else. For example:

  1. sudo nft add table inet filter
  2. sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
  3. Allow SSH from admin IPs, and allow web from 0.0.0.0/0 only if the service is genuinely public.

Note the quotes around the chain definition: without them the shell tries to interpret the braces and semicolons itself and the command fails.

I limited SSH to my home office and VPN subnets. I avoided clever one-liners. Simple rules are easier to read and easier to audit.
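Written out as a file rather than one-off nft add commands, the base policy is easier to read and audit. A sketch of that ruleset, assuming an illustrative admin subnet of 203.0.113.0/24, SSH on 22, and a public web service:

```
#!/usr/sbin/nft -f
# Default-deny inbound policy: established, loopback, named services, drop the rest.
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept          # replies to our own traffic
        iif "lo" accept                              # loopback
        ip saddr 203.0.113.0/24 tcp dport 22 accept  # SSH from admin subnet (placeholder)
        tcp dport { 80, 443 } accept                 # public web
        ct state invalid drop
    }
}
```

Dry-run it with sudo nft -c -f /etc/nftables.conf before loading it with sudo nft -f /etc/nftables.conf.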

I paired network rules with service-level access. SSH used keys and a specific user. Web admin panels were bound to local interfaces or a VPN only. Where possible I used service configuration, and TCP wrappers on the systems that still support them, to add a second check beyond the firewall.

I tested from allowed and blocked networks. I used nc and nmap to confirm ports were closed from the internet and open from allowed hosts. I ran sudo nft list ruleset after changes to check the rules were actually there. If a rule did not behave as expected I rolled back at once and adjusted it.

Every change went into a small changelog: date, file, command used, reason and verification step. I kept it in the server’s repo and kept a local copy too. That made it easy to see why a rule existed and who added it.
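A small helper keeps those entries consistent. The file path and pipe-separated layout here are my own convention, not anything standard:

```shell
# Append a structured entry to the firewall changelog.
# usage: log_change <file-changed> <command> <reason> <verification>
log_change() {
    changelog=${CHANGELOG:-/var/log/firewall-changelog.txt}
    printf '%s | %s | %s | %s | %s\n' \
        "$(date +%Y-%m-%d)" "$1" "$2" "$3" "$4" >> "$changelog"
}
```

For example: log_change /etc/nftables.conf 'nft -f /etc/nftables.conf' 'open 443' 'checked with nmap'. The CHANGELOG variable exists only so the path can be overridden.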

Validation checks

I verified rules with command checks and live tests. Commands included sudo nft list ruleset, sudo iptables -L -n -v where relevant, and sudo ufw status numbered if I was using UFW. I tested both permitted and denied paths and wrote down the results. Verification is how you avoid locking yourself out by mistake.

I ran low-effort scans from an external host: nmap for port discovery and basic version checks. I kept the intensity down so it would not trip provider abuse policies. For a deeper look I used an internal Kali VM to mimic lateral movement. The point was to check that blocked paths stayed blocked.

I reviewed auth logs, firewall logs and web server logs for failed attempts and odd patterns. I added short-term rules to block noisy offenders and then decided whether to keep them. The logs showed brute force attempts and random scans within hours, which backed up the case for a strict default policy.
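Summarising failed SSH logins per source IP is a short pipeline. Sample lines stand in for real journal output, and the field handling assumes the standard sshd "Failed password" message format:

```shell
# Count failed SSH logins per source IP.
# The sample stands in for: sudo journalctl -u ssh | grep 'Failed password'
auth_sample='Failed password for root from 198.51.100.7 port 41022 ssh2
Failed password for invalid user admin from 198.51.100.7 port 41023 ssh2
Failed password for root from 192.0.2.99 port 55010 ssh2'

top_offenders=$(printf '%s\n' "$auth_sample" |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' |
    sort | uniq -c | sort -rn)
printf '%s\n' "$top_offenders"
```

The counts feed directly into the keep-or-drop decision for short-term blocking rules.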

Monitoring and alerts

I set up simple alerts for spikes in dropped packets and repeated failed logins. I used a lightweight stack: a local log forwarder and a remote collector that notifies me by email or chat webhook. Alerts show me when the rules are working and when they need a look.

I scheduled weekly checks and an automatic daily summary of the previous 24 hours. The summary included top blocked IPs and unusual destination ports. Regular review catches drift before it turns into a mess.
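The "top blocked IPs" part of the summary is again just log parsing. This sketch assumes drops are logged with a rule prefix and the kernel's SRC= fields; the sample stands in for kernel log output:

```shell
# Extract top source IPs from firewall drop logs.
# Assumes drop rules log with a prefix, e.g.: nft ... log prefix "fw-drop: " drop
drop_sample='fw-drop: IN=eth0 SRC=203.0.113.50 DST=198.51.100.1 PROTO=TCP DPT=23
fw-drop: IN=eth0 SRC=203.0.113.50 DST=198.51.100.1 PROTO=TCP DPT=2323
fw-drop: IN=eth0 SRC=192.0.2.17 DST=198.51.100.1 PROTO=TCP DPT=445'

top_blocked=$(printf '%s\n' "$drop_sample" |
    grep -o 'SRC=[0-9.]*' | cut -d= -f2 |
    sort | uniq -c | sort -rn)
printf '%s\n' "$top_blocked"
```

On a live host the input would come from journalctl -k (or the log file your forwarder writes) filtered on the same prefix.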

I treated rule updates like code changes. I tested them on a staging host, ran the verification steps, and committed them with the changelog entry. When a service needed a new port I added a limited rule and watched for abuse. Rules are not set-and-forget.

Rollback considerations

Creating backup configurations

Before any change I saved the active ruleset to a file.

For nftables:

sudo nft list ruleset > /root/nftables-backup-YYYYMMDD.conf

For UFW I exported the rules. Backups are the fastest way back when a change breaks services.
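Since UFW keeps its rules as files under /etc/ufw, the export can simply be a dated copy of that directory. A sketch with the paths parameterised (the defaults are assumptions about a standard install):

```shell
# Copy the UFW rules directory to a dated backup location.
backup_ufw() {
    src=${1:-/etc/ufw}
    dest=${2:-/root/ufw-backup-$(date +%Y%m%d)}
    cp -a "$src" "$dest"
}
```

Restoring is the reverse copy followed by sudo ufw reload.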

I rehearsed the revert: load the backup with

sudo nft -f /root/nftables-backup-YYYYMMDD.conf

or restore UFW from its backup. I kept an out-of-band access method, such as a serial console or provider console, in case I locked myself out. Revert steps were part of the changelog entry.
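One rehearsal trick worth scripting: schedule the revert before applying a risky change, then cancel it once you have confirmed access. The helper below backgrounds the revert command; the five-minute window in the usage line is illustrative:

```shell
# Schedule a revert command to run after a delay; kill the PID to cancel it.
schedule_revert() {
    delay=$1; shift
    ( sleep "$delay" && "$@" ) >/dev/null 2>&1 &
    echo $!    # PID to kill once you have confirmed access still works
}
```

Typical use: pid=$(schedule_revert 300 sudo nft -f /root/nftables-backup-YYYYMMDD.conf), apply the change, test SSH from a second session, then kill "$pid". If you lock yourself out instead, the revert fires on its own.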

I stored rollback steps alongside the change notes. Each entry included the expected impact and a contact if escalation was needed. That cuts stress and shortens recovery time.

References to standards

I followed the usual principles: default deny, least privilege, defence in depth and logging. Those principles guide choices without forcing one toolset. They are practical for a Linux admin managing a single server.

I mapped the firewall posture to the server’s compliance needs. For personal homelabs this is overkill. For production systems I made sure logging and retention matched the relevant standard. The firewall was one control among several.

I keep a small shelf of practical guides. Read the nftables and iptables manuals. Follow distro security notes. Community threads are also useful when you want straight answers and real fixes.

If you need to harden a server, start with a clear threat model, then apply rules that match it.
