I've run DNS servers in the past - BIND and pdns. I've now gone all in ... because ... well it started with ACME.
As the OP states, you can get a registrar to host a domain for you, and then you can create a subdomain anywhere you fancy - including at home. Do get the glue records right, and do use dig to work out what is happening.
Now, with a domain under your own control, you can use CNAME records in other zones to point at your zones, and if your zones support dynamic DNS updates (RFC 2136) you can now support ACME, i.e. Let's Encrypt, ZeroSSL and co.
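The delegation pattern looks roughly like this (all zone names below are placeholders, not from the original comment):

```
; In the zone hosted at your registrar/provider (static, no API needed):
_acme-challenge.www.example.com.  IN  CNAME  www.acme.dyn.example.net.

; In dyn.example.net - the zone you control and accept RFC 2136 updates to -
; the ACME client creates and removes the TXT record at validation time:
www.acme.dyn.example.net.  IN  TXT  "<challenge-token-digest>"
```

The CA follows the CNAME when it looks up the TXT record, so the provider-hosted zone never needs to change again after the one-time CNAME is added.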
Sadly certbot doesn't do (or didn't do) CNAME redirects for ACME. However, acme.sh and simple-acme do, and both are absolutely rock solid. Both projects are used by a lot of people and well trodden.
simple-acme is for Windows. It has loads of add-on scripts to deal with various scenarios. Those scripts seem to be deprecated but work rather well. Quite a lot of magic here that an old-school Linux sysadmin is glad of.
The PowerDNS auth server supports dynamic DNS updates, and you can filter access by IP and TSIG key, per zone and/or globally.
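A minimal sketch of that PowerDNS setup (key name, zone and network are examples, not anything from the comment above):

```
# /etc/powerdns/pdns.conf
dnsupdate=yes
allow-dnsupdate-from=        # empty on purpose: grant access per zone instead

# Per-zone grants via pdnsutil:
#   pdnsutil import-tsig-key acmekey hmac-sha256 '<base64-secret>'
#   pdnsutil set-meta dyn.example.net TSIG-ALLOW-DNSUPDATE acmekey
#   pdnsutil set-meta dyn.example.net ALLOW-DNSUPDATE-FROM 192.0.2.10/32
```

With both metadata entries set, updates to that zone must come from the listed network and be signed with the named TSIG key.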
> don't expect it to automatically set up your webserver to use the certificates it obtains.
This makes me so happy. Acme and certbot trying to do this is annoying; Caddy trying to get certs by default is annoying. I ended up on a mix of dehydrated and Apache mod_md, but I think I like the look of uacme, because dehydrated just feels clunky.
Neat, I've used lego (https://github.com/go-acme/lego) but will certainly have to give uacme a look, love me a simple ACME client.
acme.sh was too garish for my liking, even as a guy that likes his fair share of shell scripts. And obviously certbot is a non-starter because of snap.
Certbot has earned my ire on just about every occasion I've had to interact with it. It is a terrible program and I can't wait to finish replacing it everywhere.
The new setup uses uacme and nsupdate to do DNS-01 challenges. No more fiddling with issues in the web server config for a particular virtual host, like some errant rewrite rule that blocks access to .well-known/.
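For reference, the whole RFC 2136 exchange behind a DNS-01 challenge is just a few lines; a hook script pipes something like this to `nsupdate` (server, key and names here are illustrative):

```
server ns1.example.net
key hmac-sha256:acmekey <base64-secret>
update add _acme-challenge.www.example.com. 60 IN TXT "<token-digest>"
send
```

After validation the same script sends a matching `update delete` to clean the record up.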
I mean, certbot handles the "just issue me a cert via DNS-01 and I'll do the rest" flow just fine. It's massive overkill of a program for just that use-case, but it's been humming along for me for years at this point. What's the selling point of uACME?
> Sadly certbot doesn't do (or it didn't) CNAME redirects for ACME.
Are you certain? Not at a real machine at the moment so hard for me to dig into the details but CNAMEing the challenge response to another domain is absolutely supported via DNS-01 [0] and certbot is Let's Encrypt's recommended ACME client: [1]
> Since Let’s Encrypt follows the DNS standards when looking up TXT records for DNS-01 validation, you can use CNAME records or NS records to delegate answering the challenge to other DNS zones. This can be used to delegate the _acme-challenge subdomain to a validation-specific server or zone.
... which is a very common pattern I've seen hundreds (thousands?) of times.
The issue you may have run into is that CNAME records are NOT allowed at the zone apex, for RFC 1033 states:
> The CNAME record is used for nicknames. [...] There must not be any other RRs associated with a nickname of the same class.
... of course making it impossible to enter NS, SOA, etc. records for the zone root when a CNAME exists there.
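Concretely, the apex of every zone must carry SOA and NS RRs, so a CNAME there cannot satisfy the rule (names below are illustrative):

```
example.com.   IN  SOA    ns1.example.com. hostmaster.example.com. ( ... )
example.com.   IN  NS     ns1.example.com.
example.com.   IN  CNAME  other.example.net.   ; rejected: a CNAME may not
                                               ; coexist with any other RRs
```

This is why the _acme-challenge delegation trick always operates on a subdomain rather than the zone root.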
P.S. doing literally fucking anything on mobile is like pulling teeth encased in concrete. Since this is how the vast majority of the world interfaces with computing I am totally unsurprised that people are claiming 10x speedups with LLMs.
I think not supporting CNAME redirection is a reasonable choice. It would make my life easier as well, but it opens up all kinds of bad possibilities that bad actors would definitely use.
Can you give me an example where this is a problem? If someone can create a CNAME they can create a TXT (ignoring the possibility of an API being restricted to just one).
Without CNAME redirects I wouldn't be able to automatically renew wildcard SSL certificates for client domains whose DNS has no API. Even when they do have an API, doing it this way stops me from needing to deal with two different APIs.
Seconded. Don’t use certbot; it’s an awful piece of user-hostile software, starting from snap being the only supported installation channel. Everything it does wrong, acme.sh does right.
Same on Debian trixie. certbot works fine for me. Zone records in BIND, generate the DNSKEY, a cronjob to re-sign the zone daily, and you're off to the races. No problems, no snaps.
Unfortunately, it's more than that: the Linux installation instructions on the certbot website[0] give options for pip or snap. Distro packages are not mentioned.
I feel no need to. I'm quite certain that the certbot folks are aware of the existence of distro packages and even know how to check https://pkgs.org/download/certbot for availability. One might guess that they only want to supply instructions for upstream-managed distribution channels rather than dealing with e.g. some ancient version being shipped on Debian.
I prefer and use the Knot DNS server for authoritative DNS (and either Knot Resolver or Unbound for caching DNS servers) myself: it is quite feature-rich, including DNSSEC, RFC 2136 support, and an easy master/slave setup. Apparently it supports database-backed configuration and zone definitions too, but I find file-based storage simpler.
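A minimal knot.conf covering the features mentioned might look something like this (key, zone and secret are placeholders, not a tested configuration):

```
key:
  - id: acmekey
    algorithm: hmac-sha256
    secret: <base64-secret>

acl:
  - id: acme_update
    key: acmekey
    action: update       # allow RFC 2136 dynamic updates signed with acmekey

zone:
  - domain: dyn.example.net
    acl: acme_update
    dnssec-signing: on   # Knot handles key generation and re-signing itself
```

The `dnssec-signing: on` line is the "flip a switch" DNSSEC mentioned elsewhere in the thread; key rollovers and re-signing happen automatically.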
Just remember: if you run your own DNS for a mission-critical platform, the platform is exposed to UDP DDoS attacks that are hard to detect, let alone prevent.
Unless, of course, you invest 5-6 figures of US dollars in equipment - at which point you can look back and ask yourself whether you were better off with Google Cloud DNS, AWS Route 53 and the likes.
Not that I disagree with the fact that these risks exist, but how is that different than running any other service for a mission critical platform?
The main thing I can think of is DNS amplification attacks, but that's more your DNS server being used as part of a DDoS attack rather than being targeted for one. Also (afaik) resolvers are more common targets for DNS amplification than authoritative.
Get a mini-PC with 2x LAN ports plus a MediaTek WiFi 6/7 module. Install Proxmox. Make 3 VMs: OpenWrt (or router firmware of choice), Unbound, and AdGuard Home. Plug your fibre into one LAN port and the rest of your network into the other. In Proxmox, set up PCIe passthrough for one of the LAN ports and the wifi card. Set up OpenWrt to connect to your ISP and point its DNS at your AdGuard Home server. Point AdGuard Home at your Unbound server as upstream. This is a good starting point if you want to get a feel for running your own router + DNS. You don't need to use off-the-shelf garbage routers; x86/x64 routers are the best. On OpenWrt I configure a special traffic queue (SQM) so that I don't get bufferbloat, so my connection is super stable and low latency. Combined with the AdGuard + Unbound DNS setup, my internet connection is amazingly fast compared to traditional routers.
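The passthrough step above, sketched as commands (assuming IOMMU is already enabled in BIOS and kernel, the VM uses the q35 machine type, and the PCI addresses and VM ID here are made-up examples):

```shell
# find the PCI addresses of the NIC and the wifi module
lspci -nn | grep -Ei 'ethernet|network'

# hand one LAN port and the wifi card to the OpenWrt VM (VM ID 100 here)
qm set 100 -hostpci0 0000:01:00.0,pcie=1
qm set 100 -hostpci1 0000:02:00.0,pcie=1
```

Once passed through, the devices disappear from the Proxmox host and appear as native hardware inside the OpenWrt VM.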
Better yet, set up SSH to the Proxmox server and ask Claude Code to set it up for you - works like a charm! Claude can call ssh and dig and verify that your DNS chains work; it can test your firewall and ports (basically running pen tests against yourself); it can sort out almost any issue (I had an Intel wifi card with firmware locks on broadcasting in the 5 GHz spectrum in AP mode - MediaTek doesn't have these - Claude helped try to override the firmware in the kernel, but the Intel firmware won't budge). It can set up safe automatic nightly updates, help you set up recovery/backup plans (which run before updates), automate certain Proxmox tasks (periodic snapshotting of VMs), and best of all, it can document the entire infrastructure comprehensively each time I make changes to it.
I've been tempted by this because I self-host everything else, but "adding an entry to Postgres instead of using the Namecheap GUI" is overkill - just use a DNS provider with an API.
The last few days I've been migrating everything to the luadns format, stored in GitHub, with GitHub Actions triggering a script that converts it to octodns and applies it.
I could have just used either directly, but I like the luadns format and didn't want to be stuck using them as a provider.
One thing worth noting if you're using your own DNS for Let's Encrypt DNS-01 challenges: make sure your authoritative server supports the RFC 2136 dynamic update protocol, or you'll end up writing custom API shims for every ACME client. PowerDNS has solid RFC 2136 support out of the box and pairs well with Certbot's --preferred-challenges dns-01 flag. BIND works too but the ACL configuration for allowing dynamic updates from specific IPs is fiddly to get right the first time.
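For BIND, the per-zone grant might look something like this (the key name and zone are examples); `update-policy` tends to be less error-prone than a bare `allow-update` ACL because it can be scoped to record types:

```
key "acmekey" {
    algorithm hmac-sha256;
    secret "<base64-secret>";
};

zone "dyn.example.net" {
    type master;
    file "dyn.example.net.zone";
    // only allow TXT updates under this zone, only when signed with acmekey
    update-policy {
        grant acmekey wildcard *.dyn.example.net. TXT;
    };
};
```

Scoping the grant to TXT records means a leaked ACME key can't rewrite A or NS records in the zone.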
dnsmasq on an RPi Zero 2W is the backbone of my self-hosted setup. Combined with Tailscale, it gives me access from anywhere to arbitrary domains I define myself, with full HTTPS thanks to Caddy.
At home, I put all of my network infrastructure software in one basket because that seems like the right path towards maximizing availability[1]: It provides one point of potential hardware failure instead of many.
For me, that means doing routing, DNS, VPN, and associated stuff with one box running OpenWRT. It works. It's ridiculously stable. And rather than having a number of things that could break the network when they die, I only have 1 thing that can do so.
That box currently happens to be a Raspberry Pi 4 that uses VLANs as Ethernet port expanders, but it is also stable AF with a [shock! horror!] USB NIC. I picked that direction years ago mostly because I have a strong affinity towards avoiding critical moving parts (like cooling fans) in infrastructure.
But those details don't matter. Any single box running OpenWRT, OPNsense, pfSense, Debian, FreeBSD, or whatever, can behave more-or-less similarly.
[1]: Yeah, so about that. If the real-world MTBF for a system that relies upon 1 box is 10 years, then the MTBF for a system relying on 2 boxes to both keep working is only 5 years. Less is more.
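The footnote's arithmetic holds for independent, exponentially distributed failures: in a series system (everything must stay up), failure rates add, so the combined MTBF is the reciprocal of the summed rates. A quick check:

```python
def series_mtbf(*mtbfs):
    """MTBF of a system that fails when ANY of its components fails,
    assuming independent, exponentially distributed lifetimes:
    rates add, so MTBF_total = 1 / sum(1 / MTBF_i)."""
    return 1.0 / sum(1.0 / m for m in mtbfs)

# two boxes with a 10-year MTBF each -> 5 years combined
print(series_mtbf(10, 10))  # -> 5.0
```

Adding a third box in series drops the combined MTBF further, to 10/3 ≈ 3.3 years - which is the "less is more" point.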
It can't be fully secure, but you can use a domain or path with a UUID or similar such that no one could guess your DNS endpoint, over DoT or DoH. In theory someone might log your DNS query and then replay it against your DNS server, though.
You could also add whitelisting on your DNS server for known IPs, or at least ranges, to limit exposure, and add rate limiting / detection of patterns you wouldn't exhibit, etc.
You could rotate your dns endpoint address every x minutes on some known algorithm implemented client and server side.
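A sketch of that rotation scheme, assuming client and server share a secret and roughly synchronized clocks (the function name, window size, and domain are all made up for illustration):

```python
import hashlib
import hmac


def current_endpoint(secret: bytes, now: int, window_s: int = 600) -> str:
    """Derive the DNS endpoint name for the current time window.

    Client and server compute the same label as long as their clocks
    agree to within one window (10 minutes here)."""
    window = now // window_s
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
    return f"{digest[:16]}.dns.example.net"


# both sides derive the same name inside one window, a new one afterwards
print(current_endpoint(b"shared-secret", 1_700_000_000))
```

In practice the server would answer on (or publish) the current and previous window's names to tolerate clock skew.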
But in the end it’s mostly security through obscurity, unless you go via your own tailnet or similar
That assumes a device that can join a VPN. I'd like to run a DNS server for a group of kids playing Minecraft on a Switch. Since they're not in the same (W)LAN, I can't do it at the local network level. And the Switch doesn't have a VPN client.
Perhaps it seems obvious to some, but it's not obvious to me so I need to ask: What's the advantage of a selectively-available DNS for kids playing Minecraft with Nintendo Switch instead of regular DNS [whether self-hosted or not]?
All I can think of is that it adds obscurity, in that it makes the address of the Minecraft server more difficult to discover or guess (and thus keeps everything a bit more private/griefing-resistant while still letting kids play the game together).
And AXFR zone transfers are one way that DNS addresses leak. (AXFR is a feature, not a bug.)
As a potential solution:
You can set up DNS that resolves the magic hardcoded Minecraft server name (whatever that is) to the address of your choosing, and that has AXFR disabled. In this way, nobody will be able to discover the game server's address unless they ask that particular DNS server for the address of that particular name.
It's not airtight (obscurity never is), but it's probably fine. It increases the size of the haystack.
(Or... Lacking VPN, you can whitelist only the networks that the kids use to play from. But in my experience with whitelisting, the juice isn't worth the squeeze in a world of uncontrollably-dynamic IP addresses. All someone wants to do is play the game/access the server/whatever Right Now, but the WAN address has changed so that doesn't work until they get someone's attention and wait for them to make time to update the whitelist. By the time this happens, Right Now is in the past. Whitelisting generally seems antithetical towards getting things done in a casual fashion.)
Ok, why would I want to do that? Because when Microsoft bought Minecraft they decided to split the ecosystem into the Java Edition (everyone playing on a computer) and the Bedrock Edition (consoles, tablets, ...), and cross-play is not possible on the official Realms. That rules out just paying to rent a Realm for the group.
So we're hosting our own Minecraft server and a suitable connector for cross-play - and it's easy to join on tablets, computers and so on, because there's a button that lets you enter an address. But on the Switch, Microsoft in its wisdom decided there'd be no "join random server" button. There are, however, some official Realm servers, and they happen to host a lobby, and the client understands some interface commands sent by the server (1). Some folks in the community devised a great hack: you host a lobby yourself that presents a list of servers of your choice. But to do that, you need to bend the DNS entries of a few select hostnames that host the "official" lobbies so that they point to your lobby instead. Which means you need to run a resolver capable of resolving all hostnames, because you set it as the primary DNS server in the Switch's network settings.
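The resolver side of that hack is only a few lines in, say, Unbound - here `lobby.example.net` stands in for whichever official lobby hostname needs redirecting, and 203.0.113.5 for your own lobby (both placeholders):

```
server:
    # answer the lobby name (and anything under it) with our own address...
    local-zone: "lobby.example.net." redirect
    local-data: "lobby.example.net. A 203.0.113.5"
    # ...resolve everything else normally (recursion is the default), and
    # restrict who may query so this isn't an open resolver
    access-control: 198.51.100.0/24 allow
```

The `access-control` line is the part that's hard to get right for consoles on residential connections, which is the whole dilemma discussed below.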
Now, there are people in the community who run such resolvers, and that might be one option, but I'm honestly a bit picky about who gets to see what hostnames my kids' Switch wants to resolve.
Whitelisting networks is impossible - it's residential internet.
The reason I'd be interested in running this behind a VPN is that I don't want to run an open resolver and become part of an amplification attack. (And sadly, the Switch 1 does not have a sufficiently modern DNS stack so that I can just enable DNS cookies and be done with it. The Switch 2 supports it).
Sorry if this sounds complicated. It's just hacks on hacks on hacks. But it works.
(1) Judging from the look and feel, this is actually implemented as a Minecraft game interface, and the client just treats it as a game server. It even reports the number of players hanging out in the lobby.
Thanks. I suspected that this is where things were heading. I don't see a problem with using hacks-on-hacks to get a thing done with closed systems; one does what one must.
On the DNS end, it seems the constraints are shaped like this:
1. Provides custom responses for arbitrary DNS requests, and resolves regular [global] DNS
2. Works with residential internet
3. Uses no open resolvers (because of amplification attacks)
4. Works with standalone [Internet-connected] Nintendo Switch devices
5. Avoids VPN (because #4 -- Switch doesn't grok VPN)
With that set of rules, I think the idea is constrained completely out of existence. One or more of them need to be relaxed in order for it to get off the ground.
The most obvious one to relax seems to be #3, open resolvers. If an open resolver is allowed, the rest of the constraints fit just fine.
DNS amplification can be mitigated well-enough for limited-use things like this Minecraft server in various ways, like implementing per-address rate limiting and denying AXFR completely. These kinds of mitigations can be problematic with popular services, but a handful of Switch devices won't trip over them at all.
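In BIND, for instance, both mitigations amount to a few lines of config (the numbers are illustrative, not tuned recommendations):

```
options {
    // response rate limiting (RRL): cap identical responses per client per second,
    // which blunts this server's usefulness as an amplification reflector
    rate-limit {
        responses-per-second 10;
    };
    // refuse zone transfers outright
    allow-transfer { none; };
};
```

A handful of Switches doing normal lookups will never approach such a limit, while a reflection attack spoofing one victim address will be squelched quickly.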
Or: a VPN could be used. But that requires non-zero hardware for remote players (cheap-ish, but not free), that hardware needs power, and the software running on it needs to be configured for each WLAN it has to work with. That path is something I wouldn't wish upon a network engineer, much less a kid with a portable game console. It's possible, but it feels like a complete non-starter.
Knot (as suggested by others) is good. As are BIND and PowerDNS. These are the big authoritative servers I think of, at least, and all of them allow for basically hands-free DNSSEC; just flip a switch and you'll have it. I've run DNSSEC with all three and have no complaints.
And when using such turn-key DNSSEC support, I think there's very little risk to enabling it. While other commenters pointing out its marginal utility are correct, turn-key DNSSEC support that Just Works™ de-risks it enough for me that the relatively marginal utility just isn't a concern.
Plus, once you've got DNSSEC enabled, you can at the very least start to enjoy stuff like SSHFP records. DANE may not have any real-world traction, but who knows what the future may bring.
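Publishing SSHFP records is close to free once the zone is signed; `ssh-keygen -r` emits them for the host's keys, ready to paste into the zone (hostname below is an example, fingerprints elided):

```shell
# run on the server; prints one SSHFP record per host key / hash type
ssh-keygen -r host.example.com

# clients opt in to DNS-based host key verification
ssh -o VerifyHostKeyDNS=yes host.example.com
```

With a validating resolver, the usual "are you sure you want to continue connecting" prompt can then be answered by DNSSEC-secured data instead of a leap of faith.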
That is to say, if you misconfigure it, or try to turn it off, you will have an invalid domain until the TTL runs out, and it's really just not worth the headache unless you have a real use case.
acme.sh is ideal for unix gear, and if you follow this bloke's installation method - https://pieterbakker.com/acme-sh-installation-guide-2025/ - it's usefully centralised.
Join the dots.
https://github.com/ndilieto/uacme
Tiny, simple, reliable. What more can you ask?
[0] https://letsencrypt.org/docs/challenge-types/
[1] https://letsencrypt.org/docs/client-options/
This sounds like you are complaining about Ubuntu, not the software you wish to install in Ubuntu.
[0] https://certbot.eff.org/
Did DNSSEC for a company website; it worked with zero maintenance for several years, on cloud-provided DNS. I would want the same on self-hosted DNS too.
Yes, but with today's HTTPS/TLS usage it's almost irrelevant for normal websites.
If bad actors can create valid TLS certs, they can defeat DNSSEC as well.