Cheaper code-signing certs would be great. I don't like how the CA/B is so focused on TLS only; PKI is a slightly wider landscape. I sincerely hope PKI-centric code and package signing makes its way to the Linux world, where most of the influential people in these discussions live, so they can appreciate the importance of having a "letsencrypt" for other types of PKI usage like S/MIME and Authenticode.
There is literally a code-signing working group in the CA/BF. However, the browsers don't really participate in it, since it's irrelevant to browsers. This is the entire point of moving to dedicated hierarchies per use case: each PKI (web, code signing, etc.) can evolve independently.
Glad to see DNS validation from multiple perspectives, that's a scary attack vector.
I wonder if we can ever hope for the CA/B to permit name-constrained, short-lived, automatically issued intermediate CAs, authenticated with something like a DNS-01 challenge. I've advocated for this before [1][2], but here's my pitch again:
I want to issue certificates from my own ICA for my homelab and office, to avoid rate limits and to hide hostnames for private services. I submit that issuing a 90-day ICA certificate with a name constraint that only allows it to issue certificates for the specific domain is no more dangerous than issuing a wildcard certificate, and it offers enough utility that it should be considered seriously.
Objection 1: "Just use a wildcard cert." Wildcard certs are not sufficient here because they don't support nested wildcards, and — more importantly — they don't allow you to isolate hosts since any host can serve all subdomains. I'd rather not give some rando vibecoded nodejs app the same certificate that I use to handle auth.
Objection 2: "Just install a self-signed CA on all your devices." Installing and managing self-signed CAs on every device is tedious, error prone, and arguably more dangerous than issuing a 90-day name-constrained ICA.
Objection 3: "Aren't name constraints not supported by all clients?" On the contrary, they've had wide support for almost a decade, and for the clients that still lack it you just set the critical bit, so they reject the chain instead of silently ignoring the constraint.
I understand this is not a "just ship it lmao" kind of change, but if we want this by 2030, planning for it needs to start now.
[1]: https://news.ycombinator.com/item?id=37537689
[2]: https://news.ycombinator.com/item?id=29808233
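To make the ask concrete, here's a minimal sketch of the kind of 90-day, name-constrained intermediate described above, using Python's "cryptography" package. Everything in it (names, key types, lifetimes) is a placeholder, and the self-signed "root" only stands in for whatever publicly trusted issuer would sign such an ICA under this proposal:

    # Minimal sketch: a 90-day intermediate that can only issue under example.com.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    now = datetime.datetime.now(datetime.timezone.utc)

    # Stand-in for the publicly trusted issuer that would sign the ICA under this proposal.
    root_key = ec.generate_private_key(ec.SECP256R1())
    root_cert = (
        x509.CertificateBuilder()
        .subject_name(name("Example Root")).issuer_name(name("Example Root"))
        .public_key(root_key.public_key()).serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(root_key, hashes.SHA256())
    )

    # The 90-day, name-constrained intermediate being argued for.
    ica_key = ec.generate_private_key(ec.SECP256R1())
    ica_cert = (
        x509.CertificateBuilder()
        .subject_name(name("example.com homelab ICA")).issuer_name(root_cert.subject)
        .public_key(ica_key.public_key()).serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=90))  # leaf-like lifetime
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        .add_extension(
            # Per RFC 5280, "example.com" permits example.com and everything below it, nothing else.
            x509.NameConstraints(permitted_subtrees=[x509.DNSName("example.com")],
                                 excluded_subtrees=None),
            critical=True,  # clients that don't understand name constraints reject the chain outright
        )
        .sign(root_key, hashes.SHA256())
    )

Leaf issuance from ica_key then works exactly like it does for any private CA today; the only new thing a root program would need to accept is the constrained, short-lived ICA itself.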
I feel like BGP attacks (and outright mistakes) haven't gone away, so I wonder how useful MPIC is these days. Also, hosting companies have been known to MITM their customers' connections in order to get valid fake certs.
https://isbgpsafeyet.com/ https://notes.valdikss.org.ru/jabber.ru-mitm/
> I feel like BGP attacks (and outright mistakes) haven't gone away, so I wonder how useful MPIC is these days.
The actual wording is this:
> The goal of this proposal is to make it more difficult for adversaries to successfully launch equally-specific prefix attacks against the domain validation processes described in the TLS BRs. [emphasis added]
Everyone (everyone in the CA system, at least) knows that it will not result in 100% security, but it raises the bar from "fool the single ISP that the CA happened to use" to "effectively hijack the whole world, which would be very obvious". This is part of the "Swiss cheese" defense-in-depth.
> Also, hosting companies have been known to MITM their customers' connections in order to get valid fake certs.
I am not sure if there is a feasible solution here except to be very vigilant (like looking at certificate logs). This is a breach of trust between the hosting company and the server operator.
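For intuition about what "fooling the sole ISP" vs. "hijacking the whole world" means mechanically, here's a toy sketch of the quorum idea behind MPIC. It is not any CA's actual implementation; the vantage points and threshold are made up, and a real deployment would run the probes from genuinely separate networks rather than local proxies:

    # Toy sketch of multi-perspective domain validation (not a real CA implementation).
    import urllib.request

    VANTAGE_PROXIES = {            # hypothetical egress points in distinct networks/regions
        "us-east": None,           # None = direct connection; a real probe would sit remotely
        "eu-west": None,
        "ap-south": None,
    }
    QUORUM = 2                     # e.g. "at most one perspective may disagree"

    def fetch_challenge(domain, token, proxy):
        url = f"http://{domain}/.well-known/acme-challenge/{token}"
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": proxy} if proxy else {}))
        with opener.open(url, timeout=10) as resp:
            return resp.read().decode().strip()

    def validate(domain, token, expected_key_auth):
        agree = 0
        for vantage, proxy in VANTAGE_PROXIES.items():
            try:
                ok = fetch_challenge(domain, token, proxy) == expected_key_auth
            except OSError:
                ok = False
            print(f"{vantage}: {'match' if ok else 'mismatch/unreachable'}")
            agree += ok
        # Hijacking the route seen by a single perspective is no longer enough on its own.
        return agree >= QUORUM

The point is only the last line: one poisoned path now loses the vote instead of deciding it.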
I can’t see freely available intermediates ever happening. Here are the first three reasons I can think of, but I’m sure there are more.
1. There's no way to enforce what the issued end-entity certificates look like, beyond name constraints. X.509 is an overly flexible format, and a lot of the ecosystem depends on only a narrow profile of it being used, which today is enforced by policy on CAs.
2. Hiding private domains wouldn’t be any different than today. CT requirements are enforced by the clients, and presumably still would be. Some CAs support issuing certs without CT now, but browsers won’t accept them.
3. Allowing effectively unlimited issuance would likely overwhelm CT, and the whole ecosystem collapses.
That's a fair point, though CT is only strictly enforced by a few clients (mainly Chromium-based browsers and Safari) at the moment. There would need to be some resolution to this issue, but the CT problem doesn't seem insurmountable in a 5-year timeframe if the relevant parties are motivated to solve it.
>Wildcard certs are not sufficient here because they don't support nested wildcards
How many levels of dots do you need?
>I'd rather not give some rando vibecoded nodejs app the same certificate that I use to handle auth.
Use a reverse proxy to handle TLS instead?
I want to give every device its own certificate so it can authenticate itself to others via headscale, for web development collaboration and remote management. I want a lightweight forward proxy on a semi-trusted remote VPS to proxy email for a particular domain down to my local mail server. I want to delegate maintenance of some application to a particular department. I want microservices run by different teams to communicate via authenticated TLS. I want to run web services in my Mars data center without wasting precious bandwidth on thousands of redundant ACME requests. Etc, etc, etc.
In all of these cases it would be idiotic to distribute the same wildcard cert to each host. And please don't say "you just shouldn't want to do that".
Hopefully they also adopt the ACME revocation extension proposed in the Revokinator FAQ.
https://pwnedkeys.com/revokinator
ARI is outside the scope of the CA/BF.
Notably, I think Let's Encrypt has been doing MPIC for some time now.
Yep, they started in 2020: https://letsencrypt.org/2020/02/19/multi-perspective-validat...
This has been challenging for some subscribers who are unaccustomed to receiving any legitimate site traffic from foreign countries.
https://community.letsencrypt.org/t/multi-perspective-valida...
Now that it's a requirement for the whole web PKI, it will be interesting to see the pressure against blanket geoblocking increase. (Or maybe more web hosts will make it easier to use DNS challenge methods to get certificates.)
Geoblocking is one of the most drastically effective ways someone can reduce their attack surface. I'd give up encrypting traffic entirely before I'd give up geoblocking.
You don't have to give up geoblocking, right? You only need enough "unblocked" surface to resolve domain ownership challenges.
Sure, and I think generally speaking this is also not a hard problem: A CA can advertise the networks it expects to be able to validate your control from, and you can also choose to selectively allow access for domain validation purposes to those networks. The modern firewall is quite discriminatory.
I just find it a constant frustration that geoblocking is often discussed as "bad" when, if you aren't running a global service, it's an incredibly powerful tool. Even among global services, the hesitation to intelligently use risk-based authentication strategies remains deeply frustrating... there's no reason an account that has never been accessed outside the United States should suddenly be permitted to log in from Nigeria. Credit card companies figured this stuff out decades ago.
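As a sketch of what "selectively allow access for domain validation purposes" could look like, here's a small helper that turns a list of validation source ranges into nftables allow rules for port 80, so everything else can stay geoblocked. The CIDRs are placeholders (RFC 5737 documentation networks); it's on the CA to actually publish such ranges, and the rules assume an existing inet filter/input chain:

    # Sketch: allow only a CA's (hypothetically published) validation networks through to port 80.
    import ipaddress

    CA_VALIDATION_RANGES = ["192.0.2.0/24", "198.51.100.0/24"]   # placeholder networks

    def nft_rules(ranges, port=80):
        rules = []
        for cidr in ranges:
            net = ipaddress.ip_network(cidr)                     # validates the CIDR before emitting a rule
            rules.append(f"add rule inet filter input ip saddr {net} tcp dport {port} accept")
        return rules

    if __name__ == "__main__":
        print("\n".join(nft_rules(CA_VALIDATION_RANGES)))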
How will this impact self-signed local certificates? Can we still use a five-year lifespan on those or do we need to reduce it to <398 days?
Your local certificates are not bound by the Baseline Requirements at all; they're irrelevant to you. You can do whatever you want if your CA is not in a root program.
The article doesn't even mention cert lifetimes.
But the answer is no, self-signed certs don't have to follow the CA/B rules.
The links in the article mention the hard limit on certificate lifetime.
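For what it's worth, nothing stops a private CA from minting a five-year (or fifty-year) cert; the 398-day cap mentioned above only binds publicly trusted CAs. A minimal sketch with Python's "cryptography" package, the hostname being a placeholder:

    # Sketch: a five-year self-signed cert for purely local use; no root program gets a say here.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "nas.lan")])   # placeholder host

    cert = (
        x509.CertificateBuilder()
        .subject_name(subject).issuer_name(subject)               # self-signed: issuer == subject
        .public_key(key.public_key()).serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=5 * 365))  # five years; only your own devices must trust it
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("nas.lan")]), critical=False)
        .sign(key, hashes.SHA256())
    )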
What does this mean for CAs that issue certs for completely internal corporate DNS?
Does this mean the corporations have to reveal all their internal DNS and sites to the public (or at least the CA) and let them do DV, if they want certs issued for their wholly-internal domains that will be valid in normal browsers?
>Does this mean the corporations have to reveal all their internal DNS and sites to the public (or at least the CA) and let them do DV, if they want certs issued for their wholly-internal domains that will be valid in normal browsers?
The blog post has nothing to do with this; it was already the case because of Certificate Transparency. The solution is to use wildcard certificates. For instance, if you don't want secretproject.evil.corp to be visible to everyone, you could get a wildcard certificate for *.evil.corp instead.
There are also plenty of ways to set it up so the only thing the public can see is that the name exists; and you should probably be prepared for that to become public knowledge anyway (even if using a wildcard certificate), if only because there are so many ways for users to accidentally cause DNS leaks.
Using an ACME DNS challenge would be the simplest option if it wasn’t such a pain to integrate with most DNS services; but even HTTP challenges don’t actually need to expose the same server that actually runs the service, just one that serves /.well-known/acme-challenge/* during the validation process. (For example, this could be the same server via access control rules that check what interface the request came in on, or a completely different server with split-horizon DNS and/or routing, or a special service running on port 80 that’s only used for challenges.)
(I was thinking about this a lot recently because I had a service that wanted to do HTTP challenges but I didn’t want to put the whole thing on the Internet. In the end my solution was to assign an IPv6 range which is routed by VPN internally but to a proxy server for public requests: https://search.feep.dev/blog/post/2025-03-18-private-acme)
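For anyone who just wants the "answer challenges on port 80 without exposing the real service" variant described above, a minimal sketch of a challenge-only responder looks something like this (the challenge directory is a placeholder your ACME client would write token files into; everything else 404s):

    # Sketch: a tiny listener that serves only /.well-known/acme-challenge/* and nothing else.
    import http.server
    import os

    CHALLENGE_DIR = "/var/lib/acme/challenges"       # placeholder; your ACME client drops tokens here
    PREFIX = "/.well-known/acme-challenge/"

    class ChallengeOnlyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            token = os.path.basename(self.path)       # basename strips any path traversal attempts
            path = os.path.join(CHALLENGE_DIR, token)
            if self.path.startswith(PREFIX) and os.path.isfile(path):
                body = open(path, "rb").read()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)                  # the real application never has to face the Internet

    if __name__ == "__main__":
        http.server.HTTPServer(("", 80), ChallengeOnlyHandler).serve_forever()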