> The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
This is patently nonsensical. There is hardly any information in a certificate that matters in practice, except for the subject, the issuer, and the expiration date.
> Shorter lifetimes mitigate the effects of using potentially revoked certificates.
Sure, and if you're worried about your certificates being stolen and not being correctly revoked, then by all means, use a shorter lifetime.
But forcing shorter lifetimes on everyone won't end up being beneficial, and IMO will create a lot of pointless busywork at greater expense. Many issuers still don't support ACME.
paradite 2 minutes ago [-]
All I care about as a certbot user is what I need to do.
Do I need to update certbot on all my servers? Or will they continue to work without updating?
bob1029 13 hours ago [-]
What's the end game here? I agree with the dissent. Why not make it 30 seconds?
Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.
This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...
mcpherrinm 13 hours ago [-]
I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that under 47 days in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really push towards automation.
cm2187 4 minutes ago [-]
All of that in case the previous owner of the domain would attempt a MITM attack against a client of the new owner, which is such a remote scenario. In fact, has it happened even once?
klaas- 2 hours ago [-]
I think a very short-lived cert (like 7 days) could be a problem on renewal errors/failures that don't self-correct but need manual intervention.
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days of reaction time) or every 3 days (4 days of reaction time)? Not every org has 24/7 staffing; some people go on holidays, some public holidays extend to long weekends, etc. :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
iqandjoke 2 hours ago [-]
Like the Apple case. Apple already asks its developers to re-sign apps every 7 days. It should not be a problem.
kassner 1 hour ago [-]
That’s only a thing if you are not publishing on the App Store, no?
noveltyaccount 9 hours ago [-]
When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points): https://letsencrypt.org/2015/11/09/why-90-days/
0xbadcafebee 13 hours ago [-]
So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.
da_chicken 10 hours ago [-]
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand for their product, while the other benefits from increasing the overhead of managing their software, which raises the minimum threshold to be competitive.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
bigstrat2003 22 minutes ago [-]
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
cm2187 1 minute ago [-]
An example of that was the dismissal of privacy concerns by the committee with using MAC addresses in ipv6 addresses.
kcb 10 hours ago [-]
It astounds me that there's no non-invasive local solution to go to my router's or whatever other appliance's web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
0xbadcafebee 3 hours ago [-]
Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
The crazy thing? There are already two WiFi QR code standards, but they do not include the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made their own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
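There's no standard payload for this today (that's the gap), but a sketch of what a sticker could encode isn't much code. A rough example in Python, assuming a made-up CACERT;<url>;sha256=<hex> format and the cryptography and qrcode packages (file name and LAN address are placeholders):
    import hashlib
    import qrcode  # pip install qrcode[pil]
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Hypothetical input: the CA certificate the router was provisioned with at the factory.
    ca_pem = open("router-ca.pem", "rb").read()
    ca_der = x509.load_pem_x509_certificate(ca_pem).public_bytes(serialization.Encoding.DER)

    # Made-up payload: where to fetch the cert over the (untrusted) LAN,
    # plus the hash the phone must verify after downloading it.
    payload = "CACERT;http://192.168.1.1/ca.der;sha256=" + hashlib.sha256(ca_der).hexdigest()
    qrcode.make(payload).save("sticker.png")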
hamburglar 9 minutes ago [-]
I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi-sized device, with a simple front panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
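A minimal sketch of the issuance side, using Python's cryptography package (the names, curve, and lifetimes below are placeholders of mine, not part of the design above):
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    now = datetime.datetime.now(datetime.timezone.utc)

    # Root of trust: generated once on the box, fingerprint shown on the front panel.
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Root CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name).issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        .sign(ca_key, hashes.SHA256())
    )
    print("front-panel fingerprint:", ca_cert.fingerprint(hashes.SHA256()).hex())

    # Leaf issued after the front-panel confirmation pairs "router.local" with the
    # device's public key; renewals just repeat this step with the same key.
    dev_key = ec.generate_private_key(ec.SECP256R1())  # in reality generated on the router
    dev_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.local")]))
        .issuer_name(ca_name)
        .public_key(dev_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("router.local")]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )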
GabeIsko 10 hours ago [-]
All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.
fiddlerwoaroof 10 hours ago [-]
I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.
da_chicken 8 hours ago [-]
Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.
That's their "solution".
UltraSane 5 hours ago [-]
Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.
lxgr 3 hours ago [-]
Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
nine_k 2 hours ago [-]
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate issued by it that won't expire for 25 years.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/24, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
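The acceptance rule being proposed is simple to state; a sketch of the check a client would make, using Python's ipaddress module and glossing over everything else about validation:
    import ipaddress

    def local_ca_applies(peer_ip: str) -> bool:
        # Proposed rule: honor the long-lived "local" CA only when the connection
        # terminates at a non-publicly-routable or link-local address.
        ip = ipaddress.ip_address(peer_ip)
        return ip.is_private or ip.is_link_local

    print(local_ca_applies("192.168.1.1"))  # True: accept the 25-year cert
    print(local_ca_applies("8.8.8.8"))      # False: show the usual warning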
jabiko 21 minutes ago [-]
But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
JackSlateur 1 hours ago [-]
Yes
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
ryao 2 hours ago [-]
If the web browsers would adopt DANE, we could bypass CAs and still have TLS.
tptacek 4 hours ago [-]
It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.
yellowapple 1 hours ago [-]
"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.
mcpherrinm 10 hours ago [-]
It makes the system more reliable and more secure for everyone.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
zmmmmm 8 minutes ago [-]
> It makes the system more reliable
It might in theory, but I suspect it's going to make things very, very unreliable for quite a while before it (hopefully) gets better. I think a double-digit fraction of our infrastructure outages are probably already due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
fiddlerwoaroof 10 hours ago [-]
It makes systems more reliable and secure for system runners that can leverage automation for whatever reason. For the same reason, it adds a lot of barriers to things like embedded devices, learners, etc. who might not be able to automate TLS checks.
thayne 7 hours ago [-]
Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local network cases. I think ideally there would be an easy way to add a CA that is trusted for a specific domain or local IP address; then the device can generate its own certs from a local CA. And/or add trust for a self-signed cert with a longer lifetime.
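Browsers don't offer per-domain trust today, but outside the browser you can at least scope trust per client. A sketch in Python (CA path and hostname are placeholders) of trusting a device's CA for one application's connections only, rather than installing it system-wide:
    import ssl
    import urllib.request

    # This context trusts ONLY the device's CA, and only for this client --
    # nothing here lets that CA vouch for the public web in other software.
    ctx = ssl.create_default_context(cafile="device-ca.pem")
    with urllib.request.urlopen("https://router.local/status", context=ctx) as resp:
        print(resp.status)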
fiddlerwoaroof 6 hours ago [-]
This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right
throwaway2037 3 hours ago [-]
Real question: What is the correct way to handle certs on embedded devices? I never thought about it before I read this comment.
steve_gh 1 hours ago [-]
There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.
tptacek 11 hours ago [-]
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
ignoramous 12 hours ago [-]
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. OP mentions that more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).
ryao 2 hours ago [-]
Could you explain why Let's Encrypt is dropping OCSP stapling support, instead of dropping it for all but must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
efortis 4 hours ago [-]
4. Encrypted traffic hoarders would have to break more certs.
Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?
mcpherrinm 10 hours ago [-]
There's no such thing as an ideal world, just the one we have.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
grey-area 3 hours ago [-]
Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.
Ajedi32 11 hours ago [-]
In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
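For reference, DNS's own attempt at this is DANE/TLSA. A sketch of the comparison in Python with dnspython, ignoring the DNSSEC validation that real DANE depends on, so this is illustrative only (example.com is a placeholder that would need to publish TLSA records):
    import hashlib
    import ssl
    import dns.resolver  # pip install dnspython
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    host = "example.com"

    # What the server actually presents...
    pem = ssl.get_server_certificate((host, 443))
    der = x509.load_pem_x509_certificate(pem.encode()).public_bytes(serialization.Encoding.DER)

    # ...versus what the zone publishes (selector 0 = full cert, matching type 1 = SHA-256).
    for rr in dns.resolver.resolve(f"_443._tcp.{host}", "TLSA"):
        if rr.selector == 0 and rr.mtype == 1:
            print("match" if rr.cert == hashlib.sha256(der).digest() else "mismatch")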
tptacek 11 hours ago [-]
People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).
ysleepy 9 hours ago [-]
.com can issue arbitrary certificates right now; they control what DNS info is given to the CAs. So I don't quite see the change, apart from people not talking about that threat vector atm.
tptacek 7 hours ago [-]
Get one to issue a Google.com certificate and see what happens.
throwaway2037 3 hours ago [-]
This is a great point. For all of the "technically correct" arguments going on here, this one is the most practical counterpoint. Yes, in theory, Verisign (now Symantec) could issue some insane wildcard Google.com cert and send the public-private key pair to you personally. In practice, this would never happen, because it is a corporation with rules and security policies that forbid it.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black hat nation state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
codethief 1 hours ago [-]
> every black hat nation state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
transfire 10 hours ago [-]
Who cares? What does a certificate tell me other than that someone paid for a certificate?
And what do certificate buyers gain? The ability for their site to be revoked or expired and thus no longer work.
I’d like to be corrected.
squiggleblaz 3 hours ago [-]
A certificate authority is an organisation that pays good money to make sure that their internet connection is not being subjected to MITMs. They put vastly more resources into that than you can.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
This is a great question. If we don't have CAs, how do we know if it is OK to trust a cert?
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
thayne 7 hours ago [-]
In an ideal world we could just trust people not to be malicious, and there wouldn't be any need to encrypt traffic at all.
WJW 11 hours ago [-]
How relevant is that, since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.
klysm 9 hours ago [-]
CAs exist on the intersection of reality (far from ideal) and cryptography.
Stefan-H 11 hours ago [-]
What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.
grey-area 3 hours ago [-]
I honestly don’t know enough about it to have an opinion. I have vague thoughts that DNS is the weak point for identity anyway, so can't certs just live there instead? But I'm sure there are reasons (historical and practical).
ocdtrekkie 4 hours ago [-]
Are you aware of a single real-world, not theoretical, security breach caused by an unrevoked certificate that lived too long?
woodruffw 4 hours ago [-]
A real-world example of this would be Heartbleed, where users rotated without revoking their previously compromised certificates[1].
How viable are TLS attacks? Assuming a signed private cert is compromised, you need network position or other means of redirecting traffic, no?
So for a bank, a private cert compromise is bad; for a regular low-traffic website, probably not so much?
delfinom 12 hours ago [-]
Realistically, how often are domains traded and suddenly put in legitimate use (that isn't some domain parking scam) that (1) and (2) are actual arguments? Lol
zamadatix 12 hours ago [-]
Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
Lammy 12 hours ago [-]
> but customers aren't able to respond on an appropriate timeline
Sounds like your concept of the customer/provider relationship is inverted here.
crote 12 hours ago [-]
No. The customer is violating their contract.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
luckylion 11 hours ago [-]
How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
detaro 11 hours ago [-]
> surely the big ones all can and do respond in less than 90 days?
LOL. Old-fashioned enterprises are the worst at "oh, no, can't do that, need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to not revoke a cert that, according to the rules for CAs, the CA had to revoke. (In the end they got a few days' extension, everyone grumbled, and the CA got told to please write their customer contracts more clearly, but the idea is out there, and nobody likes CAs doing things they are not supposed to, even if through external force.)
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
brazzy 9 hours ago [-]
Can you de-anonymize that event for me? Wasn't able to find it given the lack of unique keywords to search for.
The "end game" is mentioned explicitly in the article:
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
sitkack 12 hours ago [-]
Don't lower cert times also get people to trust certs that were created just for their session to MITM them?
That is the next step in nation state tapping of the internet.
woodruffw 10 hours ago [-]
I don't see why it would; the same basic requirements around CT apply regardless of certificate longevity. Any CA caught enabling this kind of MITM would be subject to expedient removal from browser root programs, but with the added benefit that their malfeasance would be self-healing over a much shorter period than was traditionally allowed.
ezfe 12 hours ago [-]
lol no? Shorter cert lifetimes still chain up to the root certificates that are already trusted. It is not a noticeable thing when browsing the web as a user.
A MITM cert would need to be manually trusted, which is a completely different thing.
Lammy 11 hours ago [-]
I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened. A CA could be backdoored but only “tapped” for some high-value target to diminish the chance of burning the access.
woodruffw 10 hours ago [-]
> I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened.
This is already the case; CT doesn't rely on your specific served cert being comparable with others, but all certs for a domain being monitorable and auditable.
(This does, however, point to a current problem: more companies should be monitoring CT than are currently.)
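The bar for basic monitoring is low. A naive sketch against crt.sh's JSON endpoint (treat the field names as assumptions about what that endpoint returns today); real monitoring would diff this against expected issuances and alert on anything surprising:
    import json
    import urllib.request

    domain = "example.com"  # placeholder
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        for entry in json.load(resp):
            # Unexpected issuers or names here are what you want to be paged about.
            print(entry["not_before"], entry["issuer_name"], entry["name_value"])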
roblabla 11 hours ago [-]
Well, the cert can still be compared to what's in the CT Log for this purpose.
sitkack 11 hours ago [-]
Yes, precisely.
notatoad 13 hours ago [-]
>unless I've missed some monetary/power advantage
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
michaelt 11 hours ago [-]
> Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours?
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
karlgkk 8 hours ago [-]
> Why not make it 30 seconds?
This is a ridiculous straw man.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP3 performance enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.
fs111 13 hours ago [-]
Load on the underlying infrastructure is a concern. The signing keys are all in HSMs and don't scale infinitely.
bob1029 12 hours ago [-]
How does cycling out certificates more frequently reduce the load on HSMs?
woodruffw 4 hours ago [-]
Much of the HSM load within a CA is OCSP signing, not subscriber cert issuance.
timmytokyo 11 hours ago [-]
It's all relative. A 47-day cycle increases the load, but a 48-hour cycle would increase it substantially more.
timewizard 13 hours ago [-]
If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.
pixl97 2 days ago [-]
Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.
plorkyeran 2 days ago [-]
This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.
tetha 13 hours ago [-]
Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications to reload certs like this. And we have some really ugly workarounds in place, because some applications place "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
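For a plain TLS listener, the reload itself doesn't need much. A sketch with Python's ssl module (paths and interval are placeholders) that rebuilds the SSLContext from disk so a renewed cert applies to new handshakes without a process restart:
    import socket
    import ssl
    import threading
    import time

    # Placeholder paths; in practice whatever your renewal tooling writes out.
    CERT_FILE = "/etc/certs/server-fullchain.pem"
    KEY_FILE = "/etc/certs/server.key"

    _lock = threading.Lock()
    _ctx = None

    def reload_context():
        # Build a fresh SSLContext from whatever is on disk right now.
        global _ctx
        new = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        new.load_cert_chain(CERT_FILE, KEY_FILE)
        with _lock:
            _ctx = new

    def reload_loop(interval=3600):
        # Re-read periodically; keep serving with the old context if loading fails.
        while True:
            time.sleep(interval)
            try:
                reload_context()
            except (OSError, ssl.SSLError):
                pass

    reload_context()
    threading.Thread(target=reload_loop, daemon=True).start()

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        while True:
            conn, _addr = listener.accept()
            with _lock:
                ctx = _ctx
            # Each connection is wrapped with the latest context, so a rotated
            # cert only affects new handshakes, never established sessions.
            tls_conn = ctx.wrap_socket(conn, server_side=True)
            tls_conn.close()  # real code hands this off to a worker instead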
donnachangstein 13 hours ago [-]
> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
kam 12 hours ago [-]
At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.
tetha 12 hours ago [-]
I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, it'll be known without me.
donnachangstein 11 hours ago [-]
I'm curious what requirement drove you to such an arbitrarily small TTL, other than "because we can" dick-measuring geekery.
I applaud you for sticking to your guns though.
tetha 2 hours ago [-]
At the end of the day, we were worried about exactly these issues - if an application has to reload certs once every 2 years, it will always end up a mess.
And the conventional wisdom for application management and deployments is - if it's painful, do it more. Like this, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.
And yes, some older applications that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that, and then ended up confused when this didn't work at all. Now it's fixed.
wbl 11 hours ago [-]
Why would the cert renewal be manual?
alexchamberlain 10 hours ago [-]
That's how it used to be done. Buy a certificate with a 2 year expiry and manually install it on your server (you only had 1; it was fine).
progmetaldev 7 hours ago [-]
I can tell you that there are still quite a few of us out here doing the once-a-year manual renewal. I have suggested a plan to use Let's Encrypt with automated renewal, but some companies are using old technology and/or old processes that "seniors" are comfortable with since they understand them, and suggesting a better process isn't always looked upon favorably (especially if your job relies on the manual renewal process as one of those cryptic things only IT can do).
tptacek 11 hours ago [-]
Some of this rhymes with Colm MacCárthaigh's case against mTLS.
This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).
Except there are no APIs to rotate those. The infrastructure doesn't exist yet.
And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.
Microsoft has some technology where, alongside these tokens, there is also a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.
parliament32 11 hours ago [-]
We've also felt the pain for OAuth secrets. Current mandates for us are 6 months.
Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.
Browsers aren't designed for internal use, though. They insist on HTTPS for various things that are intranet-only, such as some browser APIs, PWAs, etc.
akerl_ 19 hours ago [-]
As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.
franga2000 14 hours ago [-]
You use the term "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations, or even individuals want to have some internal services, and having to "set up" a CA and add the certs to all client devices just to access some app on the local network is absurd!
akerl_ 4 hours ago [-]
The average small business in 2025 is not running custom on-premise infrastructure to solve their problems. Small businesses are paying vendors to provide services, sometimes in the form of on-premise appliances but more often in the form of SaaS offerings. And I'm happy to have the CAB push those vendors to improve their TLS support via efforts like this.
Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up LetsEncrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.
JimBlackwood 14 hours ago [-]
I don't think it's absurd and personally it feels easier to setup an internal CA than some of the alternatives.
In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
Going a few steps further, setting up something like Hashicorp Vault is not hard and regardless of org size; you need to do secret distribution somehow.
lucb1e 13 hours ago [-]
> it's a few commands to generate a CA
My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints
JimBlackwood 13 hours ago [-]
> My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
This is fair. I assumed all small businesses would be tech startups, haha.
Retric 13 hours ago [-]
The vast majority of companies operate just fine without understanding anything about building codes or vehicle repair etc.
Paying experts (Ed: setting up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.
lucb1e 13 hours ago [-]
Paying an expert to come set up a local CA seems rather silly when you'd normally outsource operating one to the people who professionally run a CA
Retric 12 hours ago [-]
You’d only need internal certificates if someone had set up internal infrastructure. Expecting that person to do a good job means having working certificates be they internal or external.
nilslindemann 12 hours ago [-]
> Paying experts is a perfectly viable option
Congrats for securing your job by selling the free internet and your soul.
Retric 12 hours ago [-]
I’m not going to be doing this, but I care about knowledge being free not labor or infrastructure.
If someone doesn’t want to learn then nobody needs to help them for free.
disiplus 13 hours ago [-]
We have this. It's not trivial for a small team, and you have to deal with stuff like conda envs coming with their own set of certs, so you have to take care of that. It's better than the alternative of fighting with browsers, but still, it's not without extra complexity.
JimBlackwood 13 hours ago [-]
For sure, nothing is without extra complexity. But, to me, it feels like additional complexity for whoever does DevOps (where I think it should be) and takes away complexity from all other users.
msie 13 hours ago [-]
Wow, amazing how out of touch this is.
JimBlackwood 13 hours ago [-]
Can you explain? I don't see why
Henchman21 12 hours ago [-]
You seem to think every business is a tech startup and is staffed with competent engineers.
Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.
JimBlackwood 11 hours ago [-]
> You seem to think every business is a tech startup and is staffed with competent engineers.
If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?
> “Out of touch” is apt and you should probably reflect on that at length.
That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.
acedTrex 14 hours ago [-]
Sounds like there is a market for a browser that is intranet-only and doesn't do various checks.
jillyboel 14 hours ago [-]
Good luck getting that distributed everywhere including the iOS app store and random samsung TVs that stopped receiving updates a decade ago.
Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.
JimBlackwood 14 hours ago [-]
Why would you want this? Then on production, you'll run into issues you did not encounter on staging because you skipped various checks.
lxgr 3 hours ago [-]
Yeah, but essentially every home user can only do so after jumping through extremely onerous hoops (many of which also decrease their security when browsing the public web).
I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.
stefan_ 14 hours ago [-]
Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.
jillyboel 14 hours ago [-]
Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
mysteria 12 hours ago [-]
I actually do this for my homelab setup. Everyone basically gets the local CA installed for internal services as well as a client cert for RADIUS EAP-TLS and VPN authentication. Different devices are automatically routed to the correct VLAN and the initial onboarding doesn't take that long if you're used to the setup. Guests are issued a MSCHAP username and password for simplicity's sake.
For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.
jillyboel 10 hours ago [-]
Personally I'd absolutely refuse to install your CA as your guest. That would give you far too much power to mint certificates for sites you have no business snooping on.
mysteria 9 hours ago [-]
Guests don't install my CA as they don't need to access my internal services. If I wanted to set up an internal web server that's accessible to both guests and family members I'd use Let's Encrypt for that.
crote 12 hours ago [-]
Rolling out LetsEncrypt for a self-hosted Nextcloud instance is absolutely trivial. There are many reasons corporations might want to roll their own internal CA, but simple homelab scenarios like these couldn't be further from them.
GabeIsko 9 hours ago [-]
Would you suggest something? I do this, but I'm not sure I would call maintaining my setup trivial. I got in trouble recently because my domain registrar deprecated an API call, and that ended up being the straw that broke the camel's back in my automation setup. Or at least it did 90 days later.
andrewmackrodt 6 hours ago [-]
I'm not a nextcloud user but have a homelab and use traefik for my reverse proxy, which is configured to use letsencrypt DNS challenges to issue wildcard certificates. I use Cloudflare's free plan to manage DNS for my domains, although the registrar is different. This has been a set-it-and-forget-it solution for the last several years.
jillyboel 10 hours ago [-]
Sure, which is what I do. But the point is that this is very much internal use and rolling my own CA for it is a nightmare.
richardwhiuk 14 hours ago [-]
Why are your parents on a corporations internal network?
jillyboel 14 hours ago [-]
What corporation are you talking about? Have you never heard of someone self hosting software for their family and friends? You know, an intranet.
smw 13 hours ago [-]
Just buy a domain and use dns verification to get real certs for whatever internal addresses you want to serve? Caddy will trivially go get certs for you with one line of config
Or cheat and use tailscale to do the whole thing.
DiggyJohnson 11 hours ago [-]
Self-hosting doesn't usually imply connecting over a private network.
ClumsyPilot 12 hours ago [-]
> Corporations can run an internal CA
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
rlpb 17 hours ago [-]
Indeed they are compatible. However, HTTPS is often unnecessary, particularly in a smaller organisation, but browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to this use in those scenarios.
freeopinion 15 hours ago [-]
If only browsers could understand something besides HTTPS. Somebody should invent something called HTTP that is like HTTPS without certificates.
recursive 15 hours ago [-]
Cool. And when they invent it, it should have browser parity with respect to which API features and capabilities are available, so that we don't need to use HTTPS just so things like `getUserMedia` work.
There’s enough APIs limited to secure contexts that many internal apps become unfeasible.
SoftTalker 15 hours ago [-]
Modern browsers default to trying https first.
tedivm 15 hours ago [-]
I really don't see many scenarios where HTTPS isn't needed for at least some internal services.
donnachangstein 15 hours ago [-]
Then, I'm afraid, you work in a bubble.
A static page that hosts documentation on an internal network does not need encryption.
The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.
Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.
progmetaldev 7 hours ago [-]
Unfortunately, for a small business, there are many software packages that can cause all sorts of havoc on an internal network, and are simple to install. Even just ARP cache poisoning on an internal network can force everyone offline, while even a reboot of all equipment can not immediately fix the problem. A small company that can't handle setting up a CA won't ever be able to handle exploits like this (and I'm not saying that a small company should be able to setup their own CA, just commenting on how defenseless even modern networks are to employees that like to play around or cause havoc).
Of course, then there are the employees who could just intercept HTTP requests, and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download and install, then point and click, with no knowledge. Seems like there is a market for simple and cheap solutions for internal networks, for small business. I could see myself making quite a bit off it, which I did in the mid-2000's, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on its own even with an automated solution.
imroot 15 hours ago [-]
What overhead?
Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.
The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
donnachangstein 15 hours ago [-]
> What overhead?
[proceeds to describe a bunch of new infrastructure and automation you need to setup and monitor]
So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.
tedivm 12 hours ago [-]
I'm afraid you didn't read my response. I explicitly said I can't see a case where it isn't needed for some services. I never said it was required for every service. Once you've got it setup for one thing it's pretty easy to set it up everywhere (unless you're manually deploying, which is an obvious problem).
brendoelfrendo 15 hours ago [-]
Sure it does! You may not need confidentiality, but what about integrity?
donnachangstein 15 hours ago [-]
It's a very myopic take.
Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Just because something is possible in theory doesn't make it likely or worth the time invested.
You can put 8 locks on the door to your house but most people suffice with just one.
Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?
But it's not really a concern worth investing resources into for most.
growse 14 hours ago [-]
> Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Ah, the "both me and my attackers agree on what's important" fallacy.
What if they modify the man page response to include drive-by malware?
therealpygon 17 hours ago [-]
And it is even more trivial in a small organization to install a Trusted Root for internally signed certificates on their handful of machines. Laziness isn’t a browser issue.
rlpb 14 hours ago [-]
How is that supposed to work for an IoT device that wants to work out of the box using one of these HTTPS-only browser APIs?
metanonsense 13 hours ago [-]
I am not saying I'd do this, but in theory you could deploy a single reverse proxy in front of your HTTP-only devices and restrict traffic accordingly.
Spooky23 13 hours ago [-]
Desired by who?
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
crote 12 hours ago [-]
CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
Spooky23 10 hours ago [-]
Silly me, I’m just a customer, incapable of making my own risk assessments or prioritizing my business processes.
You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum, were draconian and over-the-top responses to a minor risk. This industry trade group is out of control.
End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.
christina97 15 hours ago [-]
What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…
ozim 1 days ago [-]
Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.
Non-browser things usually don’t care even if a cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
akerl_ 19 hours ago [-]
The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.
Why would browsers "most likely" enforce this change for internal CAs as well?
ryao 1 days ago [-]
Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.
That said, it would be really nice if they supported DANE so that websites do not need CAs.
nickf 24 hours ago [-]
'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.
rsstack 2 days ago [-]
> I've seen most of them moving to internally signed certs
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
pavon 2 days ago [-]
Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.
pkaye 2 days ago [-]
What about something like step-ca? I got the free version working easily on my home network.
Not everything that's easy to do on a home network is easy to do on a corporate network. The biggest problem with corporate CAs is how to issue new certificates for a new device in a secure way, a problem which simply doesn't exist on a home network where you have one or at most a handful of people needing new certs to be issued.
bravetraveler 1 days ago [-]
> A lot more work
'ipa-client-install' for those so motivated. Certificates are literally one of many things that are part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
Spivak 1 days ago [-]
I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.
bravetraveler 23 hours ago [-]
Why not? The actual clouds do.
I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.
Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.
My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.
The devices are already managed; you've deployed them to your fleet.
No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!
Don't complain to me about 'your' choices. Self-selected problem if I've heard one.
Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.
Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.
Literal Clouds do this, why can't 'you'?
Spivak 22 hours ago [-]
Adding machines to a domain is far far more common on bare-metal deployments which is why I said "cloud native." Adding a bunch of cloud VMs to a domain is not very common in my experience because they're designed to be ephemeral and thrown away and IPA being stateful isn't about that.
You're managing your machine deployments with something, so of course you just use that to include your cert. That isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.
bravetraveler 21 hours ago [-]
To be honest, with "cloud-init" and the ability for SSSD to send record updates, I could make a worthwhile cloudy deployment.
To your point, people don't, but it's a perfectly viable path.
Containers/kubernetes, that's pipeline city, baby!
maccard 13 hours ago [-]
I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP
lokar 15 hours ago [-]
I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs
SoftTalker 15 hours ago [-]
Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.
lokar 9 hours ago [-]
In a production environment with customer data?
SoftTalker 5 hours ago [-]
No for internal stuff.
formerly_proven 14 hours ago [-]
I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.
Does your hosted service know the private keys or are they all on the client?
bigp3t3 14 hours ago [-]
I'd set that up the second it becomes available if it were a standard protocol.
Just went through setting up internal certs on my switches -- it was a chore to say the least!
With a Cert Template on our internal CA (windows), at least we can automate things well enough!
formerly_proven 14 hours ago [-]
Yeah it's almost weird it doesn't seem to exist, at least publicly. My megacorp created their own protocol for this purpose (though it might actually predate ACME, I'm not sure), and a bunch of in-house people and suppliers created the necessary middlewares to integrate it into stuff like cert-manager and such (basically everything that needs a TLS certificate and is deployed more than thrice). I imagine many larger companies have very similar things, with the only material difference being different organizational OIDs for the proprietary extension fields (I found it quite cute when I learned that the corp created a very neat subtree beneath its organization OID).
jiggawatts 21 hours ago [-]
I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
shlant 22 hours ago [-]
this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.
SoftTalker 15 hours ago [-]
Yep letsencrypt is great for public-facing web servers but for stuff that isn't a web server or doesn't allow outside queries none of that "easy" automation works.
procaryote 13 hours ago [-]
The ACME DNS challenge works for things that aren't webservers.
For the other case, perhaps renew the cert on a host that is allowed to make outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't allowed outside queries.
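For illustration, a minimal sketch of that propagation step in Python, assuming the renewal host has key-based SSH access to the restricted host; the paramiko dependency is real, but the hostnames, username and file paths are placeholders:

    # Sketch: copy a freshly renewed cert + key from the renewal host to an
    # internal host that cannot reach the outside world. Paths and hostnames
    # are placeholders; error handling and the reload step are site-specific.
    import paramiko

    HOST = "internal-box.example.internal"   # hypothetical restricted host
    FILES = [
        ("/etc/letsencrypt/live/example.com/fullchain.pem", "/etc/ssl/local/fullchain.pem"),
        ("/etc/letsencrypt/live/example.com/privkey.pem", "/etc/ssl/local/privkey.pem"),
    ]

    client = paramiko.SSHClient()
    client.load_system_host_keys()           # rely on known_hosts, not blind trust
    client.connect(HOST, username="certpush", key_filename="/root/.ssh/id_ed25519")

    sftp = client.open_sftp()
    for src, dst in FILES:
        sftp.put(src, dst)                   # push the renewed material
    sftp.close()

    # Reload whatever is serving TLS so it picks up the new cert.
    client.exec_command("systemctl reload nginx")
    client.close()

Run that on the renewal host after each successful issuance and the restricted host never needs outbound access.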
Yeroc 11 hours ago [-]
Last time I checked there's no standardized API/protocol to deal with populating the required TXT records on the DNS side. This is all fine if you've out-sourced your DNS services to one of the big players with a supported API but if you're running your own DNS services then doing automation against that is likely not going to be so easy!
I don't have an API or any permission to add TXT records to my DNS. That's a support ticket and has about a 24-hour turnaround best case.
Yeroc 11 hours ago [-]
I was just digging into this a bit and discovered acme.sh supports something called DNS alias mode (https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...) which lets you add a static CNAME record on your core domain that delegates the ACME challenge to a second domain. This would allow you to set up a second domain with a DNS API (if permitted by company policy!)
JackSlateur 11 hours ago [-]
You have people paid to create DNS records? Haha
SoftTalker 5 hours ago [-]
Yes we do. That’s not the only thing they do of course.
dijit 10 hours ago [-]
It's not practical to give everyone write access to the google.com root zone.
Someone will fuck up accidentally, so production zones are usually gated somehow, sometimes with humans instead of pure automata.
JackSlateur 10 hours ago [-]
Why not?
Giving write access does not mean giving unrestricted write access
Also, another way (which I built at a previous company) is to create a simple certificate provider (API or whatever), integrated with whatever internal authentication scheme you are using, that is able to sign CSRs for you. An LE proxy, as you might call it
procaryote 11 hours ago [-]
That's not great, sorry to hear
immibis 11 hours ago [-]
Is this just because your DNS is with some provider, or is it something that leads from your organizational structure?
If it's just because your DNS is at a provider, you should be aware that it's possible to self-host DNS.
SoftTalker 5 hours ago [-]
It’s internal policy. We do run our own DNS.
bsder 11 hours ago [-]
And may the devil help you if you do something wrong and accidentally trip LetsEncrypt's rate limiting.
You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.
JackSlateur 11 hours ago [-]
Haa, yes! We have that too! Accepted warnings in browsers! curl -k! verify=False! A glorious future for the hacking industry!
xienze 2 days ago [-]
> but internal services with have internal CA signed certs with long expire times because of the number of crappy apps that make using certs a pain in the ass.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
pixl97 2 days ago [-]
Unless they are web/tech companies they aren't doing that. Banks, finance, large manufacturing are all terminating at F5's and AVI's. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.
tikkabhuna 13 hours ago [-]
F5s don't support ACME, which has been a pain for us.
cpach 11 hours ago [-]
It might be possible to run an ACME client on another host in your environment. (IMHO, the DNS-01 challenge is very useful for this.) Then you can (probably) transfer the cert+key to BIG IP, and activate it, via the REST API.
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
Obviously it would be much better if BIG IP had native support for ACME. And F5 might implement it some day, but I wouldn’t hold my breath.
For some companies, it might be worth it to throw away a $100000 device and buy something better. For others it might not be worth it.
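To make that concrete, here is a rough Python sketch of the push step via iControl REST. The endpoint paths and payload keys are from memory and should be checked against F5's documentation for your TMOS version; the host, credentials and file names are placeholders:

    # Rough sketch only: install an externally obtained cert/key on a BIG-IP
    # through iControl REST. Verify the endpoints against F5 docs before
    # relying on this; credentials and names below are made up.
    import requests

    BIGIP = "https://bigip.example.internal"
    AUTH = ("admin", "secret")               # use a proper service account

    def upload(local_path, remote_name):
        data = open(local_path, "rb").read()
        requests.post(
            f"{BIGIP}/mgmt/shared/file-transfer/uploads/{remote_name}",
            data=data,
            headers={"Content-Type": "application/octet-stream",
                     "Content-Range": f"0-{len(data) - 1}/{len(data)}"},
            auth=AUTH, verify=False,          # mgmt interface is often self-signed
        ).raise_for_status()

    upload("fullchain.pem", "example.com.crt")
    upload("privkey.pem", "example.com.key")

    # Install the uploaded files (mirrors "tmsh install sys crypto cert/key
    # ... from-local-file ...").
    for kind, fname in (("cert", "example.com.crt"), ("key", "example.com.key")):
        requests.post(
            f"{BIGIP}/mgmt/tm/sys/crypto/{kind}",
            json={"command": "install", "name": "example.com",
                  "from-local-file": f"/var/config/rest/downloads/{fname}"},
            auth=AUTH, verify=False,
        ).raise_for_status()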
EvanAnderson 11 hours ago [-]
Exactly. According to posters here you should just throw them away and buy hardware from a vendor who does. >sigh<
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill-thought-out initiative by bureaucrats working in companies that build their own infrastructure (in their ivory towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.
JackSlateur 11 hours ago [-]
F5 is the pain.
xienze 2 days ago [-]
Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.
hedora 1 days ago [-]
Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.
qmarchi 15 hours ago [-]
Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with an internal CA.
cryptonym 2 days ago [-]
You now have to build and self-host a complete CA/PKI.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
> Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
stackskipton 2 days ago [-]
You could always ask for a wildcard for an internal subdomain and use that instead, so you leak the internal subdomain but not individual hosts.
pixl97 2 days ago [-]
I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
Pxtl 12 hours ago [-]
At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
greatgib 2 days ago [-]
As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will have the privilege of having their own CA certificate with whatever validity period they want...
And in term of security, I think that it is a double edged sword:
- everyone will be so used to certificates changing all the time, and no certificate pinning anymore, so the day when China, a company or whoever serves you a fake certificate, you will be less able to notice it
- Instead of closed, read-only systems that only have to connect outside once a year or so to update their certificates, all machines around the world will now have to allow quasi-permanent connections to certificate servers to keep updating all the time. If the Digicert or Letsencrypt servers, or the "cert updating client", are ever rooted or have a security issue, most servers around the world could be compromised in a very short time.
As a side note, I'm totally laughing at the following explanation in the article:
47 days might seem like an arbitrary number, but it’s a simple cascade:
- 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, but 1 month, + 1/2 month, + 1 day are not arbitrary values...
lolinder 15 hours ago [-]
> everyone will be so used to certificates changing all the time, and no certificate pinning anymore, so the day when China, a company or whoever serves you a fake certificate, you will be less able to notice it
I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
gruez 2 days ago [-]
>As I said in another thread, basically that will kill any possibility to do your own CA for your own subdomain.
like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.
greatgib 2 days ago [-]
Let's suppose that I'm a competitor of Google and Amazon, and I want to have my Public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...
tptacek 15 hours ago [-]
If you want to be a public root CA, so that every browser in the world needs to trust your keys, you can do all the lifting that the browsers are asking from public CAs.
gruez 2 days ago [-]
Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.
vlovich123 14 hours ago [-]
Amazon did this the absolute worst way - all customers share the same flat namespace for S3 buckets which limits the names available and also makes the bucket names discoverable. Did it a bit more sanely and securely at Cloudflare where it was namespaced to the customer account, but that required registering a wildcard certificate per customer if I recall correctly.
zamadatix 11 hours ago [-]
The only consideration I can think is public wildcard certificates don't allow wildcard nesting so e.g. a cert for *.example.com doesn't offer a way for the operator of example.com to host a.b.example.com. I'm not sure how big of a problem that's really supposed to be though.
anacrolix 15 hours ago [-]
No. Chrome flat out rejects certificates that expire more than 13 months away, last time I tried.
nickf 24 hours ago [-]
Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say.
Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.
szszrk 1 hours ago [-]
Ok, but what's the alternative?
Support for cert and CA pinning is in a state that is much better than I thought it would be, at least for mobile apps. I'm impressed by Apple's ATS.
Yet, for instance, you can't pin a CA for any domain, you always have to provide it up front to audit, otherwise your app may not get accepted.
Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?
We'll keep abusing PKI for those use cases.
lucb1e 13 hours ago [-]
> 47 [is?] arbitrary, but 1 month, + 1/2 month, + 1 day are not arbitrary values...
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
> everyone will be so used to certificates changing all the time, and no certificate pinning anymore
Browser certificate pinning has been deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning: DNS CAA records, and monitoring CT logs.
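For the CAA option, the record itself is just DNS data, and it's easy to watch from a monitoring job. A small sketch using the dnspython package (the domain is a placeholder; a real monitor would alert when the set of authorized CAs changes, and would handle the no-CAA-record case):

    # List which CAs are authorized to issue for a domain via its CAA records.
    import dns.resolver

    domain = "example.com"                    # placeholder
    for rr in dns.resolver.resolve(domain, "CAA"):
        # e.g. 0 issue "letsencrypt.org"
        print(rr.flags, rr.tag.decode(), rr.value.decode())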
yjftsjthsd-h 2 days ago [-]
If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?
ghusto 2 days ago [-]
I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
tptacek 2 days ago [-]
There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.
SoftTalker 14 hours ago [-]
Trust On First Use is the normal thing for these situations.
asmor 13 hours ago [-]
TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.
superkuh 13 hours ago [-]
You're right most of the time. But there are two webs. And it's only in the latter (far more common) case that things like that matter.
There is the web as it always has been on http/1.1 that is a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is modern http/2 http/3 CA TLS only web hosted as a service on some other website or cloud; mostly to do serious business and make money. The modern web's CA TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol but instead of a new protocol just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
TheJoeMan 11 hours ago [-]
Not to mention the use of web browsers for configuring non-internet devices! For example, managing a router from its LAN-side built-in webserver: look how many warnings you have to click through in Firefox nowadays. Or hooking an iPhone to an IoT device: the iPhone hates that there's no "internet" and constantly tries to drop the WiFi.
steventhedev 2 days ago [-]
There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.
tptacek 2 days ago [-]
MITM scenarios are more common on the 2025 Internet than passive attacks are.
steventhedev 2 days ago [-]
MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.
orev 2 days ago [-]
You don’t need to BGP hijack to perform a MITM attack. An HTTPS proxy can be easily and transparently installed at the Internet gateway. Many ISPs were doing this with HTTP to inject their own ads, and only the move to HTTPS put an end to it.
steventhedev 1 days ago [-]
Yes. MITM attacks do happen in reality. But by their nature they require active participation, which for practical purposes means leaving some sort of trail. More importantly, by decoupling confidentiality from authenticity, you can easily prevent eavesdropping attacks at scale.
Which for some threat models is sufficiently good.
tptacek 1 days ago [-]
This thread is dignifying a debate that was decisively resolved over 15 years ago. MITM is a superset of the eavesdropper adversary and is the threat model TLS is designed to resist.
It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.
pyuser583 1 days ago [-]
As someone who had to set up monitoring software for my kids, I can tell you MITM are very real.
It’s how I know what my kids are up to.
It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.
Identity really is security.
steventhedev 1 days ago [-]
TLS chose the threat model that includes MITM - there's no good reason that should ever change. All I'm arguing is that having a middle ground between http and https would prevent eavesdropping, and that investment elsewhere could have been used to mitigate the MITM attacks (to the benefit of all protocols, even those that don't offer confidentiality). Instead we got OpenSSL and the CA model with all its warts.
More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with "my threat model is better than yours", or saying my threat model is incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPsec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.
simiones 19 hours ago [-]
It is literally impossible to securely talk to a different party over an insecure channel unless you have a shared key beforehand or use a trusted third-party. And since the physical medium is always inherently insecure, you will always need to trust a third party like a CA to have secure communications over the internet. This is not a limitation of some protocol, it's a fundamental law of nature/mathematics (though maybe we could imagine some secure physical transport based on entanglement effects in some future world?).
So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.
YetAnotherNick 14 hours ago [-]
The key could be shared in DNS records or could even literally be in the domain name like Tor. Although each approach has its pros and cons.
tptacek 11 hours ago [-]
On this arm of the thread we're litigating whether authentication is needed at all, not all the different ways authentication can be provided. I'm sure there's another part of the thread somewhere else where people are litigating CAs vs Tor.
BobbyJo 2 days ago [-]
What does their commonality have to do with the use cases where they aren't viable?
panki27 2 days ago [-]
How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?
Ajedi32 2 days ago [-]
It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)
notTooFarGone 9 minutes ago [-]
There are enough examples where this is just a bogus scenario. There are a lot of IoT cases that fall apart anyway if the attacker is able to do a MITM attack.
For example, if the MITM requires physical access to the machine, you'd also have to cover physical security first. As long as that is not the case, who cares about some connection hijack.
And if the data you are communicating isn't worth encrypting in the first place, but has to be encrypted because of regulation, you are just doing the dance without it being worth it.
oconnor663 2 days ago [-]
They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.
2mlWQbCK 14 hours ago [-]
You can have TLS with TOFU, like in the Gemini protocol. At least then, in theory, the MITM has to happen the first time you connect to a site. There is also the possibility of out-of-band confirmation of a certificate's fingerprint if you want to be really sure that some Gemini server is the one you hope it is.
panki27 2 days ago [-]
You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?
Ajedi32 2 days ago [-]
Yes, Mallory just pretends to be Alice to Bob and pretends to be Bob to Alice, and they both establish an encrypted connection to Mallory using Diffie-Hellman keys derived from his secrets instead of each other's. Mallory has keys for both of their separate connections at this point and can do whatever he wants. That's why TLS only uses Diffie-Hellman for perfect forward secrecy after Alice has already authenticated Bob. Even if the authentication key gets cracked later Mallory can't reach back into the past and MITM the connection retroactively, so the DH-derived session key remains protected.
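To make the Mallory scenario concrete, here is a small sketch with the pyca/cryptography X25519 primitives: both victims complete a perfectly good key exchange, they just complete it with Mallory instead of each other, and nothing in plain DH can tell them so. The names are illustrative only:

    # Unauthenticated Diffie-Hellman: Mallory terminates both sides.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    mallory_a = X25519PrivateKey.generate()   # key Mallory presents to Alice
    mallory_b = X25519PrivateKey.generate()   # key Mallory presents to Bob

    # Alice thinks the public key she received came from Bob, but it is
    # Mallory's; likewise for Bob. The math works out fine either way.
    alice_shared = alice.exchange(mallory_a.public_key())
    bob_shared = bob.exchange(mallory_b.public_key())

    # Mallory derives both session secrets and can decrypt and re-encrypt
    # traffic in each direction at will.
    assert mallory_a.exchange(alice.public_key()) == alice_shared
    assert mallory_b.exchange(bob.public_key()) == bob_shared

Authentication (a certificate chain, a pre-shared key, a pinned fingerprint) is precisely the thing that would let Alice notice the substitution.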
oconnor663 15 hours ago [-]
If we know each other's DH public key in advance, then you're totally right, DH is secure over an untrusted network. But if we don't know each other's public keys, we have to get them over that same network, and DH can't protect us if the network lies about our public keys. Solving this requires some notion of "identity", i.e. some way to verify that when I say "my public key is abc123" it's actually me who's saying that. That's why it's hard to have privacy without identity.
simiones 2 days ago [-]
Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.
Gigachad 10 hours ago [-]
Double especially if it's the ISP or government involved. They can just automatically MITM and reencrypt every connection if there is no identity checks.
gruez 2 days ago [-]
>Connections never start as encrypted, they always start as plain text
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
TCP SYN is not encrypted, and neither is Client Hello. Even with TCP cookies and TLS session resumption, the initial packet is still unencrypted, and can be intercepted.
However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.
EE84M3i 1 days ago [-]
Yes but this still depends on identity. It's not unauthenticated.
ekr____ 14 hours ago [-]
The situation is actually somewhat more complicated than this.
ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.
GP means unencrypted at the wire level. ClientHelloOuter is still unencrypted even with HSTS.
jiveturkey 1 days ago [-]
Chrome started doing https-first since April 2021 (v90).
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
simiones 19 hours ago [-]
That is irrelevant. All TCP connections start as a TCP SYN, that can be trivially intercepted and MITMd by anyone. So, if you don't have an out-of-band reason to trust the server certificate (such as trust in the CA that PKI defines, or knowing the signature of the server certificate), you can never be sure your TLS session is secure, regardless of the level of encryption you're using.
gruturo 12 hours ago [-]
After the TCP handshake, the very first payload will be the HTTPS negotiation - and even if you don't use encrypted client hello / encrypted SNI, you still can't spoof it because the certificate chain of trust will not be intact - unless you somehow control the CAs trusted by the browser.
With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)
Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?
let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.
jiveturkey 5 hours ago [-]
well yes! that is the entire point / methodology of TLS. Because you have a trust anchor, you can be sure that at the app layer the connection is "secure".
of course the L3/L4 can be (non) trivially intercepted by anyone, but that is exactly what TLS protects you against.
if simple L4 interception were all that is required, enterprises wouldn't have to install a trust root on end devices, in order to MITM all TLS connections.
the comment you were replying to is
> How is an attacker going to MITM an encrypted connection they don't have the keys for
of course they can intercept the connection, but they can't MITM it in the sense that MITM means -- read the communications. the kind of "MITM" / interception that you are talking about is simply what routers do anyway!
jchw 2 days ago [-]
I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)
*I mistakenly wrote "certificate" here initially. Sorry.
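For what it's worth, the "insecure but encrypted" TOFU idea is easy to express in code. A minimal TLS sketch in Python, with the pin file location and host as placeholders; a real client would also need key-rotation handling and a friendlier failure mode than an exception:

    # Trust-on-first-use for a TLS endpoint: remember the server certificate's
    # fingerprint on first contact, refuse to proceed if it later changes.
    import hashlib, json, socket, ssl
    from pathlib import Path

    PIN_FILE = Path("~/.config/tofu-pins.json").expanduser()   # placeholder

    def connect_tofu(host, port=443):
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE       # no CA validation: encryption only
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        fp = hashlib.sha256(der).hexdigest()
        if host not in pins:
            pins[host] = fp                   # first use: trust and remember
            PIN_FILE.write_text(json.dumps(pins))
        elif pins[host] != fp:
            raise ssl.SSLError(f"certificate for {host} changed; possible MITM")
        return fp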
tptacek 2 days ago [-]
SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.
jchw 2 days ago [-]
I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so it has to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
tptacek 2 days ago [-]
I don't understand any of this. If you want TOFU for TLS, just use self-signed certificates. That makes sense for your own internal stuff. For good reason, the browser vendors aren't going to let you do it for public resources, but that doesn't matter for your use case.
jchw 1 days ago [-]
Self-signed certificates have a terrible UX and worse security; browsers won't remember the trusted certificate so you'd have to verify it each time if you wanted to verify it.
In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.
tptacek 1 days ago [-]
Just add the self-signed certificate. It's literally a TOFU system.
jchw 1 days ago [-]
But again, you then get (much) worse UX than plaintext HTTP, it won't even remember the certificate. The thing that makes TOFU work is that you at least only have to verify the certificate once. If you use a self-signed certificate, you have to allow it every session.
A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.
tptacek 1 days ago [-]
Yes, it will.
jchw 14 hours ago [-]
I checked and you seem to be correct, at least for Firefox and Chromium. I tried using:
and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.
I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.
I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.
Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.
To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)
tptacek 11 hours ago [-]
Every pentester that has ever used Burp (or, for the newcomers, mitmproxy) has solved this problem for themselves. My feeling is that this is not a new thing.
arccy 2 days ago [-]
ssh server certificates should not be TOFU; the point of SSH certs is that you can trust the signing key.
TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh vs tls.
tptacek 2 days ago [-]
Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.
jchw 2 days ago [-]
I just typed the wrong thing, fullstop. I meant to say server keys; fixed now.
Also, I agree that TOFU in its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
pabs3 2 days ago [-]
You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.
gruez 2 days ago [-]
>I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
jchw 2 days ago [-]
I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.
hedora 1 days ago [-]
TOFU is not less secure than using a certificate authority.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
tptacek 1 days ago [-]
TOFU is less secure than using a trust anchor.
hedora 1 days ago [-]
That’s only true if you operate the trust anchor (possible) and it’s not an attack vector (impossible).
For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.
Alternatively, you could manually verify + pin certs after first use.
tptacek 1 days ago [-]
There are a couple of these concepts --- TOFU (key continuity) is one, PAKEs are another, pinning a third --- that sort of float around and captivate people because they seem easy to reason about, but are (with the exception of Magic Wormhole) not all that useful in the real world. It'd be interesting to flesh out the complete list of them.
The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.
hedora 18 hours ago [-]
There are ~ 200 entries in my password manager. Maybe 25 are important. Pinning their certs would meaningfully reduce the transport layer attack surface for those accounts.
tptacek 11 hours ago [-]
Yes, these ideas bubble around because they all seem reasonable on their face. I was a major fan of pinning!
IshKebab 14 hours ago [-]
I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
saurik 13 hours ago [-]
When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
woodruffw 13 hours ago [-]
> I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
tikkabhuna 12 hours ago [-]
But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection; you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.
IshKebab 11 hours ago [-]
They could still tell the user to be careful without authentication.
He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.
Ajedi32 2 days ago [-]
In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.
xyzzy123 2 days ago [-]
Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
Ajedi32 2 days ago [-]
It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?
xyzzy123 2 days ago [-]
True.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
yjftsjthsd-h 2 days ago [-]
If you really don't care, sometimes you can just go plaintext HTTP. I do this for some internal things that are accessed over VPN links. Of course, that only works if you're not doing anything that browsers require HTTPS for.
Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.
smw 13 hours ago [-]
Or just run tailscale and let it take care of the certs for you. I hate to sound like a shill, but damn does it make it easier.
akerl_ 19 hours ago [-]
It seems like you have two pretty viable options:
1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.
2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.
I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.
arccy 2 days ago [-]
at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.
xyzzy123 2 days ago [-]
Indeed and the system is practically foolproof because the government cannot take over DNS records, influence CAs, compromise cloud infrastructure / hosting, or rubber hose the counter-party to your communications.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
pizzafeelsright 14 hours ago [-]
Seems logical.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
lucb1e 13 hours ago [-]
What? I work in this field and I have no idea what you mean. (I get the abbreviations like authz and pk, but not how "encrypting everything" and "posting links" is supposed to remove the need for authentication)
mannyv 14 hours ago [-]
All our door locks suck, but everyone has a door lock.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
panki27 2 days ago [-]
Isn't this exactly the reason why LetsEncrypt was brought to life?
silverwind 1 days ago [-]
I agree, there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.
ryao 1 days ago [-]
How about TLS without CAs? See DANE. If only web browsers would support it.
pornel 14 hours ago [-]
DANE is a TLS with too-big-to-fail CAs that are tied to the top-level domains they own, and can't be replaced.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.
immibis 13 hours ago [-]
Makes it trivial for your DNS provider to MITM you, and you can't even use certificate transparency to detect it.
charcircuit 2 days ago [-]
Having them always coupled disincentivizes bad ISPs from MITMing the connection.
Vegenoid 1 days ago [-]
Isn't identity the entire point of certificates? Why use certificates if you only care about encryption?
ryao 1 days ago [-]
If web browsers supported DANE, we would not need CAs for encryption.
Avamander 13 hours ago [-]
DNSSEC is just a shittier PKI with CAs that are too big to ever fail.
immibis 13 hours ago [-]
It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them. It doesn't add any security to have PKI separate from DNS.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
philsnow 11 hours ago [-]
> As a certificate authority, one of the most common questions we hear from customers is whether they’ll be charged more to replace certificates more frequently. The answer is no. Cost is based on an annual subscription […]
(emphasis added)
Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.
jwnin 7 hours ago [-]
Costs to buy certs will not materially change. Costs to manage certs will increase.
bityard 13 hours ago [-]
I see that there is a timeline for progressive shortening, so if anyone has any "inside baseball" on this, I'm very curious to know:
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
captn3m0 2 days ago [-]
This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even more of a security theater. Or, hopefully, they rightly switch to CAA.
bearjaws 13 hours ago [-]
Giving me PTSD for working in healthcare.
Health systems love pinning certs, and we use an ALB with 90-day certs; they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
DiggyJohnson 2 days ago [-]
Do you (or anyone) recommend any text based resources laying out the state of enterprise TLS management in 2025?
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
grishka 2 days ago [-]
Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.
toast0 2 days ago [-]
Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from there. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key, and we're back to where we were.
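The "new key for each certificate" part is cheap with pyca/cryptography; a minimal sketch (the domain and file names are placeholders, and the CSR would then go to whatever ACME client or CA workflow is in use):

    # Generate a brand-new key pair and CSR for each renewal, so an old and
    # possibly compromised key is never carried forward.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())        # fresh key every time

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    with open("www.example.com.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))
    with open("www.example.com.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))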
nickf 24 hours ago [-]
I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant.
Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.
toast0 15 hours ago [-]
There's not really a better option if you need your urls to work with public browsers and also an app you control. You can't use a private CA for those urls, because the public browsers won't accept it; you need to include a public CA in your app so you don't have to rely on the user's device having a reasonable trust store. Including all the CAs you're never going to use is silly, so picking a few makes sense.
richardwhiuk 14 hours ago [-]
You don't need both of those things. Give your app a different url.
ori_b 13 hours ago [-]
Why should I trust a CA that has no control over the certificate chains?
nickf 10 hours ago [-]
Because they operate in a regulated security industry where changes happen, sometimes beyond their control?
einsteinx2 21 hours ago [-]
Repeating it doesn’t make it any more true. Cert providers publish their root certs, you pin those root certs, zero problems.
nickf 10 hours ago [-]
Then the CA goes away, like Entrust. Huge problems. I speak (sadly) from experience.
Plasmoid 5 hours ago [-]
They rotate those often enough.
1a527dd5 9 hours ago [-]
Dealing with enterprise is going to be fun, we work with a lot of car companies around the world. A good chunk of them love to whitelist by thumbprint. That is going to be fun for them.
umvi 10 hours ago [-]
So does this mean all of our Chromecasts are going to stop working again once this takes effect since (judging by Google's response during the week long Chromecast outage earlier this year) Chromecast is run by a skeleton crew and won't have the resources to automate certificate renewal?
peanut-walrus 14 hours ago [-]
So the assumption here is that somehow your private key is easier to compromise than whatever secret/mechanism you use to provision certs?
Yeah not sure about that one...
ori_b 13 hours ago [-]
Can someone point me to specific exploits that this key rotation schedule would have stopped?
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Avamander 13 hours ago [-]
> Can someone point me to specific exploits that this key rotation schedule would have stopped?
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
ori_b 12 hours ago [-]
> You also have to prove more frequently that you have control of the domain or IP in the certificate.
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
kbolino 4 hours ago [-]
I just downloaded one of DigiCert's CRLs and it was half a megabyte. There are probably thousands of revoked certificates in there. If you're not checking CRLs, and a lot of non-browser clients (think programming languages, software libraries, command-line tools, etc.) aren't, then you would trust one of those certificates if it was presented to you. With certificate lifetimes of 47 days instead of a year, roughly 87% of that exposure disappears regardless of CRL checking, since 47 days is only about 13% of the old maximum validity window.
crote 11 hours ago [-]
The 47 days are (mostly) irrelevant when it comes to compromised keys. The certificate will be revoked by the CA at most 24 hours after compromise becomes known, so a shorter cert isn't really "more secure" than a longer one.
At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.
Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.
avodonosov 11 hours ago [-]
First impression: with automation and short-lived certificates, the Certifying Authorities become similar to Identity Providers / OpenID Providers in OpenID / OpenID Connect. The certificates are tokens.
And a significant part of the security is concentrated in the way Certifying Authorities validate domain ownership (the so-called challenges).
Next, maybe clients could run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com whose value is <unique-val-2>. The client checks that such a record was created, and thus the server has proven it controls the domain name.
Auth through DNS, that's what it is. We will just need to speed up the DNS system.
fpoling 11 hours ago [-]
That does not work, as DNS is insecure. DNSSEC is not there and may never be.
ryandv 10 hours ago [-]
But this is already basically how Let's Encrypt challenges certificate applicants over ACME DNS01 [0].
I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).
Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...
In the scheme I described, where the client runs the challenges directly, certificates are not issued at all.
I am not saying this scheme is really practical currently.
That's just an imaginary situation coming to mind, illustrating the increased importance of domain ownership validation procedures used by Certifying Authorities. Essentially the security now comes down to the domain ownership validation.
Also a correction: the server doesn't simply publish <unique-val-2>, it publishes sha256(<unique-val-2> || '.' || <fingerprint of the account's public key>).
Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint used during domain ownership validation confirms that the challenge response was intended for that specific account.
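For reference, this is roughly how the DNS-01 challenge value is derived in RFC 8555: the TXT record holds base64url(SHA-256(token "." base64url(JWK thumbprint))). A minimal sketch; the token and account JWK below are made-up placeholders:

    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"   # from the CA's challenge (placeholder)
    thumbprint = b64url(hashlib.sha256(b"<account JWK, canonical JSON>").digest())

    key_authorization = f"{token}.{thumbprint}"  # token "." account key thumbprint
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

    print(f'_acme-challenge.example.com. 300 IN TXT "{txt_value}"')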
I am not suggesting ACME can be trivially broken.
I just realized that the risk of TLS certs breaking is not just the risk of public key crypto being broken; it also includes the risks in the domain ownership validation protocols.
detaro 11 hours ago [-]
And would be replacing the CA PKI with an even more centralized PKI.
webprofusion 3 hours ago [-]
Pretty sure this only refers to publicly trusted certs. What percentage of public certs are still being manually managed?
I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.
For stuff that genuinely cannot be automated (are you sure you're sure) these become monthly maintenance tasks, something cert management tools are also now starting to help with.
We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.
throwaway96751 2 days ago [-]
Off-topic: What is a good learning resource about TLS?
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example why the root RSA key worked even though my server uses an ECDSA cert. Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
ivanr 14 hours ago [-]
I have a bunch of useful resources, most of which are free:
> SSL is one of those weird niche subjects that no one learns until they run into a problem
Yep, that's me.
Thanks for the blog post!
dextercd 2 days ago [-]
I learned a lot from TLS Mastery by Michael W. Lucas.
throwaway96751 2 days ago [-]
Thanks, looks exactly like what I wanted
physicles 18 hours ago [-]
Use ECDSA if you can, since it reduces the size of the handshake on the wire (keys are smaller). Don’t bake in intermediate certs unless you have a very good reason.
No idea why the RSA root worked even though the server used an ECDSA cert; maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
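For a concrete sense of the size difference mentioned above, here's a quick comparison sketch. It assumes the third-party pyca/cryptography package; exact byte counts vary slightly because ECDSA signatures are DER-encoded:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

    ec_key = ec.generate_private_key(ec.SECP256R1())
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def spki_len(key) -> int:
        # Size of the DER-encoded SubjectPublicKeyInfo, as it appears in a cert.
        return len(key.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        ))

    msg = b"placeholder bytes standing in for a handshake transcript"
    ec_sig = ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))
    rsa_sig = rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    print("P-256    public key:", spki_len(ec_key), "bytes, signature:", len(ec_sig), "bytes")
    print("RSA-2048 public key:", spki_len(rsa_key), "bytes, signature:", len(rsa_sig), "bytes")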
throwaway96751 2 hours ago [-]
I've been reading a little since then, and I think it worked with the RSA root cert because that cert was a trust anchor in the chain of trust of my server's ECDSA certificate.
pizzafeelsright 13 hours ago [-]
Curious why you wouldn't have a Q and A with AI?
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
throwaway96751 2 hours ago [-]
I think this method works best when you can verify the answer. So it has to be either a specific type of question (a request to generate code, which you can then run and test), or you have to know enough about the subject to be able to spot mistakes.
jsheard 2 days ago [-]
This change will have a steady roll-out, but if you want to get ahead of the curve then Let's Encrypt will be offering 6 day certs as an option soon.
Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.
cryptonym 2 days ago [-]
Let's Encrypt dropped support for OCSP. CRLs don't scale well. Short-lived certificates are probably a way to avoid certificate revocation quirks.
Ajedi32 2 days ago [-]
It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this, it just never got widespread support.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
NoahZuniga 13 hours ago [-]
Also certificate transparency is moving to a new standard (sunlight CT) that has immediate merges. Google requires maximum merge delay to be 1 minute or less, but they've said on google groups that they expect merges to be way faster.
lokar 15 hours ago [-]
The log is not really for real time use. It’s to catch CA non-compliance.
dboreham 2 days ago [-]
I think it's more about revocation not working in practice. So the only solution is a short TTL.
trothamel 2 days ago [-]
I suspect it's to limit how long a malicious or compromised CA can impact security.
hedora 1 days ago [-]
Equivalently, it also maximizes the number of sites impacted when a CA is compromised.
It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)
lokar 15 hours ago [-]
Mostly this. Today, if a big CA is caught breaking the rules, actually enforcing repairs (e.g. prompt revocation) is a hard pill to swallow.
rat9988 2 days ago [-]
I think op is asking has there been many real case scenarios in practice that pushed for this change?
chromanoid 2 days ago [-]
I guess the main reason behind this move is platform capitalism. It's an easy way to cut off grassroots internet.
bshacklett 2 days ago [-]
How does this cut off the grassroots internet?
chromanoid 2 days ago [-]
It makes end-to-end responsibility more cumbersome. There was a time when people just put MS FrontPage output on their home server.
icedchai 2 days ago [-]
Many folks switched to Lets Encrypt ages ago. Certificates are way easier to acquire now than they were in "Frontpage' days. I remember paying 100's of dollars and sending a fax for "verification."
whs 1 days ago [-]
Do they offer any long-term commitment for the API, though? I remember that they were blocking old cert-manager clients that were hammering their servers. You can't automate that kind of client upgrade (as it could be unsafe, like SolarWinds), and they didn't give a one-year window to do it manually either.
icedchai 1 days ago [-]
You do have a point. I still feel that upgrading your client is less work than manual cert renewals.
chromanoid 2 days ago [-]
I agree, but I think the pendulum just went too far on the tradeoff scale.
ezfe 11 hours ago [-]
I've done the work to set up, by hand, a self-hosted Linux server that uses an auto-renewing Let's Encrypt cert and it was totally fine. Just read some documentation.
jack0813 2 days ago [-]
There are very convenient tools to do https easily these days, e.g. Caddy. You can use it to reverse proxy any http server and it will do the cert stuff for you automatically.
chromanoid 2 days ago [-]
Of course, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuance.
gjsman-1000 2 days ago [-]
If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
nottorp 2 days ago [-]
How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
icedchai 2 days ago [-]
You are always dependent on a 3rd party to some extent: DNS registration, upstream ISP(s), cloud / hosting providers, etc.
nottorp 2 days ago [-]
And now your list has 2 more items in it …
icedchai 5 hours ago [-]
Does it? I need to get a cert from somewhere, whether that's Lets Encrypt for free, or some other company that charges $300/year for effectively the same thing.
chromanoid 2 days ago [-]
I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...
icedchai 2 days ago [-]
Have you tried caddy? Each TLS protected site winds up being literally a couple lines in a config file. Renewals are automatic. Unless you have a network / DNS problem, it is set and forget. It is far simpler than dealing with manual cert renewals, downloading the certificates, restarting your web server (or forgetting to...)
chromanoid 2 days ago [-]
Yes, but only for internal stuff. I prefer Traefik at the moment. But my point is more about how people use Wix over free webspace and so on. While I don't agree with many of Jonathan Blow's arguments, news like this makes me think of his talk "Preventing the Collapse of Civilization": https://m.youtube.com/watch?v=ZSRHeXYDLko
ikiris 12 hours ago [-]
Traefik without certmanager is just as self-inflicted a wound. It's literally designed to handle this for you.
chromanoid 8 hours ago [-]
I have to use an internal cert out of my control anyway. For personal projects I switched to web hosts after some bad experiences. But I vividly remember setting up my VPS as a teen. While I understand the reasoning, it's always sad to see those simpler times go away. And sometimes I don't see the reasoning at all, and suspect it's because some C-suites don't see much harm: it ought to make things safer, and the people left in the dust don't count anyway...
mystraline 6 hours ago [-]
I'm sure this will be buried, but SSL is supposed to provide encryption. That's it.
Self-signed custom certs also do that. But those are demonized.
SSL also tries to define an IP/DNS certification of ownership, kind of.
There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.
But, yeah, 47 days is going to wreak havoc on network gear and weird devices.
kbolino 5 hours ago [-]
If there was no IP/DNS ownership verification, how would you even know you had been MITMed? You think the attacker is going to set a flag to let you know the certificate is fake?
The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.
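As an illustration of the fingerprint approach (and why nobody does it by hand), here's a sketch that fetches a server's certificate and computes the SHA-256 fingerprint you would have to compare against a value obtained out of band; the hostname is a placeholder:

    import hashlib
    import ssl

    pem = ssl.get_server_certificate(("example.com", 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    print("sha256:", hashlib.sha256(der).hexdigest())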
Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you, it may be the other side. You specifically might not be worth someone's time, but your bank and all of its other customers too, probably is.
OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.
nickf 24 hours ago [-]
Don't forget the lede buried here - you'll need to re-validate control over your DNS names more frequently too.
Many enterprises are used to doing this once-per-year today, but by the time 47-day certs roll around, you'll be re-validating all of your domain control every 10 days (more likely every week).
compumike 8 hours ago [-]
(Shameless self-promotion) We set up our https://heiioncall.com/ monitoring to give our on-call rotation a non-critical “it can wait until Monday” alert when there are 14 days or less left on our SSL certificates, and a critical alert “do-not-disturb be damned” when 48 hours left until expiry. Because cert-manager got into some weird state once a few years ago, and I’d rather find out well in advance next time.
The poor vendor folks will need to come on site more often to fix cert issues.
procaryote 13 hours ago [-]
If this is causing you pain, certbot with an ACME DNS challenge is pretty easy to set up to get certs for your internal services. There are tools for many different DNS providers like Route 53 or Cloudflare.
I tend to have secondary scripts that check if the cert in certbot's dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some want to be reloaded to pick up a new cert, etc., so I put that glue in my own script and run it from cron or a systemd timer.
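A minimal sketch of that kind of glue script, with assumed paths and an assumed systemd service name; the real thing would loop over several services and formats:

    import shutil
    import subprocess
    from pathlib import Path

    src = Path("/etc/letsencrypt/live/example.com/fullchain.pem")   # certbot's copy
    dst = Path("/etc/myservice/tls/fullchain.pem")                  # what the service reads

    def newer(a: Path, b: Path) -> bool:
        return not b.exists() or a.stat().st_mtime > b.stat().st_mtime

    if newer(src, dst):
        shutil.copy2(src, dst)
        shutil.copy2(src.with_name("privkey.pem"), dst.with_name("privkey.pem"))
        # Most daemons only re-read certs on reload/restart.
        subprocess.run(["systemctl", "reload", "myservice"], check=True)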
merb 12 hours ago [-]
The problem is more or less devices that do not support DNS challenges, or that only support Let's Encrypt and not the generic ACME protocol (so you can't chain ACME servers, etc.).
cpach 11 hours ago [-]
What kind of devices are you thinking of? Like switches and other network gear?
JackSlateur 10 hours ago [-]
I've deployed LE on IPMI (Dell, Supermicro), so that's not a good excuse! As long as you have a way to "script" something on your devices (via SSH, API or whatever), you are good to go.
1970-01-01 2 days ago [-]
Your 90-day snapshot backups will soon become 47-day backups. Take care!
gruez 2 days ago [-]
???
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
belter 2 days ago [-]
This is going to be one of the obvious traps.
DiggyJohnson 2 days ago [-]
To care about stale certs on snapshots or the opposite?
belter 2 days ago [-]
Both. One breaks your restore, the other breaks your trust chain.
raggi 2 days ago [-]
It sure would be nice if we could actually fix dns.
zephius 2 days ago [-]
Old SysAdmin and InfoSec Admin perspective:
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles and firmware is not updated by the vendors unless it has to be. And in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability in updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. God for mercy because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
tptacek 2 days ago [-]
I think everybody involved knows about the likelihood that things are going to break at enterprise shops with super-expensive commercial middleboxes. They just don't care anymore. We ran a PKI that cared deeply about the concerns of admins for a decade and a half, and it was a fiasco. The coders have taken over, and things are better.
zephius 2 days ago [-]
That's great for shops with Dev teams and in house developed platforms. Those shops are rare outside Silicon Valley and fortune 500s and not likely to increase beyond that. For the rest of us, we are at the mercy of off the shelf products and 3rd party platforms.
tptacek 2 days ago [-]
I suggest you buy products from vendors who care about the modern WebPKI. I don't think the browser root programs are going to back down on this stuff.
nickf 24 hours ago [-]
This. Also, re-evaluate how many places you actually need public trust that the webPKI offers. So many times it isn't needed, and you make problems for yourself by assuming it does.
I have horror stories I can't fully disclose, but if you have closed networks of millions of devices where you control both the server side and the client side, relying on the same certificate I might use on my blog is not a sane idea.
whs 1 days ago [-]
Agree. My company was cloud-first, and when we built the new HQ, buying Cisco gear and VMware (as they're the only stack several implementers are offering) felt like sending the company 15 years backwards.
zephius 2 days ago [-]
I agree, and we try, however that is not a currently widely supported feature in the boring industry specific business software/hardware space. Maybe now it will be, so time will tell.
ignaloidas 8 hours ago [-]
Hey, you now have a specific cost to point to when arguing for/against solutions that have this problem. "Each deployment will cost us at least 12 specialist hours per year just replacing the certificates" is a non-negligible cost that even the least tech-minded people will understand, and it can be a good lever for requiring that support.
ikiris 12 hours ago [-]
Reverse proxies exist. If you don't like having to do that, then put requirements for standards from the past 10 years into your purchasing.
Havoc 8 hours ago [-]
At the same time I don’t think it’s reasonable to make global cert decisions like this based on what some crappy manufacturer failed to implement in their firmware. The issue there is clearly the crap hardware (though the sysadmins that have to deal with it have my condolences)
cpach 2 days ago [-]
“Hardware vendors are simply incompetent and slow to adapt to security changes.”
Perhaps the new requirements will give them additional incentives.
zephius 2 days ago [-]
Yeah, just like TLS 1.2 support. Don't even get me started on how that fiasco is still going.
yjftsjthsd-h 2 days ago [-]
Sounds like everything is solvable via code, and the hardware vendors just suck at it.
zephius 2 days ago [-]
In a nutshell, yes. From a security perspective, look at Fortinet as an egregious example of just how bad. Palo Alto also has some serious internal issues.
dijit 10 hours ago [-]
Not really; a lot of those middleware boxes do some form of ASIC offloading for TLS, and the PROM that holds the cert(s) is not rated for heavy writes… thus writing is slow, blocking, and will wear the hardware out.
The larger issue is actually our desire to deprecate cipher suites so rapidly, though: those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys "D" rating after having an "A+" rating barely a year ago.
How much time are we spending on this? The NSA is literally already in the walls.
trothamel 2 days ago [-]
Question: Does anyone have a good solution for renewing letsencrypt certificates for websites hosted on multiple servers? Right now, I have one master server that the others forward the well-known requests to, and then I copy the certificate over when I'm done, but I'm wondering if there's a better way.
nullwarp 2 days ago [-]
I use DNS verification for this then the server doesn't even need to be exposed to the internet.
magicalhippo 22 hours ago [-]
And if changing the DNS entry is problematic, for example the DNS provider used doesn't have an API, you can redirect the challenge to another (sub)domain which can be hosted by a provider that has an API.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
We just use certbot on each server. Are you worried about the rate limit? LE rate limits based on the list of domains. So we send the request for the shared domain and the domain for each server instance. That makes each renew request unique per server for the purpose of the rate limit.
noinsight 2 days ago [-]
Orchestrate the renewal with Ansible - renew on the "master" server remotely, but pull the new key material to your orchestrator and then push it to your server fleet. That's what I do. It's not "clean" or "ideal" to my tastes, but it works.
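A bare-bones sketch of the same pattern without Ansible, using plain ssh/scp from the orchestrator; hostnames, paths, and the reload command are all placeholders:

    import subprocess

    MASTER = "cert-master.example.com"
    FLEET = ["web1.example.com", "web2.example.com"]
    LIVE = "/etc/letsencrypt/live/example.com"

    # Renew on the "master", then fan the key material out and reload each node.
    subprocess.run(["ssh", MASTER, "certbot", "renew", "--quiet"], check=True)
    for host in FLEET:
        for name in ("fullchain.pem", "privkey.pem"):
            subprocess.run(["scp", "-3", f"{MASTER}:{LIVE}/{name}", f"{host}:{LIVE}/{name}"],
                           check=True)
        subprocess.run(["ssh", host, "systemctl", "reload", "nginx"], check=True)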
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
pornel 14 hours ago [-]
I copy the same certbot account settings and private key to all servers and they obtain the certs themselves.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
bayindirh 2 days ago [-]
There's a tool called "lsyncd" which watches for a file and syncs the changed file to other servers "within seconds".
I use this to sync users between small, experimental cluster nodes.
Have you tried certbot? Or if you want a turnkey solution, you may try Caddy or Traefik that have their own automated certificate generation utility.
dboreham 2 days ago [-]
DNS verification.
throw0101b 2 days ago [-]
getssl was written with a bit of a focus on this:
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
A welcome change if it gives some vendors a kick up the behind to implement ACME.
ShakataGaNai 14 hours ago [-]
There is no more choice. No one is going to buy from (example) GoDaddy if they have to log in every 30 days to manually get a new certificate. Not when they can go to (example) DigiCert and it's all automatic.
It goes from a "rather nice to have" to "effectively mandatory".
jonathantf2 14 hours ago [-]
I think GoDaddy supports ACME - but if you're using ACME you might as well use Let's Encrypt and stop paying for it.
schlauerfox 11 hours ago [-]
Depends on the kind of certificate you need.
rhodey 13 hours ago [-]
After Let's Encrypt made the change to disable expiry reminder emails, I decided I would be moving my personal blog from my VPS to AWS specifically. I just today made the time to make the move, and 10 minutes after, I find this.
I could have probably done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.
Times they are a changing
mystified5016 13 hours ago [-]
Why not just automate your LetsEncrypt keys like literally everyone else does? It's free and you have to go out of your way to do it manually.
Or just pay Amazon, I guess. Easier than thinking.
AlfeG 12 hours ago [-]
We'll see how Azure FD handles this. We've already opened more tickets than expected with support about certs not updating automatically...
dsr_ 12 hours ago [-]
Daniel K Moran figured out the endgame:
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
Havoc 8 hours ago [-]
Continually surprised by how emotional people get about cert lifetimes.
I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.
lucb1e 6 hours ago [-]
If emotional reactions to a (likely intentional) annoyance factor surprise you, remember that people start wars over differing sets of beliefs!
This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
pornel 14 hours ago [-]
Browsers check the identity of the certificates every time. The host name is the identity.
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking the host name (it's effectively an out-of-band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
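A toy illustration of that last point (not real crypto, tiny toy prime): a Diffie-Hellman-style exchange resists a passive eavesdropper, but an on-path attacker who isn't authenticated simply negotiates one key with each side:

    import secrets

    p = 2**61 - 1      # toy prime, far too small for real use
    g = 2

    def keypair():
        priv = secrets.randbelow(p - 2) + 1
        return priv, pow(g, priv, p)

    a_priv, a_pub = keypair()   # Alice
    b_priv, b_pub = keypair()   # Bob
    m_priv, m_pub = keypair()   # Mallory, sitting on the path

    # Mallory swaps in her own public value in each direction.
    alice_secret = pow(m_pub, a_priv, p)   # Alice thinks this is shared with Bob
    bob_secret = pow(m_pub, b_priv, p)     # Bob thinks this is shared with Alice

    print(alice_secret == pow(a_pub, m_priv, p))   # True: Mallory shares Alice's key
    print(bob_secret == pow(b_pub, m_priv, p))     # True: Mallory shares Bob's key
    print(alice_secret == bob_secret)              # almost certainly False

Certificates (or any pinned/pre-shared identity) are what lets each side notice the swap.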
racingmars 5 hours ago [-]
> This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. [...] identity is not the primary purpose certificates serve in the real world.
Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.
But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.
chowells 2 days ago [-]
I think it's absolutely critical when I'm sending a password to a site that it's actually the site it claims to be. That's identity. It matters a lot.
zelon88 2 days ago [-]
Not to users. The user who types Wal-Mart into their address bar expects to communicate with Wal-Mart. They aren't going to check if the certificate matches. Only that the icon is green.
This is where the disconnect comes in. You and I know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
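To make the trick visible: the two strings really are different hostnames, only the rendering is similar (the exact punycode output is elided here):

    import unicodedata

    real = "walmart.com"
    fake = "w\u0430lmart.com"                  # second letter is Cyrillic

    print(real == fake)                        # False
    print(unicodedata.name(fake[1]))           # CYRILLIC SMALL LETTER A
    print(fake.encode("idna"))                 # b'xn--...' form, nothing like walmart.com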
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
chowells 2 days ago [-]
Well, no. That's just not true.
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast to not MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
BrandoElFollito 2 days ago [-]
You use words that are alien to everyone. Well, there is a small uncertainty in "everyone", and that's exactly where the people who actually understand DHCP, DoS, etc. live. It is a very, very small place.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.
chowells 2 days ago [-]
Who said a word about looking at a certificate?
I said exactly the words I meant.
> I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.
That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.
hedora 1 days ago [-]
When you say identity, you mean “the identity of someone that convinced a certificate authority that they controlled walmart.com’s dns record at some point in the last 47 days, or used some sort of out of band authentication mechanism”.
You don’t mean “Walmart”, but 99% of the population thinks you do.
Is it OK to trust this for anything important? Probably not. Is it OK to type your credit card number in? Sure. You have fraud protection.
chowells 1 days ago [-]
So what you're saying is that you actually understand the identity portion is critical to how the web is used and you're just cranky. It's ok. Take a walk, get a bite to eat. You'll feel better.
hedora 18 hours ago [-]
I’m not the person you were arguing with. Just explaining your misunderstanding.
JambalayaJimbo 17 hours ago [-]
Right, so misrepresenting your identity with similar-looking URLs is a real problem with PKI. That doesn't change the fact that certificates are ultimately about asserting your identity; it's just a flaw in the system.
aseipp 14 hours ago [-]
Web browsers have had defenses against homograph attacks for years now, my man, dating back to 2017. I'm somewhat doubtful you're on top of this subject as much as you seem to be suggesting.
gruez 2 days ago [-]
>This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
xyst 15 hours ago [-]
I don’t see any issue here. I already automate with ACME so rotating certificates on an earlier basis is okay. This should be like breathing for app and service developers and infrastructure teams.
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
ShakataGaNai 14 hours ago [-]
Because there are lots of companies, large and small, which haven't gotten that far. Lots of legacy sites/services/applications.
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs on GitHub: https://github.com/okta/okta-pki - How do they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though.)
iJohnDoe 2 days ago [-]
Getting a bit ridiculous.
dboreham 2 days ago [-]
Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.
bayindirh 2 days ago [-]
Why?
nottorp 2 days ago [-]
The logical endgame is 30 second certificates...
krunck 2 days ago [-]
Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.
nottorp 2 days ago [-]
Exactly. And all in the name of security! Think of the children!
saltcured 15 hours ago [-]
I was thinking about this with my morning coffee... the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
woodruffw 3 hours ago [-]
> the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
bayindirh 2 days ago [-]
For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
nottorp 2 days ago [-]
> once or twice a year makes sense
You don't say. Why are the defaults already 90 days or less then?
bayindirh 2 days ago [-]
Because most of the sites on the internet store much more sensitive information than the sites I gave as examples, which can afford 1-2 certificates a year.
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
nottorp 2 days ago [-]
That's not the average website, that's a corporate website or an online store.
Why do you think all the average web sites have to handle members?
bayindirh 2 days ago [-]
Give me examples of websites which don't have any kind of member system in place.
Forums? Nope. Blogging platforms? Nope. News sites? Nope. Wordpresss powered personal page? Nope. Mailing lists with web based management? Nope. They all have members.
What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.
So most of the sites have much more to protect than meets the eye.
ArinaS 24 hours ago [-]
> "Negligible amount."
Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, one which does not rely on any 3rd party to host or update, will just make fewer people bother.
nottorp 2 days ago [-]
I could move that all your examples except forums do not NEED members or users... except to spy on you and spam you.
bayindirh 2 days ago [-]
I mean, a news site needs its journalists to log in. Your own personal WordPress needs a user for editing the site. The blog platform I use (Mataroa) doesn't even have detailed statistics, but it serves many users, so it needs user accounts.
Web is a bit different than you envision/think.
ArinaS 24 hours ago [-]
> "I mean, a news site needs their journalists to login."
Why can't this site just upload HTML files to their web server?
nottorp 22 hours ago [-]
Why can't this site have their CMS entirely separated from the public facing web site, for that matter? :)
> Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...
Any non-predatory practices you can add to the list?
bayindirh 21 hours ago [-]
I think you were trying to reply to me.
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest/transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative feedback between two new versions, assuming that you genuinely don't know which version is better for the users.
bayindirh 22 hours ago [-]
Two operational requirements:
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
panki27 2 days ago [-]
That CRL is going to be HUGE.
psz 2 days ago [-]
Why do you think so? Keep in mind that revoked certs are not included in CRLs once expired (because they are not valid any more).
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
wnevets 12 hours ago [-]
Has anyone managed to calculate the increase in power usage across the entire internet this change will cause? Well, I suppose the environment can take one more for the team.
margalabargala 12 hours ago [-]
The single use of AI to generate that video of Trump licking Elon Musk's feet used significantly more power than this change will cause to be used over the next decade.
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
wnevets 12 hours ago [-]
> The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
margalabargala 10 hours ago [-]
> We are effectively talking about the entire world wide web generating multiple highly secure cryptograph key pairs every 47 days. That is a lot of CPU cycles.
We aren't cracking highly secure key pairs. We're making them.
On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
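Back-of-envelope check of those numbers (all inputs are the assumptions stated above, not measurements):

    tdp_watts = 65
    cores = 16
    seconds_per_keygen = 1
    renewals_per_year = 365 / 47                 # ≈ 7.8
    sites = 1_000_000_000                        # "close to a billion"

    wh_per_keygen = (tdp_watts / cores) * (seconds_per_keygen / 3600)
    world_kwh_per_year = sites * renewals_per_year * wh_per_keygen / 1000

    print(f"{wh_per_keygen:.4f} Wh per key")             # ≈ 0.0011 Wh
    print(f"{world_kwh_per_year:,.0f} kWh per year")     # ≈ 8,800 kWh worldwide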
wnevets 7 hours ago [-]
> On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
but you think it would take a decade for the entire internet to use as much power as a single AI video?
margalabargala 2 hours ago [-]
No, doing out the math I see I was being hyperbolic.
That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.
detaro 10 hours ago [-]
When you generate a new cert you do not generate a new keypair every time.
throw0101b 2 days ago [-]
Justification:
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
gruez 2 days ago [-]
The "less trustworthy" refers to key compromise, not the e-shop going rogue and start scamming customers or whatever.
throw0101a 2 days ago [-]
Okay, the key is compromised: that means they can MITM the trust relationship. But with modern algorithms you have forward secrecy, so even if you've sniffed/captured the traffic it doesn't help.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
gruez 2 days ago [-]
>And I would argue that MITMing communications is a lot hard for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
throw0101d 2 days ago [-]
> By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. And "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
tptacek 2 days ago [-]
It does not work well for SSH. We just don't care about how badly it works.
throw0101d 2 days ago [-]
> It does not work well for SSH. We just don't care about how badly it works.
How "should" it work? Is there a known-better way?
tptacek 2 days ago [-]
Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).
throw0101d 2 days ago [-]
> Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).
I am aware of them.
As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.
Issuing CAs to users, especially if they expire, is another thing. From a UX perspective, we can tie password credentials to things like on-site WiFi and web site access (e.g., the support wiki).
So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.
tptacek 2 days ago [-]
I don't know what to tell you. The problem with TOFU is obvious: the FU. The FU happens more often than people think it does (every time you log in from a new or reprovisioned workstation) and you're vulnerable every time. I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.
throw0101d 2 days ago [-]
> I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.
It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).
Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.
xurukefi 13 hours ago [-]
Nobody forces you to change your key for renewals.
thyristan 12 hours ago [-]
Semi-related question: where is the Let's Encrypt workalike for S/MIME?
0xbadcafebee 12 hours ago [-]
I hate this, but I'm also glad it's happening, because it will speed up the demise of Web PKI.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophe which will push the industry to actually adopt simpler, saner, more secure solutions. I've proposed one years ago but nobody cares, because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
CommanderData 23 hours ago [-]
Why bother with such a long staggered approach?
There should be one change, from 365 to 47 days. This industry doesn't need constant changes; it will force everyone to automate renewals anyway.
datadrivenangel 18 hours ago [-]
Enterprises are like lobsters: You gotta crank the water temperature up slowly.
readthenotes1 2 days ago [-]
I wonder how many forums run by the barely able are going to disappear or start charging.
I fairly regularly get cert expired problems because the admin is doing it as the yak shaving for a secondary hobby
ezfe 11 hours ago [-]
Why would they start charging? Auto-renewing certificates with Let's Encrypt are easy to do.
dijit 10 hours ago [-]
as long as you only have a single server, or a DNS server that has an API.
Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I’m considering going back to traditional certs because I geo-balance the DNS which doesn’t work for letsencrypt.
The issue is actually that I have multiple domains handled multiple ways and they all need to be letsencrypt capable for it to work and generate a combined cert with SAN’s attached.
junaru 2 days ago [-]
> For this reason, and because even the 2027 changes to 100-day certificates will make manual procedures untenable, we expect rapid adoption of automation long before the 2029 changes.
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self-signed certificates everywhere on the most critical infrastructure.
xnyanta 15 hours ago [-]
I have automated IPMI certificate rotation set up through Let's Encrypt and ACME via the Redfish API. And this is on 15-year-old gear running HP iLO4. There's no excuse for not automating things.
panki27 2 days ago [-]
People will just roll out almost forever-lasting certificates through their internal CA for all systems that are not publicly reachable.
throw0101d 2 days ago [-]
> through their internal CA
Nope. People will create self-signed certs and tell people to just click "accept".
Avamander 13 hours ago [-]
They're doing it right now and they'll continue doing so. There are always scapegoats for not automating.
aaomidi 2 days ago [-]
Good.
If you can't make this happen, don't use WebPKI and use internal PKI.
Lammy 14 hours ago [-]
This sucks. I'm actually so sick of mandatory TLS. All we did was get Google Analytics and all the other spyware running “““securely””” while making it that much harder for any regular person to host anything online. This will push people even further into the arms of the walled gardens as they decide they don't want to deal with the churn and give up.
ShakataGaNai 14 hours ago [-]
First.... 99% of people have zero interest in hosting things themselves anyways. Like on their own server themselves. Geocities era was possibly the first and last time that "having your own page" was cool, and that was basically killed by social media.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.
Lammy 13 hours ago [-]
lol at recommending Cloudflare and Microsoft (Github) in response to a comment decrying spyware
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
jonathantf2 12 hours ago [-]
A "regular person" won't/can't deal with a server, VPS, anything like that. They'll go to GoDaddy because they saw an advert on the TV for "websites".
Lammy 12 hours ago [-]
They absolutely can deal with the one-time setup of one single thing that's easy to set up auto-pay for. Adding ACME challenge/response brings in so many additional concepts: now you have to learn sysadmin-type skills to care for a periodic process, users/groups for who-runs-what-process and who-owns-what-cert-files, software updates to chase LE/ACME changes or else all your stuff breaks, etc.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
ezfe 11 hours ago [-]
Lol this is literally not true. I've set up self-hosted websites with no knowledge (just reading tutorials) and TLS is by far not the hardest step.
Lammy 11 hours ago [-]
A brand-new setup is not relevant to what I'm talking about. Try ignoring your entire infrastructure for a few years and see if you still think that lol
riffic 14 hours ago [-]
ah this is gonna piss off a few coworkers today but it's a good move either way.
ocdtrekkie 2 days ago [-]
It's hard to express how absolutely catastrophic this is for the Internet, and how incompetent a group of people have to be to vote 25/0 for increasing a problem that breaks the Internet for many organizations yearly by a factor of ten for zero appreciable security benefit.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
dextercd 2 days ago [-]
CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
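For the monitoring side, a minimal external check run from a separate box can be as small as this (hostname, threshold, and alert address are placeholders):

    #!/bin/sh
    # cron this daily: warn if the served cert expires within 14 days
    HOST=example.com
    if ! echo | openssl s_client -servername "$HOST" -connect "$HOST:443" 2>/dev/null \
         | openssl x509 -noout -checkend $((14*24*3600)) >/dev/null; then
      echo "cert for $HOST expires within 14 days" | mail -s "cert warning: $HOST" ops@example.com
    fi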
ocdtrekkie 2 days ago [-]
It's actually far worse for smaller sites and organizations than large ones. Entire pricey platforms exist around managing certificates and renewals, and large companies can afford those or develop their own automated solutions.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
dextercd 2 days ago [-]
I think most orgs can get away with free ACME clients and free/cheap monitoring options.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
ocdtrekkie 2 days ago [-]
I mean to give you an example of how far we are from this: IIS does not have built-in ACME support, and in the enterprise world it is basically "most web servers". Sure, you can add some third party thing off the Internet to do it, but... how many banks will trust that?
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
dextercd 2 days ago [-]
Let's Encrypt lists 10 ACME clients for Windows / IIS.
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participant's customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
ocdtrekkie 2 days ago [-]
> Let's Encrypt lists 10 ACME clients for Windows / IIS.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work, we can't just download whatever app someone found on the Internet that solves the issue.
dextercd 1 days ago [-]
No idea how many are first-party or vetted by Microsoft. Probably none of them. But I really, really doubt you can only run software that ticks one of those two boxes.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
ocdtrekkie 1 days ago [-]
[flagged]
dextercd 1 days ago [-]
Stealing a private key or getting a CA to misissue a certificate is hard. Then actually making use of this in a MITM attack is also difficult.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate Transparency and Multi-Perspective Issuance Corroboration are examples of security innovations that happened without bothering anyone.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian-weak-keys, or Heartbleed-like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
JackSlateur 10 hours ago [-]
I hope you understand how funny this is
The ballot is nothing but expected
The whole industry has been moving in this direction for the last decade
So there is nothing much to say
Except that if you waited until the last moment, well, you will have to hurry. (non)Actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those still resisting, the ones who have a human perform the yearly renewal. Let's see how stupid it can get.
xyzzy123 2 days ago [-]
Can you point to a specific security problem this change is actually solving? For example, can we attribute any major security compromises in the last 5 years to TLS certificate lifetime?
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
dextercd 2 days ago [-]
If a CA or subscriber improves their security but had an undetected incident in the past, a hacker today has a 397 day cert and can reuse the domain control validation in the next 397 days, meaning they can MITM traffic for effectively 794 days.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
xyzzy123 10 hours ago [-]
People who care deeply about this can use 30 day certs right now if they want to.
dextercd 9 hours ago [-]
Sure, but it's even better if everyone else does too, including attackers that mislead CAs into misissuing a cert.
CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good. It's the same with this change, and you have plenty of time to prepare for it.
xyzzy123 8 hours ago [-]
> including attackers that mislead CAs into misissuing a cert.
I thought we had CT for this.
> CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good.
Fair.
> It's the same with this change, and you have plenty of time to prepare for it.
Not so sure on this one, I think it's basically a result of a security "purity spiral". Yes, it will achieve better certificate hygiene, but it will also create a lot of security busywork that could be better spent in other parts of the ecosystem that have much worse problems. The decision to make something opt-in mandatory forcibly allocates other people's labour.
dextercd 8 hours ago [-]
CT definitely helps, but not everyone monitors it. This is an area where I still need to improve. But even if you detect a misissued cert, it can not reliably be revoked with OCSP/CRL.
--
The maximum cert lifetime will gradually go down. The CA/B forum could adjust the timeline if big challenges are uncovered.
I doubt they expect this to be necessary. I suspect that companies will discover that automation is already possible for their systems and that new solutions will be developed for most remaining gaps, in part because of this announced timeline.
This will save people time in the long run. It is forced upon you, and that's frustrating, but you do have nearly a year before the first change. It's not going down to 47 days in one go.
I'm not saying that no one will renew certificates manually every month. I do think it'll be rare, and even more rare for there to be a technical reason for it.
sidewndr46 2 days ago [-]
According to the article:
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate
xyzzy123 2 days ago [-]
> I'm not even sure what "outdated certificate data" could be...
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
dextercd 2 days ago [-]
"outdated certificate data" would be domains you no longer control. (Example would be a customer no longer points a DNS record at some service provider or domains that have changed ownership).
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic passwords so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs is a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
bsder 10 hours ago [-]
> They didn't do this because they're incompetent but because they think it'll improve security.
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to reduce the rotation time low enough that the certificates will rotate before legal procedures to stop them from rotating them can kick in.
This does very little to improve security.
dextercd 9 hours ago [-]
Apple introduced this proposal. Why would they care about a CA's legal exposure?
Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CAs that they're too special to have their certs revoked, or even starting embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
nickf 10 hours ago [-]
That isn’t at all true.
arp242 13 hours ago [-]
You can disagree with all of this, but calling for everyone involved to be fired is just ridiculous and mean-spirited.
rglover 10 hours ago [-]
Is it? This is the crux of the problem with a lot of institutions. There's little to no professional accountability for bad moves anymore. It used to be that doing a good job and taking pride in one's work was all you needed to do to keep your job.
Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.
rcxdude 2 days ago [-]
A large part of why it breaks things is because it only happens yearly. If you rotate certs on a regular pace, you actually get good at it and it stops breaking, ever. (basically everything I've set up with letsencrypt has needed zero maintenance, for example)
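The "zero maintenance" part is typically just the distro package's renewal timer, or a single cron line like this (the reload hook here assumes nginx; swap in whatever you actually run):

    0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"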
ocdtrekkie 2 days ago [-]
So at a 47-day cadence, it's true we'll have to regularly maintain it: we'll need to hire another staff member to do nothing but that. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
And it probably won't avoid problems either. Because yes, the goal is automation, and a couple of weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates its certificates every 24 hours. Their site was broken, and the subreddit about the company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
Avamander 13 hours ago [-]
> So at a 47 day cadence, it's true we'll have to regularly maintain it: We'll need to hire another staff member to constantly do nothing but. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
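As a sketch of that idea (every hostname, path, and the reload command below is made up; the point is that whatever a human clicks through can be scripted):

    #!/bin/sh
    # renew locally, then push the combined cert to the appliance and reload it
    DOMAIN=appliance.example.com
    certbot renew --cert-name "$DOMAIN" --quiet
    cat "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" \
        "/etc/letsencrypt/live/$DOMAIN/privkey.pem" > "/tmp/$DOMAIN.pem"
    scp "/tmp/$DOMAIN.pem" "admin@$DOMAIN:/config/ssl/server.pem"   # or the vendor's API
    ssh "admin@$DOMAIN" reload-webserver                            # hypothetical reload command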
msie 13 hours ago [-]
Eff this shit. I'm getting out of sysadmin.
curtisszmania 5 hours ago [-]
[dead]
aristofun 12 hours ago [-]
Oh no, a bunch of stupid bureaucrats came up with another dumb idea. What a surprise!
belter 2 days ago [-]
Are the 47 days to please the current US Administration?
eesmith 2 days ago [-]
Based on the linked-to page, no:
47 days might seem like an arbitrary number, but it’s a simple cascade:
* 200 days = 6 maximal months (184 days) + half of a 30-day month (15 days) + 1 day of wiggle room
* 100 days = 3 maximal months (92 days) + ~1/4 of a 30-day month (7 days) + 1 day of wiggle room
* 47 days = 1 maximal month (31 days) + half of a 30-day month (15 days) + 1 day of wiggle room
vasilzhigilei 15 hours ago [-]
I'm on the SSL/TLS team @ Cloudflare. We have great managed certificate products that folks should consider using as certificate validity periods continue to shorten.
bambax 15 hours ago [-]
Simply having a domain managed by Cloudflare makes it magically https; yes, the traffic between the origin server and Cloudflare isn't encrypted, so it's not completely "secure", but for most uses it's good enough. It's also zero-maintenance and free.
Keep up the good work! ;-)
vasilzhigilei 12 hours ago [-]
Thanks! You can also set up free origin certs to make Cloudflare edge to origin connections encrypted as well.
tinix 9 hours ago [-]
yeah I'm convinced this is the real reason for these changes...
perverse incentives indeed.
Lammy 13 hours ago [-]
SSL added and removed here ;-)
lucb1e 12 hours ago [-]
Is this a joke (as in, that you don't actually work there) to make CF look bad for posting product advertisements in comment threads, or is this legit?
vasilzhigilei 12 hours ago [-]
It's one of my first times posting on HN, thought this could be relevant helpful info for someone. Thanks for pointing out that it sounds salesy, rereading my comment I see it too now.
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days of reaction time), or every 3 days (4 days of reaction time)? Not every org is set up with 24/7 staffing, some people go on holidays, some public holidays extend to long weekends, etc. :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
The crazy thing? There are already two WiFi QR code standards, but they do not include the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
That's their "solution".
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local network cases. I think ideally there would be an easy way to add a CA that is trusted for a specific domain, or local ip address, then the device can generate its own certs from a local ca. And/or add trust for a self-signed cert with a longer lifetime.
> easier for a few big players in industry
Not necessarily. OP mentions that more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you and me, as customers of cert authorities, 47 days is where the agreed cut-off now sits (not 42).
https://en.wikipedia.org/wiki/Utah_Data_Center
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black-hat nation-state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
And what do certificate buyers gain? The ability for their site to be revoked or expired and thus no longer work.
I'd like to be corrected.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)
But no browser has support for it, so .. :/
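For reference, DANE pins a key in DNS (DNSSEC required) via a TLSA record; a DANE-EE / SPKI / SHA-256 pin looks like this, with the hash being a placeholder:

    ; generate the hash with:
    ;   openssl x509 -in cert.pem -noout -pubkey | openssl pkey -pubin -outform DER | openssl sha256
    _443._tcp.www.example.com. IN TLSA 3 1 1 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef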
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
[1]: https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal...
So for a bank, a private cert compromise is bad, for a regular low traffic website, probably not so much?
Sounds like your concept of the customer/provider relationship is inverted here.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
LOL. Old-fashioned enterprises are the worst at "oh, no, can't do that, need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to not revoke a cert that, according to the rules for CAs, the CA had to revoke. (In the end they got a few days' extension, everyone grumbled, and the CA got told to please write their customer contracts more clearly, but the idea is out there, and nobody likes CAs doing things they are not supposed to, even if through external force.)
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
News report: https://www.heise.de/en/news/DigiCert-Customer-seeks-to-exch...
nitty-gritty bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1910322#c8
some follow-on drama: https://news.ycombinator.com/item?id=43167087
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
That is the next step in nation state tapping of the internet.
A MITM cert would need to be manually trusted, which is a completely different thing.
This is already the case; CT doesn't rely on your specific served cert being comparable with others, but all certs for a domain being monitorable and auditable.
(This does, however, point to a current problem: more companies should be monitoring CT than are currently.)
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
This is a ridiculous straw man.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP3 performance enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.
It's been a huge pain as we have encountered a ton of bugs and missing features in libraries and applications to reload certs like this. And we have some really ugly workarounds in place, because some applications place a "reload a consul client" on the same level of "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
I applaud you for sticking to your guns though.
And the conventional wisdom for application management and deployments is - if it's painful, do it more. Like this, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.
And yes, some older application that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that and then ended up confused when this didn't work at all. Now it's fixed.
https://news.ycombinator.com/item?id=25380301
Except there are no APIs to rotate those. The infrastructure doesn't exist yet.
And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.
Microsoft has some technology that next to these tokens they also have a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.
Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.
https://learn.microsoft.com/en-us/entra/workload-id/workload...
Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up LetsEncrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.
In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
Going a few steps further, setting up something like Hashicorp Vault is not hard and regardless of org size; you need to do secret distribution somehow.
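The hacky version really is just a handful of openssl invocations, roughly like this (all names and lifetimes are placeholders; adjust the trust-store step to your platform):

    # 1) create the internal CA
    openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
      -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA"
    # 2) issue a wildcard cert signed by it
    openssl req -newkey rsa:2048 -nodes -keyout wild.key -out wild.csr \
      -subj "/CN=*.internal.example"
    printf "subjectAltName=DNS:*.internal.example,DNS:internal.example\n" > san.ext
    openssl x509 -req -in wild.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 825 -sha256 -extfile san.ext -out wild.crt
    # 3) the "single line in the bootstrap script": trust the CA, e.g. on Debian/Ubuntu
    cp ca.crt /usr/local/share/ca-certificates/example-internal.crt && update-ca-certificates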
My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints
This is fair. I assumed all small businesses would be tech startups, haha.
Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.
Congrats for securing your job by selling the free internet and your soul.
If someone doesn’t want to learn then nobody needs to help them for free.
Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.
If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?
> “Out of touch” is apt and you should probably reflect on that at length.
That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.
Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.
I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.
Or cheat and use tailscale to do the whole thing.
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
https://www.digicert.com/blog/https-only-features-in-browser...
A static page that hosts documentation on an internal network does not need encryption.
The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.
Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.
Of course, then there are the employees who could just intercept HTTP requests, and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download and install, then point and click with no knowledge. Seems like there is a market for simple and cheap solutions for internal networks, for small business. I could see myself making quite a bit off it, which I did in the mid-2000's, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on it's own even with an automated solution.
Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.
The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
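A rough outline with smallstep, for instance (commands are approximate; check the step-ca docs for your version, and the hostnames are placeholders):

    # one-time CA setup with an ACME provisioner
    step ca init --name "Internal CA" --dns ca.internal.example --address :8443 --provisioner admin
    step ca provisioner add acme --type ACME
    step-ca "$(step path)/config/ca.json"
    # any ACME client can then renew against it, e.g.
    certbot certonly --standalone -d app.internal.example \
      --server https://ca.internal.example:8443/acme/acme/directory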
[proceeds to describe a bunch of new infrastructure and automation you need to set up and monitor]
So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.
Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Just because something is possible in theory doesn't make it likely or worth the time invested.
You can put 8 locks on the door to your house but most people suffice with just one.
Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?
But it's not really a concern worth investing resources into for most.
Ah, the "both me and my attackers agree on what's important" fallacy.
What if they modify the man page response to include drive-by malware?
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum were draconian and over the top responses to a minor risk. This industry trade group is out of control.
End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.
Non-browser things usually don't care even if a cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
Why would browsers "most likely" enforce this change for internal CAs as well?
That said, it would be really nice if they supported DANE so that websites do not need CAs.
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
https://smallstep.com/docs/step-ca/
'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.
Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.
My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.
The devices are already managed; you've deployed them to your fleet.
No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!
Don't complain to me about 'your' choices. Self-selected problem if I've heard one.
Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.
Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.
Literal Clouds do this, why can't 'you'?
You're managing your machine deployments with something, so of course you just use that to include your cert, which isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.
To your point, people don't, but it's a perfectly viable path.
Containers/kubernetes, that's pipeline city, baby!
The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/
(disclaimer: I'm a founder at anchor.dev)
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
For the other case, perhaps renew the cert at a host that is allowed to do outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't allowed outside queries.
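Concretely, that can be as small as this (assuming Cloudflare-hosted DNS and the certbot-dns-cloudflare plugin; substitute your provider's plugin, and all hostnames are placeholders):

    # on the host that is allowed to reach the DNS provider
    certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d internal.example.com
    # then propagate to the restricted host and reload its web server
    rsync -a /etc/letsencrypt/live/internal.example.com/ internal-host:/etc/ssl/internal.example.com/
    ssh internal-host systemctl reload nginx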
Someone will fuck up accidentally, so production zones are usually gated somehow, sometimes with humans instead of pure automata.
Giving write access does not mean giving unrestricted write access
Also, another way (which I built in a previous compagny) is to create a simple certificate provider (API or whatever), integrated with whatever internal authentication scheme you are using, and are able to sign csr for you. A LE proxy, as you might call it
If it's just because your DNS is at a provider, you should be aware that it's possible to self-host DNS.
You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
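A complete Caddyfile for that is about three lines; Caddy obtains and renews the certificate itself and just forwards traffic (hostname and port are placeholders):

    app.example.com {
        reverse_proxy localhost:8080
    }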
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
Two pointers that might be of interest:
https://community.f5.com/discussions/technicalforum/upload-l...
https://clouddocs.f5.com/api/icontrol-rest/APIRef_tm_sys_cry...
Those tend to be quite brittle in reality. What’s the old adage about engineering vs architecture again?
Something like this I think: https://www.reddit.com/r/PeterExplainsTheJoke/comments/16141...
For some companies, it might be worth it to throw away a $100000 device and buy something better. For others it might not be worth it.
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their white towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
And in term of security, I think that it is a double edged sword:
- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- instead of having closed, read-only systems that connect outside only once a year or so to update their certificates, you will now have machines all around the world that have to allow quasi-permanent connections to certificate servers to keep the system updated all the time. If ever DigiCert's or Let's Encrypt's servers, or the "cert-updating client", is rooted or has a security issue, most servers around the world could be compromised in a very, very short time
As a side note, I'm totally laughing at the following explanation in the article:
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are not arbitrary values...

I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
Like, a private CA? All of these restrictions only apply to certificates issued under the WebTrust program. Your private CA can still issue 100-year certificates.
Support for cert and CA pinning is in a state that is much better than I thought it will be, at least for mobile apps. I'm impressed by Apple's ATS.
Yet, for instance, you can't pin a CA for just any domain; you always have to declare it up front for review, otherwise your app may not get accepted.
Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?
We'll keep abusing PKI for those use cases.
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
From http://crypto.stackexchange.com/questions/16364/why-do-nothi...
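To make the quoted idea concrete, here is a toy sketch (not the numsgen implementation itself): the seed strings and the "backdoorable" predicate are hypothetical placeholders for whatever property an attacker is secretly hunting for.

    import hashlib

    # Enumerate "plausible" seeds and keep hashing until the derived constant
    # happens to satisfy an attacker-chosen predicate. The predicate below is
    # a stand-in with a 1-in-256 hit rate; a real one would be far rarer.
    def looks_backdoorable(constant: bytes) -> bool:
        return constant[0] == 0x00

    plausible_seeds = [
        "In the beginning God created the heaven and the earth.",
        "To be, or not to be, that is the question.",
        "1969-07-20",  # a memorable date
    ]

    for seed in plausible_seeds:
        constant = hashlib.sha256(seed.encode()).digest()
        if looks_backdoorable(constant):
            print(f"'{seed}' yields an innocuous-looking but chosen constant: {constant.hex()}")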
Browser certificate pinning has been deprecated since 2018. No current browser supports HPKP.
There are alternatives to pinning: DNS CAA records and monitoring CT logs.
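If you want to script a CAA check, it's only a couple of lines; this sketch assumes the third-party dnspython package and uses example.com as a placeholder domain (which may not publish a CAA record at all).

    import dns.resolver  # third-party package: dnspython

    # CAA records tell CAs who is allowed to issue for this domain, e.g.
    # a record like `0 issue "letsencrypt.org"` restricts issuance to that CA.
    for rdata in dns.resolver.resolve("example.com", "CAA"):
        print(rdata.flags, rdata.tag, rdata.value)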
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
There is the web as it always has been on http/1.1: a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is the modern http/2, http/3, CA-TLS-only web, hosted as a service on some other website or cloud, mostly to do serious business and make money. The modern web's CA-TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
Which for some threat models is sufficiently good.
It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.
It’s how I know what my kids are up to.
It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.
Identity really is security.
More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with a "my threat model is better than yours" or somehow that my threat model is incorrect is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPSec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.
So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.
For example, if the MITM requires physical access to the machine, you'd also have to cover physical security first. As long as that is not the case, who cares about some connection hijack? And if the data you are communicating isn't really worth encrypting but has to be because of regulation, you are just doing the dance without it being worth it.
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...
However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.
ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.
[0] https://tlswg.org/draft-ietf-tls-esni/draft-ietf-tls-esni.ht...
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)
Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?
let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.
of course the L3/L4 can be (non) trivially intercepted by anyone, but that is exactly what TLS protects you against.
if simple L4 interception were all that is required, enterprises wouldn't have to install a trust root on end devices, in order to MITM all TLS connections.
the comment you were replying to is
> How is an attacker going to MITM an encrypted connection they don't have the keys for
of course they can intercept the connection, but they can't MITM it in the sense that MITM means -- read the communications. the kind of "MITM" / interception that you are talking about is simply what routers do anyway!
*I mistakenly wrote "certificate" here initially. Sorry.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so it has to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
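For the "verify the fingerprint" step mentioned above, the check can be reproduced offline in a few lines; a minimal sketch, assuming a host public key file in the standard one-line OpenSSH format (the path is illustrative). It recomputes the same SHA256 fingerprint that ssh-keygen -lf prints.

    import base64
    import hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        # An OpenSSH public key line looks like: "ssh-ed25519 AAAAC3... comment".
        # The SHA256 fingerprint is base64(sha256(decoded key blob)), no padding.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

    # Compare this against what the client reports on first connect.
    with open("ssh_host_ed25519_key.pub") as f:  # illustrative path
        print(ssh_sha256_fingerprint(f.read()))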
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.
A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.
https://self-signed.badssl.com/
and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.
I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.
I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.
Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.
To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)
TOFU on ssh server keys... it's still bad, but less people are interested in intercepting ssh vs tls.
Also, I agree that TOFU in its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.
Alternatively, you could manually verify + pin certs after first use.
The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.
1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.
2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.
I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
(emphasis added)
Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making the security theater even worse. Or, hopefully, they rightly switch to CAA.
Health Systems love pinning certs, and we use an ALB with 90 day certs, they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from there. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key and we're back to where we were.
Yeah not sure about that one...
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone who owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.
Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.
And a significant part of the security is concentrated in the way Certificate Authorities validate domain ownership (the so-called challenges).
Next, maybe clients can run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com with a record value of <unique-val-2>. The client checks that such a record is created and thus that the server has proven it controls the domain name.
Auth through DNS, that's what it is. We will just need to speed up the DNS system.
I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).
Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...
[0] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
I am not saying this scheme is really practical currently.
That's just an imaginary situation coming to mind, illustrating the increased importance of the domain ownership validation procedures used by Certificate Authorities. Essentially the security now comes down to domain ownership validation.
Also a correction. The server doesn't simply put <unique-val-2>; it puts sha256(<unique-val-2> || '.' || <fingerprint of the public key of the account>).
Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint used during domain ownership validation confirms that the challenge response was intended for that specific account.
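For reference, that construction matches the DNS-01 challenge in RFC 8555: the TXT value is the base64url SHA-256 of "token.thumbprint", where the thumbprint is the account key's JWK thumbprint. A minimal sketch with hypothetical token and thumbprint values:

    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    # Hypothetical values: the token comes from the ACME server, the thumbprint
    # is the SHA-256 JWK thumbprint of the account public key (RFC 7638).
    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
    account_thumbprint = "LPJNul-wow4m6Dsqxbning"

    key_authorization = f"{token}.{account_thumbprint}"
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

    # The client provisions _acme-challenge.<domain> TXT <txt_value>,
    # then the CA queries DNS to confirm control of the name.
    print(txt_value)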
I am not suggesting ACME can be trivially broken.
I just realized that risks of TLS certs breaking is not just risk of public key crypto being broken, but also includes the risks of domain ownership validation protocols.
I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.
For stuff that genuinely cannot be automated (are you sure you're sure) these become monthly maintenance tasks, something cert management tools are also now starting to help with.
We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between the key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example, why did the RSA root work even though my server uses an ECDSA cert? Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
- If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/
- OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/
- SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/
- Newsletter: https://www.feistyduck.com/newsletter/
- If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/
Yep, that's me.
Thanks for the blog post!
No idea why the RSA root worked even though the server used an ECDSA cert — maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
It also lowers the amount of time it’d take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn’t 2025.)
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
Self-signed custom certs also do that. But those are demonized.
Also, SSL kind of tries to define an IP/DNS certification of ownership.
There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.
But, yeah, 47 days is going to wreak havoc on network gear and weird devices.
The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.
Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you, it may be the other side. You specifically might not be worth someone's time, but your bank and all of its other customers too, probably is.
OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.
Edit: it’s configured under Trigger -> Outbound Probe -> “SSL Certificate Minimum Expiration Duration”
I tend to have secondary scripts that check if the cert in certbot's dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services want to be reloaded to pick up a new cert, etc., so I put that glue in my own script and run it from cron or a systemd timer.
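A minimal sketch of that kind of glue, with hypothetical paths and service names; a real deploy hook would add error handling and any format conversion the service needs.

    import shutil
    import subprocess
    from pathlib import Path

    # Illustrative paths: certbot's live dir and where the service expects its cert.
    src = Path("/etc/letsencrypt/live/example.com/fullchain.pem")
    dst = Path("/etc/myservice/tls/fullchain.pem")

    # Only act when certbot has produced a newer cert than the one installed.
    if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
        shutil.copy2(src, dst)
        # Ask the (illustrative) service to pick up the new cert.
        subprocess.run(["systemctl", "reload", "myservice"], check=True)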
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles and firmware is not updated by the vendors unless it has to be. And in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. God for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
Perhaps the new requirements will give them additional incentives.
The larger issue is actually our desire to deprecate cipher suites so rapidly though, those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys “D” rating after having an “A+” rating barely a year ago.
How much time are we spending on this? The NSA is literally already in the walls.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
I use this to sync users between small, experimental cluster nodes.
Some notes I have taken: https://notes.bayindirh.io/notes/System+Administration/Synci...
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
* https://github.com/srvrco/getssl
It goes from a "rather nice to have" to "effectively mandatory".
I could have probably done more with Lets Encrypt automation to stay with my old VPS but given that all my professional work is with AWS its really less mental work to drop my old VPS.
Times they are a changing
Or just pay Amazon, I guess. Easier than thinking.
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.
But there's also security implications: https://news.ycombinator.com/item?id=43708319
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.
But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.
This is where the disconnect comes in. Me and you know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
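To make the homograph point above concrete, a tiny sketch; the punycode output noted in the comments is indicative rather than exact.

    # The label below uses Cyrillic "а" (U+0430) in place of Latin "a".
    fake = "w\u0430lmart.com"
    real = "walmart.com"

    print(fake == real)         # False: different code points, different domain
    print(fake.encode("idna"))  # punycode form, something like b"xn--..."
    print(real.encode("idna"))  # b"walmart.com"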
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast to not MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.
I said exactly the words I meant.
> I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.
That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.
You don’t mean “Walmart”, but 99% of the population thinks you do.
Is it OK to trust this for anything important? Probably not. Is it OK to type your credit card number in? Sure. You have fraud protection.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
[1] https://web.archive.org/web/20171222000208/https://stripe.ia...
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs in github: https://github.com/okta/okta-pki - How they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
You don't say. Why are the defaults already 90 days or less then?
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
Why do you think all the average web sites have to handle members?
Forums? Nope. Blogging platforms? Nope. News sites? Nope. WordPress-powered personal pages? Nope. Mailing lists with web-based management? Nope. They all have members.
What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.
So most of the sites have much more to protect than meets the eye.
Neglecting the independent web is exactly what led to it dying out and the Internet becoming corporate algorithm-driven analytics machine. Making it harder to maintain your own, independent website, which does not rely on any 3rd-party to host or update, will just make less people bother.
Web is a bit different than you envision/think.
Why can't this site just upload HTML files to their web server?
> Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...
Any non predatory practices you can add to the list?
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest/transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative feedback between two new versions, assuming that you genuinely don't know which version is better for the users.
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
We aren't cracking highly secure key pairs. We're making them.
On my computer, creating a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
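For anyone who wants to check that back-of-the-envelope arithmetic, a few lines reproduce it (one second of one core on a 65 W / 16-core CPU, roughly 8 renewals a year, about a billion sites; all figures are the estimates from the comment above, not measurements):

    watts_per_core = 65 / 16          # crude per-core share of a 65 W TDP
    seconds_per_keygen = 1.0          # ~1 s for a 4096-bit RSA key, per the comment
    wh_per_keygen = watts_per_core * seconds_per_keygen / 3600
    print(round(wh_per_keygen, 4))    # ~0.0011 Wh

    sites = 1_000_000_000
    renewals_per_year = 8             # just under 47-day lifetimes
    kwh_per_year = sites * renewals_per_year * wh_per_keygen / 1000
    print(round(kwh_per_year))        # ~9000 kWh worldwide per year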
but you think it would take a decade for the entire internet to use as much power as a single AI video?
That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. But doing a "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
How "should" it work? Is there a known-better way?
I am aware of them.
As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.
Issuing CAs to users, especially if they expire is another thing. From a UX perspective, we can tie password credentials to things like on-site Wifi and web site access (e.g., support wiki).
So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.
It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).
Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophy which will push industry to actually adopt simpler, saner, more secure solutions. I've proposed one years ago but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
There should be one change, from 365 to 47 days. This industry doesn't need constant changes; the shorter limit will force everyone to automate renewals anyway.
I fairly regularly get cert expired problems because the admin is doing it as the yak shaving for a secondary hobby
Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I’m considering going back to traditional certs because I geo-balance the DNS which doesn’t work for letsencrypt.
The issue is actually that I have multiple domains handled multiple ways and they all need to be letsencrypt capable for it to work and generate a combined cert with SAN’s attached.
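For the combined-cert-with-SANs part, most ACME clients will build this for you, but here is a minimal sketch of the underlying CSR using the third-party cryptography package; the domain names are placeholders.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())

    # One CSR covering several hostnames via the SubjectAlternativeName extension.
    names = ["irc.example.org", "web.example.org", "irc.example.net"]  # placeholders
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, names[0])]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(d) for d in names]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())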
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
Nope. People will create self-signed certs and tell people to just click "accept".
If you can't make this happen, don't use WebPKI and use internal PKI.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
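For the "monitor from a different system" part, an external expiry check is only a few lines with the standard library; the host name is illustrative and the alert threshold is a matter of taste.

    import datetime
    import socket
    import ssl

    def days_until_expiry(host: str, port: int = 443) -> float:
        # Connect, complete the TLS handshake, and read the peer's validated cert.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]  # e.g. 'Jun  1 12:00:00 2025 GMT'
        expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
        return (expiry - datetime.datetime.utcnow()).total_seconds() / 86400

    # Alert well before the deadline, e.g. when fewer than 10 days remain.
    print(days_until_expiry("example.com"))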
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participant's customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work, we can't just download whatever app someone found on the Internet that solves the issue.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate transparency and Multi-Perspective Issuance Corroboration are examples of innovations without bothering people.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian weak keys, or heartbleed -like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
The ballot is nothing but expected
The whole industry has been moving in this direction for the last decade
So there is nothing much to say
Except that if you waited until the last moment, well, you will have to be in a hurry. (non)Actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those who resist, those who have a human perform the yearly renewal. Let's see how stupid it can get.
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
BGP hijackings have been uncovered in the last 5 years and MPIC does make this more difficult. https://en.wikipedia.org/wiki/BGP_hijacking
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good. It's the same with this change, and you have plenty of time to prepare for it.
I thought we had CT for this.
> CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good.
Fair.
> It's the same with this change, and you have plenty of time to prepare for it.
Not so sure on this one, I think it's basically a result of a security "purity spiral". Yes, it will achieve better certificate hygiene, but it will also create a lot of security busywork that could be better spent in other parts of the ecosystem that have much worse problems. The decision to make something opt-in mandatory forcibly allocates other people's labour.
--
The maximum cert lifetime will gradually go down. The CA/B forum could adjust the timeline if big challenges are uncovered.
I doubt they expect this to be necessary. I suspect that companies will discover that automation is already possible for their systems and that new solutions will be developed for most remaining gaps, in part because of this announced timeline.
This will save people time in the long run. It is forced upon you, and that's frustrating, but you do have nearly a year before the first change. It's not going down to 47 days in one go.
I'm not saying that no one will renew certificates manually every month. I do think it'll be rare, and even more rare for there to be a technical reason for it.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
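You can see that behaviour with any stock TLS client, not just a browser. A quick sketch in Python, assuming badssl.com's public expired-cert test host is still up:

    import socket
    import ssl

    # A default-configured client refuses an expired cert before any
    # application data is exchanged (expired.badssl.com is a public test host).
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(("expired.badssl.com", 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname="expired.badssl.com"):
                pass
    except ssl.SSLCertVerificationError as err:
        print("handshake rejected:", err.verify_message)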
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic password so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs are a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
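The practical safety net is to monitor expiry independently of the renewal job, so a silently broken renewal gets noticed with weeks to spare. A minimal sketch in Python, with a hypothetical host and threshold you'd substitute for your own:

    import socket
    import ssl
    import time

    HOST = "example.com"   # hypothetical host to monitor; substitute your own
    WARN_DAYS = 14         # alert well before expiry so a human can step in

    def days_until_expiry(host, port=443):
        # Do a normal TLS handshake and read the leaf cert's notAfter field
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    if __name__ == "__main__":
        remaining = days_until_expiry(HOST)
        if remaining < WARN_DAYS:
            print(f"WARNING: {HOST} cert expires in {remaining:.0f} days")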
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to get the rotation time low enough that certificates will rotate out before any legal procedure aimed at stopping that rotation can kick in.
This does very little to improve security.
Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CA's that they're too special to have their certs revoked, or even start embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.
And also, it probably won't avoid problems. Because yes, the goal is automation, and a couple of weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates its certificates every 24 hours. Their site was broken and the subreddit for their company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
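For the common case it really is about this much work. A rough sketch, assuming certbot is installed, nginx is the thing serving the cert, and you run this daily from cron or a systemd timer; adapt the deploy hook to whatever your stack actually is:

    import subprocess
    import sys

    # certbot only renews certs that are actually close to expiry,
    # so running this daily is cheap.
    def renew():
        result = subprocess.run(
            ["certbot", "renew", "--quiet",
             "--deploy-hook", "systemctl reload nginx"],
            capture_output=True, text=True)
        if result.returncode != 0:
            # Make failures loud; a silently broken renewal job is the real risk
            print("certbot renew failed:", result.stderr, file=sys.stderr)
            sys.exit(1)

    if __name__ == "__main__":
        renew()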
Keep up the good work! ;-)
perverse incentives indeed.