snailmailman a day ago

I have had the flag for this setting enabled for quite some time. It’s never caused any issues. I have only seen the warning pop up once, for a cert that I had just issued a second prior. The cert was logged properly and the page loaded another second later. Very quick.

  • tialaramex 9 hours ago

    > I have only seen it pop-up once- for a cert that I had just issued a second prior. The cert was logged properly and the page loaded another second later.

    Hmm. Possibly a timing issue? It is conventional to slightly "back date" certificates so that they claim to have been issued an hour ago: if users forgot to adjust a PC for the clocks changing, your site should still work, and it was seen as easier to just back date the certificates. SCTs, however, are not back dated, because each log has a Maximum Merge Delay, conventionally set to 24 hours - back dating would give you one hour less to fix any technical problems, and if you miss that 24-hour deadline you're out and must start over.

    Thus if your system had the time slightly wrong (say, off by 10 seconds) but had Transparency checks enabled I can imagine it would reject a freshly issued cert because the certificate says it was issued almost an hour ago but the SCTs are in the near future.

    • cmeacham98 5 hours ago

      More likely guess, given a refresh fixed it: there was a slight delay with the CT log and it hadn't started returning the precertificate yet.

      • tialaramex 4 hours ago

        The browser isn't talking to the CT log, nor to the CA. It's just looking at the documents it was given, typically the certificate for a single intermediate and then the certificate for server itself which has the SCTs baked inside it.

        Suppose that you get a cert minted for your new server on 1st March at 14:56:09 UTC. The CA does a few checks, concludes this is OK, and writes a to-be-signed certificate back dated to 1st March 13:56:11 UTC. It then mutates this tbsCert by adding the poison extension (per the CT design), signs that, and sends it to two CT logs shortly after; each CT log accepts the poisoned pre-cert and provides an SCT dated 1st March 14:56:15. The CA fastens these SCTs to the tbsCert it made before and signs all of that, which it provides back as a new certificate. Delivery completes at 14:56:21, only 12 seconds after you started, and your server can use it immediately.

        Unfortunately your PC's clock is 15 seconds slow: it still believes it is 14:56:06, so when it tries to visit the web site and sees the SCTs stamped 14:56:15, those are from the future and not yet valid. A message is shown explaining that this isn't valid; you take a moment to read it and then try refreshing at what you believe (based on your PC clock) is 14:56:15. This time it all works, because now the documents are valid.
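
        The timeline above can be sketched in a few lines. The timestamps are the hypothetical ones from this scenario, and whether Firefox's CT check actually fails this way on skewed clocks is conjecture, not confirmed behavior:

```python
from datetime import datetime, timedelta, timezone

UTC = timezone.utc

def time_plausible(not_before, sct_timestamps, local_now):
    # A client distrusts documents dated in its own future: the certificate's
    # notBefore and every SCT timestamp must not be ahead of the local clock.
    # (A sketch of the suspected failure mode, not Firefox's validation code.)
    return not_before <= local_now and all(ts <= local_now for ts in sct_timestamps)

not_before = datetime(2025, 3, 1, 13, 56, 11, tzinfo=UTC)  # back dated ~1 hour
scts = [datetime(2025, 3, 1, 14, 56, 15, tzinfo=UTC)]      # stamped at real issuance

slow_clock = datetime(2025, 3, 1, 14, 56, 6, tzinfo=UTC)   # PC is 15 seconds slow
print(time_plausible(not_before, scts, slow_clock))                          # False
print(time_plausible(not_before, scts, slow_clock + timedelta(seconds=15)))  # True
```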

samgranieri a day ago

Hmm. I wonder how this will work with certificates generated by enterprise or private certificate authorities. Specifically, I use Caddy for local web development and it generates a snake oil CA for anything on *.localhost using code from step-ca.

I also use step-ca and BIND to run a homelab top-level domain and generate certs using RFC 2136. I have to install that root CA cert everywhere, but it’s worth it.

  • Habgdnv a day ago

    As of now, such stricter certificate requirements only apply to publicly trusted CAs that ship with the browser. Custom-added CAs are not subject to these requirements—this applies to all major browsers.

    I haven't tested Firefox's implementation yet, but I expect your private CA to continue working as expected since it is manually added.

    Private CAs can:

    * Issue longer-lived certificates, even 500 years if you want. Public CAs are limited to 1 year I think, or 2? I think it was 1.

    * Can use weaker algorithms or older standards if they want.

    * Are not subject to browser revocation policies - no need for OCSP/CRL etc.

    * More things that I do not know?

    • Uvix a day ago

      Public CAs are currently limited to 398 days (effectively 13 months).

      • tialaramex 9 hours ago

        For anybody wondering: the weird amount of time is because with a commercial CA it needs to be possible to "carry" some validity during renewal. If I need a $10 Doodad and they're valid for exactly one calendar year, and I renew the Doodad on Monday instead of the following Sunday because I know I'll forget at the weekend, I am losing almost 20¢ of value. People get disproportionately passionate about stuff like this. So the CAs credited your remaining time on the previous certificate if you renewed with them: if you had six weeks to go and renewed a 3-year cert early, you'd be issued a 3-years-plus-6-weeks cert.

        As the maximum expiry shrank (to improve agility and encourage automation), the slack for granting such extra periods shrank too. With "3 years" it was actually 39 months, maybe a bit more depending on how you squint; now it's exactly 398 days because Apple said so.

braiamp a day ago

I am on Debian Firefox 135.0.1 and https://no-sct.badssl.com/ doesn't error out as expected. Is Debian doing something different?

  • jeroenhd a day ago

    135.0.1 on Ubuntu is warning me. Maybe Mozilla is doing a delayed rollout?

    For context, in about:config my security.pki.certificate_transparency.mode is set to 2. According to https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra... if it's on 0 (disabled) or 1 (not enforcing, collecting telemetry only), you can enable it.

    I can imagine Mozilla setting that pref to 1 by default (collecting telemetry on sites you visit) and Debian overriding it for privacy purposes.

    • lxgr a day ago

      Does the browser actually communicate with any external service for enforcing CT?

      I was under the impression it just checked the certificate for an inclusion proof, and actual monitoring of consistency between these proofs and logs is done by non-browser entities.

      • tialaramex 9 hours ago

        I assume Firefox doesn't implement this, but one idea at the core of a full CT system is "gossip". Suppose your browser visits a site which has a dodgy cert with a bogus SCT; there should be some chance that the browser tells other people who care, maybe by anonymously sending some info to a gossip integrator. Browsers don't check that every SCT they see makes consistent sense - if your browser is shown two SCTs which could not exist in the same universe, it won't realise - but the hypothetical gossip integrator can see if any browsers sampled SCTs which are not mutually coherent and raise alarms.

        This would detect, e.g., the US government forcing Google's log to cover up a CIA-obtained certificate for north-korean-military.example, so that it works fine for visitors but the Koreans can't see it in the public logs. There's no sign that anything like this has ever happened, but in theory it would be easier to pull off since gossip is not implemented.

      • tgsovlerkhgsel a day ago

        No, but I assume Mozilla was first collecting telemetry to see whether enabling CT would cause user-visible errors or not.

        • lxgr a day ago

          Ah, good point – presumably 2 also sends telemetry to Mozilla?

          • tgsovlerkhgsel a day ago

            I would expect (without having checked) that both 1 and 2 send telemetry to Mozilla if and only if the global telemetry switch is on (which I think it is by default).

  • saint_yossarian a day ago

    I'm on Debian sid / Firefox 135.0.1-1 and do get the warning.

  • wooque a day ago

    Debian 12 with Firefox 135.0.1 and I do get a warning.

  • lxgr a day ago

    135.0.1 on macOS is warning me as well.

  • BSDobelix a day ago

    Librewolf 135.0.1 is working as expected (FreeBSD, Windows and Linux)

Eikon a day ago

Shameless plug: Check out my Certificate Transparency monitor at https://www.merklemap.com

The scale is massive, I just crossed 100B rows in the main database! :)

  • Etheryte a day ago

    I'm clearly not the target audience for this, so excuse me if this is a dumb question: what is this tool used for? Who would usually use it and for what purpose?

    • Eikon a day ago

      Anyone setting up infrastructure, security researchers, security teams and IT teams.

      It’s also very useful in the brand management field, especially for detecting phishing websites.

  • tgsovlerkhgsel a day ago

    Are you continuously monitoring consistency proofs? Or in other words, would someone (you or someone else) actually notice if a log changed its contents retroactively?

    • Eikon a day ago

      Not yet, but that’s definitely the short term plan!

  • AznHisoka a day ago

    Why does it only show a few subdomains for .statuspage.io? I would have expected at least 10K or so. https://www.merklemap.com/search?query=*.statuspage.io&page=...

    Is my query wrong, or are you just intentionally showing fewer results if you’re not paying?

    • Eikon a day ago

      > Why does it only show a few subdomains for .statuspage.io? I would have expected at least 10K or so. https://www.merklemap.com/search?query=*.statuspage.io&page=...

      Because they have a wildcard for *.statuspage.io, which they are probably hosting their pages on.

      > Is my query wrong, or are you just intentionally showing fewer results if you’re not paying?

      No, results are the same but not sorted.

  • antonios a day ago

    That's interesting, can you share more information about your tech stack?

    • Eikon a day ago

      Merklemap runs PostgreSQL as the primary database, currently at ~18TB on NVMe storage, with around 30TB of actual certificates stored on S3.

      The backend is implemented in Rust (handling web services, search functionality, and data ingestion pipelines).

      The frontend is built with Next.js.

  • ambigious7777 17 hours ago

    Why not use something like crt.sh?

    • Eikon 12 hours ago

      Why not use something like Merklemap?

  • immibis a day ago

    I tried to do something like this one time and had a problem just finding the logs. All information on the internet points to the fact that certain logs exist, but not how to access them. Are they not public access? Do you have a B2B relationship with the companies like Cloudflare that run logs?

    • tialaramex 19 hours ago

      They're required to be public services. https://crt.sh/monitored-logs is the list of logs monitored by crt.sh (a public log monitor operated by Sectigo, a commercial CA), if that helps. Each of the major browsers also publishes which logs they trust and provides information about e.g. distrust of logs. Is the problem that you couldn't figure out how to use a log? A log doesn't just have a web site where you can type in searches; you need to use its web API as defined in the protocol documentation.
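
      For reference, RFC 6962 defines the read API that every public log must expose over plain HTTPS. The base URL below is a placeholder - real base URLs come from the browsers' published log lists:

```shell
# Placeholder base URL - substitute one from a browser's CT log list.
LOG="https://ct.example.org/2025"

# Signed tree head: the log's current size and Merkle root hash.
curl -s "$LOG/ct/v1/get-sth" || echo "placeholder URL, not a real log"

# Raw entries 0..2; leaf data comes back base64-encoded in JSON.
curl -s "$LOG/ct/v1/get-entries?start=0&end=2" || echo "placeholder URL, not a real log"
```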

joelthelion 2 days ago

Can someone explain in a nutshell what CT is, and how does it help security for the average user?

  • tgsovlerkhgsel a day ago

    It's a public, tamper-proof log of all certificates issued.

    When a CA issues a certificate, it sends a copy to at least two different logs, gets a signed "receipt", and the receipt needs to be included in the certificate or browsers won't accept it.

    The log then publishes the certificate. This means that a CA cannot issue a certificate (that browsers would accept) without including it in the log. Even if a government compels a CA, someone compromises it, or someone even steals the CA's key, they'd have to either do the same to two CT logs as well, or publish the misissued certificate.

    Operators of large web sites then can and should monitor the CT logs to make sure that nobody issued a certificate for their domains, and they can and will raise hell if they see that happen. If e.g. a government compels a CA to issue a MitM certificate, or a CA screws up and issues a fake cert, and this cert is only used to attack a single user, it would have been unlikely to be detected in the past (unless that user catches it, nobody else would know about the bad certificate). Now, this is no longer possible without letting the world know about the existence of the bad cert.

    There are also some interesting properties of the logs that make it harder for a government to compel the log to hide a certificate or to modify the log later. Essentially, you can store a hash representing the content of the log at any time, and then for any future state, the log can prove that the new state contains all the old contents. The "receipts" mentioned above (SCTs) are also a promise to include a certificate by a certain time, so if a log issues an SCT then publishes a new state more than a day later that doesn't include the certificate, that state + the SCT are proof that the log is bad.
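
    Those "interesting properties" come from the log being a Merkle tree (RFC 6962). A minimal sketch of leaf hashing, root computation, and inclusion-proof verification - illustrative only, not a log implementation:

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 prefixes leaves with 0x00 so a leaf can never be
    # confused with an interior node.
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # Interior nodes get a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def _split(n: int) -> int:
    # Largest power of two strictly smaller than n (n >= 2).
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def merkle_root(hashes):
    # RFC 6962 Merkle Tree Hash over a non-empty list of leaf hashes.
    if len(hashes) == 1:
        return hashes[0]
    k = _split(len(hashes))
    return node_hash(merkle_root(hashes[:k]), merkle_root(hashes[k:]))

def inclusion_proof(hashes, m: int):
    # Audit path for leaf m: sibling subtree roots, nearest first.
    if len(hashes) == 1:
        return []
    k = _split(len(hashes))
    if m < k:
        return inclusion_proof(hashes[:k], m) + [merkle_root(hashes[k:])]
    return inclusion_proof(hashes[k:], m - k) + [merkle_root(hashes[:k])]

def root_from_proof(leaf: bytes, m: int, n: int, proof) -> bytes:
    # Recompute the root from a leaf hash and its audit path; a verifier
    # compares the result against the signed tree head it trusts.
    if n == 1:
        return leaf
    k = _split(n)
    if m < k:
        return node_hash(root_from_proof(leaf, m, k, proof[:-1]), proof[-1])
    return node_hash(proof[-1], root_from_proof(leaf, m - k, n - k, proof[:-1]))

# A toy "log" of seven certificates:
leaves = [leaf_hash(f"cert-{i}".encode()) for i in range(7)]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 3)
assert root_from_proof(leaves[3], 3, 7, proof) == root  # cert 3 is in the log
```

Changing any logged entry changes the root, which is why a log cannot quietly rewrite history once clients hold an old tree head.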

    • xg15 21 hours ago

      > Operators of large web sites then can and should monitor the CT logs to make sure that nobody issued a certificate for their domains, and they can and will raise hell if they see that happen.

      The tech is definitely an improvement from the previous situation, but I've always wondered about this step: Suppose you've found an unauthorized certificate for your site in the log (and you're not Google, Apple or Microsoft). Then what? What can you actually do about it?

      • cpach 20 hours ago

        Good question!

        When you inform the CA about the incident they are required to revoke the certificate. AFAICT they are also expected to file an incident report to Mozilla’s Bugzilla bug tracker (they have a section just for stuff like this).

        The operations of Certificate Authorities are strictly regulated by policies such as the “Baseline Requirements” (Baseline Requirements for the Issuance and Management of Publicly-Trusted TLS Server Certificates), Mozilla’s Root Store Policy, and the policies of the Common CA Database. If a CA fails to live up to these requirements, the major browsers will kick its root cert out of their root stores. (This is not an empty threat.)

        You can find some more info here:

        https://wiki.mozilla.org/CA/Responding_To_An_Incident

        https://cabforum.org/working-groups/server/baseline-requirem... (section 4.9)

        The bugzilla I mentioned is here: https://bugzilla.mozilla.org/buglist.cgi?product=CA%20Progra... – AFAICT, a lot of the deviations are reported by CA staff themselves. So the whole system is actually quite open and self-regulating, not as corrupt and scammy as many seem to believe.

    • brookst a day ago

      Thanks for the great explanation of both tech design and real world benefits!

  • perching_aix a day ago

    CT is an append-only distributed log of certificate issuances. People and client software can use it to check whether a certificate was actually logged publicly when it was issued, or whether certificates for a domain are being issued unexpectedly or by multiple CAs (the latter possibly indicating CA compromise). CA meaning Certificate Authority, the organizations that issue certificates.

    This provides a further layer of technological defense against your web browser traffic being intercepted and potentially tampered with.

    In practice a regular person is unlikely to run into this, because web PKI is mostly working as expected, so there's no reason for the edge cases to happen en masse. This change is covering one such edge case.

    No idea how the typical corporate interception solutions (e.g. Zscaler) circumvent it in other browsers where this check has long been implemented.

    • q2dg a day ago

      Will Mitmproxy stop working?

      • perching_aix a day ago

        I believe so. You'll need to disable CT enforcement, or temporarily add your SPKI hash to the ignore list in the browser settings, to get it working. [0] I guess this is also how corporations get around this issue? Still unsure.

        [0] https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...

        • mcpherrinm a day ago

          No. CT is only required for public CAs. You only need those browser policy settings if you’re using a public CA without CT.

          • perching_aix a day ago

            I'd imagine this is why certs that terminate in root certificates manually added to the trust store will work fine then [as stated by other comments]?

            • mcpherrinm a day ago

              Right, any CA you add yourself that isn’t part of what Mozilla ships isn’t considered a publicly trusted CA.

  • cyberax a day ago

    Basically, a git repository for all the issued certificates.

schoen 5 days ago

Congratulations! That's terrific news.

megamorf a day ago

Doesn't this effectively render corporate CAs useless?

  • archi42 a day ago

    Another comment mentioned this [0]: enterprises and people running a private CA can set "security.pki.certificate_transparency.disable_for_hosts" to disable CT for certain domains (plus all their subdomains).

    I just hope they automatically disable it for non-public TLDs, both from IANA and RFC 6762.

    [0] https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
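
    For anyone wanting to try this, prefs like these can go in about:config or a user.js file. The pref names come from the wiki page above; the hostnames below are placeholders and the comma-separated value format is an assumption, so verify it against the wiki before relying on it:

```javascript
// user.js - example prefs for a homelab running a private CA.
// Pref names are from the Mozilla wiki linked above; the hostnames are
// placeholders, and the comma-separated list format is an assumption.
user_pref("security.pki.certificate_transparency.mode", 2); // full enforcement
user_pref("security.pki.certificate_transparency.disable_for_hosts",
          "home.example.internal,nas.example.internal");
```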

  • zinekeller a day ago

    > Doesn't this effectively render corporate CAs useless?

    All of the browsers ignore transparency for enterprise roots. To determine which is which, the list of actual public roots is stored separately in the CA database: it's listed in chrome://certificate-manager/crscerts for Chrome, and shown as a "Builtin Object Token" in Firefox's Certificate Manager.

  • Eikon a day ago

    No, it just makes any CA accountable for all the certs they issue.

perching_aix a day ago

Would be cool if DANE/TLSA record checks were also implemented. Not sure why browsers are not adopting it.

  • cpach a day ago

    DNSSEC has aged very poorly. I also believe it operates at the wrong layer. When you surf to chase.com you want to be sure that the website you see is actually JPMorganChase and not Mallory’s fake bank site. That’s why we have HTTPS and the WebPKI. If your local DNS server is poisoned somehow that’s obviously not good, but it cannot easily send you to a fake version of chase.com.

    Part of why it’s so hard for Mallory to create a fake version of a bank site is Certificate Transparency. It makes it much much harder to issue a forged certificate that a browser such as Chrome, Safari or Firefox will accept.

    For further info about the flaws of DNSSEC I can recommend this article: https://sockpuppet.org/blog/2015/01/15/against-dnssec/ It’s from 2015 but I don’t think anything has really changed since then.

    • perching_aix a day ago

      A lot of things are certainly different since that article's release - the crypto, for example, but also the existence of DoH/DoT, and the fact that DNSSEC is leaps and bounds more deployed. They also talk about key pinning, but key pinning has been dead for a while, replaced by exactly CT.

      I'm also not sure how much to trust the author. The writing is very odd language wise and they seem to have quite the axe to grind even with just public CA-based PKI, let alone their combination. The FAQ they link to also makes no sense to me:

      > Under DNSSEC/DANE, CA certificates still get validated. How could a SIGINT agency forge a TLS certificate solely using DNSSEC? By corrupting one of the hundreds of CAs trusted by browsers.

      Combating exactly that is what I'd want TLSA enforcement for.

      • tptacek a day ago

        The existence of DoH hurts DNSSEC, it doesn't help it. While privacy is the motivating use case for DoH, it's also the case that on-path attackers can't corrupt the results of a DoH query; they have to move upstream of it.

        The dream of TLSA as a bulwark against suborned CAs has always been problematic, because the security of TLSA records collapses down to that of the TLD operators, the most popular of which are state actors or proxies for them, and most of the remainder are essentially e-commerce firms, not trust anchors.

        But that doesn't matter, because TLSA as an alternative to the WebPKI is already dead on arrival. So many people have problematic access to DNS that no browser can ship hard-fail DANE; in the (extraordinarily unlikely) future world where mainstream browsers do DANE, everybody will have soft-fail DANE falling back to the WebPKI. So, instead of a small number of (state-run!) PKI roots, you'll have the thousands of legacy operators plus the state-run PKI roots.

        This problem motivated the design of "stapling" protocols, where we'd basically throw away the DNS part of the protocol, and just keep the TLSA records, and attach them to the TLS handshake. For several years, this was the last best hope for DANE adoption (read Geoff Huston on this, he's a DANE supporter and he's great), and it all fell apart because nobody could get the security model right.

        It's at this point I like to remind people that the browsers basically had to shake down the CAs to get Certificate Transparency to happen. They held almost all the cards (except for antitrust claims, which were wielded against them) --- "comply with CT, or we'll remove you from our root program". But browsers can't do that with DNS TLD operators; they hold none of the cards. So, in addition to the fact that there's no "DNS Transparency" on the horizon, there's also none of the leverage required to actually get it deployed.

        DANE does not work. DNSSEC is a dead letter. It's long past time for people to move on. I have a lot of hope for what we can accomplish with ubiquitous DoH-like lookups.

  • tialaramex a day ago

    The reason browsers didn't implement DANE is because most people's DNS servers are garbage, so if you do this the browser doesn't work and "if you changed last you own the problem".

    At the time, if you asked a typical DNS server - e.g. at an ISP or built into a cheap home router - any type of question except "A? some.web.site.example", you got either no answer or a confused error. What do you mean records other than A exist? RFC what? These days most of them can also answer AAAA?, but good luck with the records needed by DNSSEC.

    Today we could do better where people have any sort of DNS privacy, whether that's over HTTPS, TLS or QUIC; so long as it's encrypted, it's probably not garbage and isn't being intercepted by garbage at your ISP.

    Once the non-adoption due to rusted-in-place infrastructure happened, you get (as you will probably see here on HN) people who have some imagined principled reasons not to do DNSSEC. Remember always to ask them how their solution fixed the problem that they say DNSSEC hasn't fixed. The fact that the problem still isn't fixed tells you everything you need to know.

    • lxgr a day ago

      > if you asked a typical DNS server e.g. at an ISP or built into a cheap home router - any type of question except "A? some.web.site.example" you get either no answer or a confused error.

      Really? Because that would mean that anything using SRV records wouldn’t work on home routers, yet it’s an integral part of many protocols at this point.

      There’s some room between “my DNS resolver doesn’t do DNSSEC” and “I can only resolve A records”.

      • tialaramex a day ago

        Yes, really. Like I said, even AAAA, though better than it was, isn't as reliable as A; the "Happy Eyeballs" tactic makes that tolerable. Maybe 90% of your customers have IPv6, get the AAAA answer quickly, and reach the IPv6 endpoint - awesome. 9% only have IPv4 anyway and get the IPv4 endpoint - also fine. But for 1% the AAAA query never returns; a few milliseconds later the IPv4 connection succeeds and the AAAA query is abandoned, so who cares.

        I'd guess that if you build something in 2025 which needs SRV? to "just work" - not as a nice-to-have but as a requirement - you probably lose 1-2% of your potential users. It might be worth it. But if you need 100% you'll want a fallback. I suggest built-in DoH to, say, Cloudflare.

    • perching_aix a day ago

      I guess I did forget that me using Cloudflare and Google as my DNS is not a normal setup to have...

      But surely it doesn't have to be so black and white? TLSA enforcement isn't even a hidden feature flag in mainstream web clients; to my knowledge it's just completely non-existent.

  • Avamander a day ago

    DNSSEC is not a good PKI, that's why.

    There are basically no rules on how to properly operate it, even if there were, there'd be no way to enforce them. There's also almost zero chance a leaked key would ever be detected.

    • perching_aix a day ago

      I'm not sure I follow, could you please elaborate a bit more? I'm not really suggesting DNS to be exclusively used for PKI over the current Web PKI system of public CAs either.

      • bawolff a day ago

        That is kind of the value proposition for DANE though.

        • perching_aix a day ago

          What prevents me from putting the hash of the public key of my public-CA certificate into the TLSA record? Nothing. What prevents clients from checking both that the public-CA-issued certificate I'm showing is valid and present in CT, and that it hashes to the same value I placed into the TLSA record? Also nothing.

          Am I grossly misunderstanding something here? Feels like I missed a meta.
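
          One way to sketch that combination: publish a TLSA record with usage 1 (PKIX-EE), which asks clients to require normal WebPKI validation *and* a pin match. The names below are placeholders, and a throwaway self-signed cert stands in for a CA-issued one:

```shell
# Throwaway key + cert standing in for a CA-issued one (names are placeholders).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 30 -nodes -subj "/CN=www.example.com" 2>/dev/null

# Selector 1 pins the SubjectPublicKeyInfo, so the record survives
# renewals that reuse the same key; matching type 1 is SHA-256.
HASH=$(openssl x509 -in cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -r | cut -d' ' -f1)

# Usage 1 (PKIX-EE): normal CA validation must ALSO succeed for the pin to count.
echo "_443._tcp.www.example.com. IN TLSA 1 1 1 ${HASH}"
```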

          • bawolff 15 hours ago

            Nothing saying you can't, just when people talk about DANE that is usually not what they are proposing.

            In terms of what you are saying, I think the main objection would be that HPKP feels a lot easier than putting it in DNS, and we couldn't even get that to work. OTOH, maybe DNS could use a much lower TTL, which would counter some of the risks.

          • Avamander 9 hours ago

            What benefit would that provide? It's just one more thing that has to be constantly maintained and could break while providing very little additional security.

  • zinekeller a day ago

    Impractical in the sense that there are still TLDs (ccTLDs, mind you - ICANN can't force anything on those countries) which do not have any form of DNSSEC, which makes DANE and TLSA useless under those TLDs.

    • perching_aix a day ago

      Kind of disappointing if that is the actual stated reason by the various browser vendors, all or nothing doesn't sound like a good policy for this. Surely there is a middle ground possible.

      • Eikon a day ago

        Supporting DANE means you need to maintain both traditional CA validation and DANE simultaneously.

        This may be controversial, but I believe that with CT logs already in place, DANE could potentially reduce security by leaving you without an audit trail of certificates issued to your hosts. If you actively monitor certificate issuance to your hosts using CT, you are in a much better security posture than what DANE would provide you with.

        People praising DANE seem to be doing so as a political statement ("I don't want a 3rd party") rather than making a technical point.

        • perching_aix a day ago

          Why not do both at the same time? I understand that a TLSA record in and of its own would suffice technically, but combined with the regular CA-based PKI, I figured the robustness would increase.

          • Eikon a day ago

            > Why not do both at the same time? I understand that a TLSA record in and of its own would suffice technically, but combined with the regular CA-based PKI, I figured the robustness would increase.

            That seems quite complicated while not increasing security by much, or at all?

            • perching_aix a day ago

              I don't necessarily see the complication. The benefit would be that I, the domain owner, would be able to communicate to clients what certificate they should be expecting, and in turn, clients would be able to tell if there's a mismatch. Sounds like a simple win to me.

              According to my understanding, multiple CAs can issue certificates covering the same domain just fine, so a certificate showing up in the CT logs is not on its own a sign of CA compromise, just a clue. You could then check CAA, but that is optional, and per the standard only CAs - not clients - are supposed to check it (and in this scenario the idea is that one or more CAs are compromised). So there's a gap. To my knowledge that gap is currently bridged by people auditing CT manually, and it's the gap DANE would fill in this setup in my thinking, automating it away (or just straight up providing it, because I imagine a lot of domain owners don't monitor CT for their domains at all).

ozim a day ago

Wonder why they say you should monitor transparency logs instead of setting up CAA records - malicious actors will most likely disregard CAA anyway.
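
For reference, CAA is just a DNS record; a hypothetical zone restricting issuance to a single CA looks like this (example.com and the contact address are placeholders; "letsencrypt.org" is the CAA value Let's Encrypt checks for):

```
example.com.  IN CAA 0 issue "letsencrypt.org"              ; only this CA may issue
example.com.  IN CAA 0 issuewild ";"                        ; nobody may issue wildcards
example.com.  IN CAA 0 iodef "mailto:security@example.com"  ; where to report violations
```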

  • tgsovlerkhgsel a day ago

    They're doing different things, and you should do both.

    Setting CAA records primarily serves to reduce your attack surface against vulnerable domain validation processes. If an attacker wants to specifically attack your domain and you use CAA, the attacker now needs to find a vulnerability in your CA's domain validation process instead of any CA's validation process. If it works, it prevents the attacker from getting a valid cert at all.

    Monitoring CT logs only detects attacks after the fact, but it will catch cases where a CA wrongly issued certificates despite CAA records. And if you monitor against a whitelist of your own known certificates, it will catch cases where someone got your CA to issue them a certificate, either by tricking the CA or by compromising your infrastructure. (Most alerts you actually see will be someone at your company just trying to get their job done without going through what you consider the proper channels - although I think you can now restrict CAA to a specific account for Let's Encrypt.)

    Since CT is required now by browsers, an attacker that compromises (or compels!) a CA in any way would still have to log the cert or also compromise or compel at least two logs to issue SCTs (signed promises to include the cert in the log) without actually publishing the cert (this is unlikely to get caught but if it was, there would be signed proof that the log did wrong).
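
    A minimal monitoring sketch along those lines, using crt.sh's public JSON interface (the query URL and field names are crt.sh-specific, and the trusted-issuer string is a placeholder to replace with your own CAs):

```python
import json
import urllib.request

# Issuers you expect for your domains (placeholder value - use your own CAs).
TRUSTED_ISSUERS = {"C=US, O=Let's Encrypt, CN=R11"}

def unexpected_certs(entries, trusted=TRUSTED_ISSUERS):
    """Flag CT entries whose issuer is not in the expected set."""
    return [e for e in entries if e["issuer_name"] not in trusted]

def fetch_entries(domain):
    """Query crt.sh, a public CT monitor run by Sectigo. The query URL and
    JSON field names are crt.sh-specific, not part of the CT protocol."""
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# e.g.: alerts = unexpected_certs(fetch_entries("example.com"))
```

Run on a schedule, anything flagged is either shadow IT or something worth raising hell about.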

  • lambdaone a day ago

    Let's not let the best be the enemy of the good. Malicious actors who disregard CAA would first have to have gone through the process of accreditation to be added to public trust stores, and then would quickly get removed from those trust stores as soon as the imposture was detected. So while creating a malicious CA and then ignoring CAA records is entirely possible for few-shot high-value attacks, it's not a scalable approach, and it means CAA offers at least partial protection against malicious actors forging certificates as a day-to-day activity.

    Transparency logs are of course better because they make it much easier for rogue CAs to be caught rapidly, but it's not a reason to abandon CAA until transparency log checking is universal, not just in browsers, but across the whole PKI ecosystem.

  • mcpherrinm a day ago

    In any security setting, it’s usually good to have both controls and detection.

    CAA records help prevent unexpected issuance, but what if your DNS server is compromised? DNSSEC might help.

    Certificate Transparency provides a detection mechanism.

    Also, unlike CAA records, which CAs are only required by policy to respect, CT is technically enforced by browsers.

    So they are complementary. A security-sensitive organization should have both.

einpoklum a day ago

It seems Mozilla is making Firefox irrelevant through its collection of data on users and its plan to have users consent to that data being collected and passed on to third parties. So what it does with certificates may not be very important, very soon.

  • user3939382 a day ago

    Yeah in light of their license change, my reaction to this is “who cares”.

ocdtrekkie a day ago

Highlight point: They are just using Chrome's transparency logs. Yet again, Firefox chooses to be subservient to Google's view of the world.

  • lima 8 hours ago

    Chrome's list of transparency logs. The actual CT logs are operated by multiple parties, including Google.

linwangg a day ago

Great move! Curious to see how this will impact lesser-known CAs. Will this make it easier to detect misissued certs, or will enforcement still depend on browser policies?

  • arccy a day ago

    firefox is just catching up with what chrome implemented years ago. unless you have a site visited only by firefox users, ecosystem effect is likely to be minimal... though it does protect firefox users in the time between detection and remediation.

greatgib a day ago

In theory it is good, but somehow it is also a big threat to privacy and security of your infrastructure.

No need anymore to scan your network to map the complete endpoints of your infrastructure!

And it's a new single point of control and failure!

  • Eikon a day ago

    You're essentially advocating for security through obscurity.

    The fact that public infrastructure is mappable is actually beneficial. It helps enforce best practices rather than relying on the flawed 'no one will discover this endpoint' approach.

    > And it's a new single point of control and failure!

    This reasoning is flawed. X.509 certificates themselves embed SCTs.

    While log unavailability may temporarily affect new certificate issuance, there are numerous logs operated by diverse organizations precisely to prevent single points of failure.

    Certificate validation doesn't require active log servers once SCTs are embedded.
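As a rough illustration of why no log server is needed at validation time, here is a minimal parser (a hedged sketch; signature verification is omitted) for the RFC 6962 SignedCertificateTimestampList that a browser finds embedded in the certificate itself:

```python
import struct

def parse_sct_list(data):
    """Parse an RFC 6962 SignedCertificateTimestampList as embedded in a
    certificate's CT extension. Returns (log_id_hex, timestamp_ms) pairs.

    A real verifier would also check each SCT's signature against the
    known log's public key; none of this requires contacting the log."""
    out = []
    total, = struct.unpack(">H", data[:2])   # 2-byte total list length
    pos, end = 2, 2 + total
    while pos < end:
        sct_len, = struct.unpack(">H", data[pos:pos + 2])
        sct = data[pos + 2:pos + 2 + sct_len]
        pos += 2 + sct_len
        version = sct[0]                       # 0 = v1
        log_id = sct[1:33]                     # SHA-256 hash of the log's key
        ts, = struct.unpack(">Q", sct[33:41])  # ms since the Unix epoch
        out.append((log_id.hex(), ts))
    return out

# Usage with a synthetic single-SCT list (signature bytes truncated for brevity):
sct = bytes([0]) + b"\x11" * 32 + struct.pack(">Q", 1700000000000) + b"\x00\x00"
blob = struct.pack(">H", len(sct) + 2) + struct.pack(">H", len(sct)) + sct
assert parse_sct_list(blob) == [("11" * 32, 1700000000000)]
```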

    • tzs a day ago

      > You're essentially advocating for security through obscurity

      So? The problem with security through obscurity is when it is the only security you are using. I didn't see anything in his comment that implied his only protection was the secrecy of his endpoints.

      Security through obscurity can be fine when used in addition to other security measures, and has tangible benefits in a significant fraction of real world situations.

      • grayhatter a day ago

        > So? The problem with security through obscurity is when it is the only security you are using. I didn't see anything in his comment that implied his only protection was the secrecy of his endpoints.

        Directly or unintentionally implied, it doesn't matter. That's an implication you're allowed to infer when obscurity is the only thing listed, because it's *very* common for it to be the only defense mechanism. Also, when given the choice between mentioning something that works (literally any other security measure) and something well known to fail more often than it works (obscurity), you're supposed to mention the functioning one and omit the non-functioning one. https://xkcd.com/463/

        > Security through obscurity can be fine when used in addition to other security measures,

        No, it has subtle downsides as well. It changes the behavior of everything that interacts with the system. Humans constantly overvalue security through obscurity, and will make decisions based on that misconceived notion. I once had an engineer tell me, "I didn't know you could hit this endpoint with curl." The mental model of permitting secrets to be used as part of security is actively harmful to security, much more than it has ever been shown to benefit it. Thus, the cure here is to affirmatively reject security through obscurity.

        We should treat it the same way we treat goto. Is goto useful? Absolutely. Are there places where it improves code? Again, absolutely. Did code quality as a whole improve once SWEs collectively shunned goto? Yes! Security through obscurity causes the exact same class of issues, and until the whole industry accepts that it's actually more harmful than useful, we'll keep letting subtle bugs like "I thought no one knew about this" sneak in.

        We're not going to escape this valley while people are still advocating for security theatre. We all collectively need to enforce the idea that secrets are dangerous to software security.

        > and has tangible benefits in a significant fraction of real world situations.

        So does racial profiling, but humans have proven over and over again that we're incapable of not misusing it in ways that are also actively harmful. And again, when there are options that are better in every way, it's malpractice to use the error-prone methods.

        • Eikon a day ago

          Thank you for putting this up so clearly!

    • cle a day ago

      They are tradeoffs and it’s not all-or-nothing. There’s a reason security clearances exist, and that’s basically “security through obscurity”.

      The argument here is that the loss of privacy and the incentives that will increase centralization might not be worth the gain in security for some folks, but good luck opting out. It basically requires telling bigco about your domains. How convenient for crawlers…

  • tialaramex a day ago

    "Passive DNS" is a thing people sell. If people connect to your systems and use public DNS servers, chances are I can "map the complete endpoints of your infrastructure" without touching that infrastructure for a small cost.

    If client X doesn't want client-x.your-firm.example to show up by all means obtain a *.your-firm.example wildcard and let them use any codename they like - but know that they're going to name their site client-x.your-firm.example, because in fact they don't properly protect such "secrets".

    "Blue Harvest" is what secrets look like. Or "Argo" (the actual event not the movie, although real life is much less dramatic, they didn't get chased by people with guns, they got waved through by sleepy airport guards, it's scary anyway but an audience can't tell that).

    What you're talking about is like my employer's data centre being "unlisted" and having no logo on the signs. Google finds it with an obvious search, it's not actually secret.

  • lxgr a day ago

    > And it's a new single point of control and failure!

    That’s why there is a mandatory minimum of several unaffiliated logs that each certificate has to be submitted to.

    If all of these were to catastrophically fail, it would still be possible for browsers or central monitors to fall back to trusting certificates bearing SCTs from exactly these logs, without inclusion verification.

  • perching_aix a day ago

    It does leak domain name info, but you still have the option to use a wildcard certificate, or to set up a private CA instead of relying on public ones, which likely makes more sense when dealing with a private resource anyway.

    I guess there might be a scenario where you need "secret" domains to be publicly resolvable and use distinct certs, but an example escapes me.

  • bawolff a day ago

    > In theory it is good, but somehow it is also a big threat to privacy and security of your infrastructure.

    This is silly. Certificates have to be in CT logs regardless of whether Firefox validates them or not.

    Additionally, this doesn't apply to private CAs, so internal infrastructure is probably not affected unless you are using the public Web PKI for it.

    • tialaramex 9 hours ago

      technically incorrect which is presumably the best kind of incorrect?

      Certificate logging is not mandatory: none of the Root Programmes (agreements, typically with browser vendors, to recognise your CA's roots) require logging. Now, in practice the browsers may reject certificates if they aren't presented with a logging proof (in the certificate, stapled to it by the protocol, or delivered some other way), but that's not a violation of your agreement with the vendor.

      Most CAs (obviously including ISRG / Let's Encrypt) always log every certificate, but some either offer programmes where you can choose, or have legacy systems which just don't log. You can log such a certificate yourself, if you want, and then staple the receipts to your connection setup - but most people don't know how and don't want to learn.
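For the curious, self-logging works by POSTing the certificate chain to a log's RFC 6962 `/ct/v1/add-chain` endpoint. A hedged sketch of just the payload construction (helper name hypothetical; the actual HTTP call and log URL are omitted):

```python
import base64
import json

def build_add_chain_payload(der_certs):
    """Build the JSON body for an RFC 6962 /ct/v1/add-chain POST:
    the leaf certificate first, then its chain, each as base64 DER.

    The log replies with an SCT (log id, timestamp, signature) that can
    then be delivered to clients, e.g. via the TLS SCT extension or
    OCSP stapling."""
    return json.dumps(
        {"chain": [base64.b64encode(cert).decode("ascii") for cert in der_certs]}
    )

# Usage (fake DER bytes, just to show the shape; a real call would POST
# this body to https://<log-host>/ct/v1/add-chain):
payload = build_add_chain_payload([b"\x30\x82leaf", b"\x30\x82intermediate"])
print(json.loads(payload)["chain"])
```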

elmo2you a day ago

I may be (legitimately) flagged for asking a question that may sound antagonizing ... but asked with sincerity: is it at all smart to mention Firefox and transparency in the same sentence, at least at this particular moment in time?

While this no doubt is an overall win, at least for most and in most cases, afaik this isn't completely without problems of its own. I just hope it won't lead to a systemd-like situation, where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).

Not trying to be dismissive here. Just have genuine concerns and reservations, even if mostly intuitive for now; no concrete ones yet. Maybe it's just a Pavlovian reaction after reading the name Firefox. Honestly can't tell.

  • lxgr a day ago

    You’re spot on: You are reacting seemingly without understanding the fundamentals of what you are reacting to.

    Certificate Transparency [1] is an important technology that improves TLS/HTTPS security, and the name was not invented by Mozilla to my knowledge.

    If Firefox were to implement a hypothetical IETF standard called “private caching”, would you also be cynical about Firefox “doing something private at this point in time” without even reading up what the technology in question does?

    [1] https://en.wikipedia.org/wiki/Certificate_Transparency

    • elmo2you a day ago

      > You’re spot on: You are reacting seemingly without understanding the fundamentals of what you are reacting to.

      What if I did (understand)? What if I knew a thing or two about it, even some lesser known details and side-effects? Maybe including a controversy or two, or at least an odd limitation and potential hazard at that. But, you correctly do point out that Firefox isn't to blame for implementing somebody else's "standard". Responsible for any and all consequences? Nonetheless, certainly yes.

      Aside from now probably not being the best of times for Firefox, my main (potential) concern still stands. However, it is hardly a Firefox-only one, I'll give it that.

      • bawolff a day ago

        > What if I did (understand)?

        I think it's pretty clear you don't.

        > What if I knew a thing or two about it, even some lesser known details and side-effects?

        Then you would explicitly mention them instead of alluding to them.

        People who know what they are talking about actually bring up the things they are concerned about. They don't just say, "I know of an issue but I'm not going to tell you what it is."

  • bawolff a day ago

    > is at all smart to mention Firefox and transparency in the same sentence, at least at this particular moment in time?

    What are you expecting them to do? Rename the technology 1984 style?

    > I just hope it won't lead to a systemd-like situation, where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).

    This is a nonsensical statement. I mean that literally. It does not make sense.

    > Just have genuine concerns and reservations

    Do you? Because it doesn't sound like it.

  • perching_aix a day ago

    I guess this is a good lesson in what happens when the reasoning one would typically (and unfortunately) bring to a mainstream political thread is instead met with a topic from another area of life, particularly a technical one.

    Especially this:

    > where a cadre of (opinionated) people with power get to decide what's right from wrong, based on their beliefs about what might only be a subset of reality (albeit their only/full one at that).

    This is always true. There's no arrangement where you can outsource reasoning and decisionmaking (by choice or by coercion) but also not. That's a contradiction.

    • elmo2you a day ago

      > This is always true. There's no arrangement where you entrust someone else with decisionmaking (by choice or not notwithstanding) but then they're somehow not the ones performing the decisionmaking afterwards.

      I'm well aware of that. In itself there isn't a problem with it, in principle at least - right up until it leads to bad decisions being pushed through, more often out of ignorance than malice. I personally only have a real problem with it when people or tech end up harmed or even destroyed just because of ignorance rather than deliberate choices (made after consideration, hopefully).

      To be clear, I'm not saying that any of that is the case here. But let's just say that browser vendors in general, and Mozilla as of lately in particular, aren't on my "I trust you blindly at making the right decisions" list.

      • lxgr a day ago

        > browser vendors in general, and Mozilla as of lately in particular, aren't on my "I trust you blindly at making the right decisions" list.

        That's entirely fair. But what does this have to do with Mozilla's decision to enforce Certificate Transparency in Firefox?

        If you have a concrete concern, voicing it could lead to a much more productive discussion than exuding a general aura of distrust, even if warranted.

      • perching_aix a day ago

        I do see pretty massive problems with it, such as those you list off, but the unfortunate truth is that one cannot know or do everything themselves. So usually it's not even a choice but a coercive scenario.

        For example, say I want to ensure my food is safe to eat. That would require farmland where I can grow my own food. Say I buy some, but then do I have the knowledge and the means to figure out whether the food I grew is actually safe to eat? After all, I just bought some random plot of farmland, how would I know what was in it? Maybe it wasn't even the land that's contaminated but instead the wind brought over some chance contamination? And so on.