As already noted on this thread, you can't use certbot today to get an IP address certificate. You can use lego [1], but figuring out the exact command line took me some effort yesterday. Here's what worked for me:
lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe-coded and later abandoned (at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.
It allowed me to quickly obtain a couple of IP certificates to test with. I updated my simple TLS certificate checker (https://certcheck.sh) to support checking IP certificates (IPv4 only for now).
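For anyone who wants to poke at one of these IP certificates themselves, here's a minimal Go sketch (standard library only; this is not how certcheck.sh works internally, and the address is just the example IP from the lego command above) that performs the handshake and prints the certificate's IP SANs and expiry:

    // Minimal sketch: connect to a host by its IP address and print the
    // certificate's IP-address SANs and expiry. Standard library only; the
    // address is just the example IP from the lego command above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        addr := "206.189.27.68:443"

        // tls.Dial verifies the chain and, because the host part of addr is
        // an IP literal, matches it against the certificate's IP SANs.
        conn, err := tls.Dial("tcp", addr, nil)
        if err != nil {
            log.Fatalf("handshake with %s failed: %v", addr, err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("IP SANs:  ", leaf.IPAddresses)
        fmt.Println("Not after:", leaf.NotAfter)
        fmt.Println("Issuer:   ", leaf.Issuer)
    }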
I wrote about OpenSSL's performance regressions in the December issue of Feisty Duck's cryptography newsletter [1]. In addition to Alex's and Paul's talk on Python cryptography, there were several other talks at the recent OpenSSL conference worth watching:
Note: be careful with these recorded talks as they have a piercing violin sound at the beginning that's much louder than the rest. I've had to resort to muting the first couple of seconds of every talk.
Because "everybody uses RC4" (the sibling comment from dchest is correct). There was a lot of bad cryptography in that period and not a lot of desire to improve. The cleanup only really started in 2010 or thereabouts. For RC4 specifically, its was this research paper: https://www.usenix.org/system/files/conference/usenixsecurit... released in 2013.
Hello Peter. Thank you for your work. I really like this approach. I too have been following Temporal and I like it, but I don't think it's a good match for simpler systems.
I've been reading the DBOS Java documentation and have some questions, if you don't mind:
- Versioning; from the looks of it, it's either automatically derived from the source code (presumably the bytecode?) or explicitly set for the entire application? That seems too coarse. I don't see the automatic derivation working well, as even a small bug fix would invalidate the version. (It could be less of a problem if you look only at method signatures... perhaps add that to the documentation?) Similarly, with an explicit setting, a change to the version number would invalidate all existing workflows, which would be cumbersome.
Have you considered relying on serialVersionUID? Or at least allowing explicit configuration on a per workflow basis? Or a fallback method to be invoked if the class signatures change in a backward incompatible way?
Overall, it looks like DBOS is fairly easy to pick up, but having a good story for workflow evolution is going to be essential for adoption. For example, if I have long-running workflows... do I have to keep the old code running until all old instances complete? Is that the idea?
- Transactional use; would it be possible to create a new workflow instance transactionally? If I am using the same database for DBOS and my application, I'd expect my app to do some work and create some jobs for later, reusing my transaction. Similarly, when the jobs are running, I'd perhaps want to use the same transaction? As in, the work is done and then DBOS commits?
I know using the same transaction for both purposes could be tricky; in the past I have, in fact, used two separate transactions for job handling. Some guidance in the documentation would be very useful.
1. Yes, currently versioning is either automatically derived from a bytecode hash or manually set. The intended upgrade mechanism is blue-green deployments where old workflows drain on old code versions while new workflows start on new code versions (you can also manually fork workflows to new code versions if they're compatible). Docs: https://docs.dbos.dev/java/tutorials/workflow-tutorial#workf...
That said, we're working on improvements here--we want it to be easier to upgrade your workflows without blue-green deployments to simplify operating really long-running (weeks-months) workflows.
2. Could I ask what the intended use case is? DBOS workflow creation is idempotent and workflows always recover from the last completed step, so a workflow that goes "database transaction" -> "enqueue child workflow" offers the same atomicity guarantees as a workflow that does "enqueue child workflow as part of database transaction", but the former is (as you say) much simpler to implement.
Here's an example of a common long-running workflow: SaaS trials. Upon trial start, create a workflow to send the customer onboarding messages, possibly inspecting the account state to influence what is sent, and finally close the account if it hasn't converted. This will usually take 14-30 days, but could go on for months if manually extended (as many organisations are very slow to move).
I think an "escape hatch" would be useful here. On version mismatch, invoke a special function that is given access to the workflow state and let it update the state to align it with the most recent code version.
> transactional enqueuing
A workflow that goes "database transaction" -> "enqueue child workflow" is not safe if the connection with DBOS is lost before the second step completes. This would make DBOS unreliable. Doing it the other way round can work, provided each workflow checks that the "object" to which it's connected actually exists; that would deal with the situation where a workflow is created but the transaction rolls back.
If both the app and DBOS work off the same database connection, you can offer exactly-once guarantees. Otherwise, the guarantees will depend on who commits first.
Personally, I would prefer all work to be done from the same connection so that I can have the benefit of transactions. To me, that's the main draw of DBOS :)
Yes, something like that is needed, we're working on building a good interface for it.
> transactional enqueueing
But it is safe as long as it's done inside a DBOS workflow. If the connection is lost (process crashes, etc.) after the transaction but before the child is enqueued, the workflow will recover from the last completed step (the transaction) and then proceed to enqueue the child. That's part of the power of workflows--they provide atomicity across transactions.
> > transactional enqueueing
> But it is safe as long as it's done inside a DBOS workflow.
Yes, but I was talking about the point at which a new workflow is created. If my transaction completes but DBOS disappears before the necessary workflows are created, I'll have a problem.
Taking my trial example, the onboarding workflow won't have been created and then perhaps the account will continue to run indefinitely, free of charge to the user.
Looking at the trial example, the way I would solve it is that when the user clicks "start trial", that starts a small synchronous workflow that first creates a database entry, then enqueues the onboarding workflow, then returns. DBOS workflows are fast enough to use interactively for critical tasks like this.
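To make that concrete, here's a toy sketch of the recovery behaviour being described. The names and the in-memory "checkpoint" map are made up for illustration and are not the DBOS API (in DBOS the record of completed steps is durable, stored in your database), but it shows why "database transaction, then enqueue" behaves atomically: on re-invocation after a crash, steps that already completed are skipped and execution resumes at the first unfinished one.

    // Toy illustration of the recovery behaviour described above. These names
    // and the in-memory "checkpoint" map are made up for illustration -- this
    // is NOT the DBOS API. In DBOS the record of completed steps is durable
    // (it lives in your database), so the same skip-what-already-finished
    // logic survives a real process crash.
    package main

    import "fmt"

    // completed stands in for the durable record of finished steps.
    var completed = map[string]bool{}

    // runStep executes fn unless this step already completed in an earlier
    // attempt of the workflow, in which case it is skipped on re-execution.
    func runStep(name string, fn func()) {
        if completed[name] {
            fmt.Println("recovery: skipping already-completed step:", name)
            return
        }
        fn()
        completed[name] = true // checkpoint the step's completion
    }

    // trialSignupWorkflow is the small synchronous parent workflow: a database
    // transaction followed by enqueueing the long-running onboarding child.
    func trialSignupWorkflow(accountID string, crashBeforeEnqueue bool) {
        runStep("create-trial-account", func() {
            fmt.Println("db transaction: insert trial account", accountID)
        })
        if crashBeforeEnqueue {
            fmt.Println("-- simulated crash between the two steps --")
            return
        }
        runStep("enqueue-onboarding", func() {
            fmt.Println("enqueue onboarding workflow for", accountID)
        })
    }

    func main() {
        trialSignupWorkflow("acct-42", true)  // first attempt dies mid-workflow
        trialSignupWorkflow("acct-42", false) // recovery: step 1 skipped, enqueue runs
    }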
Those two don't really compete. DNSSEC provides authenticity/integrity without privacy and DoH does exactly the opposite. If anything, you need both in order to secure DNS.
They don't compete in any immediate way, but over the long term, end-to-end DNS secure transport would cut sharply into the rationale for deploying DNSSEC. We're not there yet (though: I don't think DNSSEC is a justifiable deployment lift regardless).
It's worth keeping in mind that the largest cause of DNS authoritative data corruption isn't the DNS protocol at all, but rather registrar phishing.
Honestly (and I think this has been true for a long time), in 2025 the primary (perhaps sole) use case for DNSSEC is as a trust anchor for X.509 certificate issuance. If that's all you need, you can get it without a forklift upgrade of the DNS. I don't think global DNSSEC is going to happen.
In what way does DoH provide end-to-end security? It doesn't, unless you adopt a different definition of "end-to-end" where the "server end" is an entity that's different from the domain name owner, but you're somehow trusting it to serve the correct/unaltered DNS entries. And even then, they can be tricked/coerced/whatever into serving unauthentic information.
For true end-to-end DNS security (as in authentication of domain owners), our only option is DNSSEC.
At best, you can argue that DoH solves a bigger problem.
If you have an end-to-end secure transport to the authority, you've factored out several of the attacks (notably: transaction-driven cache poisoning) that have, at times, formed the rationale for deploying DNSSEC. The most obvious example here is Kaminsky's attack, and the txid attacks that preceded it, which had mitigations in non-BIND DNS software but didn't in BIND specifically because DNSSEC was thought to be the proper fix. Those kinds of attacks would be off the table in a universal DOH/DOT/DOQ world, in some of the same sense that they would be if DNS just universally used TCP.
"True" DNS security isn't a term that means anything to me. Posit a world in which DNSSEC deployment is universal, rather than the sub-5% single digit deployment it has today. There are still attacks on the table, most notably from DNS TLD operators themselves. We choose to adopt a specific threat model and then evaluate attacks against it. This is a persistent problem when discussing DNSSEC (esp. vs. things like DOH). because DNSSEC advocates tend to fall back on a rhetorical crutch of what "true" security of authoritative data meant, as if that had some intuitively obvious meaning for operators.
In a world where DNS message transports are all end-to-end secure, there really isn't much of a reason at all to deploy DNSSEC; again: if you're worried about people injecting bogus data to get certificates issued, your real concern should be your registrar account.
> Even if you exceed your monthly share, don’t worry, there are no excess fees. We will simply notify you of reaching the fair transfer policy and may reduce the bandwidth of your Cloud Servers to 100 Mbps for the rest of the month.
EDIT: For completeness, there is also:
> If you believe to require more transfer per month than the Fair Transfer Policy provides, you may opt in to a paid transfer model at €0.01/GB. This affords you completely unlimited egress with no restrictions.
A soft capped network capacity is different from a hard spending cap. They have different risk models, benefits and drawbacks.
Personally I prefer a cap on spending, given that the risk of runaway costs has a bigger impact than the risk of runaway legitimate network traffic. I suspect most people feel they are more capable of catching and addressing runaway success than a runaway network problem caused by an undiscovered bug or attack (often intentionally launched off-hours, in the middle of the night).
While this is technically true, in this case it is false almost every time. Network traffic scales with the CPU resources you use on UpCloud; I know that because I have used their services.
But it would have been enough to just read the links posted here.
I encourage everyone to do the same before posting stuff that is irrelevant and/or plain wrong.
> "Although the chance of a collision is extremely low because the random value has at least 150 bits of entropy, there is still a chance."
I am... speechless. I mean... Um.
The last time I checked, no one was able to break 128 bits of security for anything, let alone 150 bits, or for a domain validation of some domain name no one cares about.
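To put a number on it (back-of-the-envelope arithmetic, not a figure from the quoted text): the standard birthday bound says the chance of at least one collision among n random values drawn from a space of size N is at most n^2 / (2N). Even if a trillion (~2^40) of these 150-bit values were ever generated, that works out to:

    Pr[collision] <= (2^40)^2 / (2 * 2^150) = 2^(80 - 151) = 2^-71 ≈ 4 × 10^-22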
This is the same attitude that has everyone deploying in-kernel code and arbitrary updates written by companies that can't get basic QA right. The auditors and lawyers get to decide what "security" looks like.
I really like NATS, but it's worth noting that it's only really "the SQLite of queues" in the context of their Go client, which can embed a NATS server into an application.
Otherwise I would say it is more like the Postgres of queues (still great, but potentially not what OP is looking for).
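For anyone curious what "embedding" looks like in practice, here's a minimal sketch of the common pattern with the Go client and nats-server used as a library (port -1 just asks for a random free port):

    // Minimal sketch of embedding a NATS server in a Go application and
    // talking to it with the regular client. Requires
    // github.com/nats-io/nats-server/v2 and github.com/nats-io/nats.go.
    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/nats-io/nats-server/v2/server"
        "github.com/nats-io/nats.go"
    )

    func main() {
        // Start an in-process NATS server on a random free port.
        ns, err := server.NewServer(&server.Options{Port: -1})
        if err != nil {
            log.Fatal(err)
        }
        go ns.Start()
        if !ns.ReadyForConnections(5 * time.Second) {
            log.Fatal("embedded NATS server did not start in time")
        }
        defer ns.Shutdown()

        // Connect to it like any other NATS server.
        nc, err := nats.Connect(ns.ClientURL())
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // Simple pub/sub round trip against the embedded server.
        sub, _ := nc.SubscribeSync("jobs")
        nc.Publish("jobs", []byte("hello from the embedded server"))
        msg, _ := sub.NextMsg(time.Second)
        fmt.Println(string(msg.Data))
    }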