
I believe they now have a proper E2EE mode which disables all the cloud-powered features, no?

They acqui-hired (and gutted) Keybase for this, but I doubt that their "reimplementation" is actually E2EE.

IBM is an interesting beast when it comes to business decisions. While I can't give exact details, their business intelligence and their ability to predict financial outcomes are uncannily spot-on at times.

So, when their CEO says that this investment will not pay off, I tend to believe them, because they most probably have the knowledge, insight and data to back that claim, and they have run the numbers.

Oh, also, please let's not forget that they dabbled in "big AI" before everyone else. Does anyone remember Deep Blue and Watson, the original chatbot backed by big data?


As evidenced by the fact that they are a 100+ year-old company that still exists. People forget that.

Air cooling doesn't scale; air is simply not a great heat carrier when you cram that much power into a small space.

Today's supercomputers (AI or not) can't cool themselves with air. There's too much heat in too confined a space. Direct Liquid Cooling is a must.

However, you can use closed-loop liquid cooling (like Europe does), but open-loop is cheaper since it skips the "pump the heat from the water back into the atmosphere" part, and "who cares about the water anyway, there's monies to be made".

Putting money above the environment always makes me angry though. It's like burning the walls of your house to stay warm.


That’s what I thought.

Yet look at the prices on e.g. Hetzner and how profitable US cloud is. They can easily afford to do closed loop.

Whenever there’s a real environmental problem the answer is usually “there’s a right way to do it that lacks these issues but it’s slightly more expensive.”


> None of that has reached the market yet.

AI for science is not "marketed". It silently evolves under wraps and changes our lives step by step.

There are many AI systems already monitoring our ecosystem and predicting things as you read this comment.


> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.

Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram anymore. The jury is still out on how I'm gonna share my photography, and not sharing it at all is on the table, as well.

I may even move to a cathedral model or just stop sharing the software I write with the wider world, too.

Nobody has to bend and act against their values and conscience just because others are doing it and the system demands we betray ourselves for its own benefit.

Life is more nuanced than that.


How large an audience do you want to share it with? Self-host photo album software, on hardware you own, behind a password, for people you trust.

Before the AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact, high-resolution versions to be downloaded.

Now, I'll probably install a gallery webapp on my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.

[0]: https://creativecommons.org/licenses/by-nc-nd/4.0/


Good on you. Maybe some future innovation will afford everyone the same opportunity.

Maybe one day we will all become people again!

(But only all of us simultaneously, otherwise it won't count! ;))))

The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.


That future innovation is called "policy that doesn't screw over the working class".

Not that innovative, but hey. If it lets someone pretend it is and fixes the problem, I'm all for it.


That future innovation is in fact higher productivity. Equality is super important, but societally we are simply not yet good enough at what we do for everyone everywhere to live as good a life as we enjoy, regardless of how we distribute.

Yet they're not restricting themselves to training on permissively licensed code only.

Both ends of the spectrum, source-available and copyleft-licensed code alike, shouldn't be used for training, but who's listening?


FWIW, Forgejo (Codeberg) is also building federation capability [0].

[0]: https://codeberg.org/forgejo-contrib/federation/src/branch/m...


Unfortunately it's mostly ActivityPub-oriented, right? Which means no name portability. That's a major shortcoming compared to an AT protocol-based thing like Tangled appears to be.

I don't think it's relevant to this specific instance, but AFAIK ActivityPub doesn't inherently prevent name portability. It's just that almost all implementations currently don't allow it (and I wouldn't expect Forgejo's either).

Of course, the practical downside of Tangled is also that it only has network effects within the ATmosphere, i.e. you still can't reach GitHub users.


First of all, non-standard extensions to federated protocols have a pretty rough history. Even when an extension reaches median adoption (rare, I assert), the long-tail adoption is dismal. For something as fundamental as identity, that kind of spotty adoption is a dealbreaker.

Second of all, how could this just be an implementation-specific extension? The failure mode (of a client not supporting the extension) would be outright broken. To have name portability, the client needs a two-step process: first discover the name's server, then connect to that server. Whereas now (afaiu), the server is already identified by the name. That's a fundamental change in what identity means at the protocol level.

I'd love to be corrected by someone more intimately familiar with ActivityPub. But until it has mandatory (and mass-adopted) support for something vaguely like SMTP's MX records or whatever the equivalent for ATProto is, name portability is a distant dream.


I'm not intimately familiar with the spec, so don't take my word as gospel, but as I understand it, current implementations already do name discovery. It's just that every implementation hardcodes the server name. IIRC some people have applied some kludge where they put a .well-known document on their own server to point to their instance's account, but it's still pretty spotty without that server being actively aware of that identifier. But (again, if I'm understanding correctly) servers could be updated/written to support that properly.

Not to split hairs, but that sounds more like plain hackery than a proper extension, let alone a viable future. I don't doubt what you say, that it's possible in the most technical sense of possibility. But actually possible, in this world where we live? No, it doesn't seem like it.

I might be overlooking something, but if Mastodon were to decide to add an input field to your settings page where you could enter your domain, along with instructions on what static file to upload where (ie in .well-known), I wouldn't be surprised if the rest of the ecosystem were to follow.
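For illustration, a rough sketch of what such a static file could contain; WebFinger is the usual discovery mechanism today, but the account name, domains and exact fields here are all made up, and Python is only used to emit the JSON:

    import json

    # Hypothetical static document to serve at
    # https://yourdomain.example/.well-known/webfinger, aliasing your own
    # domain to an account hosted on an existing instance.
    webfinger = {
        "subject": "acct:alice@yourdomain.example",
        "links": [
            {
                "rel": "self",
                "type": "application/activity+json",
                "href": "https://some-instance.example/users/alice",
            }
        ],
    }

    with open("webfinger", "w") as f:
        json.dump(webfinger, f, indent=2)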

Nostr would be better, as it is truly free, whereas the AT protocol is backed by VC.

I don't tend to believe in cryptokey-first protocols like Nostr, where your identity is tightly coupled to a keypair. Human identity doesn't work like that at all, and keypairs as the basis of identity will never be suitable for use by the masses.

Human-readable names are far more suitable as a handle for identity as humans think of it. And DNS names are an okay-ish implementation of that.

I think that a decentralized protocol that provides name portability based on the DNS is a far better protocol than one that relies on keypairs.


AT being backed by VC is false—it's Bluesky the company that is. AT is merely a spec for signing, storing and propagating structured data (records) + the identity that owns said records.

And who controls what goes into the spec? Still Bluesky.

Not really. It's very open for everyone to participate in. Further, Bluesky has been working on standardizing AT at the IETF [0][1]. They have also made a patent non-aggression pledge: https://bsky.social/about/blog/10-01-2025-patent-pledge

In short, they're actively working on making AT as neutral as possible.

[0]: https://docs.bsky.app/blog/taking-at-to-ietf

[1]: https://datatracker.ietf.org/doc/bofreq-newbold-authenticate...


Dillo is a 25 year old project some of us know very well and use.

Obscurity is subjective. It might be obscure for you, and that's OK, but Dillo is not obscure for many people.


AFAIK, the author wants to work the way SourceHut and the Linux kernel work: over e-mail.

When you're working over e-mail, you sync the relevant IMAP mailbox locally, pulling all the proposed patches with it; hence the pull model.

Then you can work through the proposed changes offline, apply them to your local copy, and push the merged changes back online.
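As a minimal sketch of that pull model, assuming a Maildir synced locally with something like mbsync or offlineimap (the path and subject filter are my own assumptions), run from inside your repository clone:

    import mailbox
    import os
    import subprocess

    # Walk the locally synced Maildir and apply "[PATCH]" mails with git am.
    md = mailbox.Maildir(os.path.expanduser("~/Mail/project"))
    for _, msg in md.items():
        if "[PATCH" in (msg.get("Subject") or ""):
            # git am reads the mbox-formatted message from stdin
            subprocess.run(["git", "am", "--3way"],
                           input=msg.as_bytes(), check=True)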


I would love to see more projects use git-bug, which works very well for offline collaboration. All bug tracker info is stored in the repo itself. https://github.com/git-bug/git-bug

It still needs work to match the capabilities of most source forges, but for small closed teams it already works very well.


Reminder that POP and IMAP are protocols, and nothing stops a code forge—or any other website—from exposing the internal messaging/notification system to users as a service on the standard IMAP ports; no one is ever required to set up a bridge/relay that sends outgoing messages to, say, the user's Fastmail/Runbox/Proton/whatever inbox. You can just let the user point their IMAP client to _your_ servers, authenticate with their username and password, and fetch the contents of notifications that way. You don't have to implement server-to-server federation typically associated with email (for incoming messages), and you don't have to worry about deliverability for outgoing mail.
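To make that concrete, a minimal client-side sketch; the host, folder name and credentials are entirely hypothetical, this is just the stock imaplib pointed at a forge's own IMAP endpoint:

    import imaplib

    # Point a plain IMAP client library at the forge's IMAP service.
    with imaplib.IMAP4_SSL("imap.forge.example") as imap:
        imap.login("alice", "app-password")
        imap.select("Notifications", readonly=True)
        _, nums = imap.search(None, "UNSEEN")
        for num in nums[0].split():
            _, fetched = imap.fetch(num, "(RFC822)")
            print(fetched[0][1][:200])  # first bytes of the raw notification mail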

All of this makes sense. Thank you for explaining. I don't think I understand the difference though.

Like, are they calling the "GitHub pull request" workflow the push model? What is "push" about it, though? I can download all the pull request patches locally and work offline, can't I?


A GitHub pull request pushes a notification/e-mail at you to handle the merge, and you have to handle the pull request mostly online.

I don't know how you can download the pull request as a set of patches and work offline, but you have to open a branch, merge the PR into that branch, test things, and merge that branch into the relevant one.

Or you have to download the forked repository, do your tests to see whether the change is relevant/stable and whatnot, and if it works, you can then merge the PR.

---

edit: Looks like you can get the PR as a patch or diff, and it's trivial, but you have to be online again to get it that way. So getting the mails from your mailbox is not enough; you have to get every PR as a diff, with a tool or manually, and then organize them. E-mails are a much more unified and simple way to handle all this.

---

In either case, reviewing the changes is not possible when you're offline, plus the pings from PRs are distracting if your project is popular.


Seems like you found it, but for others: one of the easiest ways to get a PR's diff/patch is to just put .diff or .patch at the end of its URL. I use this all the time!

Random PR example, https://github.com/microsoft/vscode/pull/280106 has a diff at https://github.com/microsoft/vscode/pull/280106.diff
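If you'd rather script it, fetching that diff needs nothing beyond the standard library (same PR as above):

    import urllib.request

    # Fetch a PR's diff by appending ".diff" to its URL.
    url = "https://github.com/microsoft/vscode/pull/280106.diff"
    with urllib.request.urlopen(url) as resp:
        diff = resp.read().decode("utf-8", errors="replace")
    print(diff.splitlines()[0])  # a "diff --git ..." header line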

Another thing that surprises some is that GitHub's forks are actually just "magic" branches. I.e. the commits on a fork exist in the original repo: https://github.com/microsoft/vscode/commit/8fc3d909ad0f90561...


It’s bonkers to me that there isn’t a link to the plain patch from the page. Yes, it’s trivial to add a suffix once you know, but lots of people don’t, as evidenced by this thread.

Discoverability in UX seems to have completely died.


> It’s bonkers to me that there isn’t a link to the plain patch from the page.

It's yet another brick in the wall of the walled garden. It's been left open for now, but for how long?

IOW, it's deliberate. Plus, GitHub neglects to add trivial features (e.g. deleting projects, an "add review" button, etc.) while porting their UI.

It feels like they don't care anymore.


You could set up a script that lives in the cloud (so you don't have to), receives PRs through webhooks, fetches any associated diff, and stores them in S3 for you to download later.

Maybe another script to download them all at once and apply each diff to its own branch automatically (a rough sketch follows below).

Almost everything about git and github/gitlab/etc. can be scripted. You don't have to do anything on their website if you're willing to pipe some text around the old way.
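A minimal sketch of that second script; the directory layout and branch naming are my own assumptions, not anything GitHub provides:

    import pathlib
    import subprocess

    # Apply each downloaded diff on its own branch off main.
    for diff in sorted(pathlib.Path("downloaded-diffs").glob("*.diff")):
        branch = f"review/{diff.stem}"
        subprocess.run(["git", "switch", "-c", branch, "main"], check=True)
        subprocess.run(["git", "apply", "--index", str(diff)], check=True)
        subprocess.run(["git", "commit", "-m", f"Apply {diff.name}"], check=True)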


Why complicate the workflow when it can be solved with a simple e-mail?

> Almost everything about git and github/gitlab/etc. can be scripted.

Moving away from GitHub is more philosophical than technical at this point. I also left the site the day they took Copilot to production.


That's silly. Batteries don't like to be kept at 100% all the time, not unlike your lungs, which don't want to stay filled all the time (which is uncomfortable for your muscles even if you ignore the carbon dioxide).

e.g.: MacBooks discharge the battery down to 80% by running on the battery even when plugged in, citing "Rarely used battery", keep it at 80% for at least half a day, then charge it again.

Li-ion is an adversarial chemistry. You need to take care of it or the battery bites back by puffing up, losing capacity very fast, or becoming an indoor firework.

