Hacker News | diegs's comments

I used git5 from when I started in 2011 to when I left in 2017.

I'm going back starting on Monday, so I'm curious to try out jj.

In the past 10 years it's all been GitHub and GitLab, and their code review tools are so painful, specifically w.r.t. tracking discussions across revisions. I never felt excited to try out jj because I was afraid it would make that situation even worse.


It was deprecated in 2015 iirc


I agree, I've fantasized about an editor with a truly pluggable editing model which is decoupled from the other parts.

Yi was kind of designed like this, I believe. You could compile in an emacs-like model, a vim-like model, or presumably make your own model.

I've used Helix and Kakoune in addition to Emacs and Vim, but dealing with the limitations/featureset/plugin treadmill gets a little tiring.

I have been following Zed, and it seems that they have rearchitected things to enable adding Helix mode and making the editing model a bit more modular, but it's still fairly new. They are fixing bugs pretty quickly. I will have to try it again.

They have a nice discussion here:

https://github.com/zed-industries/zed/discussions/6447

They reference Ki, which also looks cool, and they point out some of Helix's inconsistencies in their comparison: https://ki-editor.github.io/ki-editor/docs/comparisons/

I preferred Kakoune to Helix (it was more consistent). But to your point, being able to swap these things out more easily would let you choose an editor based on features, and not trade off between features and an ergonomic editing model.

Ironically you can use Ki inside of VSCode (and I know you can use Vim that way too), but VSCode is so darn bloated and slow...


The truly pluggable editor is Emacs. I too spent months trying out Neovim, then Emacs, then finding Helix. Spent a year on Helix, then Zed because I would rather have something more complete, and brought with me all I could of Helix modal editing.

But Emacs. Emacs is the one that can truly become anything you like. And with LSP and tree-sitter finally being in it, I've finally come to my senses and started building my Helix in it.


I wish some radical team would just say fuck yes, we're gonna make Emacs fast, and actually accomplish it.

It's definitely easier with LLMs now, but still considerably hard.


LLMs don't make it any easier at all.

But the team is out there ; )


Epsilon is fast emacs


I thought they vastly improved user-space wireguard performance?

https://tailscale.com/blog/more-throughput

Not sure if the kernel implementation pulled ahead again, I don't really follow these things.

Also not defending tailscale, I respect them but I agree they are a one size fits some solution.


This reminds me of https://en.wikipedia.org/wiki/Venti_(software), which was a content-addressable filesystem that used hashes for de-duplication. Since the hashes were computed at write time, the performance penalty was amortized.
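
The dedup-at-write idea is simple enough to sketch in a few lines of Go (a toy illustration of the concept, using SHA-256 where Venti actually used SHA-1):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// store is a toy content-addressed block store: blocks are keyed by the
// hash of their contents, so writing the same bytes twice stores them once.
type store struct {
	blocks map[[32]byte][]byte
}

func newStore() *store { return &store{blocks: make(map[[32]byte][]byte)} }

// put hashes the block at write time (amortizing the cost, as Venti does)
// and returns its address; duplicate content maps to the same address.
func (s *store) put(data []byte) [32]byte {
	addr := sha256.Sum256(data)
	if _, ok := s.blocks[addr]; !ok {
		s.blocks[addr] = append([]byte(nil), data...)
	}
	return addr
}

func main() {
	s := newStore()
	a := s.put([]byte("hello"))
	b := s.put([]byte("hello")) // duplicate write is deduplicated
	fmt.Println(a == b, len(s.blocks))
}
```

Same content, same address, one stored block: reads can then share blocks for free.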


That's wild. My dentist was in that building for quite a while as well.


Probably the same one?


+1 to 0.3mg, larger doses can lead to nightmares and other issues.

It also may take longer to have an effect than is commonly said. For me, it's ~3-4 hours. I'm a natural night owl but 0.3mg melatonin at 6pm has me falling asleep on the couch at 9:30-10pm.


That's interesting. For me it takes 30 minutes, give or take, to start to feel sleepy, and I'm also a night owl :-)


And then you have Go, which won't even let you compile code with an unused variable...


    func TestWhatever(t *testing.T) {
        // ...lots of code

        _, _, _, _, _, _, _ = resp3, resp4, fooBefore, subFoo, bar2, barNew, zap2
    }
Like, I get it, it's a good feature, it caught quite a lot of typos in my code but can I please get an option to turn this checking off e.g. in unit tests? I just want to yank some APIs, look at their behaviour, and tinker a bit with the data.
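
For scratch code like that, one workaround is a tiny variadic no-op helper (a hypothetical helper for tinkering, not a standard idiom or library function):

```go
package main

import "fmt"

// use is a variadic no-op: passing a value to it counts as a "use",
// which silences the "declared and not used" compile error while tinkering.
func use(_ ...any) {}

func main() {
	resp := "pretend API response" // imagine these came from calls under test
	count := 42
	use(resp, count) // keeps the compiler happy until real assertions exist
	fmt.Println("ok")
}
```

Then when the test is done you grep for `use(` and replace each call with real checks, instead of juggling underscores.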


This example isn't particularly good code. If you've got "lots of code" that names a bunch of variables (e.g. using ':=') that are never referenced AND you have a good reason not to do so (which I doubt: given this context it looks like an incomplete test), then predeclare these 'excess' variables:

    func TestWhatever(t *testing.T) {
        var resp3, resp4, fooBefore, subFoo, bar2, barNew, zap2 theirType

        // ...lots of code
    }
Alternatively, use '_' where they are being defined:

    // instead of
    resp2, err := doit()

    // use
    _, err := doit()
If, and given this context it's likely, you're checking these errors with asserts, then either change the name of the error variable, predeclare the err name (`var err error`), or split it into multiple tests instead of one giant spaghetti super test.

That said, in a code review, at a minimum, I would probably ask that these variables be checked for nil/default which would completely eliminate this problem.


This is not a piece of code I would commit, obviously! It's a piece of code in the middle of being written and re-written (and re-run, a la REPL), and constantly replacing "resp2" with "_" and back again with "resp2" is friction. Go doesn't have a REPL, but having a TestWhatever(t *testing.T) function is a mostly good enough replacement, except for this one small problem.


Whew, that's a relief! If I understand correctly, then I think you'll have a better experience if you practice doing something like this when writing tests:

  foo, fooErr := doit()
  require.NotNil(foo)
  require.NoError(fooErr)
  _, _ = foo, fooErr // alternative if you don't want the asserts for some reason, remember to come back later and delete though

  // ...repl churn code... 
Using the stretchr/testify/require package. This code defines both variables, gives the error a unique name in the scope, and then references the names in two asserts. You won't have to deal with unreferenced name errors in the "repl churn", so you can comment/uncomment code as you go.


The good news is if you use Nix or Guix it’s relatively easy to hack your local build of the compiler to demote the unused variables hard error to just a warning.


You know, for some weird reason, it never crossed my mind to hack the Go compiler to let me do things like that. And it's such a great idea.



Well, it's about as easy without either Nix or Guix.


It's really not. Yes it's possible to figure out how to build and install the Go compiler, but then you have to repeat that process every time you want to upgrade to a new version. With Guix (I assume Nix is similar) you just save the patch somewhere and then run `guix install --with-patches=go=/path/to/patch go` and everything just works (including reapplying the patch on upgrade).


All of this could've been prevented if Go just had two ways to compile: debug and release.

The Go devs decided against this since they didn't want to build a highly optimizing (read: slow) compiler, but that is missing the point of developer ergonomics.


It could be prevented in an even simpler way: emitting warnings.

Most people nowadays ban building with warnings in CI and allow them during local development. But "CI" was barely a thing when Go was developed, so they just banned them completely. And now they are probably too stubborn to change.


The Go decision was explicitly to not have warnings, and the unused identifier thing complained about is merely a consequence of that.

https://go.dev/doc/faq#unused_variables_and_imports


As an outsider to Go, it feels to me like this basic pattern comes up over and over again in Go:

Q. Why can’t I have feature X?

A. We thought of that already, and your opinion is wrong.

Q. But basically every other language in existence supports this. And it makes development easier. We would really, really like it. Please?

A. Apparently you don’t get it. Here’s a pointer to a 15 year-old post on a mailing list where the first time someone asked for this, we said no. Your opinion is wrong.


> And now they are probably too stubborn to change.

Sounds like we agree!


I wish I could pay money to hide amazon ads on our echo show devices and not auto-opt-in to each new "experience" pane they add. They let you do it for Kindle ads and now for Prime, maybe it'd be a nice cash injection for the faltering Alexa org?

Really, I just want our smart displays (which I paid real money for) to show our family photos and do smart home things. It's exhausting to repeatedly have to open up the settings panel and uncheck whichever new screens/"experiences" they've added each time they start popping up. There are dozens at this point--talk about shipping your org chart!

Hopefully Matter will mature at some point and Apple will ship some smart displays of their own, and then we can toss our Alexas in the bin.


Google's displays are fairly decent. No ads, can cast Home Assistant dashboards onto them (with functional touch), and the 1st gen hubs sell for around ~$30 and 2nd gen around ~$50.


Yeah, they are generally better but Ring and Alexa Skills are better than their Google counterparts (at least the last time I checked).

I actually have a few old Nest hubs sitting in the basement…


I rely on HA pretty heavily for everything, and that covers anything I'd want out of Alexa and more.


How do they inject the ads? It might be worth running your own little dns like adguard home or pi hole (or paying for nextdns)


Sadly they’re just part of the app. I run adblocking DNS on my home network, but it doesn’t make a difference with this.


Which brand do you use? I couldn't find an exact match via web search :(


https://matelibre.com/ out of Montréal. They even have them in cans.

Looking at their site, they apparently even make a vodka drink with them now. Uh, caffeine-and-alcohol drinks are usually not very well received by the government here. I wonder if it'll stir up some political drama.


Is this still incompatible with split-horizon DNS? Whenever I'm connected to my corporate tailnet I can no longer resolve hostnames that are registered on my personal, DHCP-assigned DNS server, breaking access to my home network. This also leads me to believe that all my DNS requests are being routed through the MagicDNS server, which is not cool IMO.


It sounds like your corporate tailnet checked the "override local DNS" setting and provided their own default nameservers, so those are the ones that get used. They could also not do that, at which point your LAN resolver would get consulted, but I presume there's a policy reason in play?

You say "the MagicDNS server" like it's a quad-8 thing out on the internet. That server lives in the tailscale process on localhost. In some configurations on some OSes, we do have to route requests through that in order to polyfill missing OS features (usually, implementing split-DNS policies that the OS cannot represent natively, or transparently upgrading to DoH for upstreams that support it). You can inspect the logic that decides how to implement DNS policy depending on the policy and OS in https://github.com/tailscale/tailscale/tree/main/net/dns, as well as inspect what the in-process DNS forwarder does (extremely boring: match query suffix in configuration, forward packet to appropriate upstreams).
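
That suffix-matching step is roughly this (a toy sketch of the idea, not Tailscale's actual code; the suffixes and resolver addresses below are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// route picks upstream resolvers for a DNS query by matching the query
// name against configured domain suffixes, falling back to a default.
func route(query string, routes map[string][]string, def []string) []string {
	query = strings.TrimSuffix(query, ".") // normalize trailing root dot
	for suffix, upstreams := range routes {
		if query == suffix || strings.HasSuffix(query, "."+suffix) {
			return upstreams
		}
	}
	return def
}

func main() {
	routes := map[string][]string{
		"corp.example": {"10.0.0.53"}, // corp suffix -> corp resolver
	}
	def := []string{"192.168.1.1"} // LAN resolver for everything else
	fmt.Println(route("git.corp.example.", routes, def))
	fmt.Println(route("nas.home.lan.", routes, def))
}
```

Everything not under a configured suffix falls through to the default resolver, which is the "leave everything else alone" behavior described above.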


Weird, I asked our TS admin to disable "override local DNS" and he claimed the option was greyed out, seemingly due to MagicDNS being enabled or something. I'll see if I can get access myself to try and change it. Thank you for the reply!


If things still aren't behaving, write in to support@tailscale.com and we'll sort you out. It sounds like the corporate setup wants to just push some custom DNS routes for specific suffixes and leave everything else alone, which is definitely a supported configuration.


Most of the Split DNS issues should be fixed now.

If you're on Linux, you want systemd-resolved, as it's the only Linux DNS resolver that's really any good, regardless of your opinions on systemd overall (See https://tailscale.com/blog/sisyphean-dns-client-linux/)

In any case, file a bug with details and we'll fix it up if there are still issues.


You're right for most setups, but when Docker also comes into play, systemd-resolved+Tailscale+Docker interacts really badly and containers cannot resolve anything anymore. This caused some serious hair-pulling at work a few months ago.


How did you solve it?

I want to be prepared if it happens, spent too much time figuring out weird Docker - DNS/network interactions on hotel wifis and the like...


The only proper solution I could find is disabling systemd-resolved entirely. There doesn't seem to be any way to make it listen on something other than 127.0.0.53 (the stub address is actually hardcoded in systemd-resolved), which means that Docker containers that inherit the host's /etc/resolv.conf can't resolve DNS anymore.
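
For reference, taking systemd-resolved out of the loop usually looks roughly like this (a sketch: the 127.0.0.53 stub and the resolv.conf symlink are standard, but check your distro, and the nameserver below is a placeholder for your real LAN/VPN resolver):

```shell
# Stop the stub resolver and keep it from coming back on boot.
sudo systemctl disable --now systemd-resolved

# /etc/resolv.conf is typically a symlink to the 127.0.0.53 stub config;
# replace it with a real resolver so containers inheriting it work again.
sudo rm /etc/resolv.conf
echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf
```
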

