
I dabble in "life storage", and your comment made me think that shipping an executable alongside the backup location to read the data back out, especially if it's in some deduplicated backup format, could be valuable.

E.g. Camlistore/Perkeep had the premise of using JSON to store data. However, some random person isn't going to write code to parse all that data, pull files out, etc. A lifeboat .exe might be interesting.

Though doing it in the simplest, least configurable, least breakable way seems.. necessary. Yeah, some baked-in UI would be cool, but more moving parts means it's less likely to work.
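
To make that concrete, here's a minimal sketch of what such a lifeboat could look like. The layout it assumes is hypothetical (loosely Perkeep-flavored, not its actual schema): a manifest.json mapping each file path to an ordered list of content-addressed chunks.

```python
# Hypothetical lifeboat: restore files from a dead-simple deduplicated
# store. Assumed layout (NOT Perkeep's real schema): manifest.json is
# {"relative/path": ["<sha256 hex>", ...]} and each chunk lives in
# chunks/<sha256 hex>.
import hashlib
import json
from pathlib import Path

def restore(backup_dir: str, out_dir: str) -> None:
    backup = Path(backup_dir)
    manifest = json.loads((backup / "manifest.json").read_text())
    for rel_path, chunk_hashes in manifest.items():
        target = Path(out_dir) / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        with target.open("wb") as out:
            for digest in chunk_hashes:
                data = (backup / "chunks" / digest).read_bytes()
                # A lifeboat should fail loudly on corruption.
                assert hashlib.sha256(data).hexdigest() == digest, digest
                out.write(data)

if __name__ == "__main__":
    restore("backup", "restored")
```

If the whole tool stays that small, there's very little to bit-rot: no config, no UI, just "point it at the backup and get your files back".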




The myth of shareholder value is roundly rejected by the people whose job it is to think about these things: https://corpgov.law.harvard.edu/2012/06/26/the-shareholder-v...


> I.e. why would Blizzard be okay with workers unionizing for better work conditions? It seems unlikely that would favor Blizzard in the short term[1].

Well, if everyone reading about Blizzard's malfeasance decides to not do business with them, as some commenters in this thread are doing, then they will suffer a problem in the short term as well.


AFAICT, this is one of those counter-intuitive cases. Mission-driven companies tend (AFAIK) to do better than pure profit-driven companies; similarly, companies that take morality seriously tend to do better. A significant part of this (IMHO) is employee empowerment, which relates to the common theories of why Silicon Valley happened rather than the Boston corridor (a legal situation advantageous to employees).

Other things that come to mind: "Ask for money, get advice. Ask for advice, get money twice." https://www.smbc-comics.com/comic/ags - "If the stock market made any sense, you'd be able to exploit that sense and capitalize on it." https://en.wikipedia.org/wiki/Goodhart%27s_law

So it makes "logical sense" IFF short-term improvements (or non-losses) of control, power, and position lead to profit, and that's not something that's logically provable. Rather, it's something that has to be observed experimentally, and AFAIK it ...isn't.

If your goal-metric is money, you run into both what SMBC is pointing out, and Goodhart's law. If your goal-metric is something else, money is an "easy" by-product.

> how can we work to promote moral behavior?

IMHO, sociological studies examining this would help. "Seek money, make nothing. Seek to make, get money twice", or something.


> Mission-driven companies tend to (AFAIK) do better than pure profit-driven companies; similarly, AFAIK, companies that take morality seriously tend to do better.

Do you have links to any studies backing this up? The idealist in me really wants this to be true, but the cynic in me believes capitalism is set up to reward the most ruthless, unscrupulous people.


I do not :/ It's a soundbite that I've heard within the startup space (probably HN, ages ago).

But - what would be your top 10 list of successful startups? How many of them are either directly mission-driven or have a strong mission?


My hope is that I can contribute in my small way to making moral behavior more profitable than immoral behavior. I.e., voting with my wallet.

I'll also vote for policies and representatives to enact sensible (opinions will differ, obviously) legislation to enforce better morality and better safeguarding of the commons, especially with respect to externalities -- as I see those as a major source of the failings of capitalism.

Beyond that, I don't really know. And I'll readily admit that looking around at where we as a society are and where we seem to be headed, especially with respect to things like climate change, those seem like very small measures. I just don't know what else to do.


Edit: parent deleted their comment, but it was a question about whether union busting is the logical capitalist move, and what we can do about it.


Heck, if it's a general-purpose web app - i.e. not Chrome-only - maybe give Tauri[1] a try?

[1]: https://github.com/tauri-apps/tauri


How useful would that book (IPv6 Address Planning) be to someone not working specifically in networking/ops? I like developing applications, and I manage, of course, my home network.

I'd love a book that gives me everything I need to know about IPv6: from justifications, to things to know when working with it, implementing it, using it on my local network, etc.

I don't necessarily need or care to learn it at a super low level, but I do want a complete understanding of it for my specific use cases - applications and home networks, I imagine. For a novice in networks, to be clear.

Thoughts?


For home use you probably care about SLAAC, PD (prefix delegation), the standard subnet size of /64, and possibly the link-local differences (more out of curiosity about why those addresses show up on your machines than out of any need to do something with them). Also, DNS gets AAAA records instead of A records, and reverse lookups use a different zone, though the DNS changes are a pretty 1:1 translation for admins. If you want to go fully v6 you'll want to read about NAT64, so you can still reach the v4 internet from your v6-only home network. Also take a look at http://shouldiblockicmp.com/ even if you don't go down the v6 path.

For applications programming you'll want a feel for the above, plus IPv4-mapped IPv6 addresses, and to review link-local again - in particular, note how to encode the interface in a socket call (useful for configuration-less cluster communication).
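
For reference, here's roughly what that looks like in Python. The address and interface name are made-up examples; the "%zone" suffix is the standard way to pin a link-local address to an interface:

```python
import socket

# fe80::/10 addresses are ambiguous across interfaces, so a zone ID
# ("%eth0" here; both address and interface are made-up examples) is
# required. getaddrinfo resolves it into the numeric scope_id carried
# in the IPv6 sockaddr 4-tuple (host, port, flowinfo, scope_id).
infos = socket.getaddrinfo(
    "fe80::1ff:fe23:4567:890a%eth0", 8080,
    socket.AF_INET6, socket.SOCK_STREAM,
)
family, socktype, proto, _, sockaddr = infos[0]
with socket.socket(family, socktype, proto) as s:
    s.connect(sockaddr)  # scope_id tells the kernel which interface to use
```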

Most every other detail of IPv6 changes should only matter to those who write networking stacks or make routers.

For all of the above info I'd recommend just reading the Wikipedia article on IPv6. Most of these are straightforward rote memorization of best practices or background-reasoning things, so it's not "read a book" worthy if you're not trying to do this for a living, IMO (coming from someone who does networking for a living).


I would guess that the first two chapters would be useful.

Honestly, if your first thought isn't "oh wow, I would love to learn how to plan out IPv6 networks", it might not be worthwhile.


Man, a personally hosted VPN for all my junk is my dream. Can't wait to get some self-hosted, Tailscale-like setup going - something I can get into from anywhere and work from anything. I'm tempted to order a Bluetooth keyboard like the one in this post just to have it in the car and be able to work from my iPad/iPhone in a pinch (assuming internet).

For someone wanting an easy, at-home VPN for setups just like in the blog - is anything on the market competitive with Tailscale's UX? Ideally there would be no proxy, no VPN hosted on DigitalOcean that I'd fear being a weak point. Instead, I'd like:

1. A redirect hosted on DigitalOcean, acting as a self-hosted dynamic-IP service (see the sketch after this list). No security issue here, I'd think?

2. After hitting my real IP, connect to a VPN on a predetermined port.

3. Get access to bells and whistles on the now-privileged network. Bonus points if I could assign DNS entries like `ssh me@workmachine.fake` or `http(s?)://videostreaming.fake`
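
For step 1, I imagine something as dumb as this would do. The update endpoint and token are hypothetical placeholders - any tiny HTTP service on the droplet that rewrites a DNS record would fill the role:

```python
# Minimal sketch of a self-hosted "dynamic DNS" updater (step 1).
# UPDATE_URL and the bearer token are hypothetical; only the public-IP
# check service is real.
import time
import urllib.request

CHECK_URL = "https://api.ipify.org"           # returns your public IP as text
UPDATE_URL = "https://dyn.example.com/update"  # hypothetical self-hosted endpoint

def current_ip() -> str:
    with urllib.request.urlopen(CHECK_URL, timeout=10) as resp:
        return resp.read().decode().strip()

def push_ip(ip: str) -> None:
    req = urllib.request.Request(
        f"{UPDATE_URL}?host=home&ip={ip}",
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
    )
    urllib.request.urlopen(req, timeout=10).close()

last = None
while True:
    ip = current_ip()
    if ip != last:       # only push on change
        push_ip(ip)
        last = ip
    time.sleep(300)      # poll every 5 minutes
```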

Tailscale makes me a bit nervous; as cool as they are, I'd prefer entirely self-hosted. Though I may give them a try just to experience this UX.

Edit: https://blog.tonari.no/introducing-innernet - innernet might be what I'm looking for.


For the last decade or so I have been running cheap Asus routers on my home network. They support dyndns and OpenVPN out of the box. I assume that other manufacturers offer the same thing.

I haven't poked around in the guts of their VPN setup, but it's generally password protected, and I would assume in the clear. So I replace accounts frequently and only leave it running if I'm out of town and have some project I may need access to.

Basically you vpn to yourname.dyndnsprovider.com and you are done.

The rest is DNS setup, which is sort of up to you, but you can easily configure something like machinename.home.mydomain.net


MikroTik routers can also run a VPN just fine. :)


ZeroTier?


Hash map/set keys are a common one, for example.


So I agreed with you until:

> I'd certainly classify that as addictive behavior -- i.e. when a healthy young man is more interested in what's on his phone than he is in the attractive woman sitting across the table from him. Something is very wrong there.

We can definitely debate online life vs. "in the flesh" - but it seems small-minded to me to suggest someone's preferences for experience are only the result of unhealthy addiction.

Many would argue against your allowance of modern life, from TVs to cars to in-city restaurants/etc. - that you (or that person, I guess) didn't make a home-cooked meal or go experience nature together; that it's an addiction to the modern, lacking in down-to-earth, honest, and real connections.

Not that I agree with any of that, of course. My point is that I think there is a perfectly valid possible course where someone prefers to experience their life in cities, in the woods, or in more virtual spaces.

The reality, though - and where I agree with you - is that I don't think we actually have a virtual space that _isn't_ fueled entirely by addiction, powered by highly financed and motivated teams of people.

I just think we need to be cognizant of alternative lifestyles. Just because certain lifestyles commonly result in unhealthy behavior doesn't inherently mean those lifestyles shouldn't be followed at all. If that were the case, I think this argument should probably extend to avoiding much of modern life, as it is full of unhealthy habits and poor balances. We'll be living in the woods pretty soon if we can't recognize the possible healthy and balanced ways to live within the unhealthy, unbalanced minefield that is so many alternate forms of life.


I don't know much about Starlink, but isn't one point of Starlink that it could eventually even beat wired connections on long-distance latency? I.e., it's a shorter and more direct trip to use Starlink to get from US West to US East, for example.

Though this is quite a ways out, I imagine.

Best of all, this idea works with current tech, and can get even better with future tech once Starlink starts going satellite <-> satellite, avoiding unnecessary land hops.

I'm interested in Starlink for all use cases, once they get more satellites up.


My understanding is that in addition to distance latency there is some non-trivial amount of latency due to the use of TDMA. Unlike CDMA, where everybody piles onto the same spectrum at the same time, TDMA gives a timeslot to every station, and you've got to wait your turn before TX/RX (this is a gross oversimplification, of course).

While it doesn't add that much latency, it is more than CDMA.


Not just TDMA - lasers in vacuum travel at the speed of light, but signals in optical fiber and coaxial copper both travel at about 2/3 of that. This means that if (and that's still an if, not yet a when) Starlink does laser links between satellites, it can beat terrestrial speeds.
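
Back-of-the-envelope, that 2/3 figure matters a lot over long hauls. A rough one-way calculation for ~5,600 km (roughly the New York-London great circle, ignoring the satellites' longer, higher path and any hop overhead):

```python
C = 299_792.458  # speed of light in vacuum, km/s

distance_km = 5_600                   # ~NY to London great circle
t_vacuum = distance_km / C            # free-space laser link
t_fiber = distance_km / (C * 2 / 3)   # light in glass, ~2/3 c

print(f"vacuum: {t_vacuum * 1000:.1f} ms, fiber: {t_fiber * 1000:.1f} ms")
# vacuum: 18.7 ms, fiber: 28.0 ms -- a ~9 ms one-way gap before
# accounting for the actual (longer) satellite route.
```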

I could see the end result being that we still do bent-pipe signaling over land and only use laser interlinks for crossing oceans and for customers who pay for low ping to financial markets.


Having the lowest latency is only useful to commodity traders. There's no way Starlink can ever compete with terrestrial service on QoS in general (due to the Shannon-Hartley theorem). Their USP is universal availability.


> Sometimes I don't understand what's going on in the heads of the people thinking this stuff up :/

So this post made me look up[1][2] COOP/COEP, but as far as I can tell this seems to be a security measure - seemingly because they don't know, at this point in time, how else to enable shared memory in WASM without this limitation.

So what in your mind could have been done better? I agree it really sucks having WASM apps live in two camps, single- and multithreaded, but it seems like we as users conceptually have two choices:

1. Don't get shared memory at all; or

2. Get shared memory in limited scenarios.

#2 still seems better than #1, no?

Or do you perhaps think the performance opt-in is overly aggressive? I.e., if we just enabled shared memory always, we'd reduce the WASM split with minimal issues. Alternatively we could do the reverse, opt-out, such that for resource-constrained environments the phone/whatever could connect to `mobile.example.com`.
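
For context, the opt-in in question is (as far as I can tell) just two response headers. A minimal local-dev sketch - the header names/values are the real COOP/COEP ones; the server itself is just a testing convenience, not a production recommendation:

```python
# Serve the current directory with the two headers that grant
# cross-origin isolation (and thus SharedArrayBuffer / threaded WASM).
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class IsolatedHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header("Cross-Origin-Opener-Policy", "same-origin")
        self.send_header("Cross-Origin-Embedder-Policy", "require-corp")
        super().end_headers()

ThreadingHTTPServer(("localhost", 8000), IsolatedHandler).serve_forever()
```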

[1]: https://web.dev/coop-coep/

[2]: https://www.youtube.com/watch?v=XLNJYhjA-0c&t=4s


Well, "obviously" the web should have a mechanism in place that allows to request security sensitive features without having to poke around in the web server configuration, because in many cases this is "off limits" to the people who author the files to be hosted. How this is achieved, I don't really care, I only know that the current solution is half-baked.

The underlying problem is that this is a classic finger-pointing situation that will probably never be fixed: the web security people point the finger at the web hosters, and the web hosters shrug it off because 99% of their customers don't need the features - they just host their static blog there.


HTML meta headers used to be the solution to this kind of thing - like the <meta charset="UTF-8"> tag, for example (which carries information you can also provide in an HTTP header).


Good use of past tense, given that they aren't the solution any longer.


> So what in your mind could have been done better?

If it's a security risk, there shouldn't be an option. Setting up a web server is a low bar for malicious actors.


It's not a security risk. The security risk was removed, and the provided feature is an expensive workaround to avoid the security risk.


Maybe, depending on the app, a layer could be created - a WASM inside a WASM, kind of like a Docker-type thing that would allow an app to live inside a WASM virtual machine.


I could agree with you from a UX-flow perspective, but the tech is so shoddy that I loathe it. Everything is a "web 2.0" monstrosity of load times and pop-in. Open the wrong link and it takes 40 seconds for components to pop in, pull data, render, move to the highlighted component, which then pops in, pulls data, renders - and finally you get what you want.

The UX-flow might be good, not sure, but the tech is so bad it actually inhibits users. The use of independent components might be neat when loading a Jira card from Bitbucket pull requests (which works), but it makes loading Jira cards from... Jira, terrible. IMO.


>>> Open the wrong link and it takes you 40 seconds

There's nothing in JIRA that takes anywhere near 40 seconds to open.

If you really have pages that take double-digit seconds to open (you can open your browser's development tools to measure), it's more likely an administrator in your company is to blame for a horrible setup - running the web service on a toaster and the database on a NAS.


You're right, I'm sensitive to load times and exaggerated by approximately 4x.

I just (loosely) timed it: it took ~10-12s to open a selected backlog issue card _with cache_ after a refresh. I used that example because it illustrates the list I gave before.

1. The page loads, a bit slow in general.

2. The dom has loaded, so now backlog issues are loading.

3. The backlog issues are loaded, so now the selected component opens.

4. The selected issue component starts loading data.

5. Your data is now finally visible.

The ~10-12s is spread loosely and evenly across steps 1-5. This is on Jira Cloud - no "toaster NAS", unless you want to blame Jira Cloud for running a toaster, in which case I'd agree.

The problem in my mind isn't the servers. Opening the network tab, you see requests responding a bit slowly - maybe 50-500ms - but not _terrible_. I'd like to see all requests below 300ms, personally.

The problem is the UI design. Everything executes dynamically and sequentially. The URL indicates exactly what sort of page I want to see, but nothing is loaded until the JS loads, renders, and makes a request for whatever data that individual component needs. Any sub-components then rinse and repeat once they are actually loaded.

When you stack components on components on components that all need to sequentially load data, a 1-2s load time starts to stack up, fast. And best of all, network requests are gated on how fast your DOM renders? Ugh.
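
A toy model of why the waterfall hurts - the component names and the 300ms per request are made up, but the shape is the point:

```python
import asyncio
import time

FETCH_S = 0.3  # assume each API call takes ~300 ms

async def fetch(name: str) -> str:
    await asyncio.sleep(FETCH_S)  # stand-in for a network request
    return name

async def waterfall(components):
    # Each component discovers its request only after its parent rendered.
    for c in components:
        await fetch(c)

async def upfront(components):
    # The URL already says what's needed, so fire everything at once.
    await asyncio.gather(*(fetch(c) for c in components))

comps = ["page", "backlog", "selected-card", "card-details", "comments"]
for strategy in (waterfall, upfront):
    start = time.perf_counter()
    asyncio.run(strategy(comps))
    print(f"{strategy.__name__}: {time.perf_counter() - start:.1f}s")
# waterfall: ~1.5s, upfront: ~0.3s -- same data, same server speed.
```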


What's the excuse for Jira Cloud? It sometimes takes up to 60s to open a popup window with an issue's details.


I am so curious as to what setup causes these horrific load times. I never, never have performance issues like these with Jira Cloud. A "slow" load for me is about 4 seconds; normally cards open in a second or two.


I gave an example here: https://news.ycombinator.com/item?id=27909949

But in addition to that, 1-3 seconds per click is horrible, IMO. I have to literally pause and slow down my workflow, because every input (navigation click) takes 1-3 seconds to load. That sort of delay starts to drag on users.

Weren't JS-based UIs supposed to _lighten_ the load? Make web pages faster because they could just ask for the data that changed? This feels like such a massive step backward from simple HTML, like HN.

1-3 seconds being good, or even okay, is abysmal to me - especially when the full page isn't needed, just a handful of data. Something is fundamentally broken with this version of the "modern web". And I say that as a web developer who loves complex frontend technologies. But to me, if the user isn't getting a faster response, the frontend tech would be better off as plain HTML.


I used to manage an on-prem Jira installation at $OLD_JOB; the amount of network traffic that the issue-view page generates is absurd. If you have a speedy connection or most of the page cached you won't notice, but switch to a slower connection or a congested enterprise VPN and it's painfully slow. Open the browser's toolbox, disable the cache, and reload - see for yourself.


We've had performance issues with cloud Jira/Bitbucket over the last couple of months. Simple merges can take 2 minutes in the background. I encounter 5-10 second slowdowns multiple times a week in other parts of the UI. GitHub is instantaneous in comparison.

