Given that the primary selling point of laptops is their portability (often at the cost of other things), they should be optimized to be highly usable wherever they end up being used.
Who will make these custom battery pouches for old phones, though? Think my old Poco X3 Pro, not the iPhone 17. I'm sure there are tons of people willing to make batteries for the iPhone 17, but I feel like the interest wanes as the phone gets older.
For older phones, the batteries we buy are likely leftover old stock anyway, if we can even find any in stock, right?
Just to add, VictoriaMetrics covers all 3 signals:
- VictoriaMetrics for metrics. It has Prometheus API support, so it integrates with Grafana via the Prometheus datasource (a small query sketch follows below), and it also has its own Grafana datasource with extra functionality.
- VictoriaLogs for logs. It integrates natively with Grafana via the VictoriaLogs datasource.
- VictoriaTraces for traces. It has Jaeger API support, so it integrates with Grafana via the Jaeger datasource.
All 3 solutions support alerting, are managed by the same team, are Apache 2.0 licensed, and are focused on resource efficiency and simplicity.
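To make the Prometheus API point concrete, here's a minimal sketch (mine, not from the project docs) of querying a single-node VictoriaMetrics instance over the standard /api/v1/query endpoint, the same way Grafana's Prometheus datasource does. It assumes the default listen address localhost:8428 and uses an arbitrary `up` query; adjust both for your setup.

    # Minimal sketch: VictoriaMetrics speaks the standard Prometheus HTTP API,
    # so any Prometheus client works against it. Assumes a single-node
    # instance on localhost:8428 (the default port); "up" is just an example
    # PromQL/MetricsQL expression.
    import requests

    VM_URL = "http://localhost:8428/api/v1/query"

    resp = requests.get(VM_URL, params={"query": "up"})
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        print(series["metric"], series["value"])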
That hasn't been true for at least 15 years. I was a k-root DNS operator then, and we ran several software stacks on each cluster in case one had a bug.
>It's a bit wasteful to provision your computers so that all the cold data lives in expensive RAM.
But that's a job applications are already doing. They put data that's being actively worked on in RAM and leave all the rest in storage. Why would you need swap when you can already fit the entire working set in RAM?
Because then you have more available working memory: infrequently used pages are moved to compressed swap, and the RAM they free up can be used for more page cache or just plain resident memory.
Swap to RAM by itself would be stupid, but everyone doing this is also turning on compression.
> Swap to RAM by itself would be stupid, but everyone doing this is also turning on compression.
I'm not sure what you mean here? Swapping out infrequently accessed pages to disk to make space for more disk cache makes sense with or without compression.
Swapping out to RAM without compression is stupid - then you’re just shuffling pages around in memory. Compression is key so that you free up space. Swap to disk is separate.
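For concreteness, on Linux "compressed swap in RAM" usually means zram (or zswap in front of a disk swap device). If zram is in use, you can sanity-check how much it actually compresses by reading its stats; a rough sketch, assuming a zram0 swap device and the usual mm_stat field order (original bytes, compressed bytes, memory used):

    # Rough sketch: read zram's counters to see the real compression win.
    # Assumes Linux with /dev/zram0 already configured as swap; the first
    # three mm_stat fields are original size, compressed size and total
    # memory used, in bytes (exact layout can vary a bit by kernel version).
    with open("/sys/block/zram0/mm_stat") as f:
        orig, compressed, used = (int(x) for x in f.read().split()[:3])

    if compressed:
        print(f"{orig / 2**20:.0f} MiB of swapped pages held in "
              f"{used / 2**20:.0f} MiB of RAM (~{orig / compressed:.1f}x)")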
> Because then you have more available working memory: infrequently used pages are moved to compressed swap, and the RAM they free up can be used for more page cache or just plain resident memory.
Uhh... A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead. The process has allocated that memory to use it. The kernel doesn't have enough information to deem disk cache a higher priority. The only thing that should cause it to be swapped out is either another process or the kernel requesting memory.
> A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead
Claiming any decision is "braindead" in something as heuristic-heavy and impossible to compute optimally as resident-page management is quite the statement to make. This is a form of the knapsack problem (NP-complete at least), with the added twist of time: the items will be needed in some specific but indeterminate order in the future, and there's a whole range of workloads and workload permutations that alter the answer.
To drive the point home in case you disagree: what's dumber, swapping out to disk a page that is allocated (from the kernel's perspective) but just sitting in the free list of that process's userspace allocator, or swapping out some frequently accessed page of data?
Now, I agree that VMMs may not do this, because it's difficult to handle these kinds of scenarios without penalizing the general case, and, more important than performance, the mechanism has to be explainable to others and understandable by them. But claiming it's a braindead option to even consider is IMHO a bridge too far.
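For what it's worth, the free-list case isn't purely hypothetical: a userspace allocator can hand the kernel exactly that hint through madvise, so such pages get dropped under memory pressure instead of being written to swap. A minimal sketch, assuming Linux (4.5+ for MADV_FREE) and Python 3.8+, where mmap exposes madvise(); the anonymous mapping just stands in for an allocator arena:

    # Sketch: mark pages that are still mapped, but logically freed by the
    # process, as reclaimable. Assumes Linux 4.5+ and Python 3.8+.
    import mmap

    PAGE = mmap.PAGESIZE
    arena = mmap.mmap(-1, 16 * PAGE)   # anonymous mapping, stand-in for an allocator arena
    arena[:] = b"x" * len(arena)       # touch the pages so they become resident

    # The allocator keeps the mapping around for reuse but tells the kernel
    # the contents are disposable; under pressure these pages can simply be
    # dropped instead of compressed or swapped out.
    arena.madvise(mmap.MADV_FREE)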
You mean to tell me most applications you've ever used read the entire file system, loading every file into memory, and rely on the OS to move the unused stuff to swap?
A silly but realistic example: lots of applications leak a bit of memory here and there.
Almost by definition, that leaked memory is never accessed again, so it's very cold. But applications don't put it on disk by themselves. (If the app's developers knew which specific bit was leaking, they'd rather fix the leak than write it to disk.)
That's just recognizing that there's a spectrum of hotness to data. But the question remains: if all the data that the application wants to keep in memory does fit in memory, why do you need swap?
> Running out of memory kills performance. It is better to kill the VM and restart it so that any active VM remains low latency.
Right, you don't seem to be understanding what I'm getting at.
Memory exhaustion is bad, regardless of swap or not.
Swap gets you a better-performing machine because you can swap shit out to disk and use that RAM for VFS cache.
The whole "low latency" and "I want my VM to die quicker" line is tacitly saying that you haven't right-sized your instances, your programme is shit, and you don't have decent monitoring.
Like, if you're hovering at 90% RAM used, then your machine is too small, unless you have decent bounds/cgroups to enforce memory limits.
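As an aside, the "decent bounds/cgroups" part is cheap to set up with cgroup v2. A rough sketch, assuming the unified hierarchy is mounted at /sys/fs/cgroup, the memory controller is enabled for child groups (the common systemd default), you have the privileges to create groups there, and "demo" plus the 512M limit are made-up example values:

    # Rough sketch: cap the current process at 512 MiB using cgroup v2.
    # Assumes /sys/fs/cgroup is the v2 unified hierarchy, the memory
    # controller is enabled for children, and we can write there (root or
    # a delegated subtree); the group name "demo" is made up.
    import os

    cg = "/sys/fs/cgroup/demo"
    os.makedirs(cg, exist_ok=True)

    with open(os.path.join(cg, "memory.max"), "w") as f:
        f.write("512M")               # hard cap; exceeding it invokes the group-local OOM killer

    with open(os.path.join(cg, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))     # move this process into the group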
Luckily they are still improving and we now have Tandem OLED with about double that.