There are Intel CPUs that come with bundled RAM, for example the Intel Core Ultra 5 238V. It's like a system-on-module (SoM): the RAM is mounted directly on the CPU package, not even soldered onto the motherboard. I'm not sure what particular advantages that brings over traditional packaging; perhaps shorter traces allow faster signaling between CPU and RAM. But there's zero chance of upgrading or replacing the RAM, for sure.
In theory, yes, but that is not the case with Lunar Lake, which nowadays has no greater bandwidth than current CPUs with external LPDDR memory.
However, at launch, a year and a half ago, it had a bandwidth about 15% higher than competing CPUs.
For a really "massive increase in bandwidth", it would have needed a wider memory interface, like AMD's Ryzen AI Max, which has a 256-bit memory interface instead of the 128-bit interface of most Intel/AMD laptop CPUs.
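The interface-width point is easy to check with back-of-the-envelope math: peak bandwidth is just bus width times transfer rate. A sketch, where the LPDDR5X-8533 transfer rate is an illustrative figure rather than an exact product spec:

```python
# Back-of-the-envelope peak DRAM bandwidth: bus width (bits) x transfer rate (MT/s).
# The 8533 MT/s figure below is an illustrative LPDDR5X speed, not a product spec.

def peak_bandwidth_gbs(bus_width_bits: int, mts: int) -> float:
    """Peak bandwidth in GB/s: (bits / 8) bytes per transfer * transfers per second."""
    return bus_width_bits / 8 * mts * 1e6 / 1e9

# A typical 128-bit laptop memory interface at 8533 MT/s:
print(peak_bandwidth_gbs(128, 8533))   # ~136.5 GB/s

# A 256-bit interface at the same transfer rate simply doubles it:
print(peak_bandwidth_gbs(256, 8533))   # ~273.1 GB/s
```

Doubling the bus width doubles peak bandwidth at the same memory speed, which is why a 256-bit interface moves the needle far more than shaving trace lengths does.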
Yes, totally. By "introduced" I didn't mean they were the first in the space, but rather that they have introduced it in the laptops they're shipping now. But yes, it's been a thing for a while on other architectures as well.
> While the parent article shows AMD Zen 5 having significantly better results in floating-point SPEC CPU2017, these benchmark results are still misleading, because in applications properly optimized for AVX-512 the difference between Zen 5 and Cortex-X925 would be much greater.

I have no idea how SPEC has been compiled by the author of the article, but the floating-point results are not consistent with programs optimized for Zen 5.
Most SPECfp subtests have quite low arithmetic intensity. You see this wall because, on cores with beefy SIMD, they hit memory-bandwidth limits long before running out of compute.
My mental model is that each of these covers a different layer of the stack, from lowest to highest:
* hypervisor-framework handles the hypervisor bits, like creating virtual machines and virtualising hardware resources; it's basically a C API on top of Apple's hypervisor
* virtualization-framework is a higher-level API, meant to make it easy to run a full-blown VM with an OS and hardware integration, without having to reinvent the integration with lower-level primitives that hypervisor-framework provides
* containerization-framework uses virtualization-framework to run Linux containers on macOS in microVMs.
As an analogy, so as not to mix them up: it's a bit like KVM > QEMU > containerd.
Virtualization.framework was introduced in Big Sur. It builds on top of Hypervisor.framework and is essentially Apple's QEMU (in some ways quite literally, it implements QEMU's pvpanic protocol for example). Before QEMU and other VMMs gained ARM64 Hypervisor.framework support, it was the only way to run virtual machines on ARM Macs and still is the only official way to virtualize ARM macOS.
The new Tahoe framework you're probably thinking of is Containerization, which is a WSL2-esque wrapper around Virtualization.framework allowing for easy installation of Linux containers.
>a WSL2-esque wrapper around Virtualization.framework allowing for easy installation of Linux containers.
So Linux is now a first class citizen on both Windows and Mac? I guess it really is true that 'if you can't beat em, join em.' Jobs must be rolling in his grave.
Oh, good point. I mixed it up; UTM is using QEMU under the hood, but as someone mentioned, the OpenBSD snapshot now boots with QEMU seamlessly. It's still virtualised, though.