I saw Radicle posted yesterday and tried it out. It's pretty much exactly what is being asked for here. The main issue I see is that it's impossible to search through people's projects; you have to be linked to them from somewhere that isn't Radicle Explorer.
If we run with your theory that the human mind adapts and thinking becomes less valuable, that should be setting off alarm bells. What kind of world would it be where thinking is not considered a valuable skill? If I thought that were true, I would be in favour of restricting AI use to certain workflows.
It's not a 180. You can be against copyright, but as long as copyright is still being enforced on you, you can think it should be enforced on AI companies too.
I'd prefer no copyright, but we live in a world where there is copyright, so it's unfair that only AI companies get to be immune to it.
The hidden cost of competing in these industries is insane. It's so hard to build a physical product that can compete against a giant like IKEA. You need to make something with less R&D, less automation, and less infrastructure, you're going to sell fewer units, and all of that needs to be price-competitive against something made on a production line by a team of experienced engineers and sold to millions at thin margins.
I very recently ran the numbers on these GPUs for an upcoming blog post. The token generation performance is bad, but the prefill performance is _really_ bad.
For a Qwen 3.6 35B / 3B MoE, 4-bit quant:
- parsing a 4k prompt on an M4 MacBook Air takes 17 seconds before generating a single token.
- on an M4 Max Mac Studio it's faster, at 2.3 seconds.
- on an RTX 5090, it's 142ms.
The RTX 5090 uses more power than an M4 Max Mac Studio, but it's not 16x more power.
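For context, here's the rough throughput those timings imply. This is just a back-of-envelope sketch, assuming the "4k prompt" is exactly 4,096 tokens, which the measurements above don't specify:

```python
# Back-of-envelope prefill throughput from the timings above.
# Assumes the "4k prompt" is exactly 4,096 tokens; the real count may differ.

PROMPT_TOKENS = 4096

prefill_seconds = {
    "M4 MacBook Air": 17.0,
    "M4 Max Mac Studio": 2.3,
    "RTX 5090": 0.142,
}

for device, seconds in prefill_seconds.items():
    tokens_per_second = PROMPT_TOKENS / seconds
    relative = seconds / prefill_seconds["RTX 5090"]
    print(f"{device:>18}: ~{tokens_per_second:,.0f} tok/s prefill ({relative:.0f}x the 5090's time)")
```

That works out to roughly 240 tok/s of prefill on the Air, ~1,800 on the Max, and ~29,000 on the 5090, which is where the ~16x gap between the M4 Max and the 5090 referenced above comes from.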
Somehow Apple has always been able to sell their stuff as magic. Remember the megahertz myth? Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.
> Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.
The thing that Apple has always been excellent at is efficiency - even during the Intel era, MacBooks outclassed their Windows peers. Same CPU, same RAM, same disks, so it definitely wasn't the hardware; it was the software that allowed Apple to pull much more real-world performance out of the same clock cycles and power usage.
Windows itself, but especially third-party drivers, is disastrous when it comes to code quality, and it's all much, much more generic (and thus less efficient) compared to Apple with its very small number of SKUs. Apple insisted on writing all drivers, and IIRC even most of the firmware for embedded modules, themselves to achieve that tight control... which was (in addition to the 2010-ish lead-free Soldergate) why they fired NVIDIA from making GPUs for Apple - NV didn't want to give Apple the specs any more to write drivers.
> NV didn't want to give Apple the specs any more to write drivers.
I think that's a valid demand, considering Nvidia's budding commitment to CUDA and other GPGPU paradigms. Apple, backing OpenCL, would have every reason to break Nvidia's code and ship half-baked drivers. They did it with AMD's GPUs later down the line, pretending like Vulkan couldn't be implemented so they could promote Metal.
Apple wouldn't have made GeForce more efficient with their own firmware; they would have hung a Sword of Damocles over Nvidia's head.
> They did it with AMD's GPUs later down the line, pretending like Vulkan couldn't be implemented so they could promote Metal.
It was even worse than that: they just stopped updating OpenGL for years before either Vulkan or Metal existed at all. Taking a MacBook and using Boot Camp would instantly raise the GPU feature level by several generations, just because Apple's GPU drivers were so fucking old and outdated.
On Geekbench 5, the M1 hits 483 FPS and the RTX 3090 hits 504 FPS.
There are other workloads where the M1 actually beats the 3090.
Apple does plenty of hyping but it's always cute when irrational haters like you put them down. The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.
What Geekbench 5 FPS are you talking about? Geekbench only has OpenCL and Vulkan scores for the 3090 as far as I can tell, and the M1 Ultra gets less than half the OpenCL score of the 3090. And the M1 Ultra was significantly more expensive.
Find or link these workloads you think exist, please
> The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.
The GTX 1660 also smokes the 3090 in perf per watt. Being more efficient while being dramatically slower is not exactly an achievement; it's pretty typical power consumption scaling, in fact. Perf per watt is only meaningful if you're also able to match the perf itself. That's what actually made the M1 CPU notable. M-series GPUs (not just the M1, but even the latest) haven't managed to match or even come close to the perf, so being more efficient is not really any different from, say, Nvidia, AMD, or Intel mobile GPU offerings. Nice for laptops, insignificant otherwise.
Here you go[0]. 'Aztec Ruins offscreen'. Although I misremembered the exact FPS, the 3090 is at 506 FPS.
Also note how the M1 Ultra is pushing 2/3 of the FPS of the 3090 despite a 1/3 power budget, with the benchmark itself being poorly optimized for the M-series architecture.
And here[1] you have it smoking an Intel i9 12900K + RTX 3090. The difference doesn't look too impressive until you realize the power envelope for that build is 700-800W.
Also, the GTX 1660 (technically an RTX 2000 series, but whatever) is about 26% less efficient than a 3090[2].
> Being more efficient while being dramatically slower
That's my whole point and what you're refusing to see. The M1 is not dramatically slower than an i9 or 3090 despite having dramatically lower power use.
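Taking the figures above at face value, here's a rough sketch of the implied perf per watt. The 506 FPS is the GFXBench result cited above; the ~350 W and ~115 W figures are my own ballpark assumptions for a 3090 and an M1 Ultra GPU under load, not measured values:

```python
# Rough perf-per-watt comparison implied by the figures above.
# The 3090 FPS is the GFXBench Aztec Ruins (offscreen) number cited earlier;
# the M1 Ultra FPS is taken as ~2/3 of that, per the comment.
# Power figures are ballpark assumptions, not measurements.

gpus = {
    "RTX 3090": {"fps": 506, "watts": 350},          # assumed ~350 W under load
    "M1 Ultra": {"fps": 506 * 2 / 3, "watts": 115},  # assumed ~1/3 the power budget
}

for name, gpu in gpus.items():
    print(f"{name}: {gpu['fps']:.0f} FPS, {gpu['fps'] / gpu['watts']:.2f} FPS/W")
```

Under those assumptions the M1 Ultra lands at roughly 2x the FPS per watt of the 3090, which is the ratio the "2/3 of the FPS on 1/3 of the power" claim boils down to.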
The proof of this will really start to come once Qualcomm and MediaTek have gotten a handle on their PC ARM chips and Valve decides they're good enough for a Steam Deck 2 or 3. You'll get to see 2-3x the battery life along with a modest performance increase.
> Here you go[0]. 'Aztec Ruins offscreen'. Although I misremembered the exact FPS, the 3090 is at 506 FPS.
Oh, GFXBench, not Geekbench.
Realistically that 506 FPS result is probably CPU bottlenecked, not that Aztec Ruins is all that relevant. It's a very old benchmark, released in 2018, that was designed for mobile GPUs, so realistically it's using a 2010-ish GPU feature set.
If that's your use case, great. But it's not significant at all.
> And here[1] you have it smoking an Intel i9 12900K + RTX 3090.
Not using the GPU, so irrelevant. Also not using 700-800W.
> Also, the GTX 1660 (technically an RTX 2000 series, but whatever) is about 26% less efficient than a 3090[2].
"bestvaluegpu" I've never heard of but holy AI slop nonsense batman. Taking 3dmark score and dividing it by TDP is easily one of the worst ways to compare possible.
Yes, the land is beautiful, but the media portrays it pretty accurately. I visited LA before COVID, around 2018, and it was dystopian. It was dirty, the infrastructure was very poor, and the wealth divide was unlike anything I'd seen before. I still had a great time because I had money to spend.
You should have responded with something like "it's fine on my end, you are not following the documentation I've written; please take any complaints up with [Manager]" and then left your manager to deal with the problem they caused. Or lodged a complaint with the person above them. If they think it's OK to run docs through an AI and hand it out uncritically, they should be fired on the spot.
You're really complaining that they're using location-based keywords? Using a location-based keyword to serve a relevant sponsored post isn't personal data. I swear Mozilla haters just want it to die so they can use Chrome guilt-free.
Location data is personal data, same as search data in general, but that battle has long been lost with Firefox, which sells all user searches to Google anyway.