skohan's comments | Hacker News

I wonder to what extent the 4o rollback was motivated by this exact case


As in the removal of 4o, or its reinstatement? Like, the model involved here was 4o AFAICS; if it was related to this case you'd expect them to remove it and bury it, not remove it and then bring it back a few days later.


I had the same experience. For me peanut butter and apples were my "healthy snack" and were accounting for a huge amount of calories per week.


"Healthy" is a pretty loaded word. In the grand scheme of things apples and peanut butter is indeed a pretty healthy snack: good balance of fiber and protein and carbs and healthy fats, and nothing particularly bad like partially hydrogenated oils if you stick to a decent brand of peanut butter. But its not a particularly low-anything snack, so maybe not an efficient use of calories for someone trying to watch their weight. Nuts in general are a pretty caloric food.

Occasionally (usually as a distraction while trying to make a meal plan for the week) I find myself wondering what we could do differently with the concept of healthy foods to make nuances like this easier for people to understand, without getting into fad diet territory. I've never had a brilliant idea here because it's fundamentally asking the public to have a nuanced understanding in an industry with tons of historical marketing spin, which is... hard. Really hard. Fad diets and diet plans in general exist because someone telling you exactly what to eat is sometimes more effective than trying to build an understanding of why those choices are made.

The reality is that "healthy" is an individualized goal and different foods are a tool for getting to that destination.


I found I developed a new way of valuing food based on its deliciousness per kcal.

Some foods are delicious but too calorific to be considered.

Some taste OK but turn out to be exceptional value!


Not only forests but even just tree cover. I live in Berlin, and a few years ago there were a few days over 40 degrees; I remember the difference was stark going from my relatively tree-lined street to an area of the city with only concrete.


I always felt unwrap is preferable for diagnosing issues, which makes it useful in tests. A failed unwrap will point you directly to the line of code which panics, which is much simpler than trying to trace an issue through many layers of Results.

If you use `assert` in tests, I don't understand why you wouldn't also prefer unwrap in this context.

I think it's also perfectly reasonable to use in a context like a binary you run locally for private consumption, or for instance a serverless function, where it allows you to pinpoint the source of errors more easily in the logs.

It's not a good fit where you genuinely expect the unwrap to fail sometimes, but it's a great fit where you believe it should always succeed, since it lets you pinpoint exactly where that assumption breaks.
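
Roughly the pattern I mean, as a minimal sketch (made-up test, but the mechanics are standard):

    #[cfg(test)]
    mod tests {
        #[test]
        fn parses_port() {
            // If this parse ever fails, the panic names this exact file and line,
            // which is usually all the diagnosis a failing test needs.
            let port: u16 = "8080".parse().unwrap();
            assert_eq!(port, 8080);
        }
    }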


I call unwrap a lot in unit tests where it feels obvious to me that the operation succeeds. If it doesn't, that's a huge mistake and should definitely fail the test, but I have no pre-existing description of what's wrong; it's just a big red flag that I'm wrong.

Fallible conversions that never fail in the tested scenario are an example - parsing "15/7" as a Rational is never going to fail, so unwrap(). Converting 0.125 into a Real is never going to fail, so unwrap(). In a doctest I would write at least an expect call, because in many contexts you should handle errors and this shows the reader what they ought to do, but in a unit test these errors should never happen, so there's no value in extensive handling code for this case.
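
Something like this, to make it concrete (using num-rational's Ratio as a stand-in for the Rational type above - if yours is a different type, the shape is the same):

    use num_rational::Rational64;

    #[test]
    fn parses_literal_ratio() {
        // A literal like "15/7" can never fail to parse, so unwrap() is just a
        // loud assertion; if it ever panics, the reported line is the diagnosis.
        let r: Rational64 = "15/7".parse().unwrap();
        assert_eq!(r, Rational64::new(15, 7));
    }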


> I always felt unwrap is preferable for diagnosing issues, which makes it useful in tests. A failed unwrap will point you directly to the line of code which panics, which is much simpler than trying to trace an issue through many layers of Results.

Take a closer look at testresult, since it also points directly at the exact line of failure (due to panics being used under the hood) but looks nicer.
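
If I remember the crate right, usage looks roughly like this (made-up test, just to show the shape):

    use testresult::TestResult;

    #[test]
    fn parses_port() -> TestResult {
        // On error, ? here panics under the hood and reports this exact line,
        // so you get unwrap-style diagnostics with nicer-reading test code.
        let port: u16 = "8080".parse()?;
        assert_eq!(port, 8080);
        Ok(())
    }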


I was a huge Swift fan, but SwiftUI and the changes supporting it in the language got me to switch to Rust for all my personal projects.


I’ll admit the builder DSL stuff is a bit of a Turing tarpit for me. I may have wasted the day yesterday trying to implement a BNF grammar DSL.


Doesn't Rust's static linking also have to do with the strategy of aggressive monomorphization? IIRC, for instance, every concrete instantiation of a generic type gets its own compiled code, so it would basically be impossible for a dynamic library to do this, since it wouldn't know how it would be used, at least not without some major limitations or performance tradeoffs.
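
A rough illustration of the point (toy example, not anyone's real code): each concrete type a generic is used with gets its own compiled copy, so a prebuilt dynamic library couldn't ship code for instantiations it has never seen.

    fn double_all<T: std::ops::Add<Output = T> + Copy>(items: &[T]) -> Vec<T> {
        items.iter().map(|&x| x + x).collect()
    }

    fn main() {
        // The compiler emits separate machine code for double_all::<i32>
        // and double_all::<f64> (monomorphization).
        println!("{:?}", double_all(&[1, 2, 3]));
        println!("{:?}", double_all(&[0.5, 1.5]));
    }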


I believe MS is trying to standardize this, in the same way as they do with DirectX support levels, but I agree it's probably going to be inherently a bit less consistent than Apple offerings


DirectML can use multiple backends.


There was a rumor floating around that Apple might try to enter the server chip business with an AI chip, which is an interesting concept. Apple's never really succeeded in the B2B business, but they have proven a lot of competency in the silicon space.

Even their high-end prosumer hardware could be interesting as an AI workstation given the VRAM available if the software support were better.


> Apple's never really succeeded in the B2B business

Idk, every business I've worked at and all the places my friends work seem to be 90% Apple hardware, with a few Lenovos issued for special-case roles in finance or something.


They mean server infrastructure.


I love the air form factor. I do serious work on it as well. I have used a pro, but the air does everything I need without breaking a sweat, and it's super convenient to throw in a bag and carry around the house.


> a dedicated Nvidia rig

I am honestly shocked Nvidia has been allowed to maintain their moat with CUDA. It seems like AMD would have a ton to gain just spending a couple million a year to implement all the relevant ML libraries with a non-CUDA back-end.


AMD doesn’t really seem inclined toward building developer ecosystems in general.

Intel seems like they could have some interesting stuff in the annoyingly named “OneAPI” suite but I ran it on my iGPU so I have no idea if it is actually good. It was easy to use, though!


There have been quite a few back-and-forth X/Twitter storms in teacups between George Hotz / tinygrad and AMD management about opening up the firmware for custom ML integrations to replace CUDA, but last I checked they were running into walls.


I don't understand why you would need custom firmware. It seems like you could go a long way just implementing back-ends for popular ML libraries in OpenCL / compute shaders.

