Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.
Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.
Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.
I have a script called catfiles, stored in ~/.local/bin, that recursively dumps every source file with a file-path header, so I can paste the resulting blob into Gemini or ChatGPT, discuss the changes I'd like to make, and then send the resulting prompt off to Gemini Code Assist.
Here's my script, if anyone is interested; I find it incredibly useful.
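For the curious, a minimal Python sketch of what a catfiles-style tool could look like (this is an illustration, not the original script; the extension filter and header format are assumptions):

```python
#!/usr/bin/env python3
"""Illustrative catfiles-style dump: recursively print source files with headers."""
import sys
from pathlib import Path

# Assumed set of extensions worth dumping; adjust to taste.
SOURCE_EXTS = {".py", ".sh", ".tf", ".yml", ".yaml", ".js", ".ts", ".go", ".c", ".h"}

def dump(root: Path) -> None:
    for path in sorted(root.rglob("*")):
        # Skip hidden files/directories and anything that isn't a source file.
        if any(part.startswith(".") for part in path.parts):
            continue
        if not path.is_file() or path.suffix not in SOURCE_EXTS:
            continue
        print(f"===== {path} =====")
        print(path.read_text(encoding="utf-8", errors="replace"))
        print()

if __name__ == "__main__":
    dump(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))
```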
It’s not even just about that. The real advantage is being able to dock a phone and edit photos or videos shot earlier in the day within a desktop-style environment directly on the device. This removes the need to transfer files to a separate editing system. The phone itself becomes a complete creative workstation that can sync to the cloud whenever needed.
Nobody is saying it is caused by one thing, only that acetaminophen is directly implicated as one of the causes.
This is coming from studies by Johns Hopkins University, Harvard University, and Mount Sinai, to name a few.
You're joking, right? Google is integrating photo-editing features right at the point of inception. It doesn't get more integrated, or a better fit, than that.
But it shouldn't have been conceived with that design. Adobe products blend a timesaving tool into a truly productive workflow: in Photoshop, a layer or a filter; in Lightroom, a brush. Mystery-meat autocorrect/enhance buttons only get you so far and may alter far more than you want, which is dangerous when you want to present a slightly polished version of reality.
The Chinese state operates the country much like a vast conglomerate, inheriting many of the drawbacks of a massive corporation. When top leadership imposes changes in tools or systems without giving engineers the opportunity to run proof-of-concept trials and integration tests to validate feasibility, problems like these inevitably arise.
The Europeans invented the car, and Ford mass-produced it.
Yet we see Ford as extremely innovative and revolutionary. I think we can draw many parallels between the industrializing US of the 19th and early 20th centuries and current China.
You may disparage TikTok as a Vine clone, but it redefined the state of the art for recsys algorithms. Google and Meta had to play catch-up with how quickly and how well TikTok surfaces videos users find interesting out of the ocean of available content.
Developers should revisit indexed-color formats so a texture only stores the colors it actually uses, rather than addressing the entire 32-bit color space. Coupled with compression, this would greatly reduce the RAM and disk space each texture consumes.
BC1 uses 4 bits per pixel without being limited to 16 colors though.
In a way, the hardware-compressed formats are paletted; they just use a different color palette for each 4x4 pixel block. (BC1, for instance, stores two endpoint colors per block and derives a four-entry palette from them, with a 2-bit index per pixel.)
Having said that, for retro-style pixel art games traditional paletted textures might make sense, but since you're doing the color palette lookup in the pixel shader anyway, you could also invent your own more flexible paletted encoding, like assigning a local color palette to each texture atlas tile (see the sketch below).
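A CPU-side sketch of that per-tile idea in Python/NumPy; the tile size, palette size, and function names are illustrative, and on the GPU the decode lookup would run in the pixel shader:

```python
import numpy as np

TILE = 8           # assumed atlas tile size in pixels
PAL_SIZE = 16      # assumed palette entries per tile (4-bit indices)

def encode_tile(tile_rgba: np.ndarray):
    """Build a per-tile palette and index map for one TILE x TILE RGBA tile."""
    pixels = tile_rgba.reshape(-1, 4)
    palette, indices = np.unique(pixels, axis=0, return_inverse=True)
    assert len(palette) <= PAL_SIZE, "tile uses too many colors for a 4-bit palette"
    # Pad the palette to a fixed size so every tile has the same layout.
    pad = np.zeros((PAL_SIZE - len(palette), 4), dtype=palette.dtype)
    return np.vstack([palette, pad]), indices.reshape(TILE, TILE).astype(np.uint8)

def decode_tile(palette: np.ndarray, indices: np.ndarray) -> np.ndarray:
    """The lookup a pixel shader would do: index -> palette color."""
    return palette[indices]

# Round-trip a random 4-color tile to check the encoding.
colors = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
tile = colors[np.random.randint(0, 4, size=(TILE, TILE))]
pal, idx = encode_tile(tile)
assert np.array_equal(decode_tile(pal, idx), tile)
```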
Indexed-color formats stopped being supported on GPUs past roughly the GeForce 3, partly because the CLUTs became a bottleneck. This discourages their use: indexed textures have to be expanded on load to 16 or 32 bpp, versus much more compact ~4-8 bpp block-compressed formats that the GPU can consume directly. Shader-based palette expansion is unattractive because it is incompatible with hardware bilinear/anisotropic filtering and sRGB-to-linear conversion.
Tbf, you wouldn't want any linear filtering for pixel-art textures anyway, and you can always implement some sort of custom filter in the pixel shader, as sketched below (at a cost, of course, but still much cheaper than a photorealistic rendering pipeline).
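One common custom filter for this is "sharp bilinear": nearest-neighbor look, but texel edges blend over a narrow band so they don't shimmer in motion. A Python sketch of the per-fragment math (single-channel texture and the band width are assumptions; in practice this would be a few lines of shader code):

```python
import numpy as np

def sharp_bilinear(tex: np.ndarray, u: float, v: float, band: float = 0.25) -> float:
    """Pixel-art friendly sampling on a (H, W) texture with u, v in [0, 1).
    band=1.0 degenerates to plain bilinear; band -> 0 approaches pure nearest."""
    h, w = tex.shape
    x, y = u * w - 0.5, v * h - 0.5
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy
    # Remap the bilinear weights: snap to 0 or 1 outside the transition band.
    wx = min(max((fx - 0.5) / band + 0.5, 0.0), 1.0)
    wy = min(max((fy - 0.5) / band + 0.5, 0.0), 1.0)
    def texel(px: int, py: int) -> float:
        # Clamp-to-edge addressing.
        return float(tex[min(max(py, 0), h - 1), min(max(px, 0), w - 1)])
    top = texel(ix, iy) * (1 - wx) + texel(ix + 1, iy) * wx
    bot = texel(ix, iy + 1) * (1 - wx) + texel(ix + 1, iy + 1) * wx
    return top * (1 - wy) + bot * wy
```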
It might well make sense IMHO, since block-compression artefacts usually rule out BCx for pixel-art textures.
I generally agree, but I think there is a real disconnect. Middle and upper management often do not understand how developers and engineers are actually supposed to use these tools.
For example, I work in operations, so most of what I touch is bash, Ansible, Terraform, GitHub workflows and actions, and some Python. Recently, our development team demonstrated a proposed strategy to use GitHub Copilot: assign it a JIRA ticket, let it generate code within our repos, and then have it automatically come back with a pull request.
That approach makes sense if you are building web or client-side applications. My team, however, focuses on infrastructure configuration code. It is software in the sense that we are managing discrete components that interact, but not in a way where you can simply hand off a task, run tests, and expect a PR to appear.
Large language models are more like asking a genie. Even if you give perfectly clear instructions, the result is often not exactly what you wanted. That is why I use Copilot and Gemini Code Assist in VS Code as assistive tools. I guide them step by step, and when they go off track, I can nudge them back in the right direction.
To me, that highlights the gap between management’s expectations and the reality of how these tools actually work in practice.