"Enabling wrapping behaviour" for signed integers disallows a lot of optimizations based on signed overflow being undefined behaviour, which is a matter of language and compiler design. This says nothing about the cost of checked arithmetic itself on the CPU.
It does, though. UB and the associated optimisations wouldn't be an issue if the defined behaviour had no impact on performance. If the cost were zero or negligible, the compiler wouldn't need to care, and warnings like this wouldn't need to be stated explicitly.
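To make the trade-off concrete, here's a minimal TypeScript sketch (function names are made up; 8-bit unsigned values stand in for the general case): wrapping semantics cost a mask on every operation, checked semantics cost a branch, and "overflow is undefined" is precisely the licence for a compiler to emit neither.

```typescript
// Wrapping: the result is masked back into range -- one extra AND per operation.
function addWrapping(a: number, b: number): number {
  return (a + b) & 0xff;
}

// Checked: overflow is detected and reported -- one extra branch per operation.
function addChecked(a: number, b: number): number | null {
  const sum = a + b;
  return sum > 0xff ? null : sum;
}

// "Undefined on overflow" means the compiler may assume overflow never happens,
// so it can drop both the mask and the branch (and reorder/hoist around them).
// That freedom is exactly what mandated wrapping or checked semantics take away.

console.log(addWrapping(250, 10)); // 4
console.log(addChecked(250, 10));  // null
```

The per-operation cost is tiny in isolation; the real loss is the optimisations (loop bounds reasoning, strength reduction, etc.) that the "overflow never happens" assumption enables.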
This is kind of a thing already in the EU. Under NIS 2, vulnerabilities are supposed to be reported to a CSIRT as well as upstream, and the CSIRT shall identify downstream vendors and negotiate a disclosure timeline. I don't know whether they're any good at it or not, though.
Cool to see F# here! Emulators are a great way to learn a language. At first glance you chose well between more and less idiomatic F# for each job.
Some low-hanging fruit to reduce allocations: the discriminated unions in Instructions.fs could be [<Struct>], reusing field names across cases so they share internal fields.
Also, a minor nitpick, but I'm confused about some of the registers. They're already of type byte, so the setters with `a &&& 0xFFuy` don't add anything over `member val A = 0uy with get, set`. I'm guessing this changed over time.
// Registers can't be a record type because the values need to be truncated to 8 bits when writing, so setters are needed
// This is for the web renderer as Fable transpiles uint8 to Number (more than 8 bits) in JS and doesn't apply any truncation
// Known non-standard behaviour in Fable (https://fable.io/docs/javascript/compatibility.html#numeric-types)
So I think it's just conservatively masking the data, because of Fable's widening to a JS Number on the web target.
I haven't used Fable much, but apparently it maps .NET arrays to JS TypedArrays. Presumably you could keep the registers in an 8-element array and Fable would properly produce a Uint8Array. I'd like to benchmark that.
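The difference is easy to see on the JS side. A sketch of both behaviours (plain Number vs. Uint8Array; the variable names are just for illustration):

```typescript
// A plain JS Number doesn't truncate: a value typed as uint8 upstream
// can silently exceed 8 bits after arithmetic.
let a: number = 0xff;
a = a + 1;
console.log(a); // 256 -- no wraparound on a plain Number

// Masking on write restores 8-bit semantics; this is what the
// `a &&& 0xFFuy` setters accomplish in the transpiled output.
const masked = (0xff + 1) & 0xff;
console.log(masked); // 0

// A Uint8Array truncates on every write, so registers stored in one
// would stay in range with no explicit masks at all.
const regs = new Uint8Array(8);
regs[0] = 0xff + 1;
console.log(regs[0]); // 0
```

So if Fable does compile a byte array to a Uint8Array, the truncation comes for free from the typed array's write semantics.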
One of the cool things about Factor (and part of why I brought it up) is that it basically does something similar out of the box. There is a full-featured optimizing compiler and a simpler, faster non-optimizing compiler for eval-like functionality. They work seamlessly together in the interactive Factor environment:
Right now the cost of C interop in Ruby is too high. It's actually more performant in the general case to rewrite any C lib wrappers in pure Ruby these days and let the JIT do the work.
WebDAV is kinda bad, and back then it was a big deal that corporate proxies wouldn't forward custom HTTP methods. You could barely trust PUT to work, let alone PROPFIND.
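For context, PROPFIND is one of the extension methods WebDAV layers on top of HTTP; a proxy that only whitelists GET/POST/HEAD/PUT would drop it. A sketch of such a request using the standard fetch API's Request object (the URL is hypothetical):

```typescript
// PROPFIND with Depth: 1 asks a WebDAV server to list a collection's
// immediate members. The fetch spec permits arbitrary non-forbidden
// methods, so the Request carries it through verbatim -- it was the
// intermediaries of the era, not the clients, that choked on it.
const req = new Request("http://example.com/dav/", {
  method: "PROPFIND",
  headers: { Depth: "1" },
});

console.log(req.method); // "PROPFIND"
console.log(req.headers.get("Depth")); // "1"
```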