There was an Instagram account or YouTube channel that used to make funny videos of the films ending with the credits rolling at the exact point the title of the film was said—anyone have any recollection of that?
Who does this appeal to and why? I use Rectangle on OSX to give me the layout options I need which basically amounts to switching between full screen and two apps split vertically, then I use the in-built OSX shortcuts for moving apps between screens.
What kind of workflows suit something as advanced as Window Maker? Where does the line sit between this and something like Gnome? I'm struggling to get my head around the nuances.
Window Maker is a glimpse into the past: a modern-ish take on the NeXTSTEP UI, which would go on to become OS X's Aqua. Personally it's my preferred Linux UI, simply because I find it cozy.
Something I've never been able to wrap my head around is how ROMs are dumped from cartridges for emulators. Dumping instructions and assets makes total sense to me, and so does packaging that up in a data file that an emulator can interpret, but how does an emulator model the hardware of every 'expansion' chip in a cartridge? How is that dumped from an original cartridge?
Expansion chips aren't ROMs; they need to be emulated as well.
The situation was IMHO a bit worse with the SNES's predecessor, the NES.
There were quite a few expansion chips, called mappers, even though their general function was expanding the NES's memory space rather than adding extra processors or capabilities. They were in most games, because without them the NES is limited to 32KB of PRG ROM and I think 4KB or 8KB of CHR (graphics) ROM; most games released after the NES's first year had one.
These all had to be reverse engineered along with the console itself, though fortunately that's much simpler than reverse engineering an add-on CPU or accelerator. Some are common and appear in many games (MMC1, MMC3), while others exist for pretty much a single game (MMC2 is for Punch-Out only).
They're not dumped. The emulator implementation recreates the expansion chip functionality in software. There are only so many expansion chips, so it's not intractable.
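To make this concrete, here's a minimal sketch of what "recreating a mapper in software" looks like, loosely modelled on the NES's UxROM (iNES mapper 2) as it's commonly documented: CPU writes to $8000-$FFFF select which 16KB PRG bank appears at $8000, while the last bank stays fixed at $C000. It's an illustration of the idea, not a complete or accurate emulator component.

```python
# Illustrative model of a simple bank-switching mapper (UxROM-style).
# The cartridge's mapper chip isn't dumped; the emulator just
# reimplements its address-decoding behaviour like this.

class UxROM:
    BANK_SIZE = 0x4000  # 16 KB PRG banks

    def __init__(self, prg_rom: bytes):
        assert len(prg_rom) % self.BANK_SIZE == 0
        self.prg = prg_rom
        self.n_banks = len(prg_rom) // self.BANK_SIZE
        self.bank = 0  # bank currently switched in at $8000

    def cpu_read(self, addr: int) -> int:
        if 0x8000 <= addr <= 0xBFFF:   # switchable bank window
            return self.prg[self.bank * self.BANK_SIZE + (addr - 0x8000)]
        if 0xC000 <= addr <= 0xFFFF:   # fixed window: always the last bank
            return self.prg[(self.n_banks - 1) * self.BANK_SIZE + (addr - 0xC000)]
        return 0

    def cpu_write(self, addr: int, value: int) -> None:
        if addr >= 0x8000:             # any write to ROM space selects a bank
            self.bank = value % self.n_banks
```

The emulated CPU routes its memory accesses through `cpu_read`/`cpu_write`, and the mapper's behaviour emerges from this logic rather than from anything extracted off the cartridge.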
Yeah, I too wondered why the author would target 60Hz at all. The fact that the SNES controller is polled at 60Hz is simply a consequence of the game reading the input at that rate. As mentioned in the other comment, in this setup the polling of the controller is not in sync with the game reading the input at all. So even if you hit a 60Hz polling rate perfectly, you'd actually have worse input latency than the original hardware.
It would be much better to target 120Hz or higher to reduce input latency and bring it as close to the original hardware as possible by ensuring there is always an up-to-date input state ready for the game to read.
I had a feeling the polling rate was probably to do with the frame rate of the SNES rather than some requirement of the controller. I'll try out a higher polling rate and see what happens.
Yeah I wouldn't be worried about the hardware at all. The only chips on there are two 8-bit shift registers and according to the datasheet at 5V they can run up to 3MHz. You're much more likely to run into the limits of the USB protocol than the SNES controller hardware.
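A quick back-of-the-envelope check of that claim: two daisy-chained 8-bit shift registers give a 16-bit read per latch, so at the datasheet's 3MHz clock a full read takes microseconds. The latch pulse width here is an assumption (~12µs, roughly what the original console is commonly documented to use), so treat the exact numbers as illustrative.

```python
# Rough arithmetic: how fast could you poll the pad's 16-bit
# shift-register chain at the 4021's rated 3 MHz clock?

CLOCK_HZ = 3_000_000   # max shift clock at 5 V per the datasheet
BITS = 16              # two 8-bit shift registers daisy-chained
LATCH_S = 12e-6        # latch pulse width -- assumed, not from the datasheet

read_time_s = LATCH_S + BITS / CLOCK_HZ
max_poll_hz = 1 / read_time_s
print(f"one read: {read_time_s * 1e6:.1f} us -> up to {max_poll_hz:,.0f} polls/s")
```

That works out to tens of thousands of polls per second, which is why the USB side (frame timing, HID polling intervals) becomes the bottleneck long before the controller hardware does.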
If for the sake of argument we are plugging this into an SNES emulator running at 60Hz, then the stack between the USB gamepad and the emulator already has to handle that the gamepad is not rigidly synced to the emulated SNES and will presumably take every input from the gamepad and use it as the emulated input to the SNES next time it asks, unless a new one comes in first.
At slightly faster than 60Hz, the net effect will be that the delay between input and having an effect on the SNES will wander about 1/60th of a second over time, as the sync varies, and every once in a while a particular input will be overwritten before it makes it into the emulator. It may be very difficult to perceive this, though, due to other delays already built into such a set up. On a real SNES, it would be right on the edge of perception anyhow.
Being very close to the poll rate but not quite there is probably near the theoretical worst case. It would probably be much better to poll at 240+Hz, cutting the added input latency to a consistent 1/4ish of a frame at worst. However, I doubt this improvement could be "felt" by very many people at all.
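The arithmetic behind the comments above can be sketched as follows: if the pad poll and the game's 60Hz read free-run with no synchronisation, the input the game sees is on average half a poll interval stale, and a full interval stale in the worst case. The numbers are just the straightforward division, nothing measured.

```python
# Added latency from an unsynchronised poll loop feeding a 60 Hz game.
# Average staleness is half a poll interval; worst case is a full one.

GAME_HZ = 60.0

def added_latency_ms(poll_hz: float) -> tuple[float, float]:
    poll_dt_ms = 1000.0 / poll_hz
    return poll_dt_ms / 2, poll_dt_ms   # (average, worst case)

for hz in (60, 120, 240, 1000):
    avg, worst = added_latency_ms(hz)
    frames = worst / (1000.0 / GAME_HZ)
    print(f"poll {hz:4d} Hz: avg +{avg:5.2f} ms, "
          f"worst +{worst:5.2f} ms ({frames:.2f} frame)")
```

At 240Hz the worst case is about 4.2ms, i.e. a quarter of a 60Hz frame, which is where the "1/4ish of a frame" figure comes from; at 60Hz it wanders up to a full frame.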
Right, and that's expected of the model that the author originally bought into. The problem isn't with the users, the problem is with the author's false expectations of the model.
That's ironic considering how much of the modern world is built on open source. The Linux kernel could start sending telemetry to Microsoft tomorrow, unannounced, in exchange for, say, a billion a year. Let alone the other millions of free-as-in-for-now projects. Interesting times!