Alternatively, one could use a OneCore-patched XP with MSYS2/MinGW/Cygwin for Bash, GNU tooling, and the pacman package manager, and compile most of the remaining software by hand. Such a setup runs a modern Firefox, LibreOffice, and Windows 7 games; perhaps most of the Python, Rust, and Node ecosystems would run too.
Or, if one really needs a lightweight Linux/WSL alternative, one could run VirtualBox, QEMU, or coLinux (which tops out at the ancient 2.6.33 kernel).
Who needs 64-bit if the lean and mean 32-bit suffices and the Windows Classic theme is included? Small LLMs would probably not work, while they would with Loss32.
One could use a video LLM to generate the video, diagrams, or stills automatically from the text. Except for board-game playthroughs or programming, I just transcribe YouTube videos to text, summarise, and read them.
How do you read YouTube videos? Very curious, as I have been wanting to watch PDFs scroll by slowly on a large TV. I am interested in the workflow for getting a PDF/document into a scrolling video format. These days NotebookLM may be an option, but I am curious whether there is something custom. If I can get it into video form (mp4), I can even deliver it via Plex.
I use yt-dlp to download the transcript, and if one is not available I grab the audio file and run it through Parakeet locally. Then I have the plain text, which could be read out loud (kind of defeating the purpose), but perhaps at triple speed with a computer voice that is still understandable at that rate.
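A minimal sketch of that transcript step, assuming yt-dlp is installed; the URL and file names are placeholders, and the Parakeet invocation is left out since it depends on your local setup:

```shell
# Placeholder URL; replace VIDEO_ID with a real video for the downloads to work.
URL="https://www.youtube.com/watch?v=VIDEO_ID"
if command -v yt-dlp >/dev/null; then
  # Prefer the (auto-generated) subtitles and skip the video itself.
  # "|| true" keeps the sketch from aborting on the placeholder URL.
  yt-dlp --skip-download --write-subs --write-auto-subs \
    --sub-langs en --convert-subs srt -o 'talk.%(ext)s' "$URL" || true
  # Fallback when no transcript exists: extract audio for a local
  # speech-to-text pass (e.g. Parakeet).
  yt-dlp -x --audio-format mp3 -o 'talk.%(ext)s' "$URL" || true
fi
# Strip SRT cue numbers, timestamps, and blank lines to get plain text.
if [ -f talk.en.srt ]; then
  sed -e '/^[0-9][0-9]*$/d' -e '/-->/d' -e '/^$/d' talk.en.srt > talk.txt
fi
```

The sed cleanup at the end is what turns the subtitle file into something a TTS voice or pandoc can consume.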
I could also summarize it with an LLM. With pandoc or Typst I can convert it to a single- or multi-column PDF to print, or to watch on the TV or my smart glasses. If I strip the vowels and make the font smaller, I can fit even more!
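The pandoc half of that can be sketched as follows, assuming pandoc with a LaTeX engine (e.g. pdflatex) is installed; notes.txt is a placeholder input:

```shell
# Placeholder input so the sketch is self-contained.
printf 'A paragraph of transcript text.\n' > notes.txt
if command -v pandoc >/dev/null && command -v pdflatex >/dev/null; then
  # Single column, large print for the TV (12pt is the article-class maximum).
  pandoc notes.txt -V fontsize=12pt -V geometry:margin=1cm -o notes.pdf
  # Two-column variant (classoption is passed to the LaTeX article class).
  pandoc notes.txt -V classoption=twocolumn -o notes-2col.pdf
fi
```

Typst would be a second route to the same PDFs; the `-V` variables above are standard pandoc/LaTeX knobs for font size, margins, and column layout.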
One could convert the Markdown/PDF to a very long image first with pandoc+wkhtml, then use ffmpeg to crop and move the viewport slowly over the image, scrolling at, say, 20 pixels per second for 30 s. With the mpv player one can then change the playback speed dynamically with keys.
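A sketch of the ffmpeg pan, assuming ffmpeg is installed; a synthetic lavfi-generated image stands in for the real rendered page:

```shell
SPEED=20   # scroll speed in pixels per second
if command -v ffmpeg >/dev/null; then
  # Stand-in for the pandoc/wkhtml output: a 1280x6720 single-frame image.
  ffmpeg -y -f lavfi -i color=c=gray:s=1280x6720 -frames:v 1 long.png
  # crop re-evaluates its y expression every frame, so the 720px-tall
  # window slides down the image as the timestamp t advances.
  ffmpeg -y -loop 1 -i long.png -t 30 \
    -vf "crop=1280:720:0:'min(ih-720, t*$SPEED)'" \
    -r 30 -pix_fmt yuv420p scroll.mp4
fi
# Covering the whole page takes (image_height - 720) / SPEED seconds:
# here (6720 - 720) / 20 = 300 s, so -t 30 only scrolls the first 600 px.
```

The `min(ih-720, ...)` clamp stops the viewport at the bottom edge instead of running off the image.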
Alternatively, one could use a Rapid Serial Visual Presentation (RSVP / speed-reading / Spritz-style) technique, either rendered out to mp4 or via a dedicated RSVP program where the speed can be changed on the fly.
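An RSVP-to-mp4 pipeline can be sketched as one frame per word, assuming ImageMagick and ffmpeg are installed; article.txt is a placeholder, and 4 frames per second corresponds to 240 words per minute:

```shell
# Placeholder input text.
printf 'read one word at a time\n' > article.txt
# ImageMagick 7 ships "magick", older installs "convert"; use whichever exists.
IM=$(command -v magick || command -v convert || true)
if [ -n "$IM" ] && command -v ffmpeg >/dev/null; then
  mkdir -p frames
  i=0
  for word in $(cat article.txt); do
    i=$((i + 1))
    # Render one centered word per 1280x720 frame.
    "$IM" -size 1280x720 xc:black -fill white -gravity center \
      -pointsize 120 -annotate 0 "$word" "$(printf 'frames/%06d.png' "$i")"
  done
  # 4 fps input means each word is shown for 1000/4 = 250 ms.
  ffmpeg -y -framerate 4 -i 'frames/%06d.png' -pix_fmt yuv420p rsvp.mp4
fi
```

Changing `-framerate` changes the reading speed at render time; a dedicated RSVP program would let you adjust it live instead.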
One could also output to a braille display.
Scrolling mp4 text on the TV or laptop is a good idea for my mother and her macular degeneration; or perhaps I should use an easier-to-see/read magnification browser plugin instead.
DuckDB is an open-source column-oriented Relational Database Management System (RDBMS). It is designed to provide high performance on complex queries against large databases in an embedded configuration.
"DICT FSST (Dictionary FSST) represents a hybrid compression technique that combines the benefits of Dictionary Encoding with the string-level compression capabilities of FSST.
This approach was implemented and integrated into DuckDB as part of ongoing efforts to optimize string storage and processing performance."
https://homepages.cwi.nl/~boncz/msc/2025-YanLannaAlexandre.p...
I agree, and would add Calmira LFN 3.3 and Microsoft Office 4.3.
Would Notepad++ work?
What would you use as a graphical www browser?
I mean, even with Win32s and modern SSL support somehow built in, it would be a challenge.
The README.md is 9k of dense text, but it does explain the claims: faster, more efficient, more accurate, and more sensible.
Rust port feature: The implementation "passes 93.8% of Mozilla's test suite (122/130 tests)" with full document preprocessing support.
Test interpretation/sensibility: The 8 failing tests "represent editorial judgment differences rather than implementation errors." It notes four cases involving "more sensible choices in our implementation such as avoiding bylines extracted from related article sidebars and preferring author names over timestamps."
This means the results are 93.8% identical, and the remaining differences are arguably improvements.
Further improvement, extraction accuracy: Document preprocessing "improves extraction accuracy by 2.3 percentage points compared to parsing raw HTML."
Performance:
* Built in Rust for performance and memory safety
* The port uses "Zero-cost abstractions enable optimizations without runtime overhead."
* It uses "Minimal allocations during parsing through efficient string handling and DOM traversal."
* The library "processes typical news articles in milliseconds on modern hardware."
It's not explicitly written, but based on these four points I think it's a reasonable assumption that its "millisecond" processing time is significantly faster than the original JavaScript implementation, and perhaps that it is lighter on memory too.
I would add a comparison benchmark (memory and processing time), perhaps with bar charts to make it clearer, along with the 8 examples of differing editorial judgement, for people who skim.
I was responsible for third-party e-mail clients being able to connect to Exchange; it was decided that Thunderbird was allowed, and support was implemented. It can be done if people are aware of the needs, can implement it securely, and can evaluate the risks.
For me it's like the Pebble of smart-glasses land: simple and elegant.
Less is more: just calendar, tasks, notes, and AI. The rest I can do on my laptop or phone (with or without other display glasses).
I do wish there were a way to use the LLM on my Android phone with it, and if possible to write my own app for it, so I am not dependent on the internet and have my HUD/G2 as a lightweight, custom-made AI assistant.
Strictly speaking, the mobile Oculus/Meta Go/Quest headsets were Linux/Android-based: you can run a Termux terminal with Fedora/Ubuntu on them and use an Android VNC/X app for the 2D graphical part. But I share your SteamOS enthusiasm.