The proposers of `light-dark()` themselves recognized that it was presumably a "stepping stone" towards (and eventually just a shorthand for) a deeper `schemed-value()` function similar to what you are asking for, once CSS also picks up a way to define custom color schemes (often proposed as an `@color-scheme` rule or block).
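For context, the shipped function is narrow: it picks between exactly two values, and only for colors. A minimal sketch:

```css
/* light-dark() chooses between two color values based on the used
   color scheme; it only resolves if color-scheme enables both. */
:root {
  color-scheme: light dark;
}
body {
  color: light-dark(#222, #eee);
  background-color: light-dark(#fff, #121212);
}
```

A hypothetical `schemed-value()` would generalize this beyond exactly two schemes and, potentially, beyond color values.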
It complicates the Store UX, too, if they have to add "This book is/is not supported by your device" warnings to every book, which would also require knowing which device you intend to read on. With the average Kindle owner often buying books directly from Amazon.com rather than the on-device Store, and often owning 2+ devices, they'd face a combinatorial explosion of those warnings ("This book is supported by your Kindle Oasis and Kindle Paperwhite C, but not your Kindle Paperwhite B or Kindle Paperwhite A").
Also, maybe the publisher of that 2015 book wants to upgrade it with new ebook features in 2026; for instance, they want to add the physical book's original illustrations now that the Kindle finally supports more illustrations. Does Amazon have to keep both the 2015 and 2026 versions of the book, depending on which device the user wants to use? How confused is the user when some of their devices have lovely illustrations and others don't? Should the user be able to choose to read the 2015 version of the file even on devices that support the 2026 version, because they hate the book's illustrations and find them distracting?
(That gets into a larger discussion: Amazon has always preferred updating books in place on Kindles as later editions are published. Archivists hate this, especially because the Kindle doesn't have a great "edition version number" to track when Amazon has delivered an update to a file. Consumers often prefer it, though, because typos slowly disappear and books subtly become better than the last time you read them, presuming the publisher isn't doing some drastic bait-and-switch and is focused only on "plussing" the book.)
The original AZW format was MOBI-based, not PDF-based. MOBI, originally from a company called Mobipocket (which Amazon eventually acquired), was built to be an ePub competitor and, like ePub, was an HTML-based solution, but in a somewhat different, proprietary, DRM-friendlier container format. (ePub is "just" a ZIP file, with the DRM sometimes applied inside the container rather than outside it.)
MOBI stopped keeping up with ePub standards and standard features, in part because Amazon acquired Mobipocket. KFX is essentially ePub with a new proprietary DRM container wrapped around the ZIP file that is ePub's container.
The 2013 boundary is also the "supports ePub files directly without a conversion process" boundary in Amazon's Kindle OS. It's not just useful for book-file authors to know; as a consumer it answers a quick question: "Can I buy standards-compliant, DRM-free ePubs from sites like DriveThruFiction and just send them to my Kindle with no other steps?"
No Kindle supports ePub natively. Amazon converts ePub to a supported format when you use the Send to Kindle email service. If you just load the book over USB it won't work.
Every Kindle that supports the new format (Kindle devices since 2013, upgraded to the latest OS) supports loading non-DRM ePubs directly over USB. There's no conversion anymore. (I've done this.)
Amazon's not going to openly advertise that this deprecation is also the line in the sand where "non-DRM ePub just works", but that's what has happened.
Of course, one of the sadder problems with the ePub ecosystem is that it uses the same file extension for DRMed and non-DRMed ePubs. At a glance it isn't easy to tell whether an ePub is DRM-free. Amazon does not support any of the existing ePub DRM schemes; their own KFX DRM is proprietary and doesn't play nice with ePub DRM "standards". You can't load DRMed ePubs over USB; those don't work. That sometimes still gives the impression that "Amazon does not support ePubs natively", but that's the nature of DRM and how much DRM hurts the entire ebook industry in every direction.
Are you sure about that? Even Amazon's own sales page states: "Kindle Format 8 (AZW3), Kindle (AZW), TXT, PDF, unprotected MOBI, PRC natively; PDF, DOCX, DOC, HTML, EPUB, TXT, RTF, JPEG, GIF, PNG, BMP through conversion; Audible audio format (AAX). Learn more about supported file types for personal documents." implying that ePub only works through conversion. They don't support DRMed ePubs through conversion either, so it's a bit odd they list ePub there instead of including it natively.
As I said, anecdotally I've already done it. Amazon only just enabled the PC "Send to Kindle" app to support ePub directly, instead of the old silly workaround of renaming the .epub to .kfx (with no other change). They've been very bad at keeping their list of formats up to date in their own documentation. Some of that is perhaps intentional obfuscation (to keep people using their store rather than going elsewhere for books), and some of it is because a lot of their Kindle documentation seems to sit in an "ain't broke, don't fix it" frozen state for years at a time. You'll also note that the text you found doesn't mention "Kindle Format 10 (KFX)" at all, and that TXT and PDF appear on both sides of that text, as both "natively" and "through conversion", which suggests the original text dates from the era when they were converted and they were added to the "natively" side later without anyone remembering to clean up the other side. (Both have native support today.)
TXT and PDF are on both sides because Amazon will convert them to the appropriate Amazon format if you use Send to Kindle. TXT has always been natively supported on Kindle. As far as Amazon is concerned, KFX is an internal format only for their use, so there's no need to list it. When Amazon officially added ePub conversion, KFX had already existed for over half a decade.
Your anecdote also seems to be the only instance of it working natively. Keep in mind Calibre will autoconvert for you.
Also, Kobo's ecosystem exhibits many of the same DRM problems that Amazon's does. The majority of book publishers still require DRM. You get DRM-locked copies regardless of whether you buy them from Amazon or Kobo (or the Google Play Store).
Some of this post just reads as though an "Android Authority" writer only just realized that less-forked Android-based e-readers exist alongside the Kindle, and they feel happier with the Android ecosystem (and its DRM) than Amazon's. To me it feels a bit like a choice between Purple Drazi and Green Drazi: many of the same problems, just a different ascot color.
Similarly, I like .NET's TimeProvider abstraction [1]. You pass a TimeProvider to your functions. At runtime you can provide the default TimeProvider.System. When testing FakeTimeProvider has a lot of handy tools to do deterministic testing.
One of the further benefits of .NET's TimeProvider is that it can also be provided to low level async methods like `await Task.Delay(time, timeProvider, cancellationToken)` which also increases the testability of general asynchronous code in a deterministic sandbox once you learn to pass TimeProvider to even low level calls that take an optional one.
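A minimal sketch of the pattern (the `TimeProvider`, `FakeTimeProvider`, and `Task.Delay` members are the real APIs; `TokenCache` is a made-up example class):

```csharp
// The class under test takes a TimeProvider instead of reading the system clock.
public sealed class TokenCache(TimeProvider time)
{
    private readonly DateTimeOffset _expires = time.GetUtcNow().AddHours(1);
    public bool IsExpired => time.GetUtcNow() >= _expires;
}

// Production code passes the real clock:
var cache = new TokenCache(TimeProvider.System);

// Tests use FakeTimeProvider (from Microsoft.Extensions.TimeProvider.Testing)
// to move time deterministically:
var fake = new FakeTimeProvider();
var testCache = new TokenCache(fake);
fake.Advance(TimeSpan.FromHours(2));   // no real waiting, no flaky sleeps
// testCache.IsExpired is now true
```

The same fake flows into `Task.Delay(delay, fake, token)`, so timer-driven async code completes the moment the test advances the clock.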
> One of the huge things that I love about Rust is the lack of a runtime framework. I don't need to figure out how to bundle / ship / install a framework at runtime.
Rust has a runtime, it's just tiny and auto-bundled (for now). Modern .NET's support for self-contained bundling has gotten pretty good. AOT is getting better too, and AOT-ready code (switch Reflection for Source Generators as much as possible, for instance) can do some very heavy treeshaking of the runtime.
Also, yeah, native embedding has gotten somewhat easier in recent years, depending on the style of API you want to present to native code. Furthermore, both Godot and Unity embed .NET/C# (differently) as options for game development. I certainly expect Typhon is primarily targeting embedding in Godot and/or (eventually) Unity (once it finishes switching to CoreCLR to support more of these features), but maybe also Stride, a fully-C# game engine.
D&D is a bit like Monopoly in that very few people play by the rules as written and instead most tables play by a semi-unique/regional subset of the rules and with a mixture of house rules and DM preferences. Especially people who have been playing for years, not only have they had more time to house rule and build DM opinions, but they also may have seen multiple versions of the rules over that time and interacted with a wider variety of other tables.
To some extent, this is weirdly a good thing: if you want strictly enforced rules, you may just want to play a videogame instead. D&D succeeds best as a social lubricant, a framework in which social gaming (roleplaying) can be "fun". Rarely is strictly following rules "fun", especially socially with friends; the rules in D&D are meant to be guideposts and tools, providing enough structure that people who want structure find comfort, and enough flexibility that "fun" isn't lost in the process.
Which is a long way of saying that you probably aren't going to learn the right lessons from a well-fuzzed computer spec of the rules. You'll probably learn more by asking the people you play with which rules they find important, asking them to explain things you feel you don't understand, and asking which chapters in which books to read to best improve your understanding for that group. At the end of the day, if the table seems too hard to play at, you might also just be playing with the wrong group, especially if you aren't having fun.
The third bullet is also presumably referring to C#'s ancient wider support for unsafe { } blocks for low level pointer math as well as the modern tools Span<T> and Memory<T> which are GC-safe low level memory management/access/pointer math tools in modern .NET. Span<T>/Memory<T> is a bit like a modest partial implementation of Rust's borrowing mechanics without changing a lot of how .NET's stack and heap work or compromising as much on .NET's bounds checking guarantees through an interesting dance of C# compiler smarts and .NET JIT smarts.
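A small sketch of what that looks like in practice: stack memory handled through a bounds-checked Span<T> rather than raw pointers.

```csharp
// stackalloc without an unsafe block: the Span<T> wrapper keeps bounds checking.
Span<int> buffer = stackalloc int[8];
for (int i = 0; i < buffer.Length; i++)
    buffer[i] = i * i;

// Slices are views, not copies; writes go to the same stack memory.
Span<int> tail = buffer.Slice(4);
tail[0] = -1;        // buffer[4] is now -1

// buffer[8] = 0;    // would throw IndexOutOfRangeException, not corrupt memory
```

The ref-struct rules the compiler enforces on Span<T> (no storing it on the heap, no capturing it in async code) are the "modest partial borrowing" mechanics mentioned above.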
The FFM API actually does cover a lot of the same ground, albeit with far worse ergonomics IMO. To wit,
- There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future
- There is no "fixed" statement and frankly nothing like it at all; native code is just not allowed to access managed memory, period; instead, you set up an arena to be shared between managed and native code
- MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here
- Setting up a MemoryLayout to describe a struct is just not as nice as slapping layout attributes on an actual struct
- Working with VarHandle is just way more verbose than working with pointers
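For readers who haven't used it, the Arena/MemorySegment shape described above looks roughly like this (Java 22+, `java.lang.foreign`; a minimal sketch, not a full FFI call):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class FfmSketch {
    // Allocates off-heap memory via an Arena and reads it back through a
    // MemorySegment, the FFM replacement for a raw pointer.
    static int squareAt(int index) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment seg = arena.allocate(ValueLayout.JAVA_INT, 4);
            for (int i = 0; i < 4; i++) {
                seg.setAtIndex(ValueLayout.JAVA_INT, i, i * i);
            }
            return seg.getAtIndex(ValueLayout.JAVA_INT, index);
        } // native memory is freed deterministically when the arena closes
    }

    public static void main(String[] args) {
        System.out.println(squareAt(3));
    }
}
```

Every access goes through a `ValueLayout` witness because the segment itself is untyped; that is the ergonomics gap versus a typed `Span<int>`.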
> - There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future
Which sounds funny because C# effectively has gone the other direction. .NET's Code Access Security (CAS) used to heavily penalize unsafe blocks (and unchecked blocks, another relative that C# has that I don't think has a direct Java equivalent), limiting how libraries could use such blocks without extra mandatory code signing and permissions, throwing all sorts of weird runtime exceptions in CAS environments with slightly wrong permissions. CAS is mostly gone today so most C# developers only ever really experience compiler warnings and warnings-as-errors when trying to use unsafe (and/or unchecked) blocks. More libraries can use it for low level things than used to. (But also fewer libraries need to now than used to, thanks to Memory<T>/Span<T>.)
> There is no "fixed" statement and frankly nothing like it at all; native code is just not allowed to access managed memory, period; instead, you set up an arena to be shared between managed and native code
Yeah, this seems to be an area that .NET has a lot of strengths in. Not just the fixed keyword, but also a direct API for GC pinning/unpinning/locking and many sorts of "Unsafe Marshalling" tools to provide direct access to pointers into managed memory for native code. (Named "Unsafe" in this case because they warrant careful consideration before using them, not because they rely on unsafe blocks of code.)
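The C# side of that contrast, sketched (`SomeNativeCall` is a hypothetical native function; both forms pin a managed array so the GC can't move it while native code holds the address):

```csharp
byte[] managed = new byte[256];

// Option 1: the fixed statement (requires an unsafe context); the array
// is pinned only for the duration of the block.
unsafe
{
    fixed (byte* p = managed)
    {
        SomeNativeCall(p, managed.Length);
    }
}

// Option 2: explicit GC pinning via GCHandle, no unsafe block needed.
var handle = GCHandle.Alloc(managed, GCHandleType.Pinned);
try
{
    IntPtr ptr = handle.AddrOfPinnedObject();
    SomeNativeCall(ptr, managed.Length);
}
finally
{
    handle.Free();  // unpin so the GC can move/collect the array again
}
```

The GCHandle route is what the "Unsafe Marshalling" tools build on under the hood.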
> MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here
It's the ease of use that really makes Memory<T>/Span<T> shine. It's a lot more generally useful throughout the .NET ecosystem (beyond just "foreign function interfaces") to the point where a large swathe of the BCL (Base Class Library; standard library) uses Span<T> in one fashion or another for easy performance improvements (especially with the C# compiler quietly preferring Span<T>/ReadOnlySpan<T> overloads of functions over almost any other data type, when available). Span<T> has been a "quiet performance revolution" under the hood of a lot of core libraries in .NET, especially just about anything involving string searching, parsing, or manipulation. Almost none of those gains have anything to do with calling into native code and many of those performance gains have also been achieved by eliminating native code (and the overhead of transitions to/from it) by moving performance-optimized algorithms that were easier to do unsafely in native code into "safe" C#.
It's really cool what has been going on with Span<T>. Some of the before/after micro-benchmarks from Span<T> migrations are wild.
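The string-parsing wins mentioned above mostly come from slicing instead of allocating substrings; a tiny sketch:

```csharp
// Allocation-heavy: Split allocates an array plus one string per field.
int SumCsvOld(string line) =>
    line.Split(',').Sum(int.Parse);

// Allocation-free: slices are views over the original string's memory,
// and int.Parse has a ReadOnlySpan<char> overload.
int SumCsvNew(ReadOnlySpan<char> line)
{
    int sum = 0;
    while (!line.IsEmpty)
    {
        int comma = line.IndexOf(',');
        ReadOnlySpan<char> field = comma < 0 ? line : line[..comma];
        sum += int.Parse(field);
        line = comma < 0 ? default : line[(comma + 1)..];
    }
    return sum;
}
```

Zero intermediate allocations means zero extra GC pressure, which is where most of the headline benchmark deltas come from.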
Related to the overall topic, it's said Span<T> is one of the reasons Unity wants to push faster to modern .NET, but Unity still has a ways to go to where it uses enough of the .NET coreclr memory model to take real advantage of it.
Yeah, coming to C# from Rust (in a project using both), I’ve been extremely impressed by the capabilities of Span<T> and friends.
I’m finding that a lot of code that would traditionally need to be implemented in C++ or Rust can now be implemented in C# at no or very little performance cost.
I’m still using Rust for certain areas where the C# type system is too limited, or where the borrow checker is a godsend, but the cooperation between these languages is really smooth.
A lot of C#'s reputation for not being viable on Linux came from the other direction, from FUD against Mono. There were a lot of great Linux apps that were Linux-first and/or Linux-only (often using Gtk# as the UI framework of choice), like Banshee and Tomboy, which even had brief bundling as out-of-the-box Gnome apps in a couple of Linux distros before anti-Mono backlash got them removed.
Also, yeah, today Linux support is officially maintained in modern .NET, and many corporate environments quietly use Linux servers and Linux Docker containers every day to run their (closed-source) projects. Linux support is one of the things that has saved companies money running .NET, so there's a lot of quiet loyalty to it just from a cost-cutting standpoint. But you don't hear much about that, given the closed-source/proprietary nature of those projects. That's why it's sometimes called "dark matter development" by "dark matter developers": a lot of it is out there, but it doesn't get noticed in HN comments and places like that; it all just quietly chugs along without seeming to impact the overall reputation of the platform.
Yes; however, as acknowledged by the .NET team themselves in several podcast interviews, this is mostly Microsoft shops adopting Linux and saving on Windows licenses.
They still have a big problem gaining .NET adoption among those that were educated in UNIX/Linux/macOS first.
Mandy Mantiquila and David Fowler have had such remarks, I can provide the sources if you feel so inclined.
One take on it is that yes, the single dot operator was an ancient mistake which is why so many programming language features are about making it smarter. Properties as mentioned in this article are an ancient way to fake the dot operator into a "field" but actually make method calls. Modern C# also picked up the question dot operator (?.) for safer null traversal and the exclamation dot operator (!. aka the "damnit operator" or "I think I know what I'm doing operator") for even less safe null traversal.
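A minimal illustration of the two operators (`Order`, `FindOrder`, and the variables are hypothetical):

```csharp
Order? order = FindOrder(id);   // hypothetical lookup that may return null

// Null-conditional: the whole chain evaluates to null instead of throwing
// if order (or Address) is null.
string? city = order?.Address?.City;

// Null-forgiving: tells the compiler "trust me, this isn't null here";
// it still throws NullReferenceException at runtime if you were wrong.
string city2 = order!.Address.City;
```

Notably `!.` only silences the compiler's nullability analysis; it changes nothing at runtime, hence the "I think I know what I'm doing" nickname.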
It can be an interesting discussion to follow: https://github.com/w3c/csswg-drafts/issues/9660