When I saw "Perfect Software" in the title, I thought it referred to Perfect Software, the developer who produced the Perfect Writer word processor, Perfect Calc spreadsheet, and Perfect Filer database. These were a suite of office software products developed in the early '80s for CP/M and MS-DOS computers.
> I do know it's rare to open an existing .docx in LibreOffice and have it look right; who knows what it looks like in Word after I've edited and saved it.
This is not true except as hyperbole. Most .docx files open and edit quite well in LibreOffice Writer, and they look right.
However, you still have a point. There are always some cases where the compatibility is not good, and the only way to use those .docx files is in MS Word.
I submit most people had better luck than you and stayed silent; the fringe cases are the most vocal in their complaints. Of course, I'm not implying that your experience is untrue or less valid.
The reason is that it would finally motivate game developers to be more realistic in their minimum hardware requirements, enabling games to be playable on onboard GPUs.
Right now, most recent games (for example, many games built on Unreal Engine 5) are unplayable on onboard GPUs. Game and engine devs simply don't bother to optimize for the low end anymore, and so they end up gatekeeping games and excluding millions of devices, because recent games require a discrete GPU even at the lowest settings.
They're not targeting high-end PCs. They're targeting current-generation consoles, specifically the PS5 at 1080p. When you take those system requirements and put them on a PC—especially a PC with a 1440p or 2160p ultrawide—they turn out to mean pretty top-of-the-line stuff, particularly if, as a PC gamer, you expect to run the game at 90fps and not the 30-40 that is typical for consoles.
Without disagreeing with the broad strokes of your comment, it feels like 4K should be considered standard for consoles nowadays - a very usable 4K HDR TV can be had for $150-500.
That's a waste of image quality for most people. You have to sit very close to a 4k display to be able to perceive the full resolution. On PC you could be 2 feet from a huge gaming monitor, but an extremely small percentage of console players have the TV size and viewing distance where they would get much out of full 4k. Much better to spend the compute on higher framerate or higher detail settings.
I think higher detail is where most of it goes. A lower resolution, upscaled image of a detailed scene, at medium framerate reads to most normal people as "better" than a less-detailed scene rendered at native 4k, especially when it's in motion.
> You have to sit very close to a 4k display to be able to perceive the full resolution.
Wait, are you sure you don't have that backward? IIUC, you don't[] notice the difference between a 2K display and a 4K display until you get up to larger screen sizes (say 60+ inches give or take a dozen inches; I don't have exact numbers :) ) and with those the optimal viewing range is like 4-8 feet away (depending on the screen size).
Either that, or I'm missing something...
[]Generally, anyway. A 4K resolution should definitely be visible at 1-2 feet away as noticeably crisper, but only slightly.
My first 4K screen was a 24" computer display and let me tell you, the difference between that and a 24" 1080p display is night and day from 1-2 feet away. Those pixels were gloriously dense. Smoothest text rendering you've ever seen.
I didn't use it for gaming though, and I've "downgraded" resolution to 2x 1440p (and much higher refresh rates) since then. But more pixels is great if you can afford it.
It's one thing to say you don't need higher resolution and fewer pixels works fine, but all the people in the comments acting like you can't see the difference makes me wonder if they've ever seen a 4K TV before.
I still use 4K@24", unfortunately they're getting scarce. 4K@27" is where it's at now unfortunately. But I'll never go back to normal DPI. Every time at the office it bugs me how bad regular DPI is.
That's fair, but it makes me wonder if perhaps it's not the resolution that makes it crisper but other factors that come along with that price point, such as refresh rate, HDR, LCD layer quality, etc.
For example, I have two 1920x1080 monitors, but one is 160 Hz and the other is only 60 Hz, and the difference is night and day between them.
It’s best to think about this as angular resolution. Even a very small screen could take up an optimal amount of your field of view if held close. You get the max benefit from a 4k display when it is about 80% of the screen's diagonal size away from your eyes. So for a 28 inch monitor, that's a little less than 2 feet, a pretty typical desk setup.
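To make the arithmetic concrete, here's a rough sketch of my own (with assumed numbers: a 16:9 panel, 3840 horizontal pixels, and ~60 pixels per degree as the usual ballpark for 20/20 acuity) of what that rule of thumb works out to:

    import math

    # Rough sketch of the angular-resolution argument above.
    # Assumed: 16:9 panel, 3840 horizontal pixels, ~60 px/deg as the
    # usual ballpark for 20/20 visual acuity.
    def viewing_geometry(diagonal_in, horiz_px=3840, aspect=(16, 9)):
        w, h = aspect
        width_in = diagonal_in * w / math.hypot(w, h)    # physical panel width
        distance_in = 0.8 * diagonal_in                  # the "80% of diagonal" rule of thumb
        # horizontal field of view the panel covers at that distance, in degrees
        fov_deg = 2 * math.degrees(math.atan((width_in / 2) / distance_in))
        return distance_in, fov_deg, horiz_px / fov_deg  # distance, FOV, pixels per degree

    dist, fov, ppd = viewing_geometry(28)
    print(f'28" 4K: ~{dist:.1f} in away ({dist/12:.1f} ft), ~{fov:.0f} deg FOV, ~{ppd:.0f} px/deg')
    # -> ~22.4 in (just under 2 ft), ~57 deg, ~67 px/deg: right around the acuity limit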
Assuming you can render natively at high FPS, 4k makes a bigger difference on rendered images than live action because it essentially brute forces antialiasing.
I think you're underestimating the computing power required to render (natively) at 4K. Some modern games can't even natively render at 1440p on high-end PCs.
1440p and 2160p is a total waste of pixels, when 1080p is already at the level of human visual acuity. You can argue that 1440p is a genuine (slight) improvement for super crisp text, but not for a game. HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.
> 1440p and 2160p is a total waste of pixels, when 1080p is already at the level of human visual acuity.
Wow, what a load of bullshit. I bet you also think the human eye can't see more than 30 fps?
If you're sitting 15+ feet away from your screen, yeah, you can't tell the difference. But for most people, with their eyes only being 2-3 feet away from their monitor, the difference is absolutely noticeable.
> HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.
HDR is an absolute game-changer, for sure. Ray-tracing is as well, especially once you learn to notice the artifacts created by shortcuts required to get reflections in raster-based rendering. It's like bad kerning. Something you never noticed before will suddenly stick out like a sore thumb and will bother the hell out of you.
Text rendering alone makes it worthwhile. 1080p densities are not high enough to render text accurately without artefacts. If you double pixel density, then it becomes (mostly) possible to render text weight accurately, and things like "rhythm" and "density", which real typographers concerned themselves with, start to become apparent.
You're probably looking up close at a small portion of the screen - you'll always be able to "see the pixels" in that situation. If you sit far back enough to keep the whole of the screen comfortably in your visual field, the argument applies.
You are absolutely wrong on this subject. Importantly, what matters is PPI, not resolution. 1080P would look like crap in a movie theater or on a 55" TV, for example, while it'll look amazing on a 7" monitor.
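For a sense of scale, here's a quick back-of-the-envelope PPI calculation (the screen sizes are arbitrary examples of mine):

    import math

    # PPI = pixels along the diagonal divided by the diagonal in inches
    def ppi(h_px, v_px, diagonal_in):
        return math.hypot(h_px, v_px) / diagonal_in

    for size in (7, 24, 55, 300):  # phone-ish, desktop monitor, living-room TV, cinema screen
        print(f'1080p at {size}": {ppi(1920, 1080, size):.0f} PPI')
    # -> roughly 315, 92, 40 and 7 PPI: same resolution, wildly different sharpness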
It's pretty consistently been shown that this just can't provide low-enough latency for gamers to be comfortable with it. Every attempt at providing this experience has failed. There are few games where this can even theoretically be viable.
The economics of it also have issues, as now you have to run a bunch more datacenters full of GPUs, and with an inconsistent usage curve leaving a bunch of them being left idle at any given time. You'd have to charge a subscription to justify that, which the market would not accept.
I am pretty sure that the current demand for GPUs, driven by the AI craze, can pretty much absorb the idle-time issue at major datacenters.
Not that it's good or bad, though, but we could probably have something more akin to spot instances of GPUs being offered for gaming purposes.
I do see a lot of companies offering GPU access billed per second with instant shutdown/restart, I suppose, but overall I agree.
My brother recently came for the holidays and I played PS5 for the first time, streamed to his Mac from his place 70-100 km away. Honestly, the biggest latency factor was the Wi-Fi connection (which was tethered through his phone's carrier). Overall it was a good enough experience, but I only played Mortal Kombat for a few minutes :)
Current datacenter GPUs are optimized for LLM compute, not for real-time rendering. The economics for running such beefy GPUs just for game streaming won't add up.
And I meant that the PS5 can be running far away while you connect to it from your laptop; you can even pair a controller with the laptop (as my brother did, on a Mac) and play with it, with the input passed through to the PS5 itself.
All in all, I found it really cool for what it's worth.
> Game and engine devs simply don't bother anymore to optimize for the low end
All games carefully consider the total addressable market. You can build a low end game that runs great on total ass garbage onboard GPU. Suffice to say these gamers are not an audience that spend a lot of money on games.
It’s totally fine and good to build premium content that requires premium hardware.
It’s also good to run on low-end hardware to increase the TAM. But there are limits. Building a modern game and targeting a 486 is a wee bit silly.
If Nvidia gamer GPUs disappear and devs were forced to build games that are capable of running on shit ass hardware the net benefit to gamers would be very minimal.
What would actually benefit gamers is making good hardware available at an affordable price!
Everything about your comment screams “tall poppy syndrome”. </rant>
I don't think it's insane. In that hypothetical case, it would be a slightly painful experience for some people that the top end is a bit curtailed for a few years while game developers learn to target other cards, hopefully in some more portable way. But feeling hard done by because your graphics hardware is stuck at 2025 levels for a bit is not that much of a hardship, really, is it? In fact, if more time is spent optimising for non-premium cards, perhaps the premium card that you already have will work better than the next upgrade would have.
It's not inconceivable that the overall result is a better computing ecosystem in the long run, especially in the open source space, where Nvidia has long been problematic. Or maybe it'll be a multi-decade gaming winter, but unless gamers stop being willing to throw large amounts of money chasing the top end, someone will want that money even if Nvidia doesn't.
There is a full, actual order of magnitude of difference between a modern discrete GPU and a high-end card, and almost two orders of magnitude (100x) compared to an older (~2019) integrated GPU.
> In fact, if more time is spent optimising for non-premium cards, perhaps the premium card that you already have will work better than the next upgrade would have.
Nah. The stone doesn’t have nearly that much blood to squeeze. And optimizations for ultralow-end may or may not have any benefit to high end. This isn’t like optimizing CPU instruction count that benefits everyone.
The swirly background (especially on the main screen), shiny card effects, and the CRT distortion effect would be genuinely difficult to implement on a system from that era. Balatro does all three with a couple hundred lines of GLSL shaders.
(The third would, of course, be redundant if you were actually developing for a period 486. But I digress.)
I always chuckle when I see an entitled online rant from a gamer. Nothing against them, it's just humorous. In this one, we have hard-nosed defense of free market principles in the first part worthy of Reagan himself, followed by a Marxist appeal for someone (who?) to "make hardware available at an affordable price!".
True. Optimization is completely dead. Long gone are the days of a game being amazing because the devs managed to pull crazy graphics for the current hardware.
Nowadays a game is only poorly optimized if it's literally unplayable or laggy, and you're forced to constantly upgrade your hardware with no discernible performance gain otherwise.
Crazy take. In the late 90s/early 00s your GPU could be obsolete 9 months after buying it. The "optimisation" you talk about was that the CPU in the PS4 generation was so weak, and tech was moving so fast, that any PC bought from 2015 onwards could easily brute-force its way past anything that had been built for that generation.
No hardware T&L meant everything was culled, clipped, transformed, lit and perspective-divided per vertex on the CPU.
Then you have the brute-force approach. The Voodoo 1/2/3 doesn't employ any obvious speedup tricks in its pipeline. Every single triangle pushed into it is going to get textured (bilinear filtering, per-pixel divide), shaded (lighting, blending, fog applied), and only in the last step does the card check the Z-buffer to decide between writing all this computed data to the framebuffer or simply throwing it away.
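To illustrate the cost of that ordering, here's a toy sketch of my own (made-up resolution and overdraw numbers, not a model of the actual Voodoo silicon) comparing a "test Z last" pipeline with an early-Z one that rejects occluded fragments before the expensive work:

    import random

    random.seed(0)
    W, H, OVERDRAW = 320, 240, 4           # assumed resolution and average overdraw

    def render(early_z):
        zbuf = {}                          # (x, y) -> nearest depth seen so far
        shaded = 0                         # fragments that paid for texture/lighting/fog
        for _ in range(W * H * OVERDRAW):  # fragments arrive in arbitrary order
            x, y, z = random.randrange(W), random.randrange(H), random.random()
            visible = z < zbuf.get((x, y), 1.0)
            if early_z and not visible:
                continue                   # reject before texturing/shading/fog
            shaded += 1                    # bilinear fetch, lighting, blend, fog happen here
            if visible:                    # the brute-force pipeline only checks Z now
                zbuf[(x, y)] = z
        return shaded

    print("fragments shaded, late Z :", render(early_z=False))
    print("fragments shaded, early Z:", render(early_z=True))
    # late Z shades every fragment; early Z skips the ones that would be thrown away anyway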
If you think that the programmers are unmotivated (lazy) or incompetent, you're wrong on both counts.
The amount of care and talent is unmatched in my professional career, and they are often working from incomplete (and changing) specifications towards a fixed deadline across multiple hardware targets.
The issue is that games have such high expectations that they didn’t have before.
There are very few “yearly titles” that allow you to nail down the software in a nicer way over time; it's always a mad dash to get it done, on a huge 1000+ person project that has to be permanently playable from MAIN and where unit/integration tests would be completely useless the minute they were built.
The industry will end, but not because of “lazy devs”; it's the ballooned expectations, stagnant revenue opportunity, increased team sizes and a pathological contingent of people using games as a (bad) political vehicle without regard for the fact that they will be laid off if they can't eventually generate revenue.
—-
Finally, back in the early days of games, if the game didn't work, you assumed you needed better hardware, and you would put in the work fixing drivers and settings or even upgrading to something that worked. Now if it doesn't work on something from before COVID, the consensus is that it is not optimised enough. I'm not casting aspersions at the mindset, but it's a different mentality.
Most gamers don't have the faintest clue regarding how much work and effort a game requires these days to meet even the minimum expectations they have.
That's bullshit. I don't care about graphics, I play lots of indie games, some of them are made by a single person. There are free game engines, so basically all one needs for a successful game is just a good idea for the game.
And a friend of mine still mostly plays the goddamn Ultima Online, the game that was released 28 years ago.
And if a new game came out today that looked and played the same as Ultima Online… what would you (and the rest of gamers) think about it?
Your expectations of that game are set appropriately. Same with a lot of indie games: the expectation can be that it's in early access for a decade+. You would never accept that from, say, Ubisoft.
Depends on what that game brings; I might like it a lot. Again, my friends and I all love indie games, most of them with pixel graphics or just low-poly. The market for such games is big enough. Just look up the sales estimates for some popular indie games.
You are a minor share of the overall market, and the sad truth is that most indie games sell a pitiful handful of copies and can't sustain their creators financially. And even indie games have to meet certain standards, and given that they are developed mostly by single devs, meeting even those "minimal" standards takes years for many of them.
> The amount of care and talent is unmatched in my professional career, and they are often working from incomplete (and changing) specifications towards a fixed deadline across multiple hardware targets.
I fully agree, and I really admire people working in the industry. When I see great games which are unplayable on the low end because of stupidly high minimum hardware requirements, I understand game devs are simply responding to internal trends within the industry, and especially going for a practical outcome by using an established game engine (such as Unreal 5).
But at some point I hope this GPU crunch forces this same industry to allocate time and resources, either at the engine or at the game level, to truly optimize for a realistic low end.
The lead time for a new engine is about 7 years (on the low end).
I don’t think any company that has given up their internal engine could invest 7 years of effort without even having revenue from a game to show for it.
So the industry will likely rally around Unreal and Unity, and I think a handful of the major players will release their engines on license, but Unreal will eat them alive due to its investments in dev UX (which is much, much higher than in proprietary game engines IME). Otherwise the only engines that can really innovate are gated behind AAA publishers and their push for revenue (against investment for any other purpose).
All this to say, I'm sorry to disappoint you: it's very unlikely.
Games will have to get smaller and have better revenues.
Optimisation is almost universally about tradeoffs.
If you are a general engine, you can’t easily make those tradeoffs, and worse you have to build guardrails and tooling for many cases, slowing things down further.
The best we can hope for is even better profiling tools from Epic, but they've been doing that for the last couple of years since Borderlands.
Obsolete in that you'd probably not BUY it if building new, and in that you'd probably be able to get a noticeably better one, but even then games were made to run on a wide range of hardware.
For a while there you did have noticeable gameplay differences: those with GLQuake could play better, that kind of thing.
The GP was talking about Unreal Engine 5 as if that engine doesn't optimize for the low end. That's a wild take: I've been playing Arc Raiders with a group of friends in the past month, and one of them hadn't upgraded their PC in 10 years, and it still ran fine (20+ fps) on their machine. When we grew up it would have been absolutely unbelievable that a game would run on a 10 year old machine, let alone at bearable FPS. And the game is even on an off-the-shelf game engine, they possibly don't even employ game engine experts at Embark Studios.
>And the game is even on an off-the-shelf game engine, they possibly don't even employ game engine experts at Embark Studios.
Perhaps, but they also turned off Nanite, Lumen and virtual shadow maps. I'm not a UE5 hater but using its main features does currently come at a cost. I think these issues will eventually be fixed in newer versions and with better hardware, and at that point Nanite and VSM will become a no-brainer as they do solve real problems in game development.
A lot of people who were good at optimizing games have aged out and/or 'got theirs' and retired early, or just got out of the demanding job and secured a better-paying job in a sector with more economic upside and less churn. On the other side there's an unending, almost exponential group of newcomers into the industry who believe the hype given by engine makers, who hide the true cost of optimal game making and sell on 'ease'.
> Long gone are the days of a game being amazing because the devs managed to pull crazy graphics for the current hardware.
DOOM and Battlefield 6 are praised for being surprisingly well optimized for the graphics they offer, and some people bought these games for that reason alone. But I guess in the good old days good optimization would be the norm, not the exception.
That's not how it ACTUALLY worked. How it actually worked is that top video card manufacturers would make multi-million dollar bids to the devs of the three or four AAA games that were predicted to be best-sellers, in order to get the devs to optimize their rendering for whatever this year's top video adapter was going to be. And nobody really cared if it didn't run on your crappy old last-year's card, because everybody understood that the vast majority of games revenue comes from people who have just bought expensive new systems. (Inside experience, I lived it.)
I don't think it has ever been the case that this year's AAA games play well on last year's video cards.
This is such a funny take. I remember that all throughout the 90s and 00s (and maybe even the 10s, I wasn't playing much by then) you often could play new games on acceptable settings with a 1-2 year old high-spec machine; in fact, to play at the highest settings you often needed ridiculously spec'ed machines. Now you can play the biggest titles (CP77, BG3...) on 5-10 year old hardware (not even top spec), with no or minimal performance/quality impact. I mean, I've been playing BG3 and CP77 on the highest settings on a PC that I bought used two years ago for $600 (BG3 I was playing when it had just come out).
One wonders what would happen in a SHtF situation or someone stubs their toe on the demolition charges switch at TSMC and all the TwinScans get minced.
Would there be a huge drive towards debloating software to run again on random old computers people find in cupboards?
Consoles and their install base set the target performance envelope. If your machine can't keep up with a 5 year old console then you should lower expectations.
And like, when have onboard GPUs ever been good? The fact that they're even feasible these days should be praised, but you're imagining some past where devs didn't leave them behind.
> The reason is that it would finally motivate game developers to be more realistic in their minimum hardware requirements, enabling games to be playable on onboard GPUs.
They'll just move to remote rendering you'll have to subscribe to. Computers will stagnate as they are, and all new improvements will be reserved for the cloud providers. All hail our gracious overlords "donating" their compute time to the unwashed masses.
Hopefully AMD and Intel would still try. But I fear they'd probably follow Nvidia's lead.
The lag is high. Google was doing this with Stadia. A huge amount of money comes from online multiplayer games and almost all of them require minimal latency to play well. So I doubt EA, Microsoft, or Activision are going to effectively kill those cash cows.
Game streaming works well for puzzle, story-esque games where latency isn't an issue.
Hinging your impression of the domain on what Google (notoriously not really a player in the gaming world) tried and failed will not exactly give you the most accurate picture. You might as well hinge your impression of how successful a game engine can be on Amazon's attempts at it.
GeForce NOW and Xbox Cloud are much more sensible projects to look at/evaluate than Stadia.
It doesn't matter who does it. To stream you need to send the player input across the net, process, render and then send that back to the client. There is no way to eliminate that input lag.
Any game that requires high APM (Actions Per Minute) will be horrible to play via streaming.
I feel as if I shouldn't really need to explain this on this site, because it should be blindingly obvious that this will always be an issue with any streamed game, for the same reason you have a several-second lag between what's happening at a live sports event and what you see on the screen.
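A back-of-the-envelope latency budget makes the point; every number below is an illustrative assumption of mine, not a measurement of any particular service:

    # Rough streamed-vs-local latency budget; all figures are assumed for illustration.
    streamed_ms = {
        "input capture + uplink":   15,   # controller -> client -> server
        "server game sim + render": 17,   # ~60 fps frame time
        "video encode":              5,
        "downlink":                 15,
        "client decode + display":  10,
    }
    local_ms = {
        "input capture":      2,
        "game sim + render": 17,
        "display":            8,
    }

    print("streamed:", sum(streamed_ms.values()), "ms")  # ~62 ms
    print("local:   ", sum(local_ms.values()), "ms")     # ~27 ms
    # The two network legs can't be optimized below the round trip to the datacenter,
    # which is why high-APM games feel the difference the most.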
GeForce NOW is supposedly decent for a lot of games (depending on connection and distance to server), although if Nvidia totally left gaming they'd probably drop the service too.
I have a 9 year old gaming PC with an RX480 and it is only now starting to not be able to run certain games at all (recent ones that require ray tracing). It can play Cyberpunk and Starfield on low settings very acceptably.
Gaming performance is so much more than hardware specs. Thinking that game devs optimizing their games on their own could fundamentally change the gaming experience is delusional.
And anyone who knows just a tiny bit of history of nvidia would know how much investment they have put into gaming and the technology they pioneered.
I fail to see how my comment could be construed as being pro-monopoly.
There are a huge number of onboard GPUs being left out of even the minimum requirements for most recent games. I'm just saying that maybe this situation could lead game devs to finally consider such devices as legitimate targets and thus make their games playable on them. This is by no means a pro-monopolistic take.
> I'm sure ODT works well for many personal use cases, but can guarantee it will never see adoption in the legal industry. Microsoft Word is the only viable option for lawyers.
I'm a lawyer, though I'm practicing in a wholly different legal system (Romanic civil law) and another country. Why would you say that?
No issues against .docx and Word per se, but I hate that stupid ribbon with undying hatred. Thus I use LibreOffice as much as I can, while maintaining a licensed Office 365 setup under dual boot with Windows for cases when I have no other choice.
I don't think it's too surprising that another country's legal profession would have a different culture than that of the US. When OP says that ODT will never see adoption in the legal industry, I think it's fair to say there was an implied "in the US" there.
Universal (country-wide) professional adoption, which implicitly requires equivalent display of features that are not often used by the general public.
For a similar reason, faxes will never die out in the US, because the legal industry requires them.
> Universal (country-wide) professional adoption, which implicitly requires equivalent display of features that are not often used by the general public.
That was my hunch, too. Okay, but which features are only in Word and not in other products?
fvwm is still one of the default graphical environments in Slackware (even in -current), and fvwm95 came packaged for some time, too. Now fvwm95 is no longer part of the basic Slackware distribution, but there's a SlackBuild for it.
I like the Win95 aesthetic, but I like a close relative, KDE1, better; and I have configured my Plasma 6 setup along these lines.
Screenshot: https://imgur.com/a/Q9Gfs08
Back to FVWM: Slackware also has a SlackBuild for the next-gen fvwm3. FVWM's configurability can be amazing, although it can be a challenge.
To me, the big aesthetic of early Qt/KDE1 is "Obvious Motif ripoff". Aside from the Win95/Warp style titlebars, if you don't have the big thick bevels and the distinct scroll bars, it's not quite right.
It really galls me that they removed the Motif style in Qt6, since I target that as my default look and feel. It gives a nice "This is expensive professional software with a codebase tracing back to the Reagan administration" vibe.
There are themes that come close in various attempts -- "Commonality" for Qt6/Kvantum, and some of the assets from NsCDE for GTK -- but it feels like a pitched battle against design teams that desperately want to mimic whatever Apple is doing this week.
> To me, the big aesthetic of early Qt/KDE1 is "Obvious Motif ripoff". Aside from the Win95/Warp style titlebars, if you don't have the big thick bevels and the distinct scroll bars, it's not quite right.
Is this really a Motif ripoff? For me it's much more like Win95 with a better titlebar in the window decoration. To each their own, but it seems quite right to me.
I can recall trying Beta 4 and probably 1.0, but at the time it felt like a weird situation. It wasn't quite everything you needed, and a lot of the apps were still obviously sort of immature. The HTML-driven file manager was interesting (ISTR OS/2 offered a similar way to customize things on a per-directory level) but it seemed like a lot of resources when a 486/80 with an obscene 32Mb of memory was my Linux machine.
Of course you thought that was a Motif ripoff. The screenshot you reference is from an alpha pre-release version. KDE1 had neither the widget style nor the window decorations of that screenshot.
To each his own. I had a phase of emulating the classic W2K look, but like W95/98/ME, all of this feels too dark and dirty-greyish to me now. Back in the times of late KDE3 I switched to https://store.kde.org/p/1100735/ but with different, more colorful (soft pastel) icons whose name I can't remember anymore, and later to https://en.wikipedia.org/wiki/Bluecurve , though not the ugly default depicted there. It could be customized into a mix of that Reinhardt style and Microsoft's 'dot.net' style, still using the forgotten soft pastel icons, which would then be applied to apps for other toolkits as well. Very consistent. I like consistency.
Meanwhile, Plasma's Breeze (light) does all of that for me, again. One could maybe depart from the Breeze window decorations and exchange them for 'Klassy'; they can 'fit', and there is much to change and choose from. I'm trying them out at the moment. The thing with Breeze is that many other apps have presets for it as well, like LibreOffice, which leads to even more visual consistency :-)
My desktop is blank, a mix between soft pastel yellow and 'manila' paper. No icons, widgets, clocks, weather. I don't care about CPU or Network speed there. I wouldn't see them anyway, since I tend to have windows maximized. If something would be wrong Kget or Ktorrent would make themselves known, which they won't ;-> CPU speeds suffice, even if mostly clocked down to 800Mhz :-)
My 'taskbar' is at the top, only 24px high. I switch between 3 by 3 virtual desktops by either using that too small (for that arrangement, it should grow a little when hovering the pointer over it) widget in the taskbar, or by jamming the mousepointer into one of the four corners, which makes that 'expose'-like thing appear.
I concur with your liking of Breeze. It's an amazing theme, really. My only objection is that the default light color scheme is too light. Fortunately there are good color schemes such as Steel or StormClouds that solve that problem: a light theme that isn't too "white".
As for the monitors, I have them because sometimes I have issues with CPU speed (due to a hardware quirk of my laptop) and the network connection is kinda iffy at the time.
I tried both of them out. They are not for me, ATM. Maybe the "too light" feeling for you results from different hardware. I have screens where I can adjust brightness and contrast separately, running at a color temperature of 5000K (warm).
That's different from what most laptops do, or fiddling with xgamma, or one of its frontends, using 'redshift', etc.
Even in the brightest sunshine I don't go over 55% brightness; otherwise, during the day, between 38% and 44%, and at night just 20%, with contrast always two points below these settings, or any I may use in between.
Despite all this, pictures look just right, even if I visit sites for calibration.
It uses a semi-flat design with raised 3D motifs to help give depth to icons.
I remember Noia; it was too 'shiny/glassy' and 'playful' for me. It looked different and funny for a while, but was unusable for daily driving. I prefer something more 'serious', unobtrusive, not distracting.
This resonates with me. LLM output in Spanish also has the tendency to "write like me", as in the linked article.
In that regard, I have an anecdote not from me, but from a student of mine.
One of the hats I wear is that of a seminary professor. I had a student who is now a young pastor, a very bright dude who is well read and is an articulate writer.
"It is a truth universally acknowledged" (with apologies to Jane Austen) that theological polemics can sometimes be ugly. Well, I don't have time for that, but my student had the impetus (and naiveté) of youth, and he stepped into several ones during these years. He made Facebook posts which were authentic essays, well argued, with balanced prose which got better as the years passed by, and treating opponents graciously while firmly standing his own ground. He did so while he was a seminary student, and also after graduation. He would argue a point very well.
Fast forward to 2025. The guy still has time for some Internet theological flamewars. In the latest one, he made (as usual) a well-argued, long-form Facebook post, defending his viewpoint on some theological issue against people who hold opposite beliefs on that particular question. One of those opponents, a particularly nasty fellow, retorted with something like "you are cheating, you're just pasting some ChatGPT answer!", and pasted a screenshot of some AI detection tool that said my student's writing was something like "70% AI Positive". Some other people pointed out that the opponent's own writing also seemed like AI, and this opponent admitted that he used AI to "enrich" some of his writing.
And this is infuriating. If that particular opponent had bothered to check my student's profile, he would have seen that same kind of "AI writing" going back to at least 2018, when ChatGPT and the like were just a speck in Sam Altman's eye. That's just the way my student writes, and he writes this way because he actually reads books; he's a bona fide theology nerd. Any resemblance of his writing to LLM output is coincidental.
In my particular case, this resonated with me because, as I said, I also tend to write in a way that resembles LLM output, with certain ways of structuring paragraphs, liberal use of ordered and unordered lists, etc. Again, this is infuriating. First, because people tend to assume one is unable to write at a certain level without cheating with AI; and second, because now everybody and their cousin can mimic something that took many of us years to master, and they believe they no longer need to do the hard work of learning to express themselves in an even remotely articulate way. Oh well, welcome to this brave new world...
I used it sometimes when I needed a live Linux, and I liked it, because it used to be based on Slackware and it had KDE as its desktop. Now they added Debian as another base, and that's great. Sadly, they dropped KDE.
There's also Porteus, also a Slackware derivative offering KDE Plasma -- https://www.porteus.org
Slax was and still is a great live distribution. The fact that at least one of its flavors is based on Slackware shows that the parent distro (Slackware) isn't that hard to use, something that few people believe. Slackware is in fact very simple in comparison to other distros.
I support the change, though the rationale used for it seems to me to be nonsense.
Times New Roman might not be the world's most beautiful font, but at least it is a little less atrocious than Calibri (which is awful). So, whatever the rationale invoked, I welcome the change.
Sometimes, when I have to work on documents which will be shared with many users, I use Times New Roman as serif, and Arial as a sans serif. Both choices are (admittedly in my very subjective opinion) better than Calibri, and it's almost guaranteed that every PC will have these fonts available, or at least exact metric equivalents of them.