Tweaks are divided into essential and advanced. The essential ones shouldn't have any negative impact on the system. The docs also list the changes each tweak makes (so you can undo them): https://winutil.christitus.com/dev/.
I think the issue is a bit out of the author's control: tools like this get advertised by word of mouth as 'one simple trick', passed from geeks to a broad audience to fix whatever they see as wrong with Windows. People love convenience, and the caveat "oh, by the way, make sure you read/understand the docs first" rarely gets passed along. I think it's part of the shift toward computers being appliances, where ongoing maintenance isn't seen as necessary.
Maybe. The appeal of distros like these is lost on those who already know Linux well. If you are new to Linux, the difference between Aurora/Bazzite/Bluefin and base Fedora (Silverblue, Kinoite) can be night and day.
I feel like the ship has sailed for X11 or any fork of it, regardless of its technical merits (or the lack thereof). All the major DEs (KDE, GNOME, COSMIC) are either already Wayland-exclusive or soon will be. New compositors like niri, Hyprland, and sway are Wayland-only from the start (some, like niri, don't even ship Xwayland support, instead pointing users to an external project, xwayland-satellite, for running X11 apps).
And for almost all the somewhat famous traditional X11 DEs and tiling window managers, there is now a Wayland compositor mimicking them. Cinnamon and XFCE both have advanced Wayland sessions (a recent DistroWatch review of LMDE 7 praised Cinnamon's Wayland session as even better than KDE's). They might support X11 for now, but it will be increasingly hard to maintain both, especially if the majority of their users are on the Wayland session. This will lead to bit-rot in the X11 code paths, both here and upstream (GTK, Mutter, etc.).
There are obviously people unhappy with Wayland because it has issues with accessibility, automation, or other more niche use cases. As hard as it may be, I think the time would be better spent solving these issues in Wayland instead. If something can't be solved upstream, downstream protocols like the wlr-protocols are an option. In fact, even upstream, ext-namespace protocols only require 2 ACKs, which shouldn't be too hard to get, especially once more Wayland compositors join upstream development.
This starts to impact the entire stack, as toolkits, Mesa drivers, etc. are increasingly developed with Wayland in mind and are simply better tested there. IMO Wayback is probably a more fruitful investment than an X11 fork for those who want to run traditional X11 DEs.
You can (now?) create profiles from the account icon in the toolbar [1], and at least on my Firefox install, you can also do it from the hamburger menu.
I use Firefox via Flatpak and have had no issues so far accessing profile data (it lives in one of the folders under ~/.var/app/org.mozilla.firefox/.mozilla/firefox/; I keep a regular archive of the entire folder as a backup).
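If it helps, here is a minimal sketch of that kind of backup in Python (the path is the Flatpak default mentioned above; the archive name and destination are just examples):

```python
import shutil
from datetime import date
from pathlib import Path

# Profile data for the Flatpak build of Firefox lives here by default.
profile_dir = Path.home() / ".var/app/org.mozilla.firefox/.mozilla/firefox"

# Produce e.g. ~/firefox-profiles-2025-01-01.tar.gz; run it from cron,
# a systemd timer, or by hand whenever you want a snapshot.
archive_base = Path.home() / f"firefox-profiles-{date.today()}"
shutil.make_archive(str(archive_base), "gztar", root_dir=profile_dir)
```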
I'm on 144.0.2 on macOS and I do have it, under the hamburger menu in the upper right, near the top of the list. I never set up a profile on this machine before, so maybe that could be related?
I think this criticism is unfair, because the most common packages are covered by the core and extra repos, which are maintained by Arch Linux. The AUR is a collection of user-submitted build scripts, and using it has enough of a skill barrier that I expect most of its users to have explicit knowledge of the security dangers. I understand your concern, but it would be weird and out of scope for Arch to maintain or moderate the AUR when what Arch provides here amounts to little more than hosting. Instead, Arch rightly gives users the tools to moderate it themselves through the voting and comment features. Also, the most popular AUR packages are maintained by well-known maintainers.
The derivatives are obviously completely separate from Arch and thus are not the responsibility of Arch maintainers.
Disagree. The AUR isn't any trickier than using pacman most of the time. Install an AUR helper like yay or paru and you basically use it the same way as the default package manager.
It’s still the same problem, relying on the community and trusted popular plugin developers to maintain their own security effectively.
I understood GP's point to be that because Obsidian leaves a lot of functionality to plugins, most people are going to use unverified third-party plugins. On Arch, however, most packages are in core or extra, so most people won't need to go to the AUR. They are more likely to install the Flatpak or grab the AppImage for apps not in the repos, as that's much easier.
yay and paru (and other AUR helpers, AFAIK) are not in the repos. To install them, one needs to know how to use the AUR in the first place. If you are technical enough to do that, you should know about the security risks, since almost all tutorials for the AUR come with security warnings. It's also inconvenient enough that most people won't bother.
In Obsidian, plugins can seem central to the experience, so users might not think much of installing them; in Arch, the AUR is very much a non-essential component. At least that's how I understand it.
> It's also inconvenient enough that most people won't bother.
> in Arch, the AUR is very much a non-essential component.
While somewhat true, we are talking about a user who has already installed Arch on their machine. If a user didn't want to bother with installation details, they would've installed Ubuntu.
I feel like the only real alternative to kernel-level anticheat is some kind of measured and verified system image. The whole chain has to be signed and trusted, from the TPM through the kernel to userspace, so that if anyone tampers with the system, the game refuses to launch. I think something like this is already possible with systemd, or is at least the long-term goal, IIRC from Lennart's blog.
IME these systems can be quite fragile in practice. All it takes is one pre-verification exploit (like U-Boot parsing ext4 and the devicetree before verifying signatures) and your whole chain becomes useless.
And while the kernel is quite secure against attacks from userspace, the hardware interfaces are generally more trusted. This is not a problem on smartphones or embedded devices, where you can obfuscate everything on a small SoC, but the PC/x86_64 platform is much more flexible and open. I doubt there is a way to get reliable attestation on current desktop systems (many of which are assembled from independent parts) without complete buy-in from all the manufacturers.
Finally, with AI systems recently increasing in power, perhaps soon the nuclear option of camera + CV + keyboard/mouse will become practical.
I don't know much about TPM APIs, but I think (barring some hardware attestation scheme) a malicious kernel could intercept any game-TPM communication.
The verified bootloader would measure the kernel into the TPM (extend a hash of it into a PCR), so a malicious kernel would be noticeable. You could still exploit the kernel at runtime, of course.
Even a hacked kernel won't have access to the key material stored inside the TPM, though, so it wouldn't be able to fake the remote-attestation key used to sign any challenges.
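To make the mechanism concrete, here's a toy sketch of measured boot plus a quote over the measurements (pure Python, no real TPM involved; the hash-chain and the sign-PCR-plus-nonce structure are the real scheme, everything else is simplified, e.g. a real TPM signs quotes with an asymmetric attestation key rather than an HMAC):

```python
import hashlib
import hmac
import os

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style PCR extend: new_pcr = H(old_pcr || H(component)).
    # There is no operation that resets or rewrites the register, so
    # the final value commits to every stage of the boot chain in order.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Measured boot: each stage is measured before it runs.
pcr = b"\x00" * 32
for stage in (b"bootloader", b"kernel", b"initrd"):
    pcr = extend(pcr, stage)

# Remote attestation: the server sends a fresh nonce, and the TPM signs
# (PCR value, nonce) with a key that never leaves the chip.
DEVICE_SECRET = os.urandom(32)  # stand-in for the TPM-resident key
nonce = os.urandom(16)          # server-chosen, prevents replay
quote = hmac.new(DEVICE_SECRET, pcr + nonce, hashlib.sha256).digest()

# The verifier recomputes the expected PCR from known-good hashes and
# checks the quote: a tampered kernel yields a different PCR, and a
# hacked kernel can't forge the quote without the TPM-resident key.
expected = b"\x00" * 32
for stage in (b"bootloader", b"kernel", b"initrd"):
    expected = extend(expected, stage)
assert hmac.compare_digest(
    quote, hmac.new(DEVICE_SECRET, expected + nonce, hashlib.sha256).digest()
)
```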
Using TPMs this way requires Secure Boot, which only permits signed, non-exploited kernels to load signed operating system images and signed drivers. Revocation of exploitable software and hardware must be harsh and immediate. That means most dTPMs (which have been shown vulnerable to numerous side-channel attacks) are unusable, as are some fTPMs on CPUs running old microcode. Several graphics cards could no longer be used because their drivers contain unpatched vulnerabilities. Running tools with known-exploitable drivers, such as CPU-Z and some motherboard vendor software, would mean a permanent ban.
This approach can work well for remotely validating the state of devices in a highly secure government programme with strict asset management. For gaming, too many hardware and software configurations couldn't be validated, and you'd lose too much money. Unfortunately, unlike on consoles, hardware and software vendors just don't give a shit about security when there's a risk of mild user inconvenience, so their security features cannot be relied upon.
You can do what some games do and use the TPM as the system's hardware identifier, requiring cheaters to buy a whole new CPU/motherboard every time an account is banned. You can also take systems like these into account without relying on them entirely, combining them with kernel-level anticheat like BF6 does (it requires Secure Boot to be enabled and VBS to be available in order to launch, though there are already cheaters in that game).
Then cheaters will just patch the game's startup code so it skips the TPM check. If the game executable were somehow encrypted to the TPM, though, that might work.
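"Encrypted to the TPM" would roughly mean sealing: binding the decryption key to the measured boot state, so the bits only decrypt on an untampered system. A toy sketch of that idea (again pure Python; on real hardware the seed never leaves the TPM and the PCR policy is enforced by the chip, and the SHA-256 XOR keystream below is illustrative, not actual cryptography):

```python
import hashlib

def keystream(seed: bytes, pcr: bytes, length: int) -> bytes:
    # Derive a keystream bound to the boot measurements. A real TPM
    # would refuse to unseal at all if the PCR policy doesn't match;
    # here a wrong PCR simply yields a different (useless) keystream.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + pcr + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

SEED = b"tpm-resident-secret"  # never leaves the chip on real hardware
good_pcr = hashlib.sha256(b"known-good boot chain").digest()

# Seal the executable against the known-good measurements...
game_code = b"launch_game()"
sealed = bytes(a ^ b for a, b in
               zip(game_code, keystream(SEED, good_pcr, len(game_code))))

# ...and it only unseals on a machine whose measurements match.
tampered_pcr = hashlib.sha256(b"patched kernel boot chain").digest()
ok = bytes(a ^ b for a, b in
           zip(sealed, keystream(SEED, good_pcr, len(sealed))))
bad = bytes(a ^ b for a, b in
            zip(sealed, keystream(SEED, tampered_pcr, len(sealed))))
assert ok == game_code and bad != game_code
```

Even then, once the code is decrypted in memory on one legitimate machine, nothing stops a cheater from dumping and redistributing it, so sealing alone only raises the bar.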
None of the Samsungs I have owned so far had this feature and neither did my last Pixel.