It's mainly the lack of emergency services support in my country. Every time I called, the operator couldn't see where I was via GPS; the first question asked was what state I'm in.
I used GrapheneOS for about half a year as my primary phone OS. It does not scramble your GPS in any way (it has the same coarse/fine-grained location permissions as regular Android), but it does allow you to block a lot more app permissions. It's more likely that they haven't set the correct permission(s) for that information to bubble through to emergency services.
I would also be surprised if there weren't cell phone system-based fallbacks for emergency services. The carriers have a good idea of where you're at based on the towers you're connected to. There are plenty of situations where GPS doesn't work.
From what I’ve read, Google’s new process sounds much like Apple’s app notarization process. Apple is still in complete control; the user just isn’t required to go through the App Store.
Social accountability, for one. Never underestimate shame as a motivating factor for humans. I'm generally in favor of protecting anonymity, so I'm not fully in agreement that this should be a hard requirement for a software project, but I can at least see the appeal of the idea.
Web browsers are also a rare class of software with high complexity and also high privilege (considering the data that typically passes through them), so perhaps higher scrutiny of this class of software is warranted.
Imagine that you have a choice between two pieces of software. The developer of one of those pieces of software is Linus Torvalds. The developer of the other piece of software is Mikhail Vasiliev.
The one whose source code is put online and compiles on my computer.
And if both do that, have the same features, work the same, etc., with no other difference, then I'll take the smaller one, because the larger one most probably includes something I don't want, even if it's just bloat or inefficient code.
For me, he lost his credibility when, with childish "historical" arguments, he chose to ban Russian developers from the Linux kernel.
Can we still trust him not to insert a backdoor into the kernel "to fight the Russians"?
An anonymous individual might also have multiple anonymous accounts, for example. Without that anonymity, other projects might ban their contributions, and users might not use their software.
Sure, it's pretty simple. I had WG provided by a Deciso OPNsense router, with an automatic VPN profile on most user devices. All of my infrastructure also had PKI. (I moved recently and have yet to set it up again.)
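For a rough idea of what one of those device profiles looks like, here is a minimal WireGuard sketch; the keys, names, and addresses are placeholders, not my actual setup:

    [Interface]
    # This device's tunnel address and key (placeholder values)
    PrivateKey = <device-private-key>
    Address = 10.0.8.2/32
    DNS = 10.0.8.1

    [Peer]
    # The OPNsense router's WireGuard endpoint
    PublicKey = <router-public-key>
    Endpoint = vpn.example.net:51820
    # Route the tunnel subnet and the home LAN through the VPN
    AllowedIPs = 10.0.8.0/24, 192.168.1.0/24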
I just learned about this. Looks like this system is only available to banks, and they would have no incentive to break the old system by being quick to implement it. If the Federal Reserve provided this directly to individuals, then we'd have a lot of new payment apps that bypass the networks in the middle (that would be a paradigm shift).
The point about Pix is that the Central Bank made it mandatory for all banks with over 500k clients, if I recall correctly (smaller banks already wanted it, of course).
Besides the branding (which, tbh, is a big deal), it also set guideline requirements on how the banks needed to implement it, exactly so they couldn't hide it or make it worse to use.
This was a whole controversy in Brazil. There was a conspiracy theory that the banks wanted to sabotage the Pix rollout because they would lose out on the fees for using the old transaction systems. In any case, there is a lot more money circulating through the banks now.
It is brand new, released in 2023. It is a backend protocol; it requires every bank to implement it. And there are some big changes compared to ACH, like having to present payment requests to the user, or how reversals are handled.
You need a publicly routable address in the mix. You would need a way of knowing that address.
I have that same feeling with the self hosting I do. To alleviate the small amount of stress it would bring me, I rent a VPS that’s public on the internet. I configure a persistent keepalive on the client I run locally to keep a connection to the server open; no port forwarding needed.
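In WireGuard terms (assuming that's the tunnel you use; the names and addresses below are placeholders), the relevant bit on the local client is roughly:

    [Peer]
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.10.0.0/24
    # Send a packet every 25s so the outbound NAT mapping stays open
    # and the VPS can always reach back in; no inbound port needed at home
    PersistentKeepalive = 25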
data-test-id attributes and other attributes are hardcoded and need to be known by the automator at run time. MCP-B clients request what they can call at injection time, and the server responds with standard MCP tools (functions LLMs can call, with context for how to call them).
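For a rough picture of what gets handed back, a standard MCP tool descriptor looks something like this (a generic sketch with a made-up tool, not MCP-B's actual tool list):

    # Generic sketch of an MCP tool descriptor; the tool name and schema are invented.
    # The client discovers this at runtime instead of hardcoding selectors.
    add_to_cart_tool = {
        "name": "add_to_cart",
        "description": "Add a product to the current shopping cart",
        "inputSchema": {
            "type": "object",
            "properties": {
                "productId": {"type": "string"},
                "quantity": {"type": "integer", "minimum": 1},
            },
            "required": ["productId"],
        },
    }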
I’m not sure what electricity costs where you live, but my calculations tell me I’d have to run an intel n4000 for 5+ years before I break even compared to buying a CanaKit rpi 5.
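The arithmetic is just the upfront price difference divided by the yearly electricity difference; with placeholder numbers (illustrative assumptions, not my actual figures):

    # Break-even arithmetic with placeholder numbers, not actual measurements
    price_diff = 80.0        # upfront price difference between the two boxes, USD (assumed)
    watts_diff = 5.0         # difference in average power draw, watts (assumed)
    price_per_kwh = 0.30     # local electricity rate, USD/kWh (assumed)

    kwh_per_year = watts_diff * 24 * 365 / 1000      # ~43.8 kWh/year
    savings_per_year = kwh_per_year * price_per_kwh  # ~$13/year
    print(price_diff / savings_per_year)             # ~6.1 years to break even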
And you’d be misled. The video shows the original file being converted to different formats, depending on the user’s selection. The video shows JPEG to HTML (using AI to perform OCR?).
That argument really skips over what most people actually need. Nobody outside of a tech bubble wants to learn half a dozen Pandoc flags, stitch together shell commands and temp files, or write Lua filters just to reshape a document. With our drive layer you literally rename a file or type “make this header bold and export as PDF” and the work just happens, no scripts required.
This isn’t about replacing power-user workflows, it’s about giving anyone on your team the ability to reshape data and documents without ever opening a terminal. You get flexibility with the simple UX of renaming a file. Calling it “Pandoc plus AI” misses the fact that 90 percent of users neither know nor care about Pandoc’s internals. They just want “I have a file, make it look like this, or formatted with these sections to share with X person who works in X field...” and that’s exactly what our natural-language, filesystem-driven approach delivers.