If your electrical installation allows it: you can connect your EV charger before the battery so that it does not drain the battery. You do this by placing the fuse/connection before the battery's measurement clamps, somewhere between your mains connection and your battery/solar system.
This way the battery does not see the load and does not provide power to your EV.
That way you can still use excess solar (before you inject it into the mains) to charge your car + you do not pull power from your battery :)
The ideal solution is for the battery to have a third set of clamps to measure the EV. But as I don't have installer access to the software (centrally managed for the win) I'm not sure that's possible.
I might ask to see if that's possible. I probably need more panels to cover the winter load.
That document says you need to lean into a turn. That's what I would expect. Turn left, lean your body left. The picture on the cover shows the rider turning left and leaning left.
>Since sidecar outfits are not symmetrical, the technique for left turns is somewhat different from right turns. The outfit won't lean into the turn like a "solo" bike, but instead rolls slightly towards the outside of the turn like an automobile. The sidecar driver compensates by leaning body weight towards the turn and by applying extra force to the handlebars.
I can recommend Zotero. You also don’t have to pay for storage if you have a server/device that is WebDAV capable. I connected it to my Synology NAS and the setup was trivial.
Basically you are eligible for your admin roles but you have to activate them first. Usually there are additional checks + notifications to other admins. These permissions are also only available for a set amount of time, after which you will need to re-request them :)
Good question actually! There are multiple layers that add to the security:
- Your login session as a user is normally valid for a day (~10 hours). But a PIM-activated session that gives you Global Admin permissions can, for example, be capped to a max of 1 hour.
- A normal login as a user can require just login + MFA. But if you want to PIM into certain admin roles you can, for example, be required to use your YubiKey as well. Yes, it's an extra step, but if your account is hacked the attacker only has access to you as a user, not you as an admin, unless they also capture your security key.
It also creates some additional awareness for admins that they are now handling the keys to the kingdom and that the role they just activated can do a lot of harm. In some organizations users get an admin account without fully understanding the consequences.
- It is way easier to audit. In normal circumstances a user's admin permissions are "always on". Once you start using PIM you can also audit when and where additional permissions were requested. This is especially handy when you are monitoring everything and you get an alert saying "Hey, sfn42 just requested Global Admin from a location they normally do not request it from. Can you look into this to make sure it is legit?" With always-on permissions this becomes way harder.
- Easier to manage via groups. You can have groups tied to eligible permissions and subsets of permissions. This is really handy once you start having external consultants who can request permissions via IGA (Identity Governance) policies.
Basically consultants can go to a url (https://myaccess.microsoft.com/) and request an "access package" that might contain 1 or more roles.
For example, somebody who has to audit certain items in our organization can request a package that contains the needed admin roles and get automatically added to the correct groups. Once they request that package we can have automated approval processes (with multiple stages if needed) that first contact that person's team lead, and later maybe another group of approvers.
These groups have access reviews done by the security team / app owner (weekly/monthly, depending) to make sure that all accesses are still needed. It is also really easy to let these permissions expire. So our auditor will have a valid account for the entire year but will have to re-request their permissions every 3 months (or whatever we choose).
This is also _really_ easy to audit :)
- When someone in our security team requests a role, the rest of the team automatically receives an email so we know what is going on with our colleagues :)
Thanks for the response! You pretty much just described exactly how it works in the organization I work for, as an outside contractor.
But PIM has a max duration of 8 hours and does not require additional authentication like a YubiKey; it doesn't even require that I authenticate again with my regular MFA login.
In practice everyone just writes what amounts to nothing as their reason. We literally write our team name.
It's also badly set up so all kinds of bullshit like viewing application logs requires PIM and nobody really knows how it works so we just request all the roles instead of considering which one we need because it's all just a big box of magic that few people actually understand. And we do so pretty much every day because we always need to do something in Azure.
So with the way we use it it still seems pointless to me, even with your explanations. Maybe we get some small benefits from it but for the most part it seems like security posturing to me.
Same here. We had a very fragmented landscape (multiple IdP tools, some tools using internal users, ...). We consolidated everything to Entra (450 apps and counting) and everybody couldn't be happier. Full SSO on everything + SCIM where available.
We do of course have conditional access + PIM for admins, but that is to be expected.
You just need a good strategy on how you are going to tackle IAM and then just stick to it.
You can also just set up a Pi-hole ad blocker on a VM. It has a local DNS feature as well (that is nothing more than a text file containing all your local records). Super easy to set up and maintain :)
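For a concrete picture of that text file: in Pi-hole v5 the local records typically live in a plain hosts-format file (the path and exact storage may differ between Pi-hole versions, so treat this as a sketch):

```
# /etc/pihole/custom.list — hosts format: one "IP hostname" pair per line
# (hostnames below are made-up examples)
192.168.1.10   nas.home.lan
192.168.1.20   homeassistant.home.lan
192.168.1.30   printer.home.lan
```

Edit the file, reload the DNS resolver, and every device that uses the Pi-hole as its DNS server can resolve those names.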
1) Home Assistant is not an option officially sanctioned by the device vendors and will run into technical issues regardless of whether it's cloud hosted or not (as seen by the very post we're all commenting on).
2) Even if the above were not true, at that point you're back to an internet enabled smart home device system, and now we're simply picking which vendor to trust over the other. But in both cases, the option for the vendor to collect telemetry data about your usage of the products exists.
There is really no viable way for the typical consumer to be able to both have a good product experience for something like this, and to prevent a cloud vendor from having access to their data. Unless I'm missing something obvious.
> Even if the above were not true, at that point you're back to an internet enabled smart home device system
Home Assistant Cloud is essentially a TCP-level proxy (IOW Nabu Casa sees jack squat):
> The remote UI encrypts all communication between your browser and your local instance. Encryption is provided by a Let’s Encrypt certificate. Under the hood, your local Home Assistant instance is connected to one of our custom built UI proxy servers. Our UI proxy servers operate at the TCP level and will forward all encrypted data to the local instance.
> Routing is made possible by the Server Name Indication (SNI) extension on the TLS handshake. It contains the information for which hostname an incoming request is destined, and we forward this information to the matching local instance. To be able to route multiple simultaneous requests, all data will be routed via a TCP multiplexer. The local Home Assistant instance will receive the TCP packets, demultiplex them, decrypt them with the SSL certificate and forward them to the HTTP component.
> The source code is available on GitHub:
> SniTun - End-to-End encryption with SNI proxy on top of a TCP multiplexer
> hass-nabucasa - Cloud integration in Home Assistant
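Not Nabu Casa's actual code, but a minimal Python sketch of the mechanism those docs describe: the requested hostname sits in cleartext in the TLS ClientHello's SNI extension, so a proxy can read it and route to the right local instance without ever holding the decryption keys (simplified: assumes the whole ClientHello fits in one TLS record):

```python
import struct

def extract_sni(client_hello: bytes):
    """Return the SNI hostname from a raw TLS ClientHello record, or None.

    Simplified sketch: no fragmentation handling, minimal validation.
    """
    # TLS record header: type (1) + version (2) + length (2); 0x16 = handshake
    if len(client_hello) < 5 or client_hello[0] != 0x16:
        return None
    pos = 5
    # Handshake header: type (1) + length (3); 0x01 = ClientHello
    if client_hello[pos] != 0x01:
        return None
    pos += 4
    pos += 2 + 32                  # client_version + random
    pos += 1 + client_hello[pos]   # session_id
    (n,) = struct.unpack_from("!H", client_hello, pos)
    pos += 2 + n                   # cipher_suites
    pos += 1 + client_hello[pos]   # compression_methods
    pos += 2                       # extensions total length
    while pos + 4 <= len(client_hello):
        ext_type, ext_len = struct.unpack_from("!HH", client_hello, pos)
        pos += 4
        if ext_type == 0x0000:     # server_name extension (RFC 6066)
            # list length (2) + name type (1) + name length (2) + name
            (name_len,) = struct.unpack_from("!H", client_hello, pos + 3)
            return client_hello[pos + 5 : pos + 5 + name_len].decode()
        pos += ext_len
    return None
```

SniTun additionally multiplexes many such connections over one tunnel; the point here is only that routing needs nothing beyond these cleartext handshake bytes.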
Yeah so this is why I said "no way for the typical consumer to have a product experience like this" because what you're saying is true, but not something an individual can rely on.
Typical consumers have no way of ensuring their UI is, in fact, encrypting the data and not farming it out. They cannot verify the source code themselves, because they don't have the technical skill set they'd need to do so (nor, frankly, the time). They're reliant on the goodwill of whoever packaged and installed the offering for them not doing anything to that offering.
Technical power users can circumvent this because they can build/install from source, verify keychains, read the source, etc. Non-technical users can't do this, and need someone to help them. That someone will most likely be in the form of a third party organization that does this in exchange for money. They're placing their trust in that third party.
The point I'm getting at is that, eventually, a consumer has to trust a third party who may have incentives that don't align with their own. They're just playing a game of which vendor to place that trust in. This is why centralization is still the predominant architecture choice for the overwhelming majority of products, even in a world where myriad decentralized solutions exist for almost everything. It turns out that having bespoke third parties run decentralized solutions for customers is often not a better product experience, and still has the same root problem even if it manifests in different ways.
> a consumer has to trust a third party who may have incentives that don't align with their own
That's true for literally anything, not just IoT security and privacy. I mean, even for highly technical users, one can't do everything from scratch, nor even check and control every single aspect: you gotta trust that the computer hardware or OS you're using isn't backdoored, you gotta trust that the people who built the place you live in didn't put in half the rebar actually needed or wire the whole thing backwards or with thinner-than-required wires, you gotta trust that the food you eat isn't going to make you sick...
Same for HASS: one could delegate trust to a specialist who would install an HA Green or Yellow box for them, just as they do for electrical wiring. HA is only "third party" because the IoT space lacks standards, but it is in essence no different from wiring stuff from different vendors, where "myriads of decentralised solutions" exist only because of standards, and for which decentralisation essentially means everyone is a third party to everyone else.
So I don't think dismissing HASS as third party is fair, and wiring IoT with virtual wires is no different than wiring a breaker box. If you don't know how to do it it can be dangerous, and so you delegate and trust someone to do their job properly.
> The point I'm getting at is that, eventually, a consumer has to trust a third party who may have incentives that don't align with their own. They're just playing a game of which vendor to place that trust in.
The problem is that approximately NONE of the commercial vendors are in any way trustworthy. They keep pushing the degree of abuse they inflict on their customers, and social immunity takes a long time to build.
The ultimate solution IMO is to have people trust in people they can actually trust - that is, make the third parties local. A partner, a kid, a neighbor, a small company servicing the local community and physically located in it. At this scale, trust can be managed through tried-and-true social techniques humans are innately good at, and have successfully used for many thousands of years. This is how you make most of the tech industry and adjacent problems go away.
I suppose the vendor could sell a home server device, which runs some kind of Tailscale-like technology to make it available from the internet, and the app talks to that locally hosted server.
E.g. it's easy to ask Copilot: "Can you give me a list of free, open source MQTT brokers and give me some statistics in the form of a table?"
And Copilot (or any other AI) does this quite nicely. This is not something you can ask a traditional search engine.
Of course, you do need to know enough of the underlying material and double-check the output you get, in case the AI is hallucinating.