tl;dr: Rippling does not verify that the HSA contributions withheld from your paycheck actually match the deposits made into your HSA, so money can be withheld from your paycheck and never deposited into your HSA. If you use Rippling and have an HSA, you should really go audit your contributions and make sure they match the deposits into your HSA account. In my case, I was missing $400.
It's not so much about the environment itself, but the data in it. If the data you use for your tests is all hand-crafted/manually created by someone who has since left the company, then you have no way to scale it (e.g. if you want to run a bunch of your tests in parallel and they may modify that data) or to change it to cover new functionality in your application, and your QA process will suffer immensely.
I wish people would think a bit more about the data they use for their tests and how they can create it from scratch in a consistent, scalable way. That way they can always test against a clean environment with a known setup and avoid a bunch of bad practices (like creating data on the fly as part of a test).
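As a made-up illustration of what I mean (the schema and the `db.insert` call here are hypothetical), even a tiny factory function per entity gets you most of the way there:

```python
import uuid

def make_customer(**overrides):
    """Build one spec-compliant customer record from scratch.

    Defaults satisfy the schema; each test overrides only the fields it cares about.
    """
    record = {
        "id": str(uuid.uuid4()),  # unique per run, so parallel tests don't collide
        "name": "Test Customer",
        "email": f"{uuid.uuid4().hex[:8]}@example.test",
        "status": "active",
    }
    record.update(overrides)
    return record

def seed(db, count=10):
    """Load freshly generated records into a clean environment before the test."""
    for _ in range(count):
        # 'db' stands in for whatever client your application actually uses
        db.insert("customers", make_customer())
```

Every run starts from the same known state, and because nothing is shared between tests, running them in parallel stops being scary.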
Been there, done that, hombre. Did the math once: at the last place I was at, if our test data were transcribed line by line into composition notebooks (our seed files were basically JSON), we'd be tossing 137 or so notebooks' worth through the system every test run.
Can you get devs to care about what valid data looks like? Nigh impossible. Hell, I had a hard enough time keeping my testers authoring new test data in a reasonably spec-compliant way. A proper data lifecycle is the key, but it will almost always be the least popular part of your process, because most people just don't want to think about it.
At some point in your process someone has to know what they are doing. There is no machine that knows correct data for you; it's part of what makes testing difficult. Everyone else can live in fantasy land, but you, as a tester, have to bring the hammer of reality crashing down. It won't make you many friends, but it is what it is. Your test data must reflect a reality, someone has to do the footwork to observe that reality, and only someone who has done so can then do the next step of authoring valid/representative test data.
I've been in the QA space for a while now, and one thing that repeatedly comes up is people neglecting their QA data. How to get the application into the right state to test its functionality properly is always an afterthought.
So I'm curious, how do you all manage the seeded test data that you need for your QA tests?
This[1] is something I've come across but not had a chance to play with; it's designed for reading non-smart meters and might work for you. I'm not sure if there's any way to run it on an old phone, though.
Wow. I was looking at hooking my water meter into Home Assistant, and was going to investigate just counting an optical pulse (it has a white portion on the gear that is in a certain spot every 0.1 gal). This looks like the same meter I use. Perfect.
(It turns out my electric meter, though analog, blasts out its reading over RF every 10 seconds, unencrypted. I picked that up with my RTL-SDR receiver :) )
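If anyone else wants to try the pulse-counting route, the core of it is just edge detection on the sensor reading. A rough Python sketch (the optical sensor read is hypothetical, and pushing the value into Home Assistant is left out):

```python
import time

GALLONS_PER_PULSE = 0.1  # the white patch on the gear passes once per 0.1 gal

def white_patch_visible() -> bool:
    """Hypothetical: return True while the optical sensor sees the white patch."""
    raise NotImplementedError

total_gallons = 0.0
was_visible = False

while True:
    visible = white_patch_visible()
    if visible and not was_visible:        # count rising edges only
        total_gallons += GALLONS_PER_PULSE
        print(f"{total_gallons:.1f} gal")  # report to Home Assistant (MQTT, REST, etc.)
    was_visible = visible
    time.sleep(0.05)                       # poll fast enough not to miss a pass
```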
> Fixed an issue where Copy and Paste context menu items intermittently were not enabled when expected.
I'm glad to see they were able to get this fixed; it'd been bothering me for a little while now. Shortcuts (Ctrl+C) still worked, but it was always a bit weird to see Copy grayed out when I had something selected.
I think the surprising thing to me here is the high usage of ChatGPT (82%). Every time I try to use it, I find I can just search for an answer quicker, especially when taking into account the time I have to spend figuring out whether what it's telling me is actually accurate or whether it imagined functionality or features that don't actually exist.
For the more straightforward tasks it probably would do well at, Copilot seems like the better solution, since it's much more tightly integrated into my development environment.
I think some people's brains just don't grok what LLMs are good for. They either ask complex gotcha questions, which get mostly wrong answers, or use them as search-engine replacements.
I just happen to have a real world example of what I used Gemini (free version) for just yesterday open in a tab.
Me: "I want to write a Go program that connects to a running Plex instance and gets a list of unwatched episodes from a show called 'After Midnight'"
Gemini: gives me source code for what I want to do
Me: checks that the GitHub URL for the main library works, it doesn't -> "Library <url> doesn't exist" (Gemini has a habit of suggesting old or defunct Go libraries, or hallucinating them)
Gemini: another attempt with a different library
Me: checks that the GitHub URL for the main library works, it doesn't -> "Library <url2> doesn't exist"
Gemini: Admits that it doesn't have any more official Go Plex libraries to suggest, and suggests considering either an unofficial one or a different language like Python, which has more options
Me: "Let's go with Python then instead of Go"
Gemini: Gives me 100% working code using the 'plexapi' library
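The working version ended up being roughly along these lines (a sketch from memory rather than Gemini's verbatim output; the server URL and token are placeholders):

```python
from plexapi.server import PlexServer

PLEX_URL = "http://192.168.1.10:32400"  # placeholder: your Plex server's address
PLEX_TOKEN = "YOUR_X_PLEX_TOKEN"        # placeholder

plex = PlexServer(PLEX_URL, PLEX_TOKEN)
show = plex.library.section("TV Shows").get("After Midnight")

# Anything with no recorded plays counts as unwatched.
for ep in show.episodes():
    if not ep.viewCount:
        print(ep.seasonEpisode, ep.title)
```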
At this point I had to spend more time getting my personal X-Plex-Token from the Plex server (personal project, no need to bring web auth into this), bootstrapping the project with Poetry, etc. than it took me to get working code out of Gemini. After that I could start iterating towards what I want (a CLI tool to copy unwatched episodes from my Plex server locally for offline use).
All this would've taken me hours longer if I had to start the project from scratch and do the boring bits myself, manually digging through the API docs to find out how to connect to Plex and how the data is structured.
I've gone through a similar process with ChatGPT and Gemini multiple times. There's a simple-ish thing I want to do, and I use an LLM to get me to a point where I can start iterating towards a solution instead of having to start from scratch. Sometimes the first attempt Just Works; sometimes it's 90-95% there, because the LLM used a wrong property or function at some point.
You have to use the tools for what they are good at. As search-engine replacements, they are pretty lousy - that's not their strength. My three primary use cases:
- Correcting spelling and grammatical errors in a text I have written, especially in a language that is not my mother tongue. I still have to check the result, but this prevents a lot of silly mistakes.
- Getting examples of particular API calls, for things I don't do often. The AI (usually) delivers correct code examples, along with decent explanations. The result is generally better and more understandable than the official documentation.
- Just for fun, sometimes I ask about random obscure facts that have come up in conversation, or somehow aroused my curiosity. Ask a quick question, get a quick answer without wading through search results.
I kinda suspect it might be that ChatGPT is excellent at getting you to "average" performance in any field.
My background is computational materials science, but more on the materials side than the computational part. I have an OK broad knowledge of most CS topics, but I'm always finding myself playing catch-up. My work also involves making a lot of research prototypes in areas I don't have time to get a proper background in.
For me GPT has had a transformative impact on my work.
For example, I had a lot of projects that needed Docker. I have an OK idea of what Docker is and what I want to do with it. But I don't have the time of a real software developer to learn the syntax, deal with subtle bugs, or figure out how to do basic things, e.g. "how do I ssh into my Docker container X".
I think I'm on the end of the user spectrum that is best poised to make use of LLMs: a decent knowledge of what strategy I want to go for, but not the tactics. And I'm mediocre enough at programming that the LLM can usually beat me. Another example: I would just never write any unit tests, not enough time. With LLMs I can get simple, dirty tests done, and I know enough about testing to filter out the bad ones and tune the best ones.
I see poor responders at the two extremes on either side of me: people who really don't know what they are doing and can't prompt-correct the LLM into doing anything better, and people who really know what they are doing, generally work on one tech stack/project, don't need help getting the dumb basics in place, and have more time to write things themselves.
> Also, not sure why you wouldn’t just use Base64 encoding for which optimised versions already exist instead of rolling your own conversion to/from base 26 (or 52).
It's mentioned in the article, but Base64 includes characters that might not be allowed in a name field, like `+=/`. I also wouldn't be surprised if the airline's name field didn't allow numbers.
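The letters-only conversion itself is only a few lines anyway. A rough Python sketch of the idea (not the article's exact scheme, and it drops leading zero bytes for simplicity):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # only characters a name field will surely accept

def encode(data: bytes) -> str:
    """Treat the bytes as one big number and rewrite it in base 26."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 26)
        out = ALPHABET[r] + out
    return out or "A"

def decode(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 26 + ALPHABET.index(ch)
    return n.to_bytes((n.bit_length() + 7) // 8, "big")
```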
Along the same lines, some programming contests use a web-based system for submitting solutions. They often restrict internet access to just the contest website, but a motivated user could use the same trick with the user profile fields to sneak information in or out.
As a bonus, one of those contest systems allows users to upload a profile photo, which would greatly increase the bandwidth!
The only other thing I've seen in this space is DodgerCMS. It may meet your needs, as it seems to do a lot more than Tumbless already. I think I prefer the way it handles credentials too (saved in local storage, IIRC, rather than in a "hidden" file that's technically available to everyone).
Regardless, I think this is something that would be awesome to have. You could even throw in some AWS Lambda if you want to move some work out of the browser (like generating thumbnails, index pages, etc.), or just use Lambda to run a complete static site generator (like Jekyll or Pelican) every time the content in an S3 bucket changes.
Note: I've never actually used either of these, but I did spend a fair amount of time researching this kind of setup.
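For the Lambda-on-bucket-change idea, a minimal sketch of the "rebuild an index page" flavour (bucket layout and page format are made up; the full static-site-generator route would invoke Jekyll/Pelican here instead):

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked by an S3 ObjectCreated/ObjectRemoved notification.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]

    objects = s3.list_objects_v2(Bucket=bucket).get("Contents", [])
    links = "\n".join(
        f'<li><a href="{obj["Key"]}">{obj["Key"]}</a></li>'
        for obj in objects
        if obj["Key"] != "index.html"
    )
    # Note: writing index.html fires the trigger again unless the
    # notification is filtered by prefix/suffix.
    s3.put_object(
        Bucket=bucket,
        Key="index.html",
        Body=f"<ul>\n{links}\n</ul>",
        ContentType="text/html",
    )
```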
I read a lot about DodgerCMS just now, then discovered it does client-side rendering. Not that this is a bad thing, but it makes the job of implementing a site like this so much easier that I got angry.
Now I don't know if Tumbless does everything client-side too. I guess the other way is just too hard (recompiling everything every time, figuring out what to recompile, etc.), which is what I tried to do and failed at with https://github.com/fiatjaf/coisas
Tumbless uses two passwords. The admin one could benefit from the local storage approach, though I'm not sure if Chrome's password-saving feature would help.
The second one is the one you share with your guests if you choose to make the blog private. I see no other option here.
I'm not very happy with the hidden-file approach either, as it might be vulnerable to a brute-force attack. Amazon does not specify what kind of protection they have in place.
I definitely recommend hard-to-guess passwords.
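For example, something like this is effectively impossible to brute-force (just an illustration; Tumbless doesn't dictate how you pick the password):

```python
import secrets

# 24 random bytes (~192 bits), URL-safe characters only
print(secrets.token_urlsafe(24))
```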
I've been using this for a few years to cache basically the whole of a USB drive after booting from it.
It helps make up for the slow read speed of many cheap flash drives by caching everything before a user even sits down to use it. (I run a programming contest, so we set the computers up and boot them beforehand; plenty of time for vmtouch to do its job.)