Absolutely! This is often my first easy task of the morning to kick things off. For many teams the temptation to let dependencies 'rot' is real, but I've found that a reliable way to keep things up to date is to enable Dependabot, merge relentlessly, and release often.
If your test suite is up to the task you'll find defects in new updates every now and then, but for me this has even led to some open source contributions and to engaging with our dependencies' maintainers. So I think overall it promotes good practices, even though it can be a bit annoying at times.
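In case it helps anyone: enabling Dependabot is just a small config file in the repo. A minimal example (the ecosystem and schedule here are placeholders, adjust to your stack):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"   # e.g. npm, pip, cargo, docker, github-actions
    directory: "/"             # where the manifest lives
    schedule:
      interval: "weekly"
```

From there it's just reviewing and merging the PRs it opens.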
Cancer sucks. I wish you all the best for a recovery.
You’d also have to ‘fix’ DNA: if we could re-engineer a bunch of key enzymes and then re-encode the entire genome (or at least key parts) with forward error correction without breaking everything else, it might work. You might also break evolution to some degree by making random point mutations less likely.
But what I’ve learned so far is that as soon as you attempt something like this in bacteria, the fitness advantage from an evolutionary standpoint is negligible compared to the efficiency loss introduced by FEC, so your colony would get outcompeted by other bacteria unless there’s a niche your resistant bacteria can survive in (high-radiation environments?). The disadvantages from that efficiency loss would probably be less pronounced in mammals, though, if (big if) you manage not to break anything essential in the wonderful yet surprisingly efficient Rube Goldberg machine that is life.
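To make the 'efficiency loss' concrete: forward error correction always trades redundancy for resilience. A toy Hamming(7,4) code in Python (purely illustrative, nothing biological) spends 3 parity bits per 4 data bits (75% overhead) to be able to correct any single flipped bit:

```python
def encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    # Recompute parities; the syndrome gives the 1-based error position (0 = clean).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

Any single "mutation" in the codeword is repaired, but you pay nearly double the storage for it, which is roughly the trade-off a hypothetical FEC genome would face.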
If you need less scale or fewer features, go for GlitchTip. If you're not going for k8s, the self-hosted docker-compose version of Sentry works fine, including proper releases and support from the Sentry team. Just the experimental, newly introduced features can be a bit wonky.
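For reference, a GlitchTip deployment is roughly this small. This is a rough sketch of a minimal compose file; the image, service, and variable names are from memory of their docs, so treat it as a starting point and verify against the official deployment guide:

```yaml
# Minimal GlitchTip stack (sketch, not a production config)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust  # demo only; use a real password
    volumes:
      - pg-data:/var/lib/postgresql/data
  redis:
    image: redis
  web:
    image: glitchtip/glitchtip
    depends_on: [postgres, redis]
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://postgres@postgres:5432/postgres
      REDIS_URL: redis://redis:6379/0
      SECRET_KEY: change-me
volumes:
  pg-data:
```

Compare that to the dozens of containers the full Sentry self-hosted stack spins up (Kafka, ClickHouse, workers, ...), and the scale difference is obvious.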
They are doing much more than just throwing code over the fence. Also, the phone-home telemetry is optional and there's a switch for an errors-only mode. IMHO this really builds trust.
With regard to deployment complexity: well, it's built to handle high volumes of events. I'd reckon this is more a consequence of scaling the project than a coordinated plan to push people toward their cloud offering.
If you do go for k8s or choose to deploy the stack yourself, you even get access to the full-scale solution. But if you're at that scale, you probably have someone around who knows how to run your ClickHouse setup. And you still get the full Sentry software and SDKs for free in that case. I think this is about as fair as it gets for the open-source SaaS model.
This may very well be caused by my incompetence, but Sentry's docker-compose setup has never survived more than a few months under my control. Sooner or later something destroys itself for no obvious reason and either refuses to start, or starts but doesn't really work. I tried updating it regularly and tried never updating it; I got the same treatment either way.
I did not intend to be critical of their work. They're doing OSS as best as they can and good for them. I am just saying that it's a different beast if Sentry is OSS vs a much simpler to operate OSS product. Licensing matters less when the operational cost acts as an inhibitor to adoption of your OSS offering.
True, opportunity cost is a factor; sorry if my reply sounded a bit brash. IMHO they are one of the few orgs that got this model right, compared to lots of others who went the open-core route or require support/consulting contracts around their OSS.
Looks like they mostly use Sentinel-2 data and apply some processing on top. So it's basically (publicly funded?) open data [1, 2], available in multiple viewers [3, 4] and even on AWS [5]. Still a cool project!
On a related note: I find it kind of puzzling that all this data is freely available yet rather complicated to access. One would assume building/operating satellites is more complex than making that data available in formats the average startup web developer can use almost instantly. IMHO you can see a similar pattern in weather data: there seem to be a bunch of platforms that are basically wrapping public weather services' domain-specific formats in a 'nicer' REST API. One could argue there's documentation on these formats, just read the docs, duh. However, I think lowering the barrier to entry with easy access and good documentation could enable way more people to work with this data.
IMHO if you know some basic Python and your way around the requests library, it should be trivial to get started with these datasets. That's just my personal opinion based on experiences from a few years ago; maybe it's way easier by now and there's a bunch of great free open APIs for this stuff. It always feels like all the basic requirements for opening the dataset and documenting it are fulfilled, but the last (small) step, writing basic 'getting started' docs/blog posts and maybe a REST API for access, is missing. Instead, confronted with lower-than-expected usage, a bunch of industry-transfer projects get spun up to increase adoption. I guess DX also matters for open data hosted by public entities. /rant
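For a sense of what 'trivial with requests' could look like, here's a minimal sketch. The endpoint URL and field names are made up for illustration, since the real formats vary by dataset:

```python
import requests

def fetch_scene_metadata(url, params=None):
    """Fetch JSON metadata from an open-data endpoint (URL is hypothetical)."""
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def summarize(features):
    """Reduce a GeoJSON-style feature list to (id, cloud_cover) pairs."""
    return [(f["id"], f["properties"].get("cloudCover")) for f in features]

# Example usage (endpoint is illustrative, not a real service):
# data = fetch_scene_metadata("https://example.org/open-data/scenes",
#                             params={"bbox": "10.0,50.0,10.5,50.5"})
# print(summarize(data["features"]))
```

That's the level of 'getting started' doc I mean: ten lines and a working query, instead of a 40-page format specification.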
> One would assume building/operating satellites is more complex than making that data available in formats the average startup web developer can use almost instantly.
That sounds intentional to me.
They provide the data as a public service without getting their servers hammered by people who don't understand that "free" isn't costless.
That sounds interesting! How come? I've been doing DevOps stuff for a few years now, but I also have some background in the Unity/HCI space, which I enjoyed a lot. I'd be curious how this combination might be applicable in the VFX/media production industry.
I've also been on the fence of learning Unreal for a while now. Maybe you could shoot me an email (address is in my bio)? I'd be very interested in how the industry works outside the 'classic' VFX artist space.
Mainly because in 2019 I was offered an opportunity to be the "tech guy" for a small startup and you know how those go. Since I was alone it just became a DevOps/dev/helpdesk role over time.