Hacker News | bojangleslover's comments

Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that.

I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still have SeaweedFS for a scorching hot, 1GiB/s colocated object storage, but Wasabi is our bread and butter object storage now.


> > Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun.

> Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).

MinIO is dealing with two out of the three issues, and the company is partially providing work for free. How is that "completely different"?


The MinIO business model was a freemium model (well, Open Source + commercial support, which is slightly different). They used the free OSS version to drive demand for the commercially licensed version. It’s not like they had a free community version with users they needed to support thrust upon them — this was their plan. They weren’t volunteers.

You could argue that they got to the point where the benefit wasn’t worth the cost, but this was their business model. They would not have gotten to the point where they could have a commercial-only operation without the adoption and demand generated from the OSS version.

Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.


> Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.

No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting dealing with bullshit is just peachy if you are being paid.


If that is the case, why did minio start with the open source version at all, if there were only downsides? Sounds like a stupid business plan.


They wanted adoption and a funnel into their paid offering. They were looking out for their own self-interest, which is perfectly fine; however, it’s very different from the framing many are giving in this thread of a saintly company doing thankless charity work for evil homelab users.


Where did I say there were only downsides? There are definitely upsides to this business model, I'm just refuting the idea that because there are for profit motives the downsides go away.

I hate when people mistreat the people that provide services to them: doesn't matter if it's a volunteer, underpaid waitress or well paid computer programmer. The mistreatment doesn't become "ok" because the person being mistreated is paid.


I doubt that minio pulled the open source version because they were mistreated. Yes, there are some projects where this is a problem, but it’s mostly because the project has only a single maintainer.

People are angry about minio, but that’s because of their rugpull.


The minio people did a lot of questionable things even before the rugpull. They tried to claim AGPL infects software over the network, on a previous version of https://min.io/compliance

> Combining MinIO software as part of a larger software stack triggers your GNU AGPL v3 obligations. The method of combining does not matter. When MinIO is linked to a larger software stack in any form, including statically, dynamically, pipes, or containerized and invoked remotely, the AGPL v3 applies to your use. What triggers the AGPL v3 obligations is the exchanging data between the larger stack and MinIO.


> No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users.

So… aren't they providing (paid) support? Same thing…

Absurd comparison.


“I don’t want to support free users” is completely different than “we’re going all-in on AI, so we’re killing our previous product for both open source and commercial users and replacing it with a new one”


I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour when using S3-compatible storage. That's what I mainly used MinIO for before, and SeaweedFS, especially with its new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes.


I've been using rustfs for some very light local development and it looks… fine :)


Ironically rustfs.com is currently failing to load on Firefox, with 'Uncaught TypeError: can't access property "enable", s is null'. They shoulda used a statically checked language for their website...


It loads fine in my Firefox, version 147.0.3 (aarch64).


I'm running Firefox 145.0.2 on amd64.

It seems like the issue may be that I have WebGL disabled. The console includes messages like "Failed to create WebGL context: WebGL creation failed: * AllowWebgl2:false restricts context creation on this system."

Oh well, guess I can't use rustfs :}
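For what it's worth, the crash pattern (`s is null`) suggests the site's script assumes a WebGL context is always created. A defensive sketch of what the missing check would look like (hypothetical function names, not rustfs's actual code):

```javascript
// Hypothetical defensive pattern: fall back gracefully when WebGL is
// unavailable (disabled in about:config, blocklisted driver, etc.)
// instead of dereferencing a null context later.
function initRenderer(canvas) {
  // getContext returns null when context creation fails.
  const gl = canvas.getContext("webgl2") || canvas.getContext("webgl");
  if (!gl) {
    // Render a static fallback rather than crashing the whole page.
    return { mode: "static", ctx: null };
  }
  return { mode: "webgl", ctx: gl };
}
```

With a guard like this, a visitor with `AllowWebgl2:false` would just see the static fallback instead of a broken page.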


I just disabled webgl on my firefox and it worked fine.

Your problems could be caused by a whiny fan. Here is the source https://github.com/rustfs/rustfs


I like the way multiple people feel the need to defend a buggy website with their anecdotal n=1 evidence.

It’s not difficult to make a website that works for everyone.


Oh, is it the website that's failing? I kind of assumed it was the web UI for the software. Workarounds sort of make sense there, maybe. But the website? That's bad.


Can vouch for SeaweedFS; I've been using it since the time it was called weedfs, when my managers were like, are you sure you really want to use that?


Not seeing anyone else comment about it, but I would caution against relying on Wasabi primarily. They actively and silently corrupted a lot of my data and still billed me for it. You'll just start seeing random 500s when trying to get data down from your bucket and it's just gone, no recovery, but it still counts as stored data so you're still paying for it.


Nothing wrong? Does minio grant the basic freedoms of being able to run the software, study it, change it, and distribute it?

Did minio create the impression to its contributors that it will continue being FLOSS?


Yes the software is under AGPL. Go forth and forkify.

The choice of AGPL tells you that they wanted to be the only commercial source of the software from the beginning.


> the software is under AGPL. Go forth and forkify.

No, what was minio is now aistor, a closed-source proprietary software. Tell me how to fork it and I will.

> they wanted to be the only commercial source of the software

The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.


> Tell me how to fork it and I will.

https://github.com/minio/minio/fork

The fact that new versions aren't available does nothing to stop you from forking versions that are. Or were - they'll be available somewhere, especially if it got packaged for OS distribution.


The only packages I find of aistor are binary packages. Not only that, the aistor license agreement explicitly states the following:

> You may not modify, reverse engineer, decompile, disassemble, or create derivative works of the Software.


Do you consider this a breach of the AGPL?


So fork the last minio, and work from there... nobody is stopping you.


aistor is proprietary software[1]. Having an old version of your software be open source does not make your software open-source. Why does this need an explanation?

[1] https://www.min.io/legal/aistor-free-agreement


You aren't entitled to the product of someone else's work even if they gave away older versions of that work... What is so hard for you to understand about that?


No, I no longer am, because aistor/minio decided they no longer respect their users' freedom. It's as simple as that -- aistor is unethical and borders on malware.


> And I definitely don't intend to close the source of any of my AGPL-licensed projects.

If a commercial company has a "core" version under AGPL, it usually means their free version is an extended demo of the commercial product.


Wasabi looks like a service.

Any recommendation for an in-cluster alternative in production?

Is that SeaweedFS?


I’ve never heard of SeaweedFS, but the Ceph cluster storage system has an S3-compatible layer (Object Gateway).

It’s used by CERN for petabyte-scale storage capable of ingesting data from particle collider experiments; they're now up to 17 clusters and 74PB, which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place as VMware VSAN.

Ceph has been pretty good to us for ~1PB of scalable backup storage for many years, except that it’s a non-trivial system administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We’re moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance; it's great at being massively parallel and not so rapid at single-threaded, high-IOPS work.

https://ceph.io/en/users/documentation/

https://docs.ceph.com/en/latest/

https://indico.cern.ch/event/1337241/contributions/5629430/a...


Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of gluster's async geo-replication feature to keep two storage arrays in sync that were far away over a slow link. This was done after getting fed up with rsync being so slow and always thrashing the disks having to scan many TBs every day.

While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case if anyone knows of a solution.


> "Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS"

I became a Ceph admin by accident so I wasn't involved in choosing it and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem; you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS) or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS I guess, but that sounds like a lot of layers).

> "While there is a geo-replication feature for Ceph"

Several; the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal based and snapshot based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync.

I've not used any of them; they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN, a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features, would I go in that direction.

https://docs.ceph.com/en/latest/rados/operations/stretch-mod...

https://docs.ceph.com/en/latest/rbd/rbd-mirroring/

https://docs.ceph.com/en/latest/cephfs/cephfs-mirroring/

https://docs.ceph.com/en/latest/radosgw/multisite/


Ceph is a non-starter because you need a team of people managing it constantly


I'm not posting to convince people they should use it, just that it's a really cool piece of open source infrastructure that I think is less well known, and I respect it. It is very configurable and tunable, has a lot of features, command lines, and things to learn, and that does need people with skills and time.

That said, it doesn't need constant management; it's excellent at staying up even while damaged. As long as the cluster has enough free space it will rebuild around any hardware failure without human intervention, it doesn't need hot spares; if you plan it carefully then it has no single point of failure. (The original creator introduces the design choice of 'placement groups' and tradeoffs in this video[1]).

Most of the management time I've spent has been on ageing hardware flaking out without actually failing: old disks erroring on read, controllers failing and dropping all the disks temporarily, causing tens of seconds of read latency with knock-on effects, or when we filled it too full and it went read-only. Other management work has been learning my way around it, upgrades, changing the way we use it for different projects, and onboarding and offboarding services that use it, all of which will vary with what you actually do with it.

I've spent less time with VMware VSAN, but VSAN does a lot less, it takes your disks and gives you a VMFS datastore and maybe an iSCSI target. There can't be many alternatives which do what Ceph does, and require less skill and effort, and don't involve paying a vendor to manage it for you and give you a web interface?

[1] https://www.youtube.com/watch?v=PmLPbrf-x9g


That was not my experience. Deploying and configuring ceph was a nightmare due to the mountain of options and considerations, but once it was deployed, ceph was extremely hands-off and resilient.


Yeah sure. I manage a ceph cluster (4PB) and have a few other responsibilities at the same time.

I can tell you that ceph is something I don't need to touch every month. Other things I have to baby more regularly.


The complexity is what gets you. One of AWS's favorite situations is

1) Senior engineer starts on AWS

2) Senior engineer leaves because our industry does not value longevity or loyalty at all whatsoever (not saying it should, just observing that it doesn't)

3) New engineer comes in and panics

4) Ends up using a "managed service" to relieve the panic

5) New engineer leaves

6) Second new engineer comes in and not only panics but outright needs help

7) Paired with some "certified AWS partner" who claims to help "reduce cost" but who actually gets a kickback from the extra spend they induce (usually 10% if I'm not mistaken)

Calling it ransomware is obviously hyperbolic, but there are definitely some parallels one could draw.

On top of it all, AWS pricing is about to go up massively due to the RAM price increase. There's no way it won't, since AWS is over half of Amazon's profit while only around 15% of its revenue.


One of the biggest problems with the self-hosted situations I’ve seen is when the senior engineers who set it up leave and the next generation has to figure out how to run it all.

In theory with perfect documentation they’d have a good head start to learn it, but there is always a lot of unwritten knowledge involved in managing an inherited setup.

With AWS the knowledge is at least transferable and you can find people who have worked with that exact thing before.

Engineers also leave for a lot of reasons. Even highly paid engineers go off and retire, change to a job for more novelty, or decide to try starting their own business.


>With AWS the knowledge is at least transferable

Unfortunately, a lot of things in AWS can also be messed up, so it might be really hard to work out what is going on. For example, you could have hundreds of Lambdas running with no idea where the original sources are or how they connect to each other, or complex VPC network routing where rules and security groups are shared haphazardly between services, so a small change can degrade a completely different service (you were hired to help with service X, but after your change some service Y went down that you weren't even aware existed).


Not much different from how it worked in companies I used to work for, except the situation was even worse, as we had no API or UI to probe for information.


There are many great developers who are not also SREs. Building and operating/maintaining require different mindsets.


In my experience, a back end on the cloud isn't necessarily any less complex than something self hosted.


The end result of all this is that the percentage of people who know how to implement systems without AWS/Azure will be a single digit. From that point on, this will be the only "economic" way, it doesn't matter what the prices are.


That's not a factual statement about reality, but more of a normative judgment used to justify resignation. Yes, professionals who know how to actually do these things are not abundantly available, but they are available enough to achieve the transition. The talent exists and is absolutely passionate about software freedom, and hence highly intrinsically motivated to work on it. The only thing lacking so far is demand; the available talent will skyrocket when the market starts demanding it.


They actually are abundantly available and many are looking for work. The volume of "enterprise IT" sysadmin labor dwarfs that of the population of "big tech" employees and cloud architects.


I've worked with many "enterprise IT" sysadmins (in healthcare, specifically). Some are very proficient generalists, but most (in my experience) are fluent in only their specific platforms, no different than the typical AWS engineer.


Perhaps we need bootcamps for on prem stacks if we are concerned about a skills gap. This is no different imho from the trades skills shortage many developed countries face. The muscle must be flexed. Otherwise, you will be held captive by a provider "who does it all for you".

"Today, we are going to calculate the power requirements for this rack, rack the equipment, wire power and network up, and learn how to use PXE and iLO to get from zero to operational."


This might be my own ego talking (I see myself as a generalist), but IMHO what we need are people that are comfortable jumping into unfamiliar systems and learning on-the-fly, applying their existing knowledge to new domains (while recognizing the assumptions their existing knowledge is causing them to make). That seems much harder to teach, especially in a boot camp format.


As a very curious autodidact, I strongly agree, but this talent is rare and can punch its own ticket (broadly speaking). These people innovate and build systems for others to maintain, in my experience. But, to your point, we should figure out the sorting hat for folks who want to radically own these on-prem systems [1] if they are needed.

[1] https://xkcd.com/705/


I don't really think so. That was a ship that sailed ten years ago and nearly every sysadmin who is still proficient with managing on-prem stacks has adapted to also learn how to manage VPCs in an arbitrary cloud. It's not like this is a recent change.


Yeah, anyone who has >10 years experience with servers/backend dev has almost certainly managed dedicated infra.


> and the talent available will skyrocket, when the market starts demanding it.

Part of what clouds are selling is experience. A "cloud admin" bootcamp graduate can be a useful "cloud engineer", but it takes some serious years of experience to become a talented on-prem SRE. So it becomes an ouroboros: moving towards clouds makes it easier to move to the clouds.


> A "cloud admin" bootcamp graduate can be a useful "cloud engineer",

If by useful you mean "useful at generating revenue for AWS or GCP" then sure, I agree.

These certificates and bootcamps are roughly equivalent to the Cisco CCNA certificate and training courses back in the 90's. That certificate existed to sell more Cisco gear - and Cisco outright admitted this at the time.


In part - yes. Useful as in capable of spinning up services without opening glaring security holes or bringing half of the infra down. Like with any tech, it takes experience and guardrails to use it efficiently and effectively.


> A "cloud admin" bootcamp graduate can be a useful "cloud engineer"

That is not true. It takes a lot more than a bootcamp to be useful in this space, unless your definition is to copy-paste some CDK without knowing what it does.


Moving towards the brothel makes it easier to get away from the brothel.


> The only thing that is lacking so far is the demand and the talent available will skyrocket, when the market starts demanding it.

But will the market demand it? AWS just continues to grow.


Only time will tell. It depends on when someone with an MBA starts asking questions about cloud spending and runs the real numbers. People promoting self-hosting often are not counting all the costs of self-hosting (AWS has people working 24x7 so that if something fails, someone is there to take action).


> AWS has people working 24x7 so that if something fails someone is there to take action..

The number of things that these 24x7 people from AWS will cover for you is small. If your application craps out for any number of reasons that doesn't have anything to do with AWS, that is on you. If your app needs to run 24x7 and it is critical, then you need your own 24x7 person anyway.


All the hardware and network issues are on them. I agree that you still need your own people to support your applications, but that is only part of the problem.


I've got thousands of devices over hundreds of sites in dozens of countries. The number of hardware failures is tiny, and they certainly don't need 24/7 response.

Meanwhile AWS breaks once or twice a year.


From what I've seen, if you're depending on AWS and something fails, you too need someone 24x7 who can take action. Sometimes magic happens and systems recover after AWS restarts their DNS, but usually the combination of events leaves the application in an unrecoverable state that needs manual intervention. It doesn't always happen, but you need someone to be there if it ever does. At bare minimum, you need to evaluate whether the underlying issue is really caused by AWS or whether something else has to be done on top of waiting for them to fix it.


How many problems is AWS able to handle for you that you are never aware of though?


How many problems do you think there are?

I've only had one outage I could attribute to running on-prem, meanwhile it's a bit of a joke with the non-IT staff in the office that when "The Internet" (i.e. Cloudflare, Amazon) goes down with news reports etc our own services are all running fine.


Distributed systems can partly fail in many subtly different ways, and you almost never notice it because there are people on-call taking care of them.


It already is like that, but not because of the cloud. Those of us who began with computers in the era of the command line were forced to learn the internals of operating systems, and many ended up turning this hobby into a job.

Youngsters nowadays start with very polished interfaces and smartphones, so even if the cloud wasn't there it would take them a decade to learn systems design on-the-job, which means it wouldn't happen anyway for most. The cloud nowadays mostly exists because of that dearth of system internals knowledge.

While there still are around people who are able to design from scratch and operate outside a cloud, these people tend to be quite expensive and many (most?) tend to work for the cloud companies themselves or SaaS businesses, which means there's a great mismatch between demand and supply of experienced system engineers, at least for the salaries that lower tier companies are willing to pay. And this is only going to get worse. Every year, many more experienced engineers are retiring than the noobs starting on the path of systems engineering.


It’s all anecdotal, but in my experience it’s usually the opposite: a bored senior engineer wants to use something new and picks a bespoke AWS service for a new project.

I am sure it happens a multitude of ways but I have never seen the case you are describing.


I’ve seen your case more often than the ransom scenario too. But even more common: an early-to-mid-career dev saw a cloud pattern trending online, heard it was a new “best practice,” and so needed to find a way to move their company to using it.


Is that what I should be doing? I'm just encouraging the devs on my team to read designing data intensive apps and setting up time for group discussions. Aside from coding and meetings that is.


Since when is "bored" a synonym for "dishonest"?


Please elaborate. Such a bold statement with zero logic around it.


> 3) New engineer comes in and panics

> 4) Ends up using a "managed service" to relieve the panic

It's not as though this is unique to cloud.

I've seen multiple managers come in and introduce some SaaS because it fills a gap in their own understanding and abilities. Then when they leave, everyone stops using it and the account is cancelled.

The difference with cloud is that it tends to be more central to the operation, so can't just be canceled when an advocate leaves.


> One of AWS's favorite situations

I'll give you an alternative scenario, which IME is more realistic.

I'm a software developer, and I've worked at several companies, big and small and in-between, with poor to abysmal IT/operations. I've introduced and/or advocated cloud at all of them.

The idea that it's "more expensive" is nonsense in these situations. Calculate the cost of the IT/operations incompetence, and the cost of the slowness of getting anything done, and cloud is cheap.

Extremely cheap.

Not only that, it can increase shipping velocity, and enable all kinds of important capabilities that the business otherwise just wouldn't have, or would struggle to implement.

Much of the "cloud so expensive" crowd are just engineers too narrowly focused on a small part of the picture, or in denial about their ability to compete with the competence of cloud providers.


> Much of the "cloud so expensive" crowd are just engineers too narrowly focused on a small part of the picture, or in denial about their ability to compete with the competence of cloud providers

This has been my experience as well. There are legitimate points of criticism but every time I’ve seen someone try to make that argument it’s been comparing significantly different levels of service (e.g. a storage comparison equating S3 with tape) or leaving out entire categories of cost like the time someone tried to say their bare metal costs for a two server database cluster was comparable to RDS despite not even having things like power or backups.


You are welcome to criticise my DB cluster comparison: https://news.ycombinator.com/item?id=46910521


That leaves out staffing, backups, development and testing of a multi-location failover mechanism as robust as the RDS one, and a bunch of security compliance work if that’s relevant.

It’s totally possible to beat AWS, and volume is the way to do it: your admin’s salary doesn’t scale linearly with storage. But every time I’ve tried to account for all of the costs, it’s been close enough that it made sense to put people on things which can’t be outsourced.


If this database is a large portion of the infrastructure required then the fixed-ish costs don't scale so well, but a smaller cloud/hosting company should be considered.

But I have over 60 servers. Using the pricing calculator for the two AWS SaaS services that closely align with our primary service (40+ of those servers), we'd face a cost of over $1.2M/year if reserved for 3 years and paid upfront — that's for the service alone, I haven't added any bandwidth costs, or getting the data into those systems, and I've picked the minimum values for storage and throughput as I don't know what these should be. (Probably not the minimum.)

Add the remaining compute (~20 decent servers), a petabyte-scale storage pool, and all the rest, and the bill would likely exceed our entire IT budget including hardware, hosting, cloud services we do use, and all the salaries.

My rough estimate is our infrastructure costs would increase 8-10 times using AWS, our staff costs wouldn't reduce, and the risk to the budget would increase with variable usage.

This is tax money being spent, so I am asked every few years to justify why we aren't using cloud. (That's why I'm putting this much effort into a HN reply, the question was asked again recently.)

I know someone working in another country on essentially the same system for that country. They went all-in on AWS and pay every 1-2 months what we spend in a year, but have a fraction of our population/data.


From what I've seen, this can work as a stopgap until IT gets their hooks into the cloud system, in which case you circle back to paying the costs of incompetence plus the costs of the cloud (sometimes stacking on top of each other).


There's still a benefit in terms of infrastructure reliability. Recovery times are faster, backups more reliable, etc. Basically, vendor managed is better than customer managed in most situations, assuming a competent vendor.

Also, if the cloud systems are architected properly before IT gets hold of them, then they tend to retain their good properties for a long time, especially if others are paying attention to e.g. gitops pull requests.

My current company ended up replacing its (small) operations team in order to get people with cloud expertise. We hired the new team for the skills we needed. It's worked out well.


> 7) Paired with some "certified AWS partner"

What do you think RedHat support contracts are? This situation exists in every technology stack in existence.


Great comment. I agree it's a spectrum, and for those of us who are comfortable at (4), like yourself and probably us at Carolina Cloud [0] as well, (4) seems like a no-brainer. But there's a long tail of semi-technical users who are more comfortable at 2-3 or even 1, which is what ultimately traps them in the ransomware-adjacent situation that is a lot of the modern public cloud. I would push back on "usage-based". Yes, it is technically usage-based, but the base fee also goes way up, and there are sometimes retainers on these services (i.e. minimum spend). So "usage-based" is not wrong, but what it usually means is "more expensive, and potentially far more expensive".

[0] https://carolinacloud.io, derek@


The problem is that clouds have easily become 3 or 5 times the price of managed services, 10x the price of option 3, and 20x the price of option 4. To say nothing of the fact that almost all businesses can run fine on "pc under desk" type situations.

So in practice cloud has become the more expensive option the second your spend goes over the price of 1 engineer.



Really it's for both. One of our cofounders came from a genomics background and so that's why we have the Marimo notebooks + R Studio Server + genomics container. But we've also had a lot of requests for general VMs so we made some of those as well.


Did you try to log in with GitHub or Gmail?


We have no egress fees, but you're right about Hetzner: they're cheap. We aim to offer more value-added software to make up for it. Also, that machine is in Europe; Americans don't like the latency (at least I don't), and Hetzner has been known to shut people down (within reason).

Also, I'm seeing that the most they offer RAM-wise for a dedicated US location is 192GB. We go to 512GB and will soon go beyond. I'm sure they'll get there soon, but that's a consideration as well.


Please go and check out the whole Hetzner offering; it already has everything you need.


Hetzner doesn’t offer their cheap dedicated servers in the US, only their cloud offering which is similarly priced to the OP’s example ($223).


We have about 1000 cores right now.

We're really excited for the AMD EPYC Venice — 256 physical cores each -> 512 vCPUs -> 1024 vCPUs on a single board with dual-socket. It will probably be about $40k per machine with these RAM prices, but we're definitely going to buy a few. A full data center on a single motherboard!

So we're limited on capacity since we own all our own hardware. Please do not use us for auto-scaling just yet. Our software would have no issue with us linking up other cloud machines such as AWS EC2 to our fleet and offering it there, which could help with auto-scaling, but we would not make any money on that and it would be a lot of engineering effort for us right now.


We don't have on-demand API-based bare metal provisioning right now. Sorry if that is misleading on the website. Our bare metal is OTC right now (over the counter).

For the rest of our provisioning (VM and container), I wrote the software myself. It's based on a Django app called the "master" that hosts the console and keeps track of who has rented what, etc., plus a bunch of "host" nodes that listen for instructions from the "master". Pure Python; the only thing that's in Go is the CLI.

I looked into Proxmox but ultimately decided I wanted full control. ZFS storage from Proxmox is something I do sometimes wish we had — going to offer s3-compatible storage very soon but I know Proxmox does ZFS out of the box really well.


How do you sanitize hardware between workloads?


Yes, we're going to offer this very soon. We have a bunch of SSDs, but we're still deliberating on what to use now that MinIO is moving toward closed source.


Garage (written in Rust) and SeaweedFS (written in Go) are what I've mostly heard recommended by the community.

Someone on lowendtalk recently created a very minimalist PHP S3 thing too, if you only need basic CRUD, but from what I've heard I'd recommend Garage.

I've also heard of someone creating a new Rust S3 application, but I forgot its name; maybe you can look out for it too and see what fits your goal best.

Also, the Coolify team was maintaining a Docker image of MinIO, but that was before the news that MinIO is going closed source, and they have since stopped providing Docker images, so I'm not sure how valid that is right now. There must be forks maintaining MinIO too, so you could look at those as well.

