
I think the main problem with all of these systems is that they are just so damn complex.

Docker, it's great if you have no state. But then if you have no state, shit is easy. Mapping images to high-speed storage securely and reliably is genuinely hard (unless you use NFSv4 and Kerberos).
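For the curious, the NFSv4 + Kerberos route looks roughly like this sketch (the hostname, export path, mount point, and image tag are all made up; `sec=krb5p` is the NFS mount option for Kerberos authentication with privacy):

```shell
# Mount an NFSv4 export with Kerberos auth + encryption on the host
# (assumes the host already has a keytab and krb5 configured):
sudo mount -t nfs4 -o sec=krb5p filer.example.com:/export/dbdata /mnt/dbdata

# Then bind-mount the secured storage into the container as a volume:
docker run -d --name mydb -v /mnt/dbdata:/var/lib/mysql mysql:5.6
```

The container itself never has to know about Kerberos; the host does the hard part.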

Mesos is just overkill for everything. How many people actually need shared-image programs bigger than a 64-core machine with 512 GB of RAM? (And how good are you at juggling NUMA or NUMA-like interfaces?)

I can't help thinking that what people would really like is just a nice, easy-to-use distributed CPU scheduler. Fleet, basically, just without the theology that comes with it.

Seriously, mainframes look super sexy right now. Easy resource management, highly scalable real UNIX. (no need to spin up/down, just spawn a new process)



> Mesos (sic) is just over kill for everything.

That's definitely an opinion. :)

I have seen Mesosphere deployed with great success.

As for state, this is one reason I'm not crazy about CoreOS - I feel more comfortable containerizing the application tier than the data tier, though both are certainly possible.

I'm really not eager to replace a highly tuned MySQL or Postgres machine with a container environment experiencing several levels of abstraction and redirection. I get frustrated enough trying to align partitions with block boundaries through RAID controllers.

But if you have 20 front-end app servers and 5 machines that run cron jobs, container services can help you utilize your capacity much better. I can't count how many times I've worked somewhere that we desperately needed capacity, but didn't have the budget to expand until we cleaned up a bunch of machines that were vastly underutilized.

Anyway, Mesosphere isn't perfect, and I've only ever used it moderately, but there's a lot of tooling out there which we can use.

Def agree on the weird theology of fleet, but also generally that it just doesn't do enough for me. It's way too much fucking trouble to say, "Run an http proxy on each physical machine".
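To be fair, fleet does have a global-unit mechanism for exactly the "proxy on each machine" case, via `Global=true` in the `[X-Fleet]` section - it's just about the only scheduling knob it gives you. A sketch (unit name and image are placeholders):

```ini
# proxy.service - hypothetical fleet unit; with Global=true, fleet
# schedules one instance on every machine in the cluster.
[Unit]
Description=HTTP proxy on every host

[Service]
ExecStartPre=-/usr/bin/docker rm -f %p
ExecStart=/usr/bin/docker run --name %p -p 80:80 nginx
ExecStop=/usr/bin/docker stop %p

[X-Fleet]
Global=true
```

Submitted with `fleetctl start proxy.service`, it lands on every host, current and future.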


The thing I don't get about the storage issue is this: does volume mounting into a Docker container meaningfully impact performance at all? No? Then can't you just punt on the migratable-storage question until it does? You wouldn't be any worse off than you are now, and upgrades to the db engine would be pretty easy, right? I dunno, maybe there's something I'm missing. Block storage, what are you gonna do?
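The performance question is easy to answer empirically. A rough sketch (paths and image are arbitrary, and `dd` is a blunt instrument, but it shows the shape of the test):

```shell
# Write 1 GB on the host filesystem directly, bypassing the page cache:
dd if=/dev/zero of=/data/test.img bs=1M count=1024 oflag=direct

# Same write from inside a container, through a bind-mounted volume:
docker run --rm -v /data:/data ubuntu \
  dd if=/dev/zero of=/data/test.img bs=1M count=1024 oflag=direct
```

A bind mount goes through the host filesystem directly, so the numbers are typically close; it's the container's layered filesystem you want to keep data out of.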


You might like Lattice[0], which is extracted from Cloud Foundry.

Basically, everyone is racing back to PaaSes. Heroku pioneered it and are still out there. Red Hat have OpenShift and are making noises about turning it into a Docker+Kubernetes thing in version 3. Cloud Foundry has been around for a few years now. There are other also-rans.

The thing is that apart from Heroku, you've not heard of installable PaaSes because they're being pitched to the Fortune 500s.

I've worked on Cloud Foundry and I work for the company which donates the most effort to the Foundation. It's been surreal to watch other people introduce pieces of a PaaS and see the excitement about the pieces. Meanwhile, we literally have an entire turnkey system already. If you need a full PaaS -- push your app or service and have it running in seconds, with health management, centralised logging, auto-placement, service injection, the works -- we built it already. Free and opensource, owned by an independent Cloud Foundry Foundation.

Anyhow, I'm obviously biased, YMMV etc etc. But I'd play with Lattice, to get the hang of things.

[0] http://lattice.cf/


"Red Hat have OpenShift and are making noises about turning it into a Docker+Kubernetes thing in version 3"

They've already decided that, and have even reached code-freeze. Their conference is later this month, so that's when it's going to all be announced and rollout plans detailed.

https://github.com/openshift/origin


I remember reading the design docs last year. Interesting times ahead.


There are some surprising places where you can enforce state where it seems impossible. Once you have that, the benefits coalesce.

That's why I built ShutIt, which we've used to encapsulate complex legacy environments to produce stateless builds:

http://ianmiell.github.io/shutit/

For example, teams can have a development environment (with _everything_ in it) rebuilt daily. As everyone uses it, everyone curates it, and they're all talking about the same thing - one pet if you like, rather than n, where n is the number of developer/development envs.


Docker is also great for hiding complexity during installation. Discourse.org is doing great work "enveloping" their complex Rails app in containers to ease the install process. And it's not stateless.


"hide complexity"

You mean hide ugly sprawling uninstallable messes of rube goldberg code? :)

I'm referring to things that require PHP+MySQL, Node, Redis, and an old JVM process running Struts/Spring all managed by nginx except for that one situation where Apache2 .htaccess semantics are required for rewrite rules in which case it runs Apache proxied by nginx.
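That kind of setup ends up looking something like this nginx fragment (purely illustrative; the upstream names, ports, and paths are invented):

```nginx
upstream node_app    { server 127.0.0.1:3000; }
upstream jvm_struts  { server 127.0.0.1:8080; }
# Apache runs only for the legacy area that needs .htaccess rewrite semantics
upstream apache_back { server 127.0.0.1:8081; }

server {
    listen 80;

    location /api/    { proxy_pass http://node_app; }
    location /app/    { proxy_pass http://jvm_struts; }

    # PHP+MySQL legacy code with .htaccess rules: proxy through to Apache
    location /legacy/ { proxy_pass http://apache_back; }
}
```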


By shipping only in a Docker container you will limit the audience for your app. That is the reason we ship GitLab as Omnibus packages (deb/rpm).


While deb/rpm packages are better than a monolithic container, having used one of your rpms, I'd say they're only just barely better.


What didn't you like about the Omnibus rpms?


Omnibus packages are a horrible idea. They are security issues waiting to happen.

A great example is the Chef server rpm. It is a 500 MB mini-distribution in one package. It has copies of Perl, Python, Ruby, and Erlang in it. If any of these has a security vulnerability, I have to wait for the maintainer to release a new version, and hope it includes the security fix.

They also tend to include things like python header files for no reason. You wouldn't compile against an Omnibus package, but they are there anyway. Examples of this are Sumologic's and Datadog's agents.
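You can check this for yourself by listing an omnibus package's payload (the package name here is a stand-in; exact names vary by vendor):

```shell
# List bundled interpreters shipped inside an omnibus rpm:
rpm -ql chef-server | grep -E '/bin/(ruby|perl|python|erl)$'

# And the stray development headers that serve no purpose at runtime:
rpm -ql chef-server | grep '\.h$' | head
```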


We are aware that we'll have to patch any security issues and have done so reliably. I agree it is not ideal and we'll always be slower than the distribution packages. On the other hand the installation is much faster to perform (2 minutes instead of 10 pages of copy pasting) and we're able to ship with very secure settings for the integration points (sockets, etc.). But we recognize that some people will prefer native packages and are sponsoring work to make native Debian packages.


I disagree that they're doing great work; to me it looks like they've basically painted themselves into a corner by including too many dependencies, and the only way to get out of this corner was Docker.

The result is that their forum software requires another operating system to run. Had they been more disciplined in their development approach, Docker would have been merely a convenient way to test Discourse, and not the only supported option.



