> This is a conceptual error. There is no such thing as "the application as a whole". There is only the currently-deployed set of services, and their versions.
I'm using Martin Fowler's description (whom I admire greatly):
"In short, the microservice architectural style is an approach to developing a single application as a suite of small services" http://martinfowler.com/articles/microservices.html
My problem is knowing which microservice dependencies the application has.
> You should have a build, test, and deploy pipeline (i.e. continuous deployment) which is triggered on any commit to master of any service. The "test" part should include system/integration tests for all services, deployed into a staging environment. If all tests pass, the service that triggered the commit can be rolled out to production. Ideally that rollout should happen automatically, should be phased-in, and should be aborted and rolled back if production monitoring detects any problems.
We have this, and it is what builds the manifest (as a git repo containing the submodules).
The problems occur when you have multiple commits to different services. If a build is marked "red" because it fails a "future integration" then it just means that it is failing at that point in time. It may, following a commit of a dependency, become valid. However, it would need to have its build manually retriggered in order to be classified as such.
This becomes cumbersome when you have a not insignificant number of services being committed to on a regular basis.
I admire Martin Fowler as well but he is so wrong here.
Microservices should be designed, developed and deployed as 100% autonomous entities. The whole point of them is that you can reuse individual services for new applications, or integrate them with third-party services (e.g. from a different team in your company), without affecting other applications. Otherwise, what is the point of doing this?
Using git submodules absolutely violates this and is a bad idea. I would have separate repos and deploy scripts and then set the version number of each service to the build number which guarantees traceability and makes it easier for change management.
The whole git submodule thing is solely a manifest of all the git refs for compliant microservices. Let's just pretend that it is a text file, and call it the service manifest (why the submodule hate!) :)
The microservices are indeed "deployed" independently, based on the version (ref) as indicated in the service manifest.
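Taking the "pretend it is a text file" framing literally, here is a minimal sketch of what the manifest, and a deploy against it, could look like. Service names, refs, and the file name are all made up.

    # service-manifest.txt (one pinned git ref per deployable service):
    #   billing-service  3f2a9c1
    #   auth-service     9d81b2c
    #   search-service   1c44f09
    #
    # deploying one service at exactly the ref the manifest records:
    REF=$(awk -v s=billing-service '$1 == s {print $2}' service-manifest.txt)
    (cd billing-service && git fetch origin && git checkout "$REF" && ./start.sh)

Whether it lives as a text file or as submodule pointers, the information recorded is the same: one immutable ref per service.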
> I would have separate repos and deploy scripts and then set the version number of each service to the build number which guarantees traceability and makes it easier for change management.
They have separate repos, and they have separate "versions" (we use the git ref, which is constant).
As for why I use submodules, this is (in pseudo-bash) what a service deployment looks like:
    # update the manifest repo (the superproject) to its latest commit
    git pull
    # check out the submodule at the exact ref pinned by the manifest
    git submodule update --init SERVICE_NAME
    cd SERVICE_NAME
    ./start.sh
Microservices that depend on other microservices obviously are not entirely autonomous. You have to version large API changes and integration-test the whole chain of dependent services on every change.
It's not unlike libraries, just linked via the network.
> "In short, the microservice architectural style is an
> approach to developing a single application as a suite
> of small services"
Sure, it's a single application from the perspective of the user, and that's a great abstraction to keep in mind when you're talking about things like uptime. But from the perspective of the operator, it's purely an abstraction -- a name you give to the collection of things you're manipulating.
As such, I'm not sure building a single repo of git submodules makes much sense. Maybe for accountability reasons, but not for actual operations. The *raison d'être* of microservices is to decouple the lifecycle of each service totally. By recombining them into a tagged release repo, you're arguably subverting that intent.
> The problems occur when you have multiple commits to
> different services. If a build is marked "red" because
> it fails a "future integration" then it just means that
> it is failing at that point in time. It may, following a
> commit of a dependency, become valid. However, it would
> need to have its build manually retriggered . . .
Yep. To solve this problem, your CD system needs to automatically trigger rebuilds (and re-tests) of dependent (downstream) services when a dependency (upstream) changes. The easiest way to do that, in my experience, is to have a manifest in every repo that specifies dependencies as a flat list, which the CD system can parse to build a dependency graph.
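A rough sketch of that idea, assuming a flat deps.txt in every repo and one checkout per service on the CI box (file name, paths, and the CI trigger URL are hypothetical, not any particular CD product):

    #!/usr/bin/env bash
    # Sketch: given the service that just changed, find every service whose
    # deps.txt lists it (directly or transitively) and re-trigger its build.
    set -eo pipefail

    CHANGED="$1"                   # e.g. "auth-service"
    REPOS_ROOT="/ci/checkouts"     # assumed location of one checkout per service

    declare -A SEEN
    queue=("$CHANGED")

    while [ "${#queue[@]}" -gt 0 ]; do
      current="${queue[0]}"
      queue=("${queue[@]:1}")
      for deps in "$REPOS_ROOT"/*/deps.txt; do
        [ -f "$deps" ] || continue
        svc="$(basename "$(dirname "$deps")")"
        if grep -qx "$current" "$deps" && [ -z "${SEEN[$svc]:-}" ]; then
          SEEN[$svc]=1
          queue+=("$svc")
          echo "re-triggering build for downstream service: $svc"
          # curl -X POST "https://ci.example.com/job/$svc/build"   # hypothetical CI hook
        fi
      done
    done

The breadth-first walk is what turns the flat per-repo lists into the transitive dependency graph the parent describes.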
Tangential point, but -- with this, we're just scratching the surface of the incidental complexity that comes when you move to a microservices architecture. This is the stuff that isn't in all of those exalting blog posts -- beware! :)
You shouldn't have services rebuilding/redeploying other services. We don't insist that Facebook rebuilds their applications if my Facebook app changes. There should be (a) an API client, (b) a stub version of that service's APIs, or (c) a deployed test service for any services you depend on.
The architecture of microservices is pretty simple. It is a mini version of how the internet works.
> The easiest way to do that, in my experience, is to have a manifest in every repo that specifies dependencies as a flat list, which the CD system can parse to build a dependency graph.
That's pretty much how we manage the contracts.
> To solve this problem, your CD system needs to automatically trigger rebuilds (and re-tests) of dependent (downstream) services when a dependency (upstream) changes
This is GOLD. I think you may have just solved the problem with something so obviously simple that it has made me feel like a fool... Many thanks!!!