From your description, it sounds like your pain points don't come from versioning your microservice code; they come from versioning the data models that those microservices either 'own' or pass around to each other. While your approach of organizing your microservices as a collection of submodules is novel, it also defeats the purpose of microservices -- you should be able to maintain and deploy them independently without having to be concerned with interoperability.

While it's possible to alleviate some of that pain with versioned APIs that track changes to your data models, you'll still run into conflicts with the data you already have stored in schemaless DBs when those models change.

In a Node or frontend JS stack, I solve that problem with Vers [1]. In any other stack, the idea is fairly simple to replicate: version your models _inside_ your code by writing a short diff between the new version and the previous one every time it changes. Any time you pull data from a DB or accept it via an API, just slip in a call to upgrade it to the latest version. Now your microservice only has to be concerned with the most up-to-date version of that data, and your API endpoints can use the same methods to downgrade any results back to whichever version each endpoint is serving. Frankly, that also makes versioning your APIs far simpler: you move the versioning to the model layer (where all data manipulation really should live) and only need to version the actual API when you change how external services interact with it.
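
For concreteness, here's a rough sketch of the pattern in plain Node. This is not Vers's actual API -- the model, field names, and helpers are all made up for illustration:

    // Each entry describes only the delta between one version and the next.
    const userConverters = [
      // v0 -> v1: split a single "name" field into first/last
      {
        up: (u) => {
          const [firstName, ...rest] = (u.name || '').split(' ');
          return { firstName, lastName: rest.join(' '), email: u.email };
        },
        down: (u) => ({ name: `${u.firstName} ${u.lastName}`.trim(), email: u.email })
      },
      // v1 -> v2: add a createdAt timestamp, defaulting it for legacy records
      {
        up: (u) => ({ ...u, createdAt: u.createdAt || new Date(0).toISOString() }),
        down: ({ createdAt, ...rest }) => rest
      }
    ];

    // Upgrade a record from the version it was stored at to the latest version.
    function toLatest(record, fromVersion = 0) {
      return userConverters.slice(fromVersion).reduce((r, c) => c.up(r), record);
    }

    // Downgrade a latest-version record to whichever version a caller expects.
    function toVersion(record, targetVersion) {
      return userConverters.slice(targetVersion).reduceRight((r, c) => c.down(r), record);
    }

    // A v0 document pulled from a schemaless store...
    const stored = { name: 'Ada Lovelace', email: 'ada@example.com' };
    const current = toLatest(stored, 0);       // { firstName, lastName, email, createdAt }
    const forV1Caller = toVersion(current, 1); // drops createdAt again for a v1 consumer
    console.log(current, forV1Caller);

Each converter only has to describe the difference from its neighbor, so adding version N+1 never touches the converters for versions 0..N.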

And now your other microservices can update to the new schema at their leisure. No more dependency chain driven by your models.

[1] https://github.com/TechnologyAdvice/Vers



> you should be able to maintain and deploy them independently without having to be concerned with interoperability.

How do you ensure that a service can be consumed? Or that an event is constructed with the correct type or parameters? Surely interoperability is the key for any SOA?

Vers looks interesting - I'll have a look at that! Thanks!


You're 100% correct -- interoperability is the key, but you ensure that by making sure everything at the interface layer (whether that's a REST API, a polled queue, a notification stream, etc) is versioned, and the other microservices using that interface include the version they're targeting.

If you include the version at the data level, any time it gets passed into a queue or message bus or REST endpoint, your microservice can seamlessly upgrade it to the latest version, which all of its own code has already been updated to use. If a response is required back to the service that originated the request, use your same versioning package (Vers if you go with that) to downgrade it back down to the version the external service is expecting.
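
Sketched out, that boundary handling looks something like this (the envelope shape and helper names are invented -- substitute whatever your bus and model layer actually give you):

    // Stand-ins for the model layer's converter chain described above.
    const LATEST = 3;
    const upgrade = (body, fromVersion) => ({ ...body, _schema: LATEST });    // fromVersion -> LATEST
    const downgrade = (body, toVersion) => ({ ...body, _schema: toVersion }); // LATEST -> toVersion

    function handleMessage(envelope) {
      // 1. Upgrade whatever arrived to the schema this service's code expects.
      const payload = upgrade(envelope.body, envelope.schemaVersion);

      // 2. Internal logic only ever deals with the latest shape.
      const result = { ...payload, processed: true };

      // 3. Downgrade the reply to the version the caller said it targets.
      return {
        schemaVersion: envelope.schemaVersion,
        body: downgrade(result, envelope.schemaVersion)
      };
    }

    console.log(handleMessage({ schemaVersion: 2, body: { orderId: 'abc-123' } }));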

If your interface layer is more complex -- with responses that change independently of the data -- that calls for a versioned API: either throw /v1/* or /v2/* into your URLs, or accept a header that declares the version. But even then, you can drastically simplify changes to the model itself by implementing model versioning behind the scenes.
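
The URL-versioned flavor ends up looking something like this (Express used purely as an example; the paths and fields are hypothetical):

    const express = require('express');
    const app = express();

    // One canonical, latest-shape lookup shared by every API version.
    async function findUser(id) {
      return { id, firstName: 'Ada', lastName: 'Lovelace', createdAt: '1970-01-01T00:00:00Z' };
    }

    // Per-version downgrades from the latest model shape to what each API version promised.
    const shapeFor = {
      v1: (u) => ({ id: u.id, name: `${u.firstName} ${u.lastName}` }), // v1 exposed a single name
      v2: (u) => u                                                     // v2 is the current shape
    };

    for (const version of ['v1', 'v2']) {
      app.get(`/${version}/users/:id`, async (req, res) => {
        const user = await findUser(req.params.id);
        res.json(shapeFor[version](user));
      });
    }

    app.listen(3000);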


> you ensure that by making sure everything at the interface layer (whether that's a REST API, a polled queue, a notification stream, etc) is versioned, and the other microservices using that interface include the version they're targeting.

How would you handle deprecation, for example, in the case of a complete service rewrite?


You answered that one yourself :) Deprecation means to publicly notify that some API endpoint has hit end-of-life, and that a better alternative is available. If you completely rewrite a service, it's your responsibility to implement the same interface that you had before on top of it. Then you deprecate it and also publish your new API or data schema. Once you get around to migrating the rest of your application's services away from the deprecated endpoints, the next version of the microservice in question can remove that old code entirely.

Imagine Twitter or AWS completely rewriting their backend -- if they were to announce to the public that at a specific time, their old API URLs would 404 and the new ones would go live, it would be a wreck. They'd support the old API through deprecated methods and tell users they have X months to migrate away, if they remove that layer at all. Stress-free SOA must employ that same level of discipline in order to stay stress-free.

--And, functionally, the much easier alternative here isn't to re-implement your old API on top of your shiny, ground-up rewrite; it's to reduce your old service to an API shell that proxies any requests to the new service in the new format. Far less work that way. Use more traditional API versioning for the much more common incremental updates. Unless you're rewriting your services every other week, in which case you have an entirely different problem ;-)
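
Roughly, the shell ends up being this small (service name, paths, and field mappings invented for the example; assumes Node 18+ for the built-in fetch):

    const express = require('express');
    const app = express();
    app.use(express.json());

    const NEW_SERVICE = process.env.NEW_SERVICE_URL || 'http://orders-v2.internal';

    // The legacy route and payload shape stay exactly as existing callers know them.
    app.post('/api/orders', async (req, res) => {
      // Translate the old request format into the rewritten service's format...
      const upstream = await fetch(`${NEW_SERVICE}/orders`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ customerId: req.body.customer_id, items: req.body.line_items })
      });
      const created = await upstream.json();

      // ...and translate the new response back into the shape legacy callers expect.
      res.status(upstream.status).json({ order_id: created.id, status: created.state });
    });

    app.listen(8080);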


> You answered that one yourself :) Deprecation means to publicly notify that some API endpoint has hit end-of-life, and that a better alternative is available.

That's exactly what the contracts do. The build would fail integration because the client is expecting the older version. It does mean that we don't have multiple incompatible versions of the service in the field, which is a mixed blessing.

> If you completely rewrite a service, it's your responsibility to implement the same interface that you had before on top of it.

Duplicate effort for redundant functionality?

> Then you deprecate it and also publish your new API or data schema. Once you get around to migrating the rest of your application's services away from the deprecated endpoints, the next version of the microservice in question can remove that old code entirely.

Which is exactly what the contracts do: wait until the services have started using the new API before the build is approved. The ONLY difference is that we can remove dead code and APIs immediately, rather than in the proverbial "tomorrow".
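
(To make "the contracts" concrete, the check boils down to something like this -- no particular tool implied, and the endpoint and fields are made up:)

    const assert = require('node:assert');

    // The shape a consumer recorded as the one it depends on.
    const consumerContract = {
      endpoint: '/users/42',
      requiredFields: ['id', 'name', 'email']
    };

    // What the provider actually returns today (stubbed here; in CI this would
    // come from hitting the freshly built service).
    const providerResponse = { id: 42, name: 'Ada Lovelace', email: 'ada@example.com' };

    // Runs in the provider's build: dropping or renaming a field a consumer
    // still uses makes this throw, and the build isn't approved.
    for (const field of consumerContract.requiredFields) {
      assert.ok(field in providerResponse, `breaking change: "${field}" is still required by a consumer`);
    }
    console.log(`contract for ${consumerContract.endpoint} satisfied`);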

> Imagine Twitter or AWS completely rewriting their backend -- if they were to announce to the public that at a specific time, their old API URLs would 404 and the new ones would go live, it would be a wreck

None of these services are public-facing. The public-facing services are exposed via a border API (which does indeed have a versioned API).


Vers is a really interesting idea. Is there anything analogous to this in Python?


I'd love to hear back if you find something like this. Up until Vers was released, I lazily boilerplated the pattern into whichever of my services needed it. Even so, it's been such a boon to our development cycles that it seems strange more people haven't independently arrived at the same approach.


IMO this makes more sense.



