
Not really. For example, it's easier to mock a microservice than a module for testing purposes. Let's say you have component A and component B, where A depends on B (the dependency implemented via a runtime sync or async call), and B is computationally intensive or has resource requirements that make it hard or impossible to test on a developer's machine. You may want to test only A: with a monolithic architecture you'll have to produce another build of the application that contains a mock of B (or you need something like OSGi for runtime module discovery). When both components are implemented as microservices, you can start a container with a mock of B instead of the real B.


Mocking modules for unit testing has been a solved problem for decades in almost every language.


And running an integration test on a single modular application is easy. Doing it for a distributed system is very hard.


Running an E2E black-box test is equally simple for all kinds of architectures, especially today, when it's so easy to create a clean test environment with multiple containers even on a developer's machine. It may be harder to automate this process for a distributed system, but, frankly speaking, I don't see a big difference between a docker-compose file and a launch script for a monolith. I've been writing such tests for distributed systems casually for several years, and from my personal experience it's much easier to process the test output and debug microservices than monolithic applications.
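
As a minimal sketch of such a test environment (service names and images here are hypothetical), a docker-compose file can stand up the service under test next to a lightweight mock of its heavy dependency:

```yaml
# Hypothetical compose file: service A under test, with a mock of B
version: "3"
services:
  service-a:
    image: example/service-a:latest
    environment:
      - B_URL=http://mock-b:8080   # point A at the mock instead of the real B
    depends_on:
      - mock-b
  mock-b:
    image: example/mock-b:latest   # stub that returns canned responses
```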


> it's much easier to process the test output and debug the microservices than monolithic applications.

You find it easier to debug end-to-end tests of a microservice architecture than of a monolith? That's not my experience. How do you manage to put all the events side by side when they are spread across a dozen files?


Using only files for logging is the last thing I would do in 2018.

I use Serilog for structured logging. Depending on the log destination, your logs are either stored in an RDBMS (I wouldn’t recommend it) or created as JSON with name-value pairs that can be sent directly to a JSON data store like Elasticsearch or Mongo, where you can do ad hoc queries.
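
Serilog is a .NET library, but the idea is language-agnostic. As a rough sketch (field names are made up), a structured log event is just a set of name-value pairs serialized as JSON:

```java
import java.time.Instant;

public class StructuredLogSketch {
    // Minimal sketch of a structured log event: name-value pairs as JSON,
    // ready to ship to a store like Elasticsearch for ad hoc queries.
    static String logEvent(String level, String message, int orderId) {
        return String.format(
            "{\"timestamp\":\"%s\",\"level\":\"%s\",\"message\":\"%s\",\"OrderId\":%d}",
            Instant.now(), level, message, orderId);
    }

    public static void main(String[] args) {
        System.out.println(logEvent("Information", "Order created", 1234));
    }
}
```

Because `OrderId` is a first-class field rather than text inside the message, you can query for it directly instead of grepping log files.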

https://stackify.com/what-is-structured-logging-and-why-deve...


I just don't use files for anything (services should be designed with the assumption that a container can be destroyed at any time, so files are simply not an option here). If you are talking about logs, there are solutions like Graylog to aggregate and analyze them.


Easy until you have 100,000 of them, at which point it's expensive and slow to run them for every dev. (At that point you have enough devs that microservices 100% make sense, though.)


Mocking modules for unit testing - yes. Testing the specific build artifact with a module replaced by a mock - no, it is not.


Use a dependency injection framework where flags at the composition root determine whether the “real” implementation class or the mock class is used, based on the environment.

IOrderService -> OrderService in production.

IOrderService -> FakeOrderService when testing.
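
A minimal sketch of that composition-root switch (class names and the flag are hypothetical; a real project would use a DI container rather than a hand-rolled factory):

```java
interface OrderService {
    int total(int orderId);
}

// "Real" implementation, wired in for production.
class ProductionOrderService implements OrderService {
    public int total(int orderId) { return orderId * 2; } // stand-in for real work
}

// Fake wired in when testing: cheap and deterministic.
class FakeOrderService implements OrderService {
    public int total(int orderId) { return 42; }
}

public class CompositionRoot {
    // The flag at the composition root decides which implementation is used.
    static OrderService orderService(String env) {
        return "test".equals(env) ? new FakeOrderService()
                                  : new ProductionOrderService();
    }

    public static void main(String[] args) {
        System.out.println(CompositionRoot.orderService("test").total(5)); // 42
    }
}
```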


You will end up with something like OSGi. That can be the right choice, but it is also quite a 'heavyweight' architecture.

For a certain class of applications and organizational constraints, I also would prefer it. But it requires a much tighter alignment of implementation than microservices (e.g., you can't just release a new version of a component, you always have to release the whole application).


See the below response for more details. But that’s not how modern testing works (neither is my feature flag suggestion).

https://www.developerhandbook.com/unit-testing/writing-unit-...

It works similarly in almost every language.

> For a certain class of applications and organizational constraints, I also would prefer it. But it requires a much tighter alignment of implementation than microservices (e.g., you can't just release a new version of a component, you always have to release the whole application).

Why is that an issue with modern CI/CD tools? It’s easier to just press a button and have your application go to all of your servers based on a deployment group.

With a monolith in a statically typed language, refactoring becomes a whole lot easier. You can easily tell which classes are being used, do globally guaranteed-safe renames, and when your refactoring breaks something, you know at compile time, or with the right tooling even before you compile.


> It’s easier to just press a button and have your application go to all of your servers based on a deployment group.

It's not so much about the deployment process itself (I agree with you that this can be easily automated), but rather about the deployment granularity. In a large system, your features (provided by either components or by independent microservices) usually have very different SLAs. For example, credit card transactions need to work 24x7, but generating the monthly account statement for these credit cards is not time-critical. Now suppose one of the changes in a less critical component requires a database migration which will take a minute. With separate microservices and databases, you could just pause that microservice. With one application and one database, all teams need to be aware of the highest SLA requirements when doing their respective deployments, and design for it. It is certainly doable, but requires a higher level of alignment between the development teams.

I agree with your remark about refactoring. In addition, when doing a refactoring in a microservice, you always need a migration strategy, because you can't switch all your microservices to the refactored version at once.


> With separate microservices and databases, you could just pause that microservice. With one application and one database, all teams need to be aware of the highest SLA requirements when doing their respective deployments, and design for it. It is certainly doable, but requires a higher level of alignment between the development teams.

That’s easily accomplished with a Blue-Green deployment. As for the database, you’re usually going to have replication set up anyway, so your data already lives in multiple databases.

Once you are comfortable that your “blue” environment is good, you can slowly start moving traffic over. I know you can gradually move x% of traffic every y hours with AWS. I am assuming on prem load balancers can do something similar.
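
A minimal sketch of that gradual shift (names are hypothetical; in practice the load balancer makes this decision): route a fraction of requests to the green environment and periodically raise that fraction:

```java
public class WeightedRouter {
    private double greenFraction; // share of traffic sent to the new ("green") env

    WeightedRouter(double greenFraction) { this.greenFraction = greenFraction; }

    // r is a uniform random draw in [0,1) supplied by the caller,
    // which keeps the routing decision deterministic and testable.
    String route(double r) {
        return r < greenFraction ? "green" : "blue";
    }

    // Raise green's share by x (as a fraction), e.g. every y hours.
    void shift(double x) {
        greenFraction = Math.min(1.0, greenFraction + x);
    }

    public static void main(String[] args) {
        WeightedRouter router = new WeightedRouter(0.10); // start with 10% on green
        System.out.println(router.route(0.05)); // green
        System.out.println(router.route(0.50)); // blue
        router.shift(0.40);                     // now 50% on green
        System.out.println(router.route(0.45)); // green
    }
}
```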


Could you elaborate a bit more?

If your database is a cluster, then it is still conceptually one database with one schema. You can't migrate one node of your cluster to a new schema version and then move your traffic to it.

If you have real replicas, then all writes still need to go to the same instance (cf. my example of credit card transactions). So I also don't understand what your migration strategy would look like.

blue-green is great for stateless stuff, but I fail to see how to apply it to a datastore.


Do you realize that this is actually an anti-pattern that adds unnecessary complexity and potential security problems to your app? Test code must be separated from production code; that's something every developer should know.


It’s basically a feature flag. I don’t like feature flags but it is a thing.

But if you are testing an artifact, why isn’t the artifact testing part of your CI process? What you want to do is no more or less an anti-pattern than swapping out mock services to test a microservice.

I’m assuming the use of a service discovery tool to determine what gets run. Either way, you could screw it up by it being misconfigured.


First of all, it is test code, no matter whether it's implemented as a feature flag or in any other way. Test code and test data shall not be mixed with production code, for many well-documented and well-known reasons: security, additional points of failure, additional memory requirements, impact on architecture, etc.

>But if you are testing an artifact, why isn’t the artifact testing part of your CI process?

It is, and it shall be, part of the CI process. The commit gets a build number assigned in a tag, the artifact gets the version and build number in its name and metadata, deployment to the CI environment is performed, and tests are executed against the specific artifact. So every time you deploy to production you have proof that the exact binary being deployed has been verified in its production configuration.

>I’m assuming the use of a service discovery tool to determine what gets run.

Service discovery is irrelevant to this problem. Substitution of mock can be done with or without it.


What exactly are you trying to accomplish?

If you are testing a single microservice and don’t want to test the dependent microservice - that is, if you are trying to do a unit test and not an integration test - you are going to run against mock services.

If you are testing a monolith you are going to create separate test assemblies/modules that call your subject under test with mock dependencies.

They are both going to be part of your CI process then and either way you aren’t going to publish the artifacts until the tests pass.

Your deployment pipeline either way would be some combination of manual and automated approvals promoting the same artifacts.

The whole discussion about which is easier is moot.

Edit: I just realized why this conversation is going sideways. Your initial assumptions were incorrect.

> you may want to test only A: with monolithic architecture you'll have to produce another build of the application, that contains mock of B (or you need something like OSGi for runtime module discovery).

That’s not how modern testing is done.

https://www.developerhandbook.com/unit-testing/writing-unit-...


> What exactly are you trying to accomplish?

A good test must verify the contract on the system boundaries: in the case of an API, verification is done by calling the API. We are discussing two options here: an integrated application hosting multiple APIs, and a microservice architecture. Verification on the system boundaries means running the app, not running a unit test (unit tests are good, but serve a different purpose). Feature flags only make it worse, because testing with them covers only non-production branches of your code.

> Your initial assumptions were incorrect.

With nearly 20 years of engineering and management experience, I know very well how modern testing is done. :)


> Verification on the system boundaries means running the app, not running a unit test

What is an app at the system boundaries if not a piece of code with dependencies?

If you have a microservice FooService that calls BarService, the "system boundary" you are trying to test is FooService using a fake BarService. I'm assuming you're calling FooService over HTTP using a test runner like Newman and asserting on the results.

In a monolithic application you have a class FooModule that depends on BarModule, which implements IBarModule. In your production application you create FooModule:

    var x = new FooModule(new BarModule());
    var y = x.Baz(5);

In your unit tests, you create your FooModule with a fake:

    var x = new FooModule(new FakeBarModule());
    var actual = x.Baz(5);
    Assert.AreEqual(10, actual);

And run your tests with a runner like NUnit.

There is no functional difference.

Of course FooModule can be at whatever level of the stack you are trying to test - even the Controller.



