Hacker News | trentdk's comments

Seemed like a fine comment. I now know there are people who think those mentioned technologies are important enough to require mention in this sort of write-up. I’m now curious, and will look them up.


On the flip side to your closing statement, YouTube has been a boon to DIY. What used to be trade secrets are now on display by people of all professions. Anecdotally, I’ve remodeled nearly my entire house over the past ten years with YouTube.


yeah, the democratization of knowledge was 100% worth it. the only thing we can do from here is try to make it better


Insightful, thanks.


My mind just exploded. Would you have to breach a threshold to jump to the next state?


From my understanding, you start with a 0% chance to be in one state and 100%[1] to be in the other. Then the probability starts shifting more and more, till it is the other way around.

At least, that is what I think it is; I have only a very limited knowledge of quantum physics. If an expert could confirm/deny my reasoning, it'd be appreciated.

[1] That is probably an exaggeration, as it would likely span more than just 2 possible states at any one time, but I chose two as a way to make it more understandable.
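To the parent's question about a "threshold": in the idealized textbook picture there isn't one; the probabilities shift continuously. A minimal sketch, assuming an idealized two-level system with a made-up transition frequency `omega` (all names illustrative, not from the article):

```python
import math

def two_state_probabilities(t, omega=1.0):
    """Idealized two-level system: probability flows continuously
    between state |0> and state |1>; there is no threshold to cross.
    omega is an assumed transition (Rabi-style) frequency."""
    p1 = math.sin(omega * t / 2) ** 2  # probability of being in state |1>
    p0 = 1.0 - p1                      # probability of being in state |0>
    return p0, p1

# At t=0 the system is certainly in |0>; at t=pi/omega certainly in |1>;
# in between, both probabilities are nonzero and always sum to 1.
```

This is only the two-state simplification the footnote mentions; a real system has many more states.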


It’s true for converting liquid into gas.


I think what we’re concluding here is that using Google to obscure the linguistic style is flawed, because a state actor could obtain the original linguistic style from Google records, or from their own records of snooped traffic.

In other words: the blog should find a way to obscure linguistic style offline.


The idea is to stop the respiratory droplets that contain viruses. Droplets measure on the order of micrometers, which is the range surgical masks are designed to filter effectively.

A Google search result for droplet sizes: https://www.pnas.org/content/115/10/E2386

A Google search result for effectiveness of various masks related to particle size: https://blogs.cdc.gov/niosh-science-blog/2009/10/14/n95/


I wish this was the top comment; it would save us a lot of debate :).


A major benefit to microservices (over monoliths) that I haven’t seen mentioned yet is testability. I find it hard, if not impractical, to achieve a healthy Pyramid of Tests on a large monolith.

For example: a high level, black box test of a service endpoint requires mocking external dependencies like other services, queues, and data stores. With a large monolith, a single process might touch a staggering number of the aforementioned dependencies, whereas something constrained to be smaller in scope (a microservice) will have a manageable number.
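As a rough sketch of what such a test might look like when the scope is small, here is a black-box-style test with a single mocked external dependency (the service, endpoint, and prices are all made up for illustration):

```python
from unittest import mock

# Hypothetical service code: one endpoint, one external dependency.
def fetch_price(item_id):
    """Stands in for a call to another service; never hit in tests."""
    raise RuntimeError("real network call")

def quote_endpoint(item_id, qty):
    """The endpoint under test."""
    return {"item": item_id, "total": fetch_price(item_id) * qty}

# Because the microservice has exactly one external dependency,
# only one thing needs to be mocked to exercise the endpoint.
with mock.patch(__name__ + ".fetch_price", return_value=2.5):
    result = quote_endpoint("widget", 4)
    assert result == {"item": "widget", "total": 10.0}
```

With a monolith touching dozens of such dependencies, the equivalent setup becomes the staggering mocking effort described above.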

I enjoy writing integration and API tests of a microservice. The ones that we manage have amazing coverage, and any refactor on the inside can be made with confidence.

Our monoliths tend to only support unit tests. Automated end-to-end tests exist, but due to the number of dependencies these things rely on, they’re executed in a “live” environment, which makes them hardly deterministic.

Microservices allow for a healthy Pyramid of Tests.


This is absolutely a fallacy. If you're testing a microservice and stubbing the other microservices, you aren't doing the equivalent of the high-level test on the monolith. You're doing something like a macro-unit test of the monolith with internal services stubbed.


I think there's a tendency to believe that a microservice written by someone else can be replaced with a stub that just validates JSON or something.

But in my experience, that thinking leads to bugs where the JSON was correct, but it still triggers an error when you run real business logic on it.

It's an easy trap to fall into because that microservice exists to abstract away that business logic, but you can't just pretend it doesn't exist when it comes to testing.

So stubs may be good for unit tests, but only if there are integration tests to match.


It's also useful if the team providing the service can supply a fake or a stub with interesting pre-programmed scenarios. This reduces the number of false assumptions someone can make, and it serves as a sort of living documentation (assuming people update their fakes). Something like contract testing (Pact etc.) can also be useful, although I haven't seen it used in practice yet.
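A hand-rolled sketch of the contract-testing idea, without a real tool like Pact (field names and types are invented for illustration):

```python
# The consumer records what it expects from the provider...
CONTRACT = {"user_id": int, "email": str}

def satisfies_contract(response, contract=CONTRACT):
    """Check a provider response (or a team-supplied fake)
    against the consumer's recorded expectations."""
    return all(isinstance(response.get(k), t) for k, t in contract.items())

# ...and both the real service and its fake get verified against it,
# so the fake can't silently drift from the real provider.
assert satisfies_contract({"user_id": 42, "email": "a@b.com"})
assert not satisfies_contract({"user_id": "42"})  # wrong type, missing field
```

Real contract-testing tools add versioning and provider-side verification on top of this basic shape.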


Microservice testing comes with version-combination hell.

If you have 10 microservices, each of which can be on one of two versions, that's 1024 combinations. How do you test that?
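The arithmetic behind that figure, for anyone who wants to check it (2 versions per service, 10 services):

```python
from itertools import product

versions = [("v1", "v2")] * 10        # 10 services, 2 candidate versions each
combos = list(product(*versions))     # every deployable combination

assert len(combos) == 2 ** 10 == 1024
```

Each additional service with two live versions doubles the count again.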


Sounds like the services shouldn't be split up into 10 if there's that much dependency going on.

Like, services are an abstraction. If one service has to call all other 9 services, and the same occurs for the other 9 services -- then that's a monolith acting over the network.


I'm yet to see a system that consists of versions of code other than "new" and "current". You test against changes only; what you described is some mess in deployed versions / version management.


How is this different from what I'm describing?

"New" and "current" are two different versions.


In that you always test against only the versions you have deployed, plus the new version of a single service.

Which reduces your exaggerated 1024 cases to 1.


OK, but then you have a very controlled way of deploying each service.

Each team can't just deploy a new version of their microservice when it makes sense to them.

So your collection of microservices becomes a bit of a distributed monolith, losing some of the classic microservice advantages.

Or so it seems to me. I just read about this stuff, have never used it. Happy to be educated.


It's losing "some" advantages of startup-grade microservices and gaining the maintainability advantages of a "Netflix/Facebook"-level grid... It depends on your scale. Shipping shit fast is often not the best solution at that scale; doing it right is. And I have already explained to someone else in this thread why that approach is important.


This is true for moderately sized microservices. If your microservices are too small, though, it's essentially impossible to write integration tests of the overall system as a whole; any such test would need to spin up N different dependencies.


So a solution has now caused problems that were solved 30 years ago.


> I find it hard, or improbable to achieve a healthy Pyramid of Tests on a large monolith

I'd venture to say that this is a strong indication that You're Holding it Wrong

> a high level, black box test of a service endpoint

Then maybe don't do these kinds of high-level black box tests?

Because...

> requires mocking external dependencies

...if you're stubbing out everything, then it's not actually a high-level, black-box test. So no use pretending it is and having all these disadvantages.

Instead, use hexagonal architecture to make your code usable and testable without those external connections and only add them to a well-tested core.
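A minimal ports-and-adapters sketch of that suggestion (all names are illustrative): the core logic depends only on an interface it owns, so it can be tested with an in-memory adapter and no mocking of remote calls:

```python
class OrderStore:
    """The 'port': an interface the core defines and owns."""
    def save(self, order):
        raise NotImplementedError

class InMemoryStore(OrderStore):
    """A test adapter; a production adapter would talk to a real DB."""
    def __init__(self):
        self.orders = []
    def save(self, order):
        self.orders.append(order)

def place_order(store: OrderStore, item, qty):
    """Well-tested core logic, oblivious to external connections."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    order = {"item": item, "qty": qty}
    store.save(order)
    return order

store = InMemoryStore()
place_order(store, "book", 2)
assert store.orders == [{"item": "book", "qty": 2}]
```

The external connections are then exercised separately, in a thin layer around this core.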

See: Why I don't Mock (2014) https://blog.metaobject.com/2014/05/why-i-don-mock.html

Discussed at the time: https://news.ycombinator.com/item?id=7809402


I appreciate your thoughts, but they are sad thoughts to me.

Isn’t there something about “exploration” that seems important to a non-negligible number of humans? I don’t want everything we do to serve the rat race we’re in. Let’s lose our money on something amazing.


Sure, as a society we do some things purely for emotional reasons. But we should be honest about that - we shouldn't pretend that the reason we're building the Washington monument is its potential use as a grain silo.

There's a lot more to exploration than space exploration (I think exploring something like the human brain will end up having a greater emotional and practical impact myself). But even if we just look at space exploration, we don't really get a lot from human space flight. As I said, in situations where it's possible to use human astronauts, robots seem like they would be more effective. And for most of the places in our solar system, we simply won't be able to use humans at all anytime soon. The missions to, say, Europa are going to be done by robots.

Still, many people, including myself, enjoyed watching the launch. Maybe something like this is more analogous to a cathedral or memorial.


Light lag is one reason for humans to go. Until we crack robotics and AI to the point where science can do itself (which I doubt will happen in the next 30-50 years), we'll still need humans micromanaging the robots remotely. That's much easier to do with HD video streams and millisecond RTT from a science ship orbiting the body in question than with occasional photos that take hours to send, on top of the 3-22 minutes of lag we have now with Mars.

Not to mention, probes and rovers sent far away all have to go through one of the few Deep Space Network antennas, which can only transmit so much, so fast[0], and are a generally scarce resource. Forget about e.g. running 20 simultaneous robotic missions in different areas of Mars.

--

[0] - https://en.wikipedia.org/wiki/NASA_Deep_Space_Network#Curren...
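The rough numbers behind the 3-22 minute figure (one-way light time to Mars; the distances below are approximate, not from the thread):

```python
C_KM_S = 299_792          # speed of light, km/s
closest_km = 54.6e6       # approximate minimum Earth-Mars distance
farthest_km = 401e6       # approximate maximum Earth-Mars distance

for d in (closest_km, farthest_km):
    minutes = d / C_KM_S / 60
    print(f"{d / 1e6:.0f} million km -> {minutes:.1f} min one-way")
```

Round-trip command latency is double that again, which is why teleoperation from orbit is so attractive.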


> which I doubt it'll happen in the next 30-50 years

I have to ask, why the rush? Why not wait 30-50 years until the robots are smart enough? Or until we can bioengineer humans that are better adapted to that harsh environment?

This feels a bit like a real time strategy game, where someone at the start of the game wants to spend all their resources on big expensive research projects near the end of the tech tree. You're always better off building up the economy so you have enough surplus to tackle those expensive items later.


This is building the relevant aspects of the economy. Sitting on our butts, playing zero-sum games to squeeze more money out of each other with ads and products increasingly optimized to be as fragile as possible... that isn't getting us anywhere.

(Note that RTS economy is based around acquiring new resources. You can't make minerals by advertising, speculating, or even having your soldiers trade with each other. There are games that try to simulate trade, e.g. Stellaris, but even there it behaves more like a mine than like an economy.)

(Also: the rush is because our lifespans are finite, and I'd like to at least see some of that before I die.)


The leading self-driving vehicle company was bankrolled by ad money, so I think your cynicism is misplaced.

Also, in terms of bettering our lives... I'd bet advertising beats the ISS. What has the ISS given us? I know hundreds of small businesspeople that would never have gotten off the ground without modern advertising. There are people out there that click on ads and buy things, and that makes them happy.


https://www.nasa.gov/mission_pages/station/research/benefits...

It certainly is way overpriced in my opinion, but for me, it has served as a reason to wake up excited on many of my days off.

That being said, ads are a great way to bootstrap so many awesome things. I think the cynicism comes from the feeling of privacy invasion.


Our notion of exploration is very much tied to ancient history, and in particular to exploring places that we could not learn anything about except by going there. The problem with space exploration is that we have already explored it to a huge degree just by using telescopes and remote probes. It's not entirely clear what humans could learn by going somewhere like Mars that couldn't otherwise be learned using satellites, robots and telescopes.


We send humans into space because we want to, science and engineering benefits are side rewards.


I didn’t feel like it painted just middle management as the good guys; it argued that all levels played some sort of managerial part.

Also, I think it argued that the costs saved from concentrating management went mostly to shareholders and executives.

