Hacker News | sspies's comments

Thank you for creating dataclasses!


Hey there. Please go ahead and create an account using the form just below the hero. You should be able to provision a router right away.


I have been running my own postgres helm chart with read replication and pgpool2 for three years and never had major trouble. If you're interested check out https://github.com/sspies8684/helm-repo


Thanks for sharing!

Curious after a quick look: why does the primary container (the upstream Postgres image) have a volume for the replica that doesn't seem to be used, while the replica (a custom image that wipes $PGDATA at start) has no host volume, so its data lives inside the container? (Very possible I've missed something.)


Actually, each tenant just has a view of the full table. There is no need to store the same routes for each customer over and over again. Furthermore, routes are aggregated before they are installed on the switch.
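A rough sketch of what that aggregation step might look like — illustrative only, not their actual implementation; the prefixes are made up and Python's stdlib `ipaddress` module stands in for whatever they really use:

```python
import ipaddress

# Collapse a tenant's route entries into the minimal prefix set
# before installing them on the switch.
routes = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),   # adjacent: merges into 10.0.0.0/23
    ipaddress.ip_network("10.0.0.0/25"),   # already covered by 10.0.0.0/24
    ipaddress.ip_network("192.0.2.0/24"),
]
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('10.0.0.0/23'), IPv4Network('192.0.2.0/24')]
```

Four entries become two, which is the kind of saving that matters when the table is shared across all tenants.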


Can you talk more about how your product works? For instance this reminds me of an Internap product:

http://www.internap.com/network-services/miro-controller/

Their strategy, I believe, was that if you were optimizing for latency (as opposed to price, given multiple transit providers), it would run traceroutes out to your top 100 destination ASes, find the AS path with the lowest latency, and set the route preference within your AS to prefer a certain transit provider outbound.
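In pseudo-Python, the selection step I'm describing would be something like this — the AS numbers, provider names, and RTT figures are all invented for illustration:

```python
# Measured median RTTs from traceroutes, keyed by (destination AS, transit).
measured_rtt_ms = {
    ("AS64500", "transit_a"): 42.0,
    ("AS64500", "transit_b"): 35.5,
    ("AS64501", "transit_a"): 18.2,
    ("AS64501", "transit_b"): 27.9,
}

def preferred_transit(dest_as, rtt_table):
    """Pick the transit provider with the lowest measured latency
    for a given destination AS."""
    candidates = {t: rtt for (d, t), rtt in rtt_table.items() if d == dest_as}
    return min(candidates, key=candidates.get)

print(preferred_transit("AS64500", measured_rtt_ms))  # transit_b
print(preferred_transit("AS64501", measured_rtt_ms))  # transit_a
```

The controller would then express that preference via BGP (e.g. local-pref) so outbound traffic for each destination takes the faster exit.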

How does your product work given that you don't have any control or interaction with the AWS AS?


I can fully agree with your experience with DNS failover vs. anycast failover. I made a video of how I implemented anycast: https://www.youtube.com/watch?v=tsXpQHi7Udo


Interesting video, thanks for sharing


We are working on this as well at http://datapath.io. You can use multiple IaaS providers via global elastic IP addresses that you can announce from multiple locations at the same time. If you want to participate in our beta, please write to beta@datapath.io

Best, Sebastian


I do not like fat jars. We use JVM + mvn + appassembler and pack the output into docker images. Not a big deal.



You cannot do graceful failovers with Route 53. This is because DNS is a system of many dependent caching layers, and the TTL value is not reliable.


the TTL value is not reliable.

How is the TTL not reliable?

In my personal experience DNS cut-overs have never been a problem. Is there any major OS or ISP doing DNS wrong?


We have seen providers set the TTL value to 300 seconds no matter what, and Chrome caches names for as long as it's running... just two examples


It adds one hop (our appliance) to the path. It tries to increase the performance of your path by taking non-standard BGP metrics into account: congestion and latency


Does that mean all traffic passes through your appliance?

Where is it physically located, what about congestion of the appliance itself, what are your peerings?


Traffic passes through the appliance at the edge of the hosting provider. It takes congestion on all links, and on the appliance itself, into account and re-routes accordingly. Depending on the characteristics of the location, at least three very dissimilar transit connections are used.
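As a rough sketch of that kind of decision (not our actual algorithm — the weights and link figures are invented): score each transit by a combined latency/congestion cost and route over the cheapest one.

```python
links = {
    "transit_a": {"latency_ms": 12.0, "utilization": 0.92},  # fast but congested
    "transit_b": {"latency_ms": 18.0, "utilization": 0.40},
    "transit_c": {"latency_ms": 25.0, "utilization": 0.35},
}

def path_cost(link, latency_weight=1.0, congestion_weight=50.0):
    # Penalize congestion quadratically so cost climbs fast
    # as a link approaches full utilization.
    return (latency_weight * link["latency_ms"]
            + congestion_weight * link["utilization"] ** 2)

best = min(links, key=lambda name: path_cost(links[name]))
print(best)  # transit_b
```

The lowest-latency link loses here because it is nearly saturated, which is exactly the case plain BGP path selection ignores.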


at least three very dissimilar transit connections are used.

What does transit mean in this context; like between my VPC and your appliance? Can you give an example for e.g. an AWS region?

How is the appliance implemented, is it physical hardware, EC2 instances? Is it redundant, how do you scale it in terms of bandwidth?


Transit providers sell us access to the whole internet. We are in negotiations with them, but we will be more specific once we have the commitments. Technically, we choose at least three different providers, so our appliance has a good basis for decision-making. For the connection between your VPC and our appliance, we use the regular AWS DC API.

