I have been running my own postgres helm chart with read replication and pgpool2 for three years and never had major trouble. If you're interested, check out https://github.com/sspies8684/helm-repo
Curious after a quick look: why does the primary container (the upstream pg image) have a volume for the replica that doesn't seem to be used, while the replica (a custom image that wipes $PGDATA at start) has no host volume, so its data lives inside the container? (Very possible I've missed something.)
Actually, each tenant has just a view of the full table. There is no need to save the same routes for each customer over and over again. Furthermore, aggregation is done before a route is installed on the switch.
Their strategy, I believe, was that if you were optimizing for latency (as opposed to price, if you had multiple transit providers), it would run traceroutes out to your top 100 destination ASes, find the AS path with the lowest latency, and set the route preference within your AS to prefer a certain transit provider outbound.
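To illustrate the idea (not their actual implementation), here is a minimal sketch of that selection step: given measured latencies per transit provider and destination AS, pick the provider with the lowest latency for each destination. Provider names, AS numbers, and latency figures are all made up.

```python
# Hypothetical data: round-trip times (ms) per (transit provider, destination AS),
# as might be gathered from traceroutes to your top destinations.
latencies = {
    ("transit_a", "AS15169"): 42.0,
    ("transit_b", "AS15169"): 31.5,
    ("transit_a", "AS32934"): 18.2,
    ("transit_b", "AS32934"): 25.7,
}

def best_provider(dest_as: str) -> str:
    """Return the transit provider with the lowest measured latency to dest_as."""
    candidates = {p: ms for (p, d), ms in latencies.items() if d == dest_as}
    return min(candidates, key=candidates.get)

print(best_provider("AS15169"))  # transit_b (31.5 ms beats 42.0 ms)
```

In practice the preference would then be expressed via BGP local-preference on outbound routes rather than in application code.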
How does your product work given that you don't have any control or interaction with the AWS AS?
We are working on this as well at http://datapath.io. You can use multiple IaaS providers via global elastic IP addresses that can be announced from multiple locations at the same time. If you want to participate in our beta, please write to beta@datapath.io
It adds one hop (our appliance) to the path. It tries to improve the performance of your path by taking non-standard BGP metrics into account: congestion and latency.
Traffic passes through the appliance at the edge of the hosting provider. It takes congestion on all links, as well as at the appliance itself, into account and re-routes accordingly.
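As a rough sketch of what such a re-routing decision could look like (this is my own illustration, not datapath.io's algorithm): score each candidate egress path by its latency plus a congestion penalty, then pick the cheapest. The weight and the path data are invented for the example.

```python
# Hypothetical candidate egress paths with measured latency and link utilization.
paths = [
    {"name": "transit_a", "latency_ms": 20.0, "utilization": 0.9},  # fast but congested
    {"name": "transit_b", "latency_ms": 28.0, "utilization": 0.3},
    {"name": "transit_c", "latency_ms": 35.0, "utilization": 0.4},
]

# Invented weight: ms-equivalent penalty for a link at 100% utilization.
CONGESTION_WEIGHT = 50.0

def score(path: dict) -> float:
    """Lower is better: raw latency plus a congestion penalty."""
    return path["latency_ms"] + CONGESTION_WEIGHT * path["utilization"]

best = min(paths, key=score)
print(best["name"])  # transit_b (43.0 beats transit_a's 65.0 and transit_c's 55.0)
```

The point of the example: the nominally fastest link loses once congestion is priced in, which is exactly the kind of decision plain BGP path selection doesn't make.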
Depending on the characteristics of the location, at least three very dissimilar transit connections are used.
Transit providers sell us access to the whole internet. We are in negotiations with them, but we will be more specific once we have the commitments. Technically, we choose at least three different providers, so our appliance has a good basis for decision-making.
For the connection between your VPC and our appliance, we use the regular AWS DC API.