fix(multipooler): read cached copy of pooler topo #243

cuongdo merged 4 commits into multigres:main
Conversation
Follow-up for: multigres#226 (comment)

Also added some of the configs from the Supabase pgBackRest config (supabase/postgres#1878).

Signed-off-by: Cuong Do <cuongdo@users.noreply.github.com>
```go
// getCachedTableGroup returns the table group from the multipooler record
	defer pm.mu.Unlock()
	if pm.multipooler != nil && pm.multipooler.MultiPooler != nil {
		return pm.multipooler.TableGroup
	}
```
I wonder whether we should keep the original names for these functions. Putting `Cached` in the name makes it very explicit that we aren't touching the "real" object, but the fact that it is cached should be an internal implementation detail that is not exposed to callers. As far as callers are concerned, they ask `MultiPoolerManager` for its `TableGroup` / `Shard` / `PoolerType` and they get whatever it gives them.

Eventually we might also have to make these functions exported, because not all manager code will live in the manager package. But that is something we can do when the time comes.
Signed-off-by: Cuong Do <cuongdo@users.noreply.github.com>
Restore wasn't handling the dead DB connections left behind by its stopping/restarting of Postgres. However, this raises the question of how resilient multipooler.Executor needs to be in this situation.

Signed-off-by: Cuong Do <cuongdo@users.noreply.github.com>
orch should detect this and initiate a fix

Signed-off-by: Cuong Do <cuongdo@users.noreply.github.com>
Force-pushed from d0eeaa4 to 88ef130
@deepthi & @rafael thanks for your reviews! I've removed the However, on further reflection, this shouldn't be necessary. The connection pool should be more resilient. I can fix that in a separate PR.