From f99051f99b5870f9cc68461fbc7f142d457de962 Mon Sep 17 00:00:00 2001
From: Kai Franz
Date: Tue, 19 Nov 2024 14:17:33 -0800
Subject: [PATCH] [BACKPORT 2.20.7][#24951] Create separate memory context for
 RelationBuildTriggers
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Summary:
Original commit: 39b28c2b7c193d5ceaf2dd238e2df9e922c3be59 / D40005

Currently, on certain customer schemas, PG backend memory usage spikes up to 1GB during connection startup. In short, this is caused by our relcache preloading combined with inefficient memory management.

The issue is as follows:

1. First, we prefetch the `pg_trigger` table by loading it from the master leader into the local tserver's memory.
2. For every table, we call `RelationBuildTriggers` to build the triggers section of the table's relcache entry.
3. Inside `RelationBuildTriggers`, we scan `pg_trigger` for the relevant triggers. Normally, we can use the `pg_trigger_tgrelid_tgname_index` index to seek directly to the triggers we want using the table's OID. But we can't do index seeks on the prefetched table, so we do a sequential scan over the entire table, copying every row of `pg_trigger` into PG memory and then filtering for the triggers we want. The entire YbUpdateRelationCache process uses a single memory context (`UpdateRelationCacheContext`), so each call to `RelationBuildTriggers` allocates memory from this context, and that memory is not freed until the entire relation cache is built.
4. As a result, we use a lot of extra memory: if we have `m` total triggers and `n` total tables, we allocate memory for `m * n` rows in the `UpdateRelationCacheContext`.

As a shorter-term fix for this issue, this revision creates a new memory context every time we invoke `RelationBuildTriggers`, and that context is freed at the end of `RelationBuildTriggers`. This way, even though we still copy `m * n` rows into memory overall, we only hold memory for `m` rows at a time before freeing it, avoiding the spike in memory usage.

This revision does not address the CPU overhead of doing all of these extra copies; that is addressed by the longer-term fix D40003.
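For reference, the change follows the standard PostgreSQL pattern of doing per-call scratch work in a short-lived child memory context. A minimal sketch of that pattern is below; it is illustrative only, not part of the patch, and the `yb_do_work_in_temp_context` name is made up for this example.

  #include "postgres.h"
  #include "utils/memutils.h"

  /*
   * Illustrative sketch: do scratch allocations in a short-lived child
   * context so they can all be freed with a single MemoryContextDelete()
   * instead of accumulating in the caller's context. This is the same
   * pattern the patch applies inside RelationBuildTriggers.
   */
  static void
  yb_do_work_in_temp_context(void)
  {
  	MemoryContext tempContext;
  	MemoryContext savedContext;

  	/* Create a child of the caller's context for per-call scratch data. */
  	tempContext = AllocSetContextCreate(CurrentMemoryContext,
  										"temporary work context",
  										ALLOCSET_DEFAULT_SIZES);
  	savedContext = MemoryContextSwitchTo(tempContext);

  	/* ... palloc() temporary data here; it all lands in tempContext ... */

  	/* Switch back, then free everything allocated above in one call. */
  	MemoryContextSwitchTo(savedContext);
  	MemoryContextDelete(tempContext);
  }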
Test Plan: Jenkins: urgent

Reviewers: mihnea, myang

Reviewed By: myang

Subscribers: yql

Differential Revision: https://phorge.dev.yugabyte.com/D40108
---
 src/postgres/src/backend/commands/trigger.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/src/postgres/src/backend/commands/trigger.c b/src/postgres/src/backend/commands/trigger.c
index 917de6fde23b..f0256656457b 100644
--- a/src/postgres/src/backend/commands/trigger.c
+++ b/src/postgres/src/backend/commands/trigger.c
@@ -1930,8 +1930,19 @@ RelationBuildTriggers(Relation relation)
 	SysScanDesc tgscan;
 	HeapTuple	htup;
 	MemoryContext oldContext;
+	MemoryContext ybSavedContext;
+	MemoryContext ybTriggerContext;
 	int			i;
 
+	if (IsYugaByteEnabled())
+	{
+		ybTriggerContext =
+			AllocSetContextCreate(CurrentMemoryContext,
+								  "RelationBuildTriggers context",
+								  ALLOCSET_DEFAULT_SIZES);
+		ybSavedContext = MemoryContextSwitchTo(ybTriggerContext);
+	}
+
 	/*
 	 * Allocate a working array to hold the triggers (the array is extended if
 	 * necessary)
@@ -2047,6 +2058,11 @@ RelationBuildTriggers(Relation relation)
 	if (numtrigs == 0)
 	{
 		pfree(triggers);
+		if (IsYugaByteEnabled())
+		{
+			MemoryContextSwitchTo(ybSavedContext);
+			MemoryContextDelete(ybTriggerContext);
+		}
 		return;
 	}
 
@@ -2064,6 +2080,12 @@ RelationBuildTriggers(Relation relation)
 
 	/* Release working memory */
 	FreeTriggerDesc(trigdesc);
+
+	if (IsYugaByteEnabled())
+	{
+		MemoryContextSwitchTo(ybSavedContext);
+		MemoryContextDelete(ybTriggerContext);
+	}
 }
 
 /*