This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Reduce likelihood of Postgres table scanning state_groups_state. #10359

Merged
merged 5 commits on Jul 15, 2021
Changes from 2 commits
1 change: 1 addition & 0 deletions changelog.d/10359.bugfix
@@ -0,0 +1 @@
Fix PostgreSQL sometimes using table scans for queries against `state_groups_state` table, taking a long time and a large amount of IO.
@@ -0,0 +1,34 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/


-- By default the postgres statistics collector massively underestimates the
-- number of distinct state groups there are in `state_groups_state`, which can
-- cause postgres to use table scans for queries for multiple state groups.
--
-- To work around this we can manually tell postgres the number of distinct state
-- groups there are by setting `n_distinct` (a negative value here is the number
-- of distinct values divided by the number of rows, so -0.02 means on average
-- there are 50 rows per distinct value). We don't need a particularly
-- accurate number here, as a) we just want it to always use index scans and b)
-- our estimate is going to be better than the one made by the statistics
-- collector.

ALTER TABLE state_groups_state ALTER COLUMN state_group SET (n_dinstict = -0.02)
Collaborator

itym (I think you mean):
Suggested change
ALTER TABLE state_groups_state ALTER COLUMN state_group SET (n_dinstict = -0.02)
ALTER TABLE state_groups_state ALTER COLUMN state_group SET (n_distinct = -0.02)

Member Author
Thanks! I was confused why this didn't error, but it turns out that if you forget the semicolon at the end we don't even run the command...


-- Ideally we'd do an `ANALYZE state_groups_state (state_group)` here so that
-- the above gets picked up immediately, but that can take a bit of time so we
As a side note: on my 72 GB database, the ANALYZE above took about 1500 ms.

Contributor

@MTRNord MTRNord Jul 12, 2021
For my db it was around 2.5s, but the ANALYZE itself caused things to be noticeably slower than with just the ALTER TABLE.

-- rely on the autovacuum eventually getting run and doing that in the
-- background for us.
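For operators wanting to check the effect of this migration by hand: the per-column override set by `ALTER TABLE ... SET (n_distinct = ...)` is stored in `pg_attribute.attoptions`, and the planner only picks it up after the next ANALYZE of the column (manual or via autovacuum, as the comment above notes). A rough sketch of verification queries (these are not part of the PR, just standard Postgres catalog lookups):

```sql
-- Check that the per-column override was recorded:
SELECT attrelid::regclass AS table, attname AS column, attoptions
FROM pg_attribute
WHERE attrelid = 'state_groups_state'::regclass
  AND attname = 'state_group';

-- Optionally force the statistics to be regenerated now rather than
-- waiting for autovacuum (this is the ANALYZE the comments say can be slow):
ANALYZE state_groups_state (state_group);

-- Confirm the planner now sees the forced estimate (-0.02):
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'state_groups_state'
  AND attname = 'state_group';
```

Note that `pg_stats.n_distinct` will only reflect the -0.02 override once an ANALYZE has run after the ALTER TABLE.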