# PostgreSQL maintenance

This document shows you how to perform various maintenance tasks related to the Postgres database server used by Matrix.

Table of contents:

- [Getting a database terminal](#getting-a-database-terminal)
- [Vacuuming PostgreSQL](#vacuuming-postgresql)
- [Backing up PostgreSQL](#backing-up-postgresql)
- [Upgrading PostgreSQL](#upgrading-postgresql)
- [Tuning PostgreSQL](#tuning-postgresql)

## Getting a database terminal

You can use the `/usr/local/bin/matrix-postgres-cli` tool to get interactive terminal access (`psql`) to the PostgreSQL server.

If you are using an external Postgres server, the above tool will not be available.

By default, this tool puts you in the `matrix` database, which contains nothing.

To see the available databases, run `\list` (or just `\l`).

To change to another database (for example `synapse`), run `\connect synapse` (or just `\c synapse`).

You can then proceed to write queries. Example: `SELECT COUNT(*) FROM users;`

Be careful. Modifying the database directly (especially as services are running) is dangerous and may lead to irreversible database corruption. When in doubt, consider making a backup.
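
For example, a quick read-only session with the tool might look like this (the `synapse` database name assumes the default Synapse setup):

```
$ /usr/local/bin/matrix-postgres-cli
matrix=# \c synapse
synapse=# SELECT COUNT(*) FROM users;
synapse=# \q
```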

## Vacuuming PostgreSQL

Deleting lots of data from Postgres does not release disk space until you perform a VACUUM operation.

To perform a FULL Postgres VACUUM, run the playbook with --tags=run-postgres-vacuum.

Example:

```sh
ansible-playbook -i inventory/hosts setup.yml --tags=run-postgres-vacuum,start
```

Note: this will automatically stop Synapse temporarily and restart it later. You'll also need plenty of available disk space in your Postgres data directory (usually `/matrix/postgres/data`).
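
Because the vacuum temporarily needs extra space while it rewrites data, it can help to check the available space first. This is just a convenience sketch; it falls back to `/` when the default data path doesn't exist on the machine you're checking from:

```shell
# Check free space on the filesystem holding the Postgres data directory.
# /matrix/postgres/data is this playbook's default; adjust if yours differs.
target=/matrix/postgres/data
[ -e "$target" ] || target=/
df -h "$target"
```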

## Backing up PostgreSQL

To automatically make Postgres database backups on a fixed schedule, see Setting up postgres backup.

To make a one-off backup of the current PostgreSQL database, make sure it's running and then execute a command like this on the server:

```sh
/usr/bin/docker exec \
--env-file=/matrix/postgres/env-postgres-psql \
matrix-postgres \
/usr/local/bin/pg_dumpall -h matrix-postgres \
| gzip -c \
> /matrix/postgres.sql.gz
```

If you are using an external Postgres server, the above command will not work, because neither the credentials file (`/matrix/postgres/env-postgres-psql`), nor the `matrix-postgres` container is available.

Restoring a backup made this way can be done by importing it.
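
At the time of writing, importing is done with the playbook's `import-postgres` tag. The invocation below is a sketch based on the importing documentation (the variable name and tag are assumptions — consult that documentation for the authoritative command):

```sh
ansible-playbook -i inventory/hosts setup.yml \
  --extra-vars='server_path_postgres_dump=/matrix/postgres.sql.gz' \
  --tags=import-postgres
```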

## Upgrading PostgreSQL

Unless you are using an external Postgres server, this playbook initially installs Postgres for you.

Once installed, the playbook attempts to preserve the Postgres version it starts with. This is because newer Postgres versions cannot start with data generated by older Postgres versions.

Upgrades must be performed manually.

This playbook can upgrade your existing Postgres setup with the following command:

```sh
ansible-playbook -i inventory/hosts setup.yml --tags=upgrade-postgres
```

The old Postgres data directory is backed up automatically, by renaming it to `/matrix/postgres/data-auto-upgrade-backup`. To use a different backup path, pass some extra flags to the command above, like this: `--extra-vars="postgres_auto_upgrade_backup_data_path=/another/disk/matrix-postgres-before-upgrade"`

The auto-upgrade-backup directory stays around forever, until you manually decide to delete it.

As part of the upgrade, the database is dumped to `/tmp`, an upgraded and empty Postgres server is started, and then the dump is restored into the new server. To use a different directory for the dump, pass some extra flags to the command above, like this: `--extra-vars="postgres_dump_dir=/directory/to/dump/here"`

To save disk space in `/tmp`, the dump file is gzipped on the fly at the expense of CPU usage. If you have plenty of space in `/tmp` and would rather avoid gzipping, you can explicitly pass a dump filename which doesn't end in `.gz`. Example: `--extra-vars="postgres_dump_name=matrix-postgres-dump.sql"`

All databases, roles, etc. on the Postgres server are migrated.

## Tuning PostgreSQL

PostgreSQL can be tuned to make it run faster. This is done by passing extra arguments to Postgres with the matrix_postgres_process_extra_arguments variable. You should use a website like https://pgtune.leopard.in.ua/ or information from https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server to determine what Postgres settings you should change.

Note: the configuration generator at https://pgtune.leopard.in.ua/ adds spaces around the `=` sign, which is invalid here. You'll need to remove them manually (`max_connections = 300` -> `max_connections=300`)
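
If you're pasting many settings from the generator, a quick text filter can strip those spaces for you. This `sed` invocation is just one way to do it:

```shell
# Strip whitespace around '=' so pgtune output becomes valid for
# matrix_postgres_process_extra_arguments entries.
printf 'max_connections = 300\n' | sed -E 's/[[:space:]]*=[[:space:]]*/=/'
# -> max_connections=300
```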

Here are some examples:

These are not recommended values and they may not work well for you. They are just meant to give you an idea of some of the options that can be set. If you are an experienced PostgreSQL admin, feel free to update this documentation with better examples.

Here is an example config for a small 2 core server with 4GB of RAM and SSD storage:

```yaml
matrix_postgres_process_extra_arguments: [
  "-c shared_buffers=128MB",
  "-c effective_cache_size=2304MB",
  "-c effective_io_concurrency=100",
  "-c random_page_cost=2.0",
  "-c min_wal_size=500MB",
]
```

Here is an example config for a 4 core server with 8GB of RAM on a Virtual Private Server (VPS); the parameters have been generated using https://pgtune.leopard.in.ua with the following setup: PostgreSQL version 12, OS Type: Linux, DB Type: Mixed type of application, Data Storage: SSD storage:

```yaml
matrix_postgres_process_extra_arguments: [
  "-c max_connections=100",
  "-c shared_buffers=2GB",
  "-c effective_cache_size=6GB",
  "-c maintenance_work_mem=512MB",
  "-c checkpoint_completion_target=0.9",
  "-c wal_buffers=16MB",
  "-c default_statistics_target=100",
  "-c random_page_cost=1.1",
  "-c effective_io_concurrency=200",
  "-c work_mem=5242kB",
  "-c min_wal_size=1GB",
  "-c max_wal_size=4GB",
  "-c max_worker_processes=4",
  "-c max_parallel_workers_per_gather=2",
  "-c max_parallel_workers=4",
  "-c max_parallel_maintenance_workers=2",
]
```

Here is an example config for a large 6 core server with 24GB of RAM:

```yaml
matrix_postgres_process_extra_arguments: [
  "-c max_connections=40",
  "-c shared_buffers=1536MB",
  "-c checkpoint_completion_target=0.7",
  "-c wal_buffers=16MB",
  "-c default_statistics_target=100",
  "-c random_page_cost=1.1",
  "-c effective_io_concurrency=100",
  "-c work_mem=2621kB",
  "-c min_wal_size=1GB",
  "-c max_wal_size=4GB",
  "-c max_worker_processes=6",
  "-c max_parallel_workers_per_gather=3",
  "-c max_parallel_workers=6",
  "-c max_parallel_maintenance_workers=3",
]
```
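
Once you re-run the playbook and Postgres restarts with the new arguments, you can verify from the database terminal that a setting actually took effect. A sample session, using the default `matrix-postgres-cli` tool:

```
$ /usr/local/bin/matrix-postgres-cli
matrix=# SHOW max_connections;
matrix=# SHOW shared_buffers;
```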