PostgreSQL tuning tips
Troubleshooting
General system state
A slow database server will typically be running out of:
- CPU,
- memory, or
- disk I/O.
So the first thing to do is get an overview of the system state using common UNIX tools such as:
- top (press c to see the full command line): look at CPU and memory usage
- iostat -x -m 5: look at I/O wait and service time
If you notice that memory is exhausted, you could play with the following postgresql.conf settings:
shared_buffers
work_mem
effective_cache_size
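For illustration only, assuming PostgreSQL 8.2 or later (which accepts memory units in postgresql.conf) on a dedicated box with 4 GB of RAM, a starting point might look like this; the figures are assumptions to test, not recommendations:

shared_buffers = 1GB            # roughly 1/4 of physical memory
work_mem = 2MB                  # per sort/hash operation, per connection
effective_cache_size = 2GB      # rough guess at the OS file cache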
If you see a lot of I/O, then try adjusting these settings:
checkpoint_segments
autovacuum_naptime
wal_buffers
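Again as a sketch only, for a write-heavy server (defaults shown in the comments; every value here is an assumption to adapt):

checkpoint_segments = 16        # fewer, larger checkpoints (default is 3)
wal_buffers = 1MB               # default is only 64kB in older releases
autovacuum_naptime = 5min       # wake the autovacuum daemon less often (default 1min)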
Finding slow queries
On the database side, start by determining whether:
- there is one large query holding up all of the resources or
- the number of queries is the main problem.
To do this, make sure that the following setting is turned on in your postgresql.conf:
stats_command_string = on
Then fire up psql dbname and look at the currently executing queries:
SELECT usename, current_query, query_start FROM pg_stat_activity;
(You can also add the procpid column to the query if you want to map a query to the process ID you see in top.)
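A slightly extended variant, assuming a pre-9.2 server where the columns are named procpid and current_query (newer releases renamed them to pid and query, and mark idle backends differently); it hides idle connections and lists the longest-running queries first:

SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY query_start;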
If you can't get anything useful out of pg_stat_activity, you may want to enable slow query logging by adding this to your postgresql.conf:
log_min_duration_statement = 100
(All queries that take more than 100 ms to execute will be logged to the main Postgres log file.)
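One possible logging stanza for postgresql.conf; the log_line_prefix escapes %t (timestamp) and %p (backend process ID) make it easier to match log entries to what you see in pg_stat_activity and top:

log_min_duration_statement = 100    # milliseconds
log_line_prefix = '%t [%p] '        # timestamp and backend PID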
Examining a particular query
Once you have identified a slow query, you can time it by running this command before executing the query manually:
\timing
To get an idea of where Postgres spends its time when executing that query, look at the query plan:
EXPLAIN your_query;
The numbers you see there are estimates. To run the query and get actual numbers, use this instead:
EXPLAIN ANALYZE your_query;
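For example, against a hypothetical orders table (the table and column names are made up for illustration):

EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

Compare the planner's estimated row counts to the actual ones: a large mismatch usually means stale statistics (run ANALYZE), while a sequential scan on a big table where you expected an index scan points to a missing index.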
If you still can't figure out why a certain query takes so long, have a look at the ratio between live and dead rows in the relevant tables:
ANALYZE VERBOSE tablename;
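On 8.3 and later, the statistics collector exposes these counts directly, so you can check every table at once (a sketch, assuming the default track_counts = on):

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;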
Having too many dead rows is often a sign of insufficient vacuuming. You might want to turn autovacuum on if it isn't already, or to make it a bit more aggressive by tweaking these settings:
autovacuum_vacuum_scale_factor
autovacuum_analyze_scale_factor
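For example, halving the default scale factors (0.2 and 0.1) makes autovacuum kick in twice as early; the values below are an illustrative starting point, not a recommendation:

autovacuum = on
autovacuum_vacuum_scale_factor = 0.1      # vacuum after ~10% of a table has changed
autovacuum_analyze_scale_factor = 0.05    # re-analyze after ~5% has changed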
Fast tuning solutions (no theory)
The values and suggestions expressed here are just that: basic rules of thumb that Edoceo uses, YMMV. As always, experiment to find the values that work best for your environment. (A consolidated sample configuration follows the list.)
- max_connections = N Set this to 140% of the average number of expected connections; an average of 100 clients means 140 max connections.
- shared_buffers = N Set anywhere from 1/4 to 1/2 of physical memory; you must raise the kernel shared memory maximum (SHMMAX) first. You will see a noticeable difference. The value is counted in 8K blocks, so 256M would be 32768 (1024*1024*256/8192).
- fsync = true|false Setting this to false will speed up writes, but a crash or unexpected stop may require a restore from backup; keep fsync = true.
- work_mem = N Default is 1M but 2M doesn't hurt; this can also be set per connection via the SET command, which is recommended.
- effective_cache_size = N The planner's assumption about how much file data the operating system caches; look in /proc/sys/fs/file-nr and file-max for clues.
- log_statement = 'none' On production systems, where it can be avoided, turn off statement logging and minimize the other logging parameters:
syslog_facility = 'LOCAL2'
syslog_ident = 'postgres'
log_connections = false
log_disconnections = false
log_duration = false
log_statement = 'none'
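Pulling the suggestions above together, a minimal postgresql.conf sketch for a hypothetical dedicated server with 1 GB of RAM and about 100 average clients might look like this, assuming 8.2+ memory units; every value is an assumption to be tested against your own workload:

max_connections = 140               # 140% of the 100-client average
shared_buffers = 256MB              # 1/4 of physical memory (32768 8K blocks)
fsync = true                        # never trade durability for speed here
work_mem = 2MB                      # raise per session with SET where needed
effective_cache_size = 512MB        # guess at the OS file cache
log_statement = 'none'              # minimize logging in production
log_connections = false
log_disconnections = false
log_duration = false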