Re: Opteron/FreeBSD/PostgreSQL performance poor

From: Stephen Frost
Subject: Re: Opteron/FreeBSD/PostgreSQL performance poor
Date:
Msg-id: 20060705154656.GS22057@kenobi.snowman.net
In response to: Opteron/FreeBSD/PostgreSQL performance poor  (andy rost <andy.rost@noaa.gov>)
Responses: Re: Opteron/FreeBSD/PostgreSQL performance poor
List: pgsql-performance
* andy rost (andy.rost@noaa.gov) wrote:
> We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3.
> Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 GHz
> 64-bit processors. There are two internal drives and an external
> enclosure containing 14 drives (configured as 7 pairs of mirrored drives
> - four pairs for table spaces, one pair for dbcluster, two pairs for
> point in time recovery). The operating system is FreeBSD 6.0-RELEASE #10

Not sure it matters, but is the mirroring done with a hardware
controller or in software?

> shared_buffers = 125000                 # min 16 or max_connections*2,
> 8KB each
> temp_buffers = 1000                     # min 100, 8KB each
> max_prepared_transactions = 0           # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared
> memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> work_mem = 10000                        # min 64, size in KB
> maintenance_work_mem = 50000            # min 1024, size in KB
> max_stack_depth = 500000                # min 100, size in KB
>                                         # ulimit -a or ulimit -s

These seem kind of... backwards.  Just as an example, one system I've
got shows:

shared_buffers = 10000
work_mem = 32768
maintenance_work_mem = 65535

Defaults for the rest.  That box is more of a data warehouse than an OLTP
system, so I'm sure these aren't perfect for you, but you might try
playing with them some.
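
If you want to experiment without editing postgresql.conf each time,
work_mem and maintenance_work_mem can be overridden per session
(shared_buffers can't -- changing it needs a restart).  A rough sketch,
using 8.1's plain-KB units; the numbers are just the ones above, not
recommendations:

SET work_mem = 32768;              -- KB in 8.1; affects only this session
SET maintenance_work_mem = 65535;  -- likewise session-local
-- then run a representative query (or VACUUM) and compare timings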

> # - Free Space Map -
> max_fsm_pages = 600000                  # min max_fsm_relations*16, 6
> bytes each

This seems somewhat high compared to the default of 20,000, but for a
very frequently changing database it may make sense.
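
If you want to sanity-check that number, the last few INFO lines of a
database-wide VACUUM VERBOSE (run as a superuser) summarize how many FSM
page slots are in use versus needed; if the "needed" figure is well below
max_fsm_pages, the setting is mostly just reserving shared memory:

VACUUM VERBOSE;   -- look at the free-space-map summary at the end of the output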

> archive_command = 'archive_wal -email -txtmsg "%p" "%f"'        #
> command to use

Are WALs being archived very frequently?  Any idea whether the archive
command takes much time to run?  I wouldn't really think it'd be an
issue, but it might be useful to know.

> effective_cache_size = 27462            # typically 8KB each

This seems like it might be a little low...  How much memory do you have
in the system?  Then again, with your shared_buffers set so high, perhaps
it's not that bad, but it might make sense to swap those two settings,
or at least that'd be a more common PG setup.
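
For reference, with the stock 8KB block size those two settings work out
to roughly:

shared_buffers       = 125000 x 8KB ~ 977 MB
effective_cache_size =  27462 x 8KB ~ 215 MB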

> random_page_cost = 2                    # units are one sequential page

That's quite a bit lower than the default of 4...  It may make sense for
you, but it's certainly something to look at.
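
One cheap test (just a sketch -- the table and column names below are
placeholders, not anything from your schema) is to EXPLAIN a hot query at
the current setting and again at the default, and see whether the plan
flips between an index scan and a sequential scan:

EXPLAIN ANALYZE SELECT * FROM some_hot_table WHERE some_col = 42;
SET random_page_cost = 4;   -- back to the default, for this session only
EXPLAIN ANALYZE SELECT * FROM some_hot_table WHERE some_col = 42;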

> We're running an OLTP database with a small number of connections (<50)
> performing mostly reads and inserts on modest sized tables (largest is <
> 2,000,000 records).
>
> The symptoms are:
>
> a) All 4 CPUs are nearly always 0% idle;
> b) The system load level is nearly always in excess of 20;

At a guess I'd say that the system is doing lots of sequential scans
rather than using indexes, and that's why the processes are ending up in
a disk-wait state, which makes the load go up.  Have you looked at the
plans which are being generated for the most common queries to see what
they're doing?
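
Something along these lines would show it (again just a sketch -- the
query is a placeholder, and the per-table counters are only populated
when row-level stats are enabled via stats_row_level):

EXPLAIN ANALYZE SELECT * FROM some_hot_table WHERE id = 12345;

-- tables with a high seq_scan count relative to idx_scan are the suspects
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY seq_scan DESC;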

I'd also wonder if shared_buffers isn't set *too* high and causing
problems at the IPC/shared-memory level or something...  Not something
I've heard of (generally, raising shared_buffers doesn't degrade
performance, it just stops improving it), but it might be possible.

> We VACUUM ANALYZE user databases every four hours. We VACUUM template1
> every 4 hours. We make a copy of the current WAL every minute. We create
> a PIT recovery archive daily. None of these, individually, seems to
> place much strain on the server.

This doesn't sound too bad at all.  How long do the vacuums run for?
If it's 3 hours, then that might start to be an issue with disk I/O
contention...
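
If it's not being logged already, one quick way to see how long they
actually take is to watch for them in pg_stat_activity (a sketch; in 8.1
you'd need stats_command_string = on for the query text to show up):

SELECT procpid, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query ILIKE 'vacuum%';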

> Hopefully I've supplied enough information to start diagnosing the
> problem. Any ideas, thoughts, suggestions are greatly appreciated ...

Just my 2c, hopefully you'll get some better answers too. :)

    Thanks,

        Stephen
