Re: Large Table With Only a Few Rows

From: Peter Childs
Subject: Re: Large Table With Only a Few Rows
Date:
Msg-id: a2de01dd0602271034lb3fb954u@mail.gmail.com
In reply to: Re: Large Table With Only a Few Rows  (Chris Browne <cbbrowne@acm.org>)
List: pgsql-performance


On 27/02/06, Chris Browne <cbbrowne@acm.org> wrote:
"Nik" <XLPizza@gmail.com> writes:
> I have a table that has only a few records in it at the time, and they
> get deleted every few seconds and new records are inserted. Table never
> has more than 5-10 records in it.
>
> However, I noticed a deteriorating performance in deletes and inserts
> on it. So I performed vacuum analyze on it three times (twice in a row,
> and once two days later). In the statistics it says that the table size
> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even
> though the table has only 5-10 rows in it it. I was wondering how can I
> reclaim all this space and improve the performance?

You need to run VACUUM ANALYZE on this table very frequently.

Based on what you describe, "very frequently" should be on the order
of at least once per minute.

Schedule a cron job specifically to vacuum this table, with a cron
entry like the following:

* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_database

Of course, you need to bring it back down to size, first.

You could run CLUSTER on the table to bring it back down to size;
that's probably the fastest way...

   cluster my_table_pk on my_table;

VACUUM FULL would also do the job, but probably not as quickly.
--
(reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
http://cbbrowne.com/info/sgml.html
"Now they can put you in jail if they *THINK* you're gonna commit a
crime.  Let me say that again, because it sounds vaguely important"
--george carlin

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match

You probably want to do one or two other things.

1> Switch on autovacuum.

2> Increase the max_fsm_pages setting in your postgresql.conf; a restart will be required.
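A minimal sketch of the postgresql.conf lines involved (the values are illustrative, not tuned recommendations; `autovacuum` as a built-in setting assumes 8.1 or later, and the FSM settings were removed in 8.4):

```
# postgresql.conf (illustrative values)
autovacuum = on            # built-in autovacuum daemon (8.1+)
max_fsm_pages = 40000      # should cover the "page slots required"
                           # figure reported by VACUUM VERBOSE
max_fsm_relations = 1000   # tables and indexes tracked by the FSM
```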

If you do a "VACUUM VERBOSE;", the last couple of lines should tell you how much free space there is versus how much free space the database can actually remember to use.

INFO:  free space map contains 5464 pages in 303 relations
DETAIL:  A total of 9760 page slots are in use (including overhead).
9760 page slots are required to track all free space.
Current limits are:  40000 page slots, 1000 relations, using 299 KB.

If the required page slots (9760 in my case) go above the current limit (40000 in my case), you will need to do a VACUUM FULL to reclaim the free space. (A CLUSTER of the relevant tables may also work.)

If you run VACUUM VERBOSE regularly, you can check that you are vacuuming often enough and that your free space map is big enough to hold your free space.
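That regular check could be automated with a small script. The sketch below defines a hypothetical helper (not part of PostgreSQL) that reads the tail of VACUUM VERBOSE output on stdin and compares the required page slots against the configured limit, assuming the 8.x message format shown above:

```shell
# check_fsm: hypothetical helper that parses "VACUUM VERBOSE" output
# (PostgreSQL 8.x format) from stdin and reports whether the free space
# map is large enough.
check_fsm() {
    awk '
    /page slots are required/ { required = $1 }   # "NNNN page slots are required ..."
    /Current limits are:/     { limit    = $4 }   # "Current limits are:  NNNN page slots, ..."
    END {
        if (required + 0 > limit + 0)
            printf "FSM too small: %d slots required, %d configured\n", required, limit
        else
            printf "FSM OK: %d of %d page slots required\n", required, limit
    }'
}

# Example: feed it the sample VACUUM VERBOSE tail from above.
check_fsm <<'EOF'
INFO:  free space map contains 5464 pages in 303 relations
DETAIL:  A total of 9760 page slots are in use (including overhead).
9760 page slots are required to track all free space.
Current limits are:  40000 page slots, 1000 relations, using 299 KB.
EOF
```

Since VACUUM VERBOSE sends its INFO lines to stderr, a real run would look something like `psql -c "VACUUM VERBOSE;" my_database 2>&1 | check_fsm` (database name taken from the cron example above).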

Peter Childs
