Discussion: pg_autovacuum not having enough suction ?

pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
Hello !

I'm running pg_autovacuum on a 1GHz, 80GB, 512MB machine. The database is
about 30MB tarred. We have about 50000 updates/inserts/deletes per day. It
runs beautifully for ~4 days. Then the HDD activity and the postmaster CPU
usage go up a lot, even though I have plenty (?) of FSM pages (2 million).
I perform a vacuum and everything is back to normal for another 4 days. I
could schedule a manual vacuum each day, but the util is not called
pg_SemiAutoVacuum, so I'm hoping this is not necessary. The same user that
ran the manual vacuum is running pg_autovacuum. The normal CPU usage is
about 10% with little HD activity.

I'm running pg_autovacuum with the following flags: -d 3 -v 300 -V 0.1 -s 180 -S
0.1 -a 200 -A 0.1
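
As a side note, those flags seem to line up with the thresholds that show up in
the log snippets below, assuming the "base value + scaling factor * reltuples"
formula from the contrib/pg_autovacuum README (my arithmetic, not pg_autovacuum
output):

    -- analyze: -a 200 -A 0.1  ->  200 + 0.1 * 106228 = 10822.8  (log: analyze_threshold 10822)
    -- vacuum:  -v 300 -V 0.1  ->  300 + 0.1 * 106228 = 10922.8  (log: vacuum_threshold 10922)
    SELECT 200 + 0.1 * 106228 AS analyze_threshold,
           300 + 0.1 * 106228 AS vacuum_threshold;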

Below are some snippets regarding vacuuming of the busiest table.

This is the last VACUUM ANALYZE performed by pg_autovacuum before I ran the
manual vacuum

[2005-03-24 02:05:43 EST] DEBUG:   Performing: VACUUM ANALYZE "public"."file_92"
[2005-03-24 02:05:52 EST] INFO:    table name: secom."public"."file_92"
[2005-03-24 02:05:52 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-24 02:05:52 EST] INFO:    reltuples: 106228.000000;  relpages: 9131
[2005-03-24 02:05:52 EST] INFO:    curr_analyze_count: 629121;  curr_vacuum_count: 471336
[2005-03-24 02:05:52 EST] INFO:    last_analyze_count: 629121;  last_vacuum_count: 471336
[2005-03-24 02:05:52 EST] INFO:    analyze_threshold: 10822;  vacuum_threshold: 10922

This is the last pg_autovacuum debug output before I ran the manual vacuum

[2005-03-24 09:18:44 EST] INFO:    table name: secom."public"."file_92"
[2005-03-24 09:18:44 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-24 09:18:44 EST] INFO:    reltuples: 106228.000000;  relpages: 9131
[2005-03-24 09:18:44 EST] INFO:    curr_analyze_count: 634119;  curr_vacuum_count: 476095
[2005-03-24 09:18:44 EST] INFO:    last_analyze_count: 629121;  last_vacuum_count: 471336
[2005-03-24 09:18:44 EST] INFO:    analyze_threshold: 10822;  vacuum_threshold: 10922

file_92 had about 10000 inserts/deletes between 02:05 and 09:20.

Then I ran a VACUUM VERBOSE:

23 Mar 05 - 9:20 AM
INFO:  vacuuming "public.file_92"
INFO:  index "file_92_record_number_key" now contains 94 row versions in 2720 pages
DETAIL:  107860 index row versions were removed.
2712 index pages have been deleted, 2060 are currently reusable.
CPU 0.22s/0.64u sec elapsed 8.45 sec.
INFO:  "file_92": removed 107860 row versions in 9131 pages
DETAIL:  CPU 1.13s/4.27u sec elapsed 11.75 sec.
INFO:  "file_92": found 107860 removable, 92 nonremovable row versions in 9131 pages
DETAIL:  91 dead row versions cannot be removed yet.
There were 303086 unused item pointers.
0 pages are entirely empty.
CPU 1.55s/5.00u sec elapsed 20.86 sec.
INFO:  "file_92": truncated 9131 to 8423 pages
DETAIL:  CPU 0.65s/0.03u sec elapsed 5.80 sec.
INFO:  free space map: 57 relations, 34892 pages stored; 34464 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB shared memory.

Also, file_92 is just a temporary storage area for records waiting to be
processed. Records are typically in there for ~10 seconds.

Over 100'000 index rows removed, 300'000 unused item pointers? How could
autovacuum let this happen? I would estimate the table had about 10000
inserts/deletes between the last pg_autovacuum "VACUUM ANALYZE" and my manual
VACUUM VERBOSE.

It is like the suction is not strong enough ;)

Any ideas ? It would be greatly appreciated as this is taking me one step
closer to the looney bin.

Thanks

/Otto Blomqvist



Re: pg_autovacuum not having enough suction ?

From: Tom Lane
Date:
"Otto Blomqvist" <o.blomqvist@secomintl.com> writes:
> Over 100'000 Index Rows removed, 300'000 unused item pointers ? How could
> autovacuum let this happen ?

What PG version is this?

(The earlier autovacuum releases had some bugs with large tables, thus
the question...)

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
Sorry about that. I'm running PostgreSQL 8.0.0 on Red Hat Linux 8.0.


"Tom Lane" <tgl@sss.pgh.pa.us> wrote in message
news:4933.1111692779@sss.pgh.pa.us...
> "Otto Blomqvist" <o.blomqvist@secomintl.com> writes:
> > Over 100'000 Index Rows removed, 300'000 unused item pointers ? How
could
> > autovacuum let this happen ?
>
> What PG version is this?
>
> (The earlier autovacuum releases had some bugs with large tables, thus
> the question...)
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
>                http://archives.postgresql.org
>



Re: pg_autovacuum not having enough suction ?

From: "Matthew T. O'Connor"
Date:
The version that shipped with 8.0 should be fine.  The only versions that
had the problem Tom referred to are the early 7.4.x releases.

Did you get my other message about information from the stats system?
(I'm not sure why my other post has yet to show up on the performance
list.)

Matthew



Re: pg_autovacuum not having enough suction ?

From: "Matthew T. O'Connor"
Date:
I would rather keep this on list since other people can chime in.

Otto Blomqvist wrote:

>It does not seem to be a Stats collector problem.
>
>   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    158176 |    318527 |    158176
>(1 row)
>
>I insert 50000 records:
>
>secom=# select createfile_92records(1, 50000);    <--- this is a pg script that inserts records 1 through 50000.
> createfile_92records
>----------------------
>                    0
>
>
>   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208179 |    318932 |    158377
>(1 row)
>
>reltuples does not change ? Hmm. n_tup_ins looks fine.
>
>

That is expected; reltuples only gets updated by a VACUUM or an ANALYZE.
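
A quick way to see that in a psql session is to compare the stored estimate with
a live count around an ANALYZE (just a sketch, using your table name):

    SELECT relpages, reltuples FROM pg_class WHERE relname = 'file_92';
    SELECT count(*) FROM file_92;
    ANALYZE file_92;   -- pg_class.reltuples/relpages are refreshed only by VACUUM or ANALYZE
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'file_92';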

>This table is basically a queue full of records waiting to get transferred
>over from our 68030 system to the PG database. The records are then moved
>into folders (using a trigger) like file_92_myy, depending on what month the
>record was created on the 68030. During normal operations there should not
>be more than 10 records at a time in the table, although during the course
>of a day a normal system will get about 50k records. I create 50000 records
>to simulate incoming traffic, since we don't have much traffic in the test
>lab.
>
>After a few hours we have
>
>secom=# select count(*) from file_92;
> count
>-------
> 42072
>
>So we have sent over approx 8000 Records.
>
>   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208218 |    334521 |    166152
>(1 row)
>
>
>n_tup_upd: 318932 + (50000-42072)*2 = 334788, pretty close.  (Each record
>gets updated twice, then moved.)
>n_tup_del: 158377 + (50000-42072) = 166305, pretty close.  (There is also
>minor background traffic going on.)
>
>
>I could send over the full vacuum verbose capture as well as the autovacuum
>capture if that is of interest.
>

That might be helpful.  I don't see a stats system problem here, but I
also haven't heard of any autovac problems recently, so this might be
something new.

Thanks,

Matthew O'Connor



Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
It looks like the reltuples values are screwed up. Even though rows are
constantly being removed from the table, reltuples keeps going up. If I
understand correctly, that also makes the vacuum threshold go up and we end
up in a vicious circle. Right after pg_autovacuum performed a VACUUM ANALYZE
on the table it actually had 31000 records, but reltuples reports over 100k.
I'm not sure if this means anything, but I thought I would pass it along.
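
To put numbers on that circle: the thresholds in the log below match a
"base + factor * reltuples" rule if one assumes a base of 500 and factor of 1
for analyze, and 1000 and 2 for vacuum (my assumption, worked out from the
numbers alone):

    SELECT 500  + 1 * 113082 AS analyze_threshold,   -- = 113582, as in the log
           1000 + 2 * 113082 AS vacuum_threshold;    -- = 227164, as in the log

So as long as reltuples stays inflated, the thresholds stay out of reach.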

PG version 8.0.0, 31MB tarred DB.

[2005-03-25 09:16:14 EST] INFO:    dbname: testing
[2005-03-25 09:16:14 EST] INFO:      oid: 9383816
[2005-03-25 09:16:14 EST] INFO:      username: (null)
[2005-03-25 09:16:14 EST] INFO:      password: (null)
[2005-03-25 09:16:14 EST] INFO:      conn is null, (not connected)
[2005-03-25 09:16:14 EST] INFO:      default_analyze_threshold: 1000
[2005-03-25 09:16:14 EST] INFO:      default_vacuum_threshold: 500


[2005-03-25 09:05:12 EST] INFO:    table name: secom."public"."file_92"
[2005-03-25 09:05:12 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-25 09:05:12 EST] INFO:    reltuples: 49185.000000;  relpages: 8423
[2005-03-25 09:05:12 EST] INFO:    curr_analyze_count: 919274;  curr_vacuum_count: 658176
[2005-03-25 09:05:12 EST] INFO:    last_analyze_count: 899272;  last_vacuum_count: 560541
[2005-03-25 09:05:12 EST] INFO:    analyze_threshold: 49685;  vacuum_threshold: 100674


[2005-03-25 09:10:12 EST] DEBUG:   Performing: VACUUM ANALYZE "public"."file_92"
[2005-03-25 09:10:33 EST] INFO:    table name: secom."public"."file_92"
[2005-03-25 09:10:33 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-25 09:10:33 EST] INFO:    reltuples: 113082.000000;  relpages: 6624
[2005-03-25 09:10:33 EST] INFO:    curr_analyze_count: 923820;  curr_vacuum_count: 662699
[2005-03-25 09:10:33 EST] INFO:    last_analyze_count: 923820;  last_vacuum_count: 662699
[2005-03-25 09:10:33 EST] INFO:    analyze_threshold: 113582;  vacuum_threshold: 227164


[2005-03-25 09:16:14 EST] INFO:    table name: secom."public"."file_92"
[2005-03-25 09:16:14 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-25 09:16:14 EST] INFO:    reltuples: 113082.000000;  relpages: 6624  <-- Actually has 31k rows
[2005-03-25 09:16:14 EST] INFO:    curr_analyze_count: 923820;  curr_vacuum_count: 662699
[2005-03-25 09:16:14 EST] INFO:    last_analyze_count: 923820;  last_vacuum_count: 662699
[2005-03-25 09:16:14 EST] INFO:    analyze_threshold: 113582;  vacuum_threshold: 227164

DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB shared memory.





Re: pg_autovacuum not having enough suction ?

From: "Matthew T. O'Connor"
Date:
Hmm... the value in reltuples should be accurate after a vacuum (or
vacuum analyze).  If it's not, it's a vacuum bug or something is going on
that isn't understood.  If you or pg_autovacuum are running plain
analyze commands, that could explain the invalid reltuples numbers.

Was reltuples = 113082 correct right after the vacuum?

Matthew



Re: pg_autovacuum not having enough suction ?

From: Tom Lane
Date:
"Matthew T. O'Connor" <matthew@zeut.net> writes:
> hmm.... the value in reltuples should be accurate after a vacuum (or
> vacuum analyze) if it's not it's a vacuum bug or something is going on
> that isn't understood.  If you or pg_autovacuum are running plain
> analyze commands, that could explain the invalid reltuples numbers.

> Was reltuples = 113082 correct right after the vacuum?

Another thing to check is whether the reltuples (and relpages!) that
autovacuum is reporting are the same as what's actually in the pg_class
row for the relation.  I'm wondering if this could be a similar issue
to the old autovac bug where it wasn't reading the value correctly.

If they are the same then it seems like it must be a backend issue.

One thing that is possibly relevant here is that in 8.0 a plain VACUUM
doesn't set reltuples to the exactly correct number, but to an
interpolated value that reflects our estimate of the "steady state"
average between vacuums.  I wonder if that code is wrong, or if it's
operating as designed but is confusing autovac.

Can autovac be told to run the vacuums in VERBOSE mode?  It would be
useful to compare what VERBOSE has to say to the changes in
reltuples/relpages.

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
> Was reltuples = 113082 correct right after the vacuum?

No, there were about 31000 rows after the vacuum. I'm no expert, but tuples
= rows, right?

This is not a "normal" table though, in the sense that it is only a
temporary holding ground, as I explained earlier. I create 50000 records and
these get sent over from our custom 68030 system to tables like
file_92_myy, depending on the date of the record.  A PL/pgSQL script is used
as a trigger to move the records after they get data from the 68030. Don't
know if that is of interest or not. I could post the trigger if you'd like.


""Matthew T. O'Connor"" <matthew@zeut.net> wrote in message
news:42446A66.6010504@zeut.net...
> hmm.... the value in reltuples should be accurate after a vacuum (or
> vacuum analyze) if it's not it's a vacuum bug or something is going on
> that isn't understood.  If you or pg_autovacuum are running plain
> analyze commands, that could explain the invalid reltules numbers.
>
> Was reltuples = 113082 correct right after the vacuum?
>
> Matthew
>
>
> Otto Blomqvist wrote:
>
> >It looks like the reltuples-values are screwed up. Even though rows are
> >constantly being removed from the table the reltuples keep going up. If I
> >understand correctly that also makes the Vacuum threshold go up and we
end
> >up in a vicious circle. Right after pg_autovacuum performed a vacuum
analyze
> >on the table it actually had 31000 records, but reltuples reports over
100k.
> >I'm not sure if this means anything But i thought i would pass it along.
> >
> >PG version 8.0.0, 31MB tarred DB.
> >
> >[2005-03-25 09:16:14 EST] INFO:    dbname: testing
> >[2005-03-25 09:16:14 EST] INFO:      oid: 9383816
> >[2005-03-25 09:16:14 EST] INFO:      username: (null)
> >[2005-03-25 09:16:14 EST] INFO:      password: (null)
> >[2005-03-25 09:16:14 EST] INFO:      conn is null, (not connected)
> >[2005-03-25 09:16:14 EST] INFO:      default_analyze_threshold: 1000
> >[2005-03-25 09:16:14 EST] INFO:      default_vacuum_threshold: 500
> >
> >
> >[2005-03-25 09:05:12 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:05:12 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:05:12 EST] INFO:         reltuples: 49185.000000;
relpages:
> >8423
> >[2005-03-25 09:05:12 EST] INFO:         curr_analyze_count: 919274;
> >curr_vacuum_count: 658176
> >[2005-03-25 09:05:12 EST] INFO:         last_analyze_count: 899272;
> >last_vacuum_count: 560541
> >[2005-03-25 09:05:12 EST] INFO:         analyze_threshold: 49685;
> >vacuum_threshold: 100674
> >
> >
> >[2005-03-25 09:10:12 EST] DEBUG:   Performing: VACUUM ANALYZE
> >"public"."file_92"
> >[2005-03-25 09:10:33 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:10:33 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:10:33 EST] INFO:         reltuples: 113082.000000;
relpages:
> >6624
> >[2005-03-25 09:10:33 EST] INFO:         curr_analyze_count: 923820;
> >curr_vacuum_count: 662699
> >[2005-03-25 09:10:33 EST] INFO:         last_analyze_count: 923820;
> >last_vacuum_count: 662699
> >[2005-03-25 09:10:33 EST] INFO:         analyze_threshold: 113582;
> >vacuum_threshold: 227164
> >
> >
> >[2005-03-25 09:16:14 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:16:14 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:16:14 EST] INFO:         reltuples: 113082.000000;
relpages:
> >6624  <-- Actually has 31k rows
> >[2005-03-25 09:16:14 EST] INFO:         curr_analyze_count: 923820;
> >curr_vacuum_count: 662699
> >[2005-03-25 09:16:14 EST] INFO:         last_analyze_count: 923820;
> >last_vacuum_count: 662699
> >[2005-03-25 09:16:14 EST] INFO:         analyze_threshold: 113582;
> >vacuum_threshold: 227164
> >
> >DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB
> >shared memory.
> >
> >
> >
> >
> >----- Original Message -----
> >From: "Matthew T. O'Connor" <matthew@zeut.net>
> >To: "Otto Blomqvist" <o.blomqvist@secomintl.com>;
> ><pgsql-performance@postgresql.org>
> >Sent: Thursday, March 24, 2005 3:58 PM
> >Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?
> >
> >
> >
> >
> >>I would rather keep this on list since other people can chime in.
> >>
> >>Otto Blomqvist wrote:
> >>
> >>
> >>
> >>>It does not seem to be a Stats collector problem.
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples |
> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>
> >>>
>
>>---------+---------+--------------+----------+-------------+-----------+--
-
> >>
> >>
> >-
> >
> >
> >>>--------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 |
> >>>public     |    158176 |    318527 |    158176
> >>>(1 row)
> >>>
> >>>I insert 50000 records
> >>>
> >>>secom=# select createfile_92records(1, 50000);    <--- this is a pg
> >>>
> >>>
> >script
> >
> >
> >>>that inserts records  1 threw 50000.
> >>>createfile_92records
> >>>----------------------
> >>>                   0
> >>>
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples |
> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>
> >>>
>
>>---------+---------+--------------+----------+-------------+-----------+--
-
> >>
> >>
> >-
> >
> >
> >>>--------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 |
> >>>public     |    208179 |    318932 |    158377
> >>>(1 row)
> >>>
> >>>reltuples does not change ? Hmm. n_tup_ins looks fine.
> >>>
> >>>
> >>>
> >>>
> >>That is expected, reltuples only gets updated by a vacuum or an analyze.
> >>
> >>
> >>
> >>>This table is basically a queue full of records waiting to get
transfered
> >>>over from our 68030 system to the PG database. The records are then
moved
> >>>into folders (using a trigger) like file_92_myy depending on what month
> >>>
> >>>
> >the
> >
> >
> >>>record was created on the 68030. During normal operations there should
> >>>
> >>>
> >not
> >
> >
> >>>be more than 10 records at a time in the table, although during the
> >>>
> >>>
> >course
> >
> >
> >>>of a day a normal system will get about 50k records. I create 50000
> >>>
> >>>
> >records
> >
> >
> >>>to simulate incoming traffic, since we don't have much traffic in the
> >>>
> >>>
> >test
> >
> >
> >>>lab.
> >>>
> >>>After a few hours we have
> >>>
> >>>secom=# select count(*) from file_92;
> >>>count
> >>>-------
> >>>42072
> >>>
> >>>So we have sent over approx 8000 Records.
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples |
> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>
> >>>
>
>>---------+---------+--------------+----------+-------------+-----------+--
-
> >>
> >>
> >-
> >
> >
> >>>--------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 |
> >>>public     |    208218 |    334521 |    166152
> >>>(1 row)
> >>>
> >>>
> >>>n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close.   (Each
record
> >>>gets updated twice, then moved)
> >>>n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there are
also
> >>>minor background traffic going on)
> >>>
> >>>
> >>>I could send over the full vacuum verbose capture as well as the
> >>>
> >>>
> >autovacuum
> >
> >
> >>>capture if that is of interest.
> >>>
> >>>
> >>>
> >>That might be helpful.  I don't see a stats system problem here, but I
> >>also haven't heard of any autovac problems recently, so this might be
> >>something new.
> >>
> >>Thanks,
> >>
> >>Matthew O'Connor
> >>
> >>
> >>
> >>---------------------------(end of broadcast)---------------------------
> >>TIP 6: Have you searched our list archives?
> >>
> >>               http://archives.postgresql.org
> >>
> >>
> >>
> >
> >
> >---------------------------(end of broadcast)---------------------------
> >TIP 8: explain analyze is your friend
> >
> >
> >
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 9: the planner will ignore your desire to choose an index scan if your
>       joining column's datatypes do not match
>



lazy_update_relstats considered harmful (was Re: pg_autovacuum not having enough suction ?)

From: Tom Lane
Date:
I wrote:
> One thing that is possibly relevant here is that in 8.0 a plain VACUUM
> doesn't set reltuples to the exactly correct number, but to an
> interpolated value that reflects our estimate of the "steady state"
> average between vacuums.  I wonder if that code is wrong, or if it's
> operating as designed but is confusing autovac.

Now that I think it over, I'm thinking that I must have been suffering
severe brain fade the day I wrote lazy_update_relstats() (see
vacuumlazy.c).  The numbers that that routine is averaging are the pre-
and post-vacuum physical tuple counts.  But the difference between them
consists of known-dead tuples, and we shouldn't be factoring dead tuples
into reltuples.  The planner has always considered reltuples to count
only live tuples, and I think this is correct on two grounds:

1. The numbers of tuples estimated to be returned by scans certainly
shouldn't count dead ones.

2. Dead tuples don't have that much influence on scan costs either, at
least not once they are marked as known-dead.  Certainly they shouldn't
be charged at full freight.

It's possible that there'd be some value in adding a column to pg_class
to record dead tuple count, but given what we have now, the calculation
in lazy_update_relstats is totally wrong.

The idea I was trying to capture is that the tuple density is at a
minimum right after VACUUM, and will increase as free space is filled
in until the next VACUUM, so that recording the exact tuple count
underestimates the number of tuples that will be seen on-the-average.
But I'm not sure that idea really holds water.  The only way that a
table can be at "steady state" over a long period is if the number of
live tuples remains roughly constant (ie, inserts balance deletes).
What actually increases and decreases over a VACUUM cycle is the density
of *dead* tuples ... but per the above arguments this isn't something
we should adjust reltuples for.

So I'm thinking lazy_update_relstats should be ripped out and we should
go back to recording just the actual stats.

Sound reasonable?  Or was I right the first time and suffering brain
fade today?

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
> Another thing to check is whether the reltuples (and relpages!) that
> autovacuum is reporting are the same as what's actually in the pg_class
> row for the relation.  I'm wondering if this could be a similar issue
> to the old autovac bug where it wasn't reading the value correctly.

These values were extracted at roughly the same time.

 relname        | file_92
 relnamespace   | 2200
 reltype        | 9384220
 relowner       | 100
 relam          | 0
 relfilenode    | 9384219
 reltablespace  | 0
 relpages       | 6624
 reltuples      | 113082
 reltoastrelid  | 0
 reltoastidxid  | 0
 relhasindex    | t
 relisshared    | f
 relkind        | r
 relnatts       | 23
 relchecks      | 0
 reltriggers    | 1
 relukeys       | 0
 relfkeys       | 0
 relrefs        | 0
 relhasoids     | t
 relhaspkey     | f
 relhasrules    | f
 relhassubclass | f
 relacl         |
(1 row)

secom=# select count(*) from file_92;
 count
-------
 17579
(1 row)

[2005-03-25 12:16:32 EST] INFO:    table name: secom."public"."file_92"
[2005-03-25 12:16:32 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-25 12:16:32 EST] INFO:    reltuples: 113082.000000;  relpages: 6624
[2005-03-25 12:16:32 EST] INFO:    curr_analyze_count: 993780;  curr_vacuum_count: 732470
[2005-03-25 12:16:32 EST] INFO:    last_analyze_count: 923820;  last_vacuum_count: 662699
[2005-03-25 12:16:32 EST] INFO:    analyze_threshold: 113582;  vacuum_threshold: 227164


Hope this helps; if there is anything else I can do, please let me know.


> If they are the same then it seems like it must be a backend issue.
>
> One thing that is possibly relevant here is that in 8.0 a plain VACUUM
> doesn't set reltuples to the exactly correct number, but to an
> interpolated value that reflects our estimate of the "steady state"
> average between vacuums.  I wonder if that code is wrong, or if it's
> operating as designed but is confusing autovac.


This average steady-state value might be hard to interpolate in this case,
since this is only a temporary holding place for the records..? Normally the
table has < 10 records in it at the same time. In the lab we create
"lump traffic" by sending over 50000 records. It takes about 20 hours to
transfer all of the 50k records.





Re: pg_autovacuum not having enough suction ?

From: "Matthew T. O'Connor"
Date:
Tom Lane wrote:

>"Matthew T. O'Connor" <matthew@zeut.net> writes:
>
>
>>hmm.... the value in reltuples should be accurate after a vacuum (or
>>vacuum analyze) if it's not it's a vacuum bug or something is going on
>>that isn't understood.  If you or pg_autovacuum are running plain
>>analyze commands, that could explain the invalid reltuples numbers.
>>
>>
>>Was reltuples = 113082 correct right after the vacuum?
>>
>>
>
>Another thing to check is whether the reltuples (and relpages!) that
>autovacuum is reporting are the same as what's actually in the pg_class
>row for the relation.  I'm wondering if this could be a similar issue
>to the old autovac bug where it wasn't reading the value correctly.
>
>

I don't think so, as he did some manual selects from pg_class and
pg_stat_all_tables in one of the emails he sent that were showing similar
numbers to what autovac was reporting.

>If they are the same then it seems like it must be a backend issue.
>
>One thing that is possibly relevant here is that in 8.0 a plain VACUUM
>doesn't set reltuples to the exactly correct number, but to an
>interpolated value that reflects our estimate of the "steady state"
>average between vacuums.  I wonder if that code is wrong, or if it's
>operating as designed but is confusing autovac.
>
>

Ahh...  Now that you mention it, I do remember the discussion during
8.0 development.  This sounds very much like the cause of the problem.
Autovac is not vacuuming often enough for this table because reltuples
is telling autovac that there are a lot more tuples in this table than
there really are.

Really this is just another case of the more general problem with
autovac as it stands now: you can't set vacuum thresholds on a
per-table basis, and databases like this can't survive with a
one-size-fits-all threshold.  I would suggest that Otto perform regular
cron-based vacuums of this one table in addition to autovac; that is what
several people I have heard from in the field are doing.
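
Something as simple as the following, run from cron every few minutes against
this one table, should do it (a sketch only, using Otto's table name):

    VACUUM ANALYZE public.file_92;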

Come hell or high water I'm gonna get autovac integrated into 8.1, at
which point per-table thresholds would be easy to do.

>Can autovac be told to run the vacuums in VERBOSE mode?  It would be
>useful to compare what VERBOSE has to say to the changes in
>reltuples/relpages.
>
Not as it stands now.  That would be an interesting feature for
debugging purposes though.



Re: lazy_update_relstats considered harmful (was Re:

From: Simon Riggs
Date:
On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:
> 2. Dead tuples don't have that much influence on scan costs either, at
> least not once they are marked as known-dead.  Certainly they shouldn't
> be charged at full freight.

Yes, minor additional CPU time, but the main issue is when the dead
tuples force additional I/O.

> It's possible that there'd be some value in adding a column to pg_class
> to record dead tuple count, but given what we have now, the calculation
> in lazy_update_relstats is totally wrong.

Yes, that's the way. We can record the (averaged?) dead tuple count, but
also record the actual row count in reltuples.

We definitely need to record the physical and logical tuple counts,
since each of them has a different contribution to run times.

For comparing seq scan vs. index, we need to look at the physical tuple
count * avg row size, whereas when we calculate the number of rows returned
we should look at fractions of the logical row count.

> The idea I was trying to capture is that the tuple density is at a
> minimum right after VACUUM, and will increase as free space is filled
> in until the next VACUUM, so that recording the exact tuple count
> underestimates the number of tuples that will be seen on-the-average.
> But I'm not sure that idea really holds water.  The only way that a
> table can be at "steady state" over a long period is if the number of
> live tuples remains roughly constant (ie, inserts balance deletes).
> What actually increases and decreases over a VACUUM cycle is the density
> of *dead* tuples ... but per the above arguments this isn't something
> we should adjust reltuples for.
>
> So I'm thinking lazy_update_relstats should be ripped out and we should
> go back to recording just the actual stats.
>
> Sound reasonable?  Or was I right the first time and suffering brain
> fade today?

Well, I think the original idea had some validity, but clearly
lazy_update_relstats isn't the way to do it even though we thought so at
the time.

Best Regards, Simon Riggs



Re: lazy_update_relstats considered harmful (was Re: pg_autovacuum not having enough suction ?)

From: Tom Lane
Date:
Simon Riggs <simon@2ndquadrant.com> writes:
> On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:
>> 2. Dead tuples don't have that much influence on scan costs either, at
>> least not once they are marked as known-dead.  Certainly they shouldn't
>> be charged at full freight.

> Yes, minor additional CPU time, but the main issue is when the dead
> tuples force additional I/O.

I/O costs are mostly estimated off relpages, though, not reltuples.
The only time you really pay through the nose for a dead tuple is when
an indexscan visits it, but with the known-dead marking we now do in
btree indexes, I'm pretty sure that path is seldom taken.

>> It's possible that there'd be some value in adding a column to pg_class
>> to record dead tuple count, but given what we have now, the calculation
>> in lazy_update_relstats is totally wrong.

> Yes, thats the way. We can record the (averaged?) dead tuple count, but
> also record the actual row count in reltuples.

What I'd be inclined to record is the actual number of dead rows removed
by the most recent VACUUM.  Any math on that is best done in the
planner, since we can change the logic more easily than the database
contents.  It'd probably be reasonable to take half of that number as
the estimate of the average number of dead tuples.

But in any case, that's for the future; we can't have it in 8.0.*, and
right at the moment I'm focusing on what to push out for 8.0.2.

> We definitely need to record the physical and logical tuple counts,
> since each of them have different contributions to run-times.

There isn't any difference, if you are talking about fully dead tuples.
It would be possible for VACUUM to also count the number of
not-committed-but-not-removable tuples (ie, new from still-open
transactions, plus dead-but-still-visible-to-somebody), but I'm not sure
that it would be useful to do so, because that sort of count is hugely
transient.  The stat would be irrelevant moments after it was taken.

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: Tom Lane
Date:
> Otto Blomqvist wrote:
>> This table is basically a queue full of records waiting to get transferred
>> over from our 68030 system to the PG database. The records are then moved
>> into folders (using a trigger) like file_92_myy depending on what month the
>> record was created on the 68030. During normal operations there should not
>> be more than 10 records at a time in the table, although during the course
>> of a day a normal system will get about 50k records. I create 50000 records
>> to simulate incoming traffic, since we don't have much traffic in the test
>> lab.

Really the right way to do housekeeping for a table like that is to
VACUUM FULL (or better yet, TRUNCATE, if possible) immediately after
discarding a batch of records.  The VACUUM FULL will take very little
time if it only has to repack <10 records.  Plain VACUUM is likely to
leave the table nearly empty but physically sizable, which is bad news
from a statistical point of view: as the table fills up again, it won't
get physically larger, thereby giving the planner no clue that it
doesn't still have <10 records.  This means the queries that process
the 50K-record batch are going to get horrible plans :-(

I'm not sure if autovacuum could be taught to do that --- it could
perhaps launch a vacuum as soon as it notices a large fraction of the
table got deleted, but do we really want to authorize it to launch
VACUUM FULL?  It'd be better to issue the vacuum synchronously
as part of the batch updating script, I feel.
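
For concreteness, that synchronous housekeeping step could be as simple as the
following (a sketch only; whether VACUUM FULL or TRUNCATE fits depends on
whether any unprocessed rows must survive):

    -- after the batch has been drained; VACUUM FULL cannot run inside a
    -- transaction block, so issue it as its own statement
    VACUUM FULL ANALYZE public.file_92;
    -- or, if the table is known to be completely empty at this point:
    -- TRUNCATE TABLE public.file_92;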

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: Bruce Momjian
Date:
I added this to the TODO section for autovacuum:

        o Do VACUUM FULL if table is nearly empty?

I don't think autovacuum is ever going to be smart enough to recycle
during the delete, especially since the rows can't be reused until the
transaction completes.

One problem with VACUUM FULL would be autovacuum waiting for an
exclusive lock on the table.  Anyway, it is documented now as a possible
issue.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: pg_autovacuum not having enough suction ?

From: "Joshua D. Drake"
Date:
> > I'm not sure if autovacuum could be taught to do that --- it could
> > perhaps launch a vacuum as soon as it notices a large fraction of the
> > table got deleted, but do we really want to authorize it to launch
> > VACUUM FULL?  It'd be better to issue the vacuum synchronously
> > as part of the batch updating script, I feel.
>
> I added this to the TODO section for autovacuum:
>
>         o Do VACUUM FULL if table is nearly empty?

We should never automatically launch a vacuum full. That seems like a
really bad idea.

Sincerely,

Joshua D. Drake


--
Command Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564
Custom programming, 24x7 support, managed services, and hosting
Open Source Authors: plPHP, pgManage, Co-Authors: plPerlNG
Reliable replication, Mammoth Replicator - http://www.commandprompt.com/


Re: pg_autovacuum not having enough suction ?

From: Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Tom Lane wrote:
>> I'm not sure if autovacuum could be taught to do that --- it could
>> perhaps launch a vacuum as soon as it notices a large fraction of the
>> table got deleted, but do we really want to authorize it to launch
>> VACUUM FULL?

> One problem with VACUUM FULL would be autovacuum waiting for an
> exclusive lock on the table.  Anyway, it is documented now as a possible
> issue.

I don't care too much about autovacuum waiting awhile to get a lock.
I do care about other processes getting queued up behind it, though.

Perhaps it would be possible to alter the normal lock queuing semantics
for this case, so that autovacuum's request doesn't block later
arrivals, and it can only get the lock when no one is interested in the
table.  Of course, that might never happen, or by the time it does
there's no point in VACUUM FULL anymore :-(

            regards, tom lane

Re: pg_autovacuum not having enough suction ?

From: Bruce Momjian
Date:
Can we issue a LOCK TABLE with a statement_timeout, and only do the
VACUUM FULL if we can get a lock quickly?  That seems like a plan.

The only problem is that you can't VACUUM FULL in a transaction:

    test=> create table test (x int);
    CREATE TABLE
    test=> insert into test values (1);
    INSERT 0 1
    test=> begin;
    BEGIN
    test=> lock table test;
    LOCK TABLE
    test=> vacuum full;
    ERROR:  VACUUM cannot run inside a transaction block

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist"
Date:
OK, thanks a lot for your time, guys! I guess my table is pretty unusual
and that's why this problem has not surfaced until now. Better late than
never ;) I'll cron a "manual" VACUUM FULL on the table.



"Tom Lane" <tgl@sss.pgh.pa.us> wrote in message
news:16054.1111791962@sss.pgh.pa.us...
> > Otto Blomqvist wrote:
> >> This table is basically a queue full of records waiting to get
transfered
> >> over from our 68030 system to the PG database. The records are then
moved
> >> into folders (using a trigger) like file_92_myy depending on what month
the
> >> record was created on the 68030. During normal operations there should
not
> >> be more than 10 records at a time in the table, although during the
course
> >> of a day a normal system will get about 50k records. I create 50000
records
> >> to simulate incoming traffic, since we don't have much traffic in the
test
> >> lab.
>
> Really the right way to do housekeeping for a table like that is to
> VACUUM FULL (or better yet, TRUNCATE, if possible) immediately after
> discarding a batch of records.  The VACUUM FULL will take very little
> time if it only has to repack <10 records.  Plain VACUUM is likely to
> leave the table nearly empty but physically sizable, which is bad news
> from a statistical point of view: as the table fills up again, it won't
> get physically larger, thereby giving the planner no clue that it
> doesn't still have <10 records.  This means the queries that process
> the 50K-record patch are going to get horrible plans :-(
>
> I'm not sure if autovacuum could be taught to do that --- it could
> perhaps launch a vacuum as soon as it notices a large fraction of the
> table got deleted, but do we really want to authorize it to launch
> VACUUM FULL?  It'd be better to issue the vacuum synchronously
> as part of the batch updating script, I feel.
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>



Re: pg_autovacuum not having enough suction ?

From: "Matthew T. O'Connor"
Date:
Well, the simple answer is that pg_autovacuum didn't see 10,000 inserts,
updates or deletes.  pg_autovacuum saw:

    476095 - 471336 = 4759  U/D's relevant for vacuuming, and
    634119 - 629121 = 4998  I/U/D's relevant for performing analyze.

The tough question is why pg_autovacuum is not seeing all the updates.
Since autovacuum depends on the stats system for its numbers, the most
likely answer is that the stats system is not able to keep up with the
workload and is ignoring some of the updates.  Would you check what the
stats system is reporting for the numbers of I/U/D's for the file_92
table?  The query pg_autovacuum uses is:

select a.oid,a.relname,a.relnamespace,a.relpages,a.relisshared,a.reltuples,
           b.schemaname,b.n_tup_ins,b.n_tup_upd,b.n_tup_del
from pg_class a, pg_stat_all_tables b
where a.oid=b.relid and a.relkind = 'r'

Take a look at the n_tup_ins, upd, del numbers before and see if they
are keeping up with the actual number of I/U/D's that you are
performing.  If they are, then it's a pg_autovacuum problem that I will
look into further; if they are not, then it's a stats system problem
that I can't really help with.
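
One way to make that comparison concrete (a sketch, not something pg_autovacuum
does itself; snapshot the counters, run a known batch, then diff):

    CREATE TEMP TABLE stats_before AS
        SELECT relid, n_tup_ins, n_tup_upd, n_tup_del
        FROM pg_stat_all_tables WHERE relname = 'file_92';
    -- ... perform a known number of inserts/updates/deletes ...
    SELECT s.n_tup_ins - b.n_tup_ins AS ins_seen,
           s.n_tup_upd - b.n_tup_upd AS upd_seen,
           s.n_tup_del - b.n_tup_del AS del_seen
    FROM pg_stat_all_tables s JOIN stats_before b ON b.relid = s.relid;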

Good luck,

Matthew



--
Matthew O'Connor
V.P. of Operations
Terrie O'Connor Realtors
201-934-4900 x27


Re: pg_autovacuum not having enough suction ?

From: Andrew Sullivan
Date:
On Fri, Mar 25, 2005 at 06:21:24PM -0500, Bruce Momjian wrote:
>
> Can we issue a LOCK TABLE with a statement_timeout, and only do the
> VACUUM FULL if we can get a lock quickly?  That seems like a plan.

I think someone else's remark in this thread is important, though:
autovacuum shouldn't ever block other transactions, and this approach
will definitely run that risk.

A


--
Andrew Sullivan  | ajs@crankycanuck.ca
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
        --Alexander Hamilton