Thread: How to Find Cause of Long Vacuum Times - NOOB Question


How to Find Cause of Long Vacuum Times - NOOB Question

From: Yudhvir Singh Sidhu
I hope someone can help me with this vacuum problem. I can post more
info if needed.

Versions:  PostgreSQL 8.0.9 on FreeBSD 6.1
Situation:  huge amounts of adds and deletes daily. Running daily vacuums
Problem:  Vacuum times jump from 45-90 minutes to 6+
hours overnight, once every 1 to 3 months.
Solutions tried:  db truncate - brings vacuum times down. Reindexing
brings vacuum times down.

I know my indexes are getting fragmented and my tables are getting
fragmented. I also know that some of my btree indexes are not being used
in queries. I also know that using "UNIQUE" in a query makes PG ignore
any index.

I am looking for the cause of this. Recently I have been looking at
EXPLAIN and ANALYZE.
1.  Running EXPLAIN on a query tells me how my query SHOULD run and
running ANALYZE tells me how it DOES run. Is that correct?
2.  If (1) is true, then a difference between the two means my query
plan is messed up and running ANALYZE on a table-level will somehow
rebuild the plan. Is that correct?
3.  If (2) is correct, then running ANALYZE on a nightly basis before
running vacuum will keep vacuum times down. Is that correct?

Yudhvir Singh

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: "Steinar H. Gunderson"
On Sat, May 05, 2007 at 03:57:25PM -0700, Yudhvir Singh Sidhu wrote:
> Situation:  huge amounts of adds and deletes daily. Running daily vacuums

If you have huge amounts of adds and deletes, you might want to vacuum more
often; alternatively, look into autovacuum.
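
For instance, on 8.1 and later autovacuum is built into the server and just
needs to be switched on in postgresql.conf (on 8.0 you would have to run the
separate contrib/pg_autovacuum daemon instead). These are illustrative
settings, not tuned values:

    # postgresql.conf (8.1+)
    stats_start_collector = on   # autovacuum needs the statistics collector
    stats_row_level = on         # ...with row-level stats enabled
    autovacuum = on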

> Problem:  Vacuum times jump from 45-90 minutes to 6+
> hours overnight, once every 1 to 3 months.

You might want to check your FSM settings. Take a look at the output of
VACUUM VERBOSE and see how the results stack up against your FSM settings.
Alternatively, you could do a VACUUM FULL to clear the bloat, but this will
lock the tables and is not recommended on a regular basis.
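
The interesting part is the last couple of lines of a database-wide VACUUM
VERBOSE. The exact wording differs between versions, and the numbers below
are made up:

    VACUUM VERBOSE;
    ...
    INFO:  free space map: 512 relations, 250000 pages stored; 310000 total pages needed
    DETAIL:  Allocated FSM size: 1000 relations + 200000 pages = 1220 kB shared memory.

If "total pages needed" is bigger than your max_fsm_pages setting, vacuum
cannot remember all the free space, and the tables will keep growing no
matter how often you run it.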

> I know my indexes are getting fragmented and my tables are getting
> fragmented.

This sounds like a case of table bloat, i.e. vacuuming too seldom and/or
FSM settings that are too low.

> I also know that some of my btree indexes are not being used in queries.

This is a separate problem, usually; if you need help with a specific query,
post query and the EXPLAIN ANALYZE output here. (Note that using an index is
not always a win; Postgres' planner knows about this and tries to figure out
when it is a win and when it is not.)

> I also know that using "UNIQUE" in a query makes PG ignore any index.

Do you mean DISTINCT? There are known problems with SELECT DISTINCT, but I'm
not sure how it could make Postgres start ignoring an index. Again, it's a
separate problem.

> I am looking for the cause of this. Recently I have been looking at
> EXPLAIN and ANALYZE.

This is a good beginning. :-)

> 1.  Running EXPLAIN on a query tells me how my query SHOULD run and
> running ANALYZE tells me how it DOES run. Is that correct?

Nearly. EXPLAIN shows you the plan Postgres has chosen, with estimates of
the cost of each step. EXPLAIN ANALYZE (just plain "ANALYZE" is a different
command, which updates the planner's statistics) does the same, but also runs
the query and shows the time each step actually took. (Note that the
units of the estimates and the timings are different, so you can't compare
them directly.)
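
For instance (the table name and all numbers here are made up):

    EXPLAIN SELECT * FROM foo WHERE id = 42;
     Index Scan using foo_pkey on foo  (cost=0.00..8.27 rows=1 width=36)
       Index Cond: (id = 42)

    EXPLAIN ANALYZE SELECT * FROM foo WHERE id = 42;
     Index Scan using foo_pkey on foo  (cost=0.00..8.27 rows=1 width=36)
       (actual time=0.040..0.042 rows=1 loops=1)
       Index Cond: (id = 42)
     Total runtime: 0.110 ms

The "cost" numbers are the planner's estimates, in arbitrary units; the
"actual time" numbers are real milliseconds from actually running the query.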

> 2.  If (1) is true, then a difference between the two means my query
> plan is messed up and running ANALYZE on a table-level will somehow
> rebuild the plan. Is that correct?

Again, sort of right, but not entirely. There is no stored plan to rebuild:
a plan is built fresh for each query from the planner's statistics, and
ANALYZE is what updates those statistics. Having good statistics helps the
planner select the plan that actually ends up being the best.
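
For example (table and column names made up):

    ANALYZE mytable;   -- refresh statistics for one table
    ANALYZE;           -- or for the whole database

    -- for a column with a skewed distribution, you can collect more
    -- detailed statistics than the default before re-running ANALYZE:
    ALTER TABLE mytable ALTER COLUMN customer_id SET STATISTICS 100;
    ANALYZE mytable;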

> 3.  If (2) is correct, then running ANALYZE on a nightly basis before
> running vacuum will keep vacuum times down. Is that correct?

No, ANALYZE will only update planner statistics, which has nothing to do with
vacuum times. On the other hand, it might help with some of your queries.

/* Steinar */
--
Homepage: http://www.sesse.net/

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Yudhvir Singh Sidhu
Steinar H. Gunderson wrote:
> [snip]
Gee wow. I am so glad I looked into this subject. I think I am on the
right path to solving the long-running vacuum problem. Thanks a lot for
the detailed insight, Steinar.

Here is what I think the story is:
a.  Large amounts of rows are added to and deleted from a table - daily.
With this much activity, the statistics get out of whack easily. That's
where ANALYZE or VACUUM ANALYZE would help with query speed.
b.  If ANALYZE does not have a direct impact on vacuum times, what does?
Meaning what in this EXPLAIN/ANALYZE and Indexing world would have a
direct impact?

Again, thank you Steinar for validating my suspicion. It is great to be
on the right path.

Yudhvir




P.S. Here is another command that I suspect does something different from
plain ANALYZE: VACUUM ANALYZE.

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Heikki Linnakangas
Yudhvir Singh Sidhu wrote:
> Versions:  PostgreSQL 8.0.9 on FreeBSD 6.1
> Situation:  huge amounts of adds and deletes daily. Running daily vacuums
> Problem:  Vacuum times jump from 45-90 minutes to 6+
> hours overnight, once every 1 to 3 months.
> Solutions tried:  db truncate - brings vacuum times down. Reindexing
> brings vacuum times down.
>
> I know my indexes are getting fragmented and my tables are getting
> fragmented. I also know that some of my btree indexes are not being used
> in queries. I also know that using "UNIQUE" in a query makes PG ignore
> any index.

If the increase in vacuum time is indeed because of index fragmentation,
upgrading to 8.2 might help. Since 8.2, we vacuum indexes in physical
order, which speeds it up significantly, especially on fragmented indexes.
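
Until you can upgrade, the workaround you already found (reindexing) is the
usual one. Something like this, with a made-up name, run in a quiet period
since it takes an exclusive lock on the table:

    REINDEX TABLE mytable;           -- rebuild all indexes on the table
    REINDEX INDEX mytable_foo_idx;   -- or just one badly fragmented index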

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: "Steinar H. Gunderson"
On Sat, May 05, 2007 at 09:52:56PM -0700, Yudhvir Singh Sidhu wrote:
> Here is what I think the story is:
> a.  Large amounts of rows are added to and deleted from a table - daily.
> With this much activity, the statistics get out of whack easily. That's
> where ANALYZE or VACUUM ANALYZE would help with query speed.

You are still confusing ANALYZE and VACUUM. They are distinct operations,
and they help for different reasons.

Deleting rows leaves "dead rows" behind -- for various reasons, Postgres
can't actually remove them from disk at the moment of the DELETE. VACUUM
scans through the table, finds the dead rows, and marks their space as
reusable. This results in faster query times, since there is less data
overall to scan.
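
On your version, a vacuum of a single table looks something like this (names
and numbers are made up, and the wording varies between versions):

    VACUUM VERBOSE mytable;
    INFO:  vacuuming "public.mytable"
    INFO:  "mytable": removed 500000 row versions in 12000 pages
    INFO:  "mytable": found 500000 removable, 2000000 nonremovable row
           versions in 40000 pages

The "removable" row versions are exactly those dead rows.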

ANALYZE updates the statistics, as mentioned. Yes, by adding or deleting a
lot of data, the estimates can get out of whack, leading to bad query plans.

> b.  If ANALYZE does not have a direct impact on vacuum times, what does?
> Meaning what in this EXPLAIN/ANALYZE and Indexing world would have a
> direct impact?

Improving your vacuum speed is overall not that easy (although there are
options you can tweak, and you can of course improve your hardware).  The
simplest thing to do is simply to vacuum more often, as there will be less
work to do each time. It's a bit like cleaning your house -- it might be
less work to clean it once a year, but it sure is a better idea in the long
run to clean a bit every now and then. :-)
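
The mechanical way to do that is cron. For example, with made-up database
and table names:

    # crontab on the database server:
    # whole database nightly, plus the busiest table every hour
    0 3 * * *  vacuumdb -d mydb -z
    0 * * * *  vacuumdb -d mydb -z -t busy_table

(-z makes vacuumdb run ANALYZE as well.)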

/* Steinar */
--
Homepage: http://www.sesse.net/

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Yudhvir Singh Sidhu
Steinar H. Gunderson wrote:
> [snip]

Thanks for the clarification, Steinar.

I'll try out some of the things we discussed on Monday and will let you
guys know what happens. I know I am confusing some concepts, but I am new
to this db and to tuning in general. I am excited about this new
adventure and really appreciate the level of support I have seen.

Yudhvir

Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Jim Nasby
On May 5, 2007, at 5:57 PM, Yudhvir Singh Sidhu wrote:
> Problem:  Vacuum times jump from 45-90 minutes to 6+
> hours overnight, once every 1 to 3 months.
> Solutions tried:  db truncate - brings vacuum times down.
> Reindexing brings vacuum times down.

Does it jump up to 6+ hours just once and then come back down? Or
once at 6+ hours does it stay there?

Getting that kind of change in vacuum time sounds a lot like you
suddenly didn't have enough maintenance_work_mem to remember all the
dead tuples in one pass; increasing that setting might bring things
back in line (you can increase it on a per-session basis, too).
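
For example, before a big manual vacuum (on 8.0 the value is an integer in
kB, so 262144 means 256MB; the number and the table name are just
placeholders):

    SET maintenance_work_mem = 262144;
    VACUUM VERBOSE mytable;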

Also, have you considered vacuuming during the day, perhaps via
autovacuum? If you can vacuum more often you'll probably get less
bloat. You'll probably want to experiment with the vacuum_cost_delay
settings to reduce the impact of vacuuming during the day (try
setting vacuum_cost_delay to 20 as a starting point).
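
Something like this in postgresql.conf (illustrative starting points, not
tuned values):

    vacuum_cost_delay = 20    # ms to sleep each time the cost limit is hit
                              # (0, the default, disables cost-based delay)
    vacuum_cost_limit = 200   # work budget between sleeps; this is the default
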
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)



Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Yudhvir Singh Sidhu
Jim Nasby wrote:
> [snip]
It ramps up and I have to run a db truncate to bring it back down. On
some machines it creeps up, on others it spikes. I have seen it climb
from 6 to 12 to 21 hours on 3 consecutive days. Well, what's one to do? I
have maintenance_work_mem set to 32768 - is that enough? I vacuum daily.

I just turned on VACUUM VERBOSE on one of the systems and will find out
tomorrow what it shows me. I plan on playing with the max_fsm_* settings
tomorrow. And I'll keep you guys up to date.

Yudhvir



Re: How to Find Cause of Long Vacuum Times - NOOB Question

From: Jim Nasby
On May 7, 2007, at 11:10 PM, Yudhvir Singh Sidhu wrote:
> Jim Nasby wrote:
>> [snip]
> It ramps up and I have to run a db truncate to bring it back down.
> On some machines it creeps up, on others it spikes. I have seen it
> climb from 6 to 12 to 21 hours on 3 consecutive days. Well, what's one
> to do? I have maintenance_work_mem set to 32768 - is that enough?

Depends on how many dead rows there are to be vacuumed. If there's a
lot, you could certainly be exceeding maintenance_work_mem. If you
look closely at the output of VACUUM VERBOSE you'll see the indexes
for a particular table being scanned more than once if all the dead
rows can't fit into maintenance_work_mem.
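
The telltale sign is the same index showing up more than once within a
single table's vacuum, something like this (made-up names and numbers):

    INFO:  index "mytable_pkey" now contains 2000000 row versions in 15000 pages
    DETAIL:  1700000 index row versions were removed.
    ...
    INFO:  index "mytable_pkey" now contains 2000000 row versions in 15000 pages
    DETAIL:  300000 index row versions were removed.

Two passes over the same index mean the dead-tuple list filled
maintenance_work_mem and had to be flushed partway through.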

> I vacuum daily.

If you've got high update rates, that very likely might not be often
enough.

> I just turned on VACUUM VERBOSE on one of the systems and will find
> out tomorrow what it shows me. I plan on playing with the max_fsm_*
> settings tomorrow. And I'll keep you guys up to date.

The tail end of vacuumdb -av will tell you exactly how much room is
needed in the FSM.
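
For example (the format of the summary varies a bit between versions, and
the numbers are made up):

    $ vacuumdb -a -v 2>&1 | tail -n 2
    INFO:  free space map: 900 relations, 400000 pages stored; 450000 total pages needed
    DETAIL:  Allocated FSM size: 1000 relations + 200000 pages = 1220 kB shared memory.

Compare "total pages needed" against max_fsm_pages and the relation count
against max_fsm_relations.
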
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)