Re: Getting an out of memory failure.... (long email)
| From | Gaetano Mendola |
|---|---|
| Subject | Re: Getting an out of memory failure.... (long email) |
| Date | |
| Msg-id | cjbvfd$mbe$1@floppy.pyrenet.fr |
| In reply to | Re: Getting an out of memory failure.... (long email) (Sean Shanny <shannyconsulting@earthlink.net>) |
| List | pgsql-general |
Sean Shanny wrote:
> Tom,
>
> The Analyze did in fact fix the issue. Thanks.
>
> --sean
Given that you are using pg_autovacuum, there are a few points to consider:
1) There is a buggy version out there that will not analyze big tables.
2) pg_autovacuum can fail to keep up in scenarios with big tables that are
not heavily updated or inserted into.
For point 1), I suggest checking your logs to see how the total row count
for your table is displayed; a correct version shows the row count
as a float:
[2004-09-28 17:10:47 CEST] table name: empdb."public"."user_logs"
[2004-09-28 17:10:47 CEST] relid: 17220; relisshared: 0
[2004-09-28 17:10:47 CEST] reltuples: 5579780.000000; relpages: 69465
[2004-09-28 17:10:47 CEST] curr_analyze_count: 171003; curr_vacuum_count: 0
[2004-09-28 17:10:47 CEST] last_analyze_count: 165949; last_vacuum_count: 0
[2004-09-28 17:10:47 CEST] analyze_threshold: 4464024; vacuum_threshold: 2790190
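A quick way to check this is a pattern match on the log line. The sketch below is hypothetical (the sample line is copied from the log output above; in practice you would feed it lines grepped from your actual pg_autovacuum log):

```shell
# Sample reltuples line as it appears in the pg_autovacuum log above.
sample='[2004-09-28 17:10:47 CEST] reltuples: 5579780.000000; relpages: 69465'

# A patched build prints the row count as a float (with a decimal point);
# a buggy build prints a plain integer.
case "$sample" in
  *'reltuples: '*.*';'*) echo "float row count: patched version" ;;
  *)                     echo "integer row count: possibly buggy version" ;;
esac
```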
For point 2), I suggest scheduling ANALYZE runs from cron during the day.
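A minimal crontab sketch, assuming the database is named empdb (as in the log above) and that the account running cron can connect with psql without a password prompt; adjust the schedule and database name to your setup:

```shell
# Run ANALYZE on empdb every 6 hours (add via `crontab -e` for the
# database owner). ANALYZE only updates planner statistics, so it is
# cheap enough to run several times a day on large, rarely-updated tables.
0 */6 * * * psql -d empdb -c 'ANALYZE;'
```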
Regards
Gaetano Mendola