Discussion: postmaster crashing on semi large tabl

postmaster crashing on semi large tabl

From:
nate
Date:
I have a table that has only about 6000 rows in it, takes up 44941312
bytes (in the data/base dir), and crashes postgres anytime I try to access
it.  Sometimes I can select a single row out of it and get a result; other
times it just crashes.  Anytime I try to do a count(column) on it, it just
eats up 100% CPU for more than 5 minutes.  These are the error messages
I'm getting:

When I run a 'vacuum analyze' on the big table:
FATAL 1:  Memory exhausted in AllocSetAlloc()
It does the same thing when I try to access the table.

And I got this in the syslog twice; I can't reproduce this one...
Nov 22 17:17:59 desert-solutions logger: FATAL: s_lock(401981a4) at
bufmgr.c:665, stuck spinlock. Aborting.
Nov 22 17:17:59 desert-solutions logger: FATAL: s_lock(4019ec64) at
bufmgr.c:1106, stuck spinlock. Aborting.
Nov 22 17:17:59 desert-solutions logger: FATAL: s_lock(40014011) at
spin.c:125, stuck spinlock. Aborting.

I can 'vacuum' the big table, but it won't 'vacuum analyze'.  Here's what
I get when I do a vacuum:

DEBUG:  --Relation resume_user--
DEBUG:  Pages 1145: Changed 0, Reapped 740, Empty 0, New 0; Tup 7150: Vac
0, Keep/VTL 0/0, Crash 0, UnUsed 4349, MinLen 56, MaxLen 6148; Re-using:
Free/Avail. Space 166708/160508; EndEmpty/Avail. Pages 0/449. Elapsed 0/0
sec.
DEBUG:  Rel resume_user: Pages: 1145 --> 1145; Tuple(s) moved: 0. Elapsed
4/0 sec.
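
For what it's worth, the page and tuple counts that vacuum prints can also be read back out of the system catalog; a minimal cross-check (assuming the standard pg_class columns) would look something like this:

-- what the catalog currently has recorded for the table's size
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname = 'resume_user';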

I tried futzing with buffer sizes (256, 512, 1024; see the sketch further
down), but that didn't seem to do anything.  Here's my ulimit:

[postgre@desert-solutions postgre]$ ulimit -a
core file size (blocks)  1000000
data seg size (kbytes)   unlimited
file size (blocks)       unlimited
max memory size (kbytes) unlimited
stack size (kbytes)      8192
cpu time (seconds)       unlimited
max user processes       256
pipe size (512 bytes)    8
open files               1024
virtual memory (kbytes)  2105343

[postgre@desert-solutions postgre]$ free
             total       used       free     shared    buffers     cached
Mem:        257788      73108     184680       7156       7188      53160
-/+ buffers/cache:      12760     245028
Swap:        72256      10184      62072

As you can see, there's plenty of memory free.  The backend crashes within
2 seconds of running 'vacuum analyze'.  There's plenty of disk space left,
and a fresh reboot didn't help.  I'm running PostgreSQL 6.5.1 on
i686-pc-linux-gnu, compiled by gcc egcs-2.91.66, on Red Hat 6.0 with the
2.2.5-22smp kernel.  Two P2-350s and 256 megs of RAM in a Dell Dimension.
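
For reference, one way to vary the shared buffer count is to restart the postmaster with the -B flag, along these lines (the data directory path here is just a placeholder):

[postgre@desert-solutions postgre]$ # -B sets the number of shared buffers, -D the data directory
[postgre@desert-solutions postgre]$ postmaster -B 1024 -D /usr/local/pgsql/data &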

So, if anyone has any ideas, please let me know.  I greatly appreciate it.

Thanks,
Nathan Shafer


Re: [BUGS] postmaster crashing on semi large tabl

From:
Tom Lane
Date:
nate <nate@desert-solutions.com> writes:
> I have a table that has only about 6000 rows in it, takes up 44941312
> bytes (in the data/base dir),

? Not in just one file, I hope.  It should be divided into 1-GB
segments named resume_user, resume_user.1, etc.  If it really is one
file then the problem likely has something to do with file offset
overflow.  If there are multiple files, how big are they exactly?
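
For example, something along these lines (the database name is a placeholder) would show each segment and its exact size in bytes:

$ cd /usr/local/pgsql/data/base/yourdb   # placeholder: the database's subdirectory under data/base
$ ls -l resume_user*                     # lists resume_user, resume_user.1, ... with byte sizes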

> I can 'vacuum' the big table, but it won't 'vacuum analyze'.  Here's what
> I get when I do a vacuum:

> DEBUG:  --Relation resume_user--
> DEBUG:  Pages 1145: Changed 0, Reapped 740, Empty 0, New 0; Tup 7150: Vac
> 0, Keep/VTL 0/0, Crash 0, UnUsed 4349, MinLen 56, MaxLen 6148; Re-using:
> Free/Avail. Space 166708/160508; EndEmpty/Avail. Pages 0/449. Elapsed 0/0
> sec.
> DEBUG:  Rel resume_user: Pages: 1145 --> 1145; Tuple(s) moved: 0. Elapsed
> 4/0 sec.

That's even more interesting, because it says "vacuum" thinks there are
only 1145 disk blocks (about 9 MB, assuming you stuck to the standard
8K block size) in the table, which is fairly reasonable for a table
with 6000 rows in it, whereas 4 gig is right out.  Why isn't vacuum
noticing all the remaining space?
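
For reference, the 9 MB figure comes straight from the default 8K page size:

    1145 pages * 8192 bytes/page = 9,379,840 bytes, or roughly 9 MB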

I hope you had a backup, because I'm afraid that table is hosed.  But
we should try to figure out how it got that way, so we can prevent it
from happening again.

            regards, tom lane