Discussion: file size issue?


file size issue?

From:
"Johnson, Shaunn"
Date:

Howdy:

Have a development machine running Postgres 7.1.3 on Mandrake Linux 8.0,
kernel 2.4.16. Special note: this is running on a software RAID array with
reiserfs. The filesystem is about 160 GB, and the machine has 1.5 GB of
memory.

I am concerned that there is a file size problem on Linux, although I
am not sure how to prove that that is the problem.

I have a table in Postgres that is about 3 GB in size. I would like to
vacuum and index it, but I get errors:

[errors - vacuum]

bcn=> vacuum verbose analyze db2_cn1pmemb;
NOTICE:  --Relation db2_cn1pmemb--
NOTICE:  Pages 64428: Changed 64428, reaped 0, Empty 0, New 0; Tup 1739546: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 286, MaxLen 295; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 17.58s/0.57u sec.

NOTICE:  Analyzing...
ERROR:  MemoryContextAlloc: invalid request size 1163152965

[/errors - vacuum]

[error - index]
bcn=> create index db2_cn1pmemb_i on db2_cn1pmemb (c_contract_num, c_mbr_num);
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

[/error - index]

I don't know much, but looking at the serverlog, I see:

[serverlog error]

The Data Base System is starting up
/usr/bin/postmaster: ServerLoop:                handling writing 5
DEBUG:  redo starts at (46, 1232599204)
DEBUG:  ReadRecord: record with zero len at (46, 1232673968)
DEBUG:  redo done at (46, 1232665696)
DEBUG:  database system is in production state
DEBUG:  proc_exit(0)
DEBUG:  shmem_exit(0)
DEBUG:  exit(0)
/usr/bin/postmaster: reaping dead processes...

[/serverlog error]

Now - the reason I suggested that it could be a file size problem
is that all files less than about 2 GB (or just a tad more) work
without a problem.

I've been trying to read up on whether it's a Large File Summit problem,
but I cannot find any documentation that says whether it is or isn't
(groups.google.com seems to think that this problem no longer exists on
an Intel box running Linux). The best the newsgroup can suggest is
'try ulimit -f to see'. That command returns 'unlimited'.
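
(One caveat worth noting: ulimit values are per-process, so 'unlimited' in
an interactive shell doesn't necessarily describe the postmaster's own
environment. A minimal check, assuming the server runs under a dedicated
postgres account - adjust the account name to your setup:)

[check - ulimit]
# ulimit is inherited per-process; check it under the account
# that actually starts the postmaster (assumed here to be 'postgres').
$ su - postgres -c 'ulimit -f'
[/check - ulimit]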

Has anyone seen whether it is a problem with the OS or with the way
Postgres handles large files (or whether I should compile it again
with some new options)?
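
(If a rebuild turns out to be relevant, the usual glibc large-file knobs
are the _FILE_OFFSET_BITS=64 and _LARGEFILE_SOURCE macros. This is a
sketch only - whether 7.1.3's configure honors CFLAGS passed this way, or
needs these flags at all, is an assumption to verify:)

[sketch - rebuild with LFS flags]
# Standard glibc large-file macros; it is unverified whether 7.1.3
# needs or even picks these up - treat this as an experiment.
$ CFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
$ make && make install
[/sketch - rebuild with LFS flags]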

Any suggestions or documentation appreciated.

Thanks!

-X

Re: file size issue?

From:
"Johnson, Shaunn"
Date:
--I think you've answered at least half of my question, Andrew.
 
--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table based on its size (your answer
tells me 'No' - it will continue until it is done, splitting each
table into 1 GB segments).
 
--And if THAT is true, then why am I getting failures when
I vacuum or index a table just after it reaches 2 GB?
 
--And if it's an OS (or any other) problem, how can I factor
Postgres out?
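
(One way to factor Postgres out of the picture: create a file larger
than the suspect 2 GB boundary directly on the same reiserfs volume and
see whether the OS objects. A sketch with placeholder paths; note that if
dd itself was built without large-file support, the test can fail for the
wrong reason:)

[test - large file]
# Write a 3 GB file of zeros on the same filesystem Postgres uses
# (/mnt/raid is a placeholder - substitute the real mount point).
$ dd if=/dev/zero of=/mnt/raid/bigfile.test bs=1024k count=3072
# If this succeeds, the kernel and filesystem can handle >2 GB files,
# and the failure is more likely inside Postgres or the data itself.
$ ls -l /mnt/raid/bigfile.test
$ rm /mnt/raid/bigfile.test
[/test - large file]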
 
--Thanks!
 
-X
 
 
[snip]
 
 
> Has anyone seen whether it is a problem with the OS or with the way
> Postgres handles large files (or whether I should compile it again
> with some new options)?
 
 
What do you mean by "Postgres handles large files"? The file size
problem isn't related to the size of your table, because Postgres
splits table files at 1 GB.
If it were an output problem (e.g. dumping to a single output file),
you could see something, but you said you were vacuuming.
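
(You can see the splitting on disk: each table is stored under
$PGDATA/base/<database-oid>/ in a file named after its pg_class.relfilenode,
with additional 1 GB segments suffixed .1, .2, and so on. A rough sketch -
the database and table names are from this thread, the rest is illustrative:)

[check - segment files]
# Look up the on-disk file name for the table.
$ psql -d bcn -c "SELECT relfilenode FROM pg_class WHERE relname = 'db2_cn1pmemb';"
# Then, as the postgres user, list its segments ($PGDATA stands in for
# the cluster's data directory; substitute the relfilenode value).
$ ls -l $PGDATA/base/*/<relfilenode>*
# A ~3 GB table should show the base file plus .1 and .2 (and possibly
# .3) segments, each capped at 1 GB.
[/check - segment files]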
 
 
A

[snip]