When adding millions of rows at once, getting out of disk space errors

From Mike Christensen
Subject When adding millions of rows at once, getting out of disk space errors
Date
Msg-id 499C7216.9000804@comcast.net
Responses Re: When adding millions of rows at once, getting out of disk space errors  (Scott Marlowe <scott.marlowe@gmail.com>)
Re: When adding millions of rows at once, getting out of disk space errors  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: When adding millions of rows at once, getting out of disk space errors  (Alan Hodgson <ahodgson@simkin.ca>)
Re: When adding millions of rows at once, getting out of disk space errors  (Sam Mason <sam@samason.me.uk>)
List pgsql-general
Hi all -

I'm doing some perf testing and need huge amounts of data.  So I have a
program that is adding data to a few tables ranging from 500,000 to 15M
rows.  The program is just a simple C# program that blasts data into the
DB, but after about 3M rows or so I get an error:

ERROR:  could not extend relation 1663/41130/41177: No space left on device
HINT:  Check free disk space.
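
In case it's useful for anyone answering, a query along these lines should
show which table that relfilenode belongs to and how big it has gotten
(41177 is the last number in the error; I'm assuming the path in the
message is tablespace/database/relfilenode):

SELECT n.nspname, c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relfilenode = 41177;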

If I do a full VACUUM on the table being inserted into, the error goes
away but it comes back very quickly.  Obviously, I wouldn't want this
happening in a production environment.
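
For what it's worth, by "full VACUUM" I mean just the plain command;
checking the size afterwards with something like the second line is how
I'd verify it actually shrank ("mytable" stands in for the real table):

VACUUM FULL VERBOSE mytable;
SELECT pg_size_pretty(pg_total_relation_size('mytable'));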

I've noticed some auto-vacuum settings as well (I just checked the box
and left all the defaults) but that doesn't seem to help too much.
What's the recommended setup in a production environment for tables
where tons of data will be inserted?
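
As far as I can tell, the knobs in question are the autovacuum settings in
postgresql.conf; the values below are what I believe the defaults are:

autovacuum = on
autovacuum_naptime = 1min               # time between autovacuum runs
autovacuum_vacuum_threshold = 50        # min row changes before vacuuming a table
autovacuum_vacuum_scale_factor = 0.2    # plus this fraction of the table's rows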

It seems to me there's some sort of "max table size" before you have to
allocate more space on the disk.  However, I can't seem to find where
these settings are, or how to allow millions of rows to be inserted into
a table without having to vacuum every few million rows.
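
If nothing else, is there a good way to see where the space is actually
going?  Queries along these lines seem like they'd show the overall
database size and the biggest tables, though I'm not sure they account
for everything (WAL, temp files, etc.):

SELECT pg_size_pretty(pg_database_size(current_database()));

SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;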

Mike


