Re: error updating a very large table

From: Simon Riggs
Subject: Re: error updating a very large table
Date:
Msg-id: 1239814637.23905.44.camel@ebony.2ndQuadrant
In reply to: Re: error updating a very large table  (Tom Lane)
List: pgsql-performance

error updating a very large table  (Brian Cox, )
 Re: error updating a very large table  (Grzegorz Jaśkiewicz, )
 Re: error updating a very large table  (Tom Lane, )
  Re: error updating a very large table  (Simon Riggs, )

On Wed, 2009-04-15 at 09:51 -0400, Tom Lane wrote:
> Brian Cox <> writes:
> > I changed the logic to update the table in 1M row batches. However,
> > after 159M rows, I get:
>
> > ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of
> > 8192 bytes at block 7621407
>
> You're out of disk space.
>
> > A df run on this machine shows plenty of space:
>
> Per-user quota restriction, perhaps?
>
> I'm also wondering about temporary files, although I suppose 100G worth
> of temp files is a bit much for this query.  But you need to watch df
> while the query is happening, rather than suppose that an after-the-fact
> reading means anything.
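Tom's advice to watch df while the query is running can be scripted. A minimal sketch (the data-directory path and the 10-second interval below are assumptions, not from the thread):

```shell
# Print available kilobytes on the filesystem holding the given path.
# Point it at your PGDATA or the tablespace directory from the error.
avail_kb() {
    df -P -k "$1" | awk 'NR==2 {print $4}'
}

# Sample it periodically while the UPDATE runs, e.g.:
#   while :; do date; avail_kb /var/lib/pgsql/data; sleep 10; done
avail_kb /tmp
```

If the reported number keeps shrinking toward zero during the batch, the "could not extend relation" error really is disk exhaustion at write time, even if df looks healthy afterwards.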

Any time we get an out-of-space error we will be in the same situation.

When we get this error, we should report:
* a summary of current temp file usage
* df output (where the OS allows it)

Otherwise we'll always be wondering what caused the error.
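For reference, later PostgreSQL releases did grow ways to inspect temp-file usage from SQL; a sketch (these views and functions postdate this 2009 thread):

```sql
-- Cumulative temp-file counters per database
-- (pg_stat_database gained temp_files/temp_bytes in 9.2).
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- On 12 and later, list the live temp files directly:
-- SELECT * FROM pg_ls_tmpdir();
```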

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


