Thread: Large Table Updates, causing memory exhaustion. Is a transaction wrapper the answer?


Large Table Updates, causing memory exhaustion. Is a transaction wrapper the answer?

From:
"Michael Miyabara-McCaskey"
Date:
Hello all,

I am new to the PostgreSQL world.

I am creating a new DB with a lot of records; for instance, one table that I
need to do updates on is about 5.5GB in size.

Doing a simple operation to uppercase the records, I keep exhausting the
memory of the backend.  I'm thinking that writing my SQL statement within a
transaction is the correct method... Is it? If so, what is the best way to
loop through all the records so that I again do not exhaust the
available memory?

The original SQL statement I have been using is
"UPDATE table_name SET field_1 = UPPER(field_1);"

Any help would be appreciated, and thank you in advance.

-Michael Miyabara-McCaskey


Re: Large Table Updates, causing memory exhaustion. Is a transaction wrapper the answer?

From:
Tom Lane
Date:
"Michael Miyabara-McCaskey" <mykarz@miyabara.com> writes:
> Doing a simple operation to uppercase the records, I keep exhausting the
> memory of the backend.

> The original SQL statement I have been using is
> "UPDATE table_name SET field_1 = UPPER(field_1);"

The problem is that the UPPER() function leaks memory, so if you invoke
it enough times in one statement, you will run out.  Transaction or not
makes no difference.

This is fixed for 7.1, but in the meantime the only solution I can offer
is to do conditional updates that only affect a subset of the table per
command.  You could wrap a sequence of those in a transaction if you
have a need to prevent other processes from seeing the partially-updated
state of the table, but otherwise a transaction isn't going to help.
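A subset-per-command version of the original statement might look like the
following sketch (assuming field_1 is a text column, and using its leading
character as an arbitrary way to partition the rows; any selective WHERE
condition would do). Each UPDATE processes fewer rows, so the per-statement
leak stays bounded:

BEGIN;
-- each statement touches only part of the table
UPDATE table_name SET field_1 = UPPER(field_1) WHERE field_1 < 'm';
UPDATE table_name SET field_1 = UPPER(field_1) WHERE field_1 >= 'm';
COMMIT;

The BEGIN/COMMIT wrapper is only needed if other sessions must not see the
half-uppercased table in between; run the statements separately otherwise.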

            regards, tom lane