Re: Adding new field to big table

From    Jeff Janes
Subject Re: Adding new field to big table
Date
Msg-id  CAMkU=1ydsccdXs6jn52OyncWVoF+CqNemb84G62Rf=NBQvf0wg@mail.gmail.com
In response to  Adding new field to big table  ("acanada" <acanada@cnio.es>)
Responses       Re: Adding new field to big table
List    pgsql-performance
On Fri, Mar 14, 2014 at 4:30 AM, acanada <acanada@cnio.es> wrote:
Hello,

I'm running into performance problems when adding new fields to a big table. I hope you can give me some hints to speed up the updates of a table with 124 million rows...

This is what I do:

First I create a tmp_table with the data that will be added to the big table:

\d+ svm_confidence_id_tmp
                        Table "public.svm_confidence_id_tmp"
    Column     |       Type       | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
 id            | integer          | not null  | plain   |              |
 svmconfidence | double precision |           | plain   |              |
Indexes:
    "svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)



....
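The message does not show how the staging table is created and loaded; here is a minimal sketch of one plausible way to do it (the CSV file name and the ANALYZE step are assumptions, not from the original post):

-- Hypothetical setup for the staging table; 'svm_scores.csv' is a
-- made-up file name standing in for the real data source.
create table svm_confidence_id_tmp (
    id            integer primary key,
    svmconfidence double precision
);

-- Bulk-load the (id, svmconfidence) pairs; \copy runs client-side in psql.
\copy svm_confidence_id_tmp from 'svm_scores.csv' with (format csv)

-- Refresh planner statistics so the later join against document is costed well.
analyze svm_confidence_id_tmp;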



Then I update the svmConfidence field of the document table like this:

 update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;
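One way to sanity-check an update of this size before running it for real is EXPLAIN (this check is an editorial sketch, not part of the original message); the plan should join the two tables with a hash or merge join rather than a per-row nested loop:

-- Show the plan without executing the 124M-row update.
explain
update document as d set "svmConfidence" = st.svmconfidence
from svm_confidence_id_tmp as st where st.id = d.id;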

But it takes too much time.
Is there anything I should take into account? Any hints?
Should I be doing this a different way?

If your concern is how long the rows stay locked, you can break the work into a series of shorter transactions:

with t as (
    delete from svm_confidence_id_tmp
    where id in (select id from svm_confidence_id_tmp limit 10000)
    returning *
)
update document as d
set "svmConfidence" = t.svmconfidence
from t
where t.id = d.id;
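To run that to completion, some loop has to repeat the statement until the staging table is empty. A sketch of one way to drive it as a server-side procedure (this requires PostgreSQL 11 or later for COMMIT inside a procedure, which postdates this thread; the procedure name and batch size are illustrative, and on older versions the loop would live in the client instead):

-- Hypothetical driver loop; each batch is committed on its own, so locks
-- are held for only one batch at a time.
create procedure backfill_svm_confidence(batch_size integer default 10000)
language plpgsql as $$
begin
    loop
        with t as (
            delete from svm_confidence_id_tmp
            where id in (select id from svm_confidence_id_tmp
                         limit batch_size)
            returning *
        )
        update document as d
        set "svmConfidence" = t.svmconfidence
        from t
        where t.id = d.id;

        -- stop once the staging table has been drained
        exit when not exists (select 1 from svm_confidence_id_tmp);

        commit;  -- release locks; lets autovacuum reclaim dead row versions
    end loop;
end;
$$;

call backfill_svm_confidence();

Each committed batch leaves dead versions of the updated document rows behind, so letting autovacuum keep up (or running VACUUM between batches) matters for a table this size.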

Cheers,

Jeff
