Re: Adding new field to big table

From: acanada
Subject: Re: Adding new field to big table
Date:
Msg-id: E7522DA2-B85C-43D3-BC0E-0F8FAC78B64B@cnio.es
In response to: Re: Adding new field to big table  (Jeff Janes)
Responses: Re: Adding new field to big table  (Jeff Janes)
List: pgsql-performance

Discussion tree:

Adding new field to big table  ("acanada", )
 Re: Adding new field to big table  (Jeff Janes, )
  Re: Adding new field to big table  ("acanada", )
   Re: Adding new field to big table  (Jeff Janes, )
    Re: Adding new field to big table  ("acanada", )
     Re: Adding new field to big table  (Jeff Janes, )
      Re: Adding new field to big table  (, )

Hello Jeff,

The lock time is not a problem. The problem is that it takes too much time. I will need to add more fields to this table in the near future, and I'd like to know whether the process can be sped up by any parameter, workaround, or anything else...
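
For instance, just a sketch of the kind of parameters and workarounds I mean (the index name and setting values below are made-up examples, not something from this thread):

-- 1. Drop nonessential indexes on document before the update and recreate
--    them afterwards, so the new row versions do not have to be indexed
--    one by one (hypothetical index name):
DROP INDEX IF EXISTS document_svmconfidence_idx;

-- 2. Reduce checkpoint pressure during the bulk write (postgresql.conf,
--    for the 9.x releases current at the time):
--    checkpoint_segments = 64
--    checkpoint_completion_target = 0.9

-- 3. Don't wait for the WAL flush on every commit of batched updates:
SET synchronous_commit = off;

-- Then run the update as before:
UPDATE document AS d
SET    "svmConfidence" = st.svmconfidence
FROM   svm_confidence_id_tmp AS st
WHERE  st.id = d.id;

-- Recreate whatever indexes were dropped:
-- CREATE INDEX document_svmconfidence_idx ON document ("svmConfidence");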

Thank you for your answer.

Cheers,
Andrés

On Mar 14, 2014, at 5:49 PM, Jeff Janes wrote:

On Fri, Mar 14, 2014 at 4:30 AM, acanada <> wrote:
Hello,

I'm having time issues when adding new fields to a big table. I hope you can give me some hints to speed up the updates of a table with 124 million rows...

This is what I do:

First I create a tmp_table with the data that will be added to the big table:

\d+ svm_confidence_id_tmp
                        Table "public.svm_confidence_id_tmp"
    Column     |       Type       | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
 id            | integer          | not null  | plain   |              |
 svmconfidence | double precision |           | plain   |              |
Indexes:
    "svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)



....
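
For reference, the tmp table is built roughly like this, assuming the new scores arrive as a flat file of (id, score) pairs; the file path is only an example:

CREATE TABLE svm_confidence_id_tmp (
    id            integer PRIMARY KEY,
    svmconfidence double precision
);

-- server-side COPY; from psql, \copy does the same without superuser rights
COPY svm_confidence_id_tmp (id, svmconfidence)
    FROM '/path/to/svm_scores.csv' WITH (FORMAT csv);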



Then I update the svmConfidence field of the document table like this:

 update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;

But it takes too much time.
Is there something to take into account? Any hints?
Should I do it in a different way?
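
A quick way to see where the time is going is to run the statement under EXPLAIN (ANALYZE, BUFFERS) inside a transaction and roll it back (it does rewrite the rows while it runs):

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE document AS d
SET    "svmConfidence" = st.svmconfidence
FROM   svm_confidence_id_tmp AS st
WHERE  st.id = d.id;
ROLLBACK;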

If your concern is how long the rows stay locked, you can break it into a series of shorter transactions:

with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )
update document as d set "svmConfidence" = t.svmconfidence from t where t.id=d.id;
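
One way to drive that batch repeatedly, assuming psql 9.3 or later, is to end the statement with \watch instead of a semicolon so it is re-issued every few seconds, each run in its own transaction, and interrupt it once the tmp table is empty:

WITH t AS (
    DELETE FROM svm_confidence_id_tmp
    WHERE  id IN (SELECT id FROM svm_confidence_id_tmp LIMIT 10000)
    RETURNING *
)
UPDATE document AS d
SET    "svmConfidence" = t.svmconfidence
FROM   t
WHERE  t.id = d.id
\watch 5

-- stop the loop (Ctrl-C) once this reaches zero:
-- SELECT count(*) FROM svm_confidence_id_tmp;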

Cheers,

Jeff



**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.

