Discussion: Adding new field to big table


Adding new field to big table

From:
"acanada"
Date:
Hello,

I'm having time issues when adding new fields to a big table. I hope you can give me some hints to speed up the updates of a table with 124 million rows...

This is what I do:

First I create a tmp_table with the data that will be added to the big table:

\d+ svm_confidence_id_tmp
                        Table "public.svm_confidence_id_tmp"
    Column     |       Type       | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
 id            | integer          | not null  | plain   |              |
 svmconfidence | double precision |           | plain   |              |
Indexes:
    "svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)



The table where this data will be added has the svmConfidence field:

\d+ document;
                                      Table "public.document"
      Column      |              Type              | Modifiers | Storage  | Stats target | Description
------------------+--------------------------------+-----------+----------+--------------+-------------
 id               | integer                        | not null  | plain    |              |
 kind             | character varying(255)         | not null  | extended |              |
 uid              | character varying(255)         | not null  | extended |              |
 sentenceId       | character varying(255)         | not null  | extended |              |
 text             | text                           | not null  | extended |              |
 hepval           | double precision               |           | plain    |              |
 created          | timestamp(0) without time zone | not null  | plain    |              |
 updated          | timestamp(0) without time zone |           | plain    |              |
 cardval          | double precision               |           | plain    |              |
 nephval          | double precision               |           | plain    |              |
 phosval          | double precision               |           | plain    |              |
 patternCount     | double precision               |           | plain    |              |
 ruleScore        | double precision               |           | plain    |              |
 hepTermNormScore | double precision               |           | plain    |              |
 hepTermVarScore  | double precision               |           | plain    |              |
 svm              | double precision               |           | plain    |              |
 svmConfidence    | double precision               |           | plain    |              |
Indexes:
    "DocumentOLD_pkey" PRIMARY KEY, btree (id)
    "document_cardval_index" btree (cardval)
    "document_heptermnorm_index" btree ("hepTermNormScore" DESC NULLS LAST)
    "document_heptermvar_index" btree ("hepTermVarScore" DESC NULLS LAST)
    "document_hepval_index" btree (hepval DESC NULLS LAST)
    "document_kind_index" btree (kind)
    "document_nephval_index" btree (nephval DESC NULLS LAST)
    "document_patterncount_index" btree ("patternCount" DESC NULLS LAST)
    "document_phosval_index" btree (phosval DESC NULLS LAST)
    "document_rulescore_index" btree ("ruleScore" DESC NULLS LAST)
    "document_sentenceid_index" btree ("sentenceId")
    "document_svm_index" btree (svm)
    "document_uid_index" btree (uid)

Then I update the svmConfidence field of the document table like this:

 update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;

But it takes too much time.
Is there something to take into account? Any hints?
Should I do it in a different way?

Thanks for your time...

Andrés



Re: Adding new field to big table

From:
Jeff Janes
Date:
On Fri, Mar 14, 2014 at 4:30 AM, acanada <acanada@cnio.es> wrote:
Hello,

I'm having time issues when adding new fields to a big table. I hope you can give me some hints to speed up the updates of a table with 124 million rows...

This is what I do:

First I create a tmp_table with the data that will be added to the big table:

\d+ svm_confidence_id_tmp
                        Table "public.svm_confidence_id_tmp"
    Column     |       Type       | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
 id            | integer          | not null  | plain   |              |
 svmconfidence | double precision |           | plain   |              |
Indexes:
    "svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)



....



Then I update the svmConfidence field of the document table like this:

 update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;

But it takes too much time.
Is there something to take into account? Any hints?
Should I do it in a different way?

If your concern is how long it holds the rows locked, you can break it into a series of shorter transactions:

with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )
update document as d set "svmConfidence" = t.svmconfidence from t where t.id=d.id;
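
One way to drive that in a loop is psql's \watch (psql 9.3 and later); with the default autocommit, each run is its own transaction, and you cancel it once the staging table is empty and the update reports 0 rows. The batch size and interval below are arbitrary:

with t as (delete from svm_confidence_id_tmp
           where id in (select id from svm_confidence_id_tmp limit 10000)
           returning *)
update document as d
   set "svmConfidence" = t.svmconfidence
  from t
 where t.id = d.id \watch 10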

Cheers,

Jeff

Re: Adding new field to big table

From:
"acanada"
Date:
Hello Jeff,

The lock time is not a problem. The problem is that it takes too much time. I will need to add more fields to this table in the near future and I'd like to know if the process can be accelerated by any parameter, workaround or whatever...

Thank you for your answer.

Cheers,
Andrés

On Mar 14, 2014, at 5:49 PM, Jeff Janes wrote:

On Fri, Mar 14, 2014 at 4:30 AM, acanada <acanada@cnio.es> wrote:
Hello,

I'm having time issues when adding new fields to a big table. I hope you can give me some hints to speed up the updates of a table with 124 million rows...

This is what I do:

First I create a tmp_table with the data that will be added to the big table:

\d+ svm_confidence_id_tmp
                        Table "public.svm_confidence_id_tmp"
    Column     |       Type       | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
 id            | integer          | not null  | plain   |              |
 svmconfidence | double precision |           | plain   |              |
Indexes:
    "svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)



....



Then I update the svmConfidence field of the document table like this:

 update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;

But it takes too much time.
Is there something to take into account? Any hints?
Should I do it in a different way?

If your concern is how long it holds the rows locked, you can break it into a series of shorter transactions:

with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )
update document as d set "svmConfidence" = t.svmconfidence from t where t.id=d.id;

Cheers,

Jeff



Re: Adding new field to big table

From:
Jeff Janes
Date:
On Fri, Mar 14, 2014 at 10:06 AM, acanada <acanada@cnio.es> wrote:
Hello Jeff,

The lock time is not a problem. The problem is that it takes too much time. I will need to add more fields to this table in the near future and I'd like to know if the process can be accelerated by any parameter, workaround or whatever...

Thank you for your answer.

OK.  Can you provide an explain (analyze, buffers), and the other information described here: http://wiki.postgresql.org/wiki/Slow_Query_Questions

It may be faster to make a new table by selecting a join on the existing tables and then replace the master table with it.
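
Roughly like this (only a sketch; the primary key, the other indexes, and any defaults or permissions still have to be recreated on the new table before the old one is dropped):

begin;
create table document_new as
    select d.id, d.kind, d.uid, d."sentenceId", d.text, d.hepval, d.created,
           d.updated, d.cardval, d.nephval, d.phosval, d."patternCount",
           d."ruleScore", d."hepTermNormScore", d."hepTermVarScore", d.svm,
           st.svmconfidence as "svmConfidence"
      from document d
      left join svm_confidence_id_tmp st on st.id = d.id;
alter table document rename to document_old;
alter table document_new rename to document;
commit;
-- recreate the primary key and the other indexes on the new table,
-- then drop document_old once everything checks out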

Also, if you are going to be doing a lot of bulk updates like this, lowering the fillfactor to below 50% might be helpful.
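
For example (the value is only illustrative, and it only applies to newly written pages, so existing data isn't repacked until the table is rewritten):

alter table document set (fillfactor = 45);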
 
Cheers,

Jeff

Re: Adding new field to big table

From:
"acanada"
Date:
Hello,

Jeff and Jeffrey, thank you for your tips.
This is the explain of the query:
x=> explain update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;
                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 Update on document d  (cost=4204242.82..61669685.86 rows=124515592 width=284)
   ->  Hash Join  (cost=4204242.82..61669685.86 rows=124515592 width=284)
         Hash Cond: (d.id = st.id)
         ->  Seq Scan on document d  (cost=0.00..8579122.97 rows=203066697 width=270)
         ->  Hash  (cost=1918213.92..1918213.92 rows=124515592 width=18)
               ->  Seq Scan on svm_confidence_id_tmp st  (cost=0.00..1918213.92 rows=124515592 width=18)
(6 rows)

It's not using the index; most of the rows are being updated.
I'm trying the CTAS solution.

Cheers,
Andrés.

On Mar 14, 2014, at 8:29 PM, Jeff Janes wrote:

On Fri, Mar 14, 2014 at 10:06 AM, acanada <acanada@cnio.es> wrote:
Hello Jeff,

The lock time is not a problem. The problem is that it takes too much time. I will need to add more fields to this table in the near future and I'd like to know if the process can be accelerated by any parameter, workaround or whatever...

Thank you for your answer.

OK.  Can you provide an explain (analyze, buffers), and the other information described here: http://wiki.postgresql.org/wiki/Slow_Query_Questions

It may be faster to make a new table by selecting a join on the existing tables and then replace the master table with it.

Also, if you are going to be doing a lot of bulk updates like this, lowering the fillfactor to below 50% might be helpful.
 
Cheers,

Jeff




Re: Adding new field to big table

From:
Jeff Janes
Date:
On Monday, March 17, 2014, acanada <acanada@cnio.es> wrote:
Hello,

Jeff and Jeffrey thank you for your tips.
This is the explain of the query:
x=> explain update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;
                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 Update on document d  (cost=4204242.82..61669685.86 rows=124515592 width=284)
   ->  Hash Join  (cost=4204242.82..61669685.86 rows=124515592 width=284)
         Hash Cond: (d.id = st.id)
         ->  Seq Scan on document d  (cost=0.00..8579122.97 rows=203066697 width=270)
         ->  Hash  (cost=1918213.92..1918213.92 rows=124515592 width=18)
               ->  Seq Scan on svm_confidence_id_tmp st  (cost=0.00..1918213.92 rows=124515592 width=18)
(6 rows)

It's not using the index; most of the rows are being updated.
I'm trying the CTAS solution.

Once this hash join spills to disk, the performance is going to get very bad.  The problem is that the outer table is going to get split into batches and written to disk.  If this were just a select, that would not be a problem, because when it reads each batch back in that is all it needs to do: the temp file contains all the necessary info.  But with an update, for each batch that it reads back in and matches to the inner side, it then needs to go back to the physical table to do the update, using the ctid saved in the batches to find the table tuple.  So in effect this adds a nested loop from the hashed copy of the table to the real copy, and the locality of reference between those is poor when there are many batches.  I'm pretty sure that the extra cost of doing this lookup is not taken into account by the planner.  But if it chooses a different plan than a hash join, that other plan might also have the same problem.

Some things for you to consider, other than CTAS:

1) Are you analyzing your temporary table before you do the update?  That might switch it to a different plan.

2) Make work_mem as large as you can stand, just for the one session that runs the update, to try to avoid spilling to disk.

3) If you set enable_hashjoin off temporarily in this session, what plan do you get?

0) Why are you creating the temporary table?  You must have some process that comes up with the value for the new column to put in the temporary table; why not just stick it directly into the original table?
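
Putting 1) through 3) together, something like this would show what plan you get and how it behaves (the work_mem value is just an example of "as large as you can stand"):

analyze svm_confidence_id_tmp;

begin;
set local work_mem = '2GB';
set local enable_hashjoin = off;
explain (analyze, buffers)
update document as d
   set "svmConfidence" = st.svmconfidence
  from svm_confidence_id_tmp as st
 where st.id = d.id;
rollback;  -- explain analyze actually executes the update, so roll the test back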

Some things for the PostgreSQL hackers to consider:

1) When the hash spills to disk, it seems to write to disk the entire row that is going to be updated (except for the one column which is going to be overwritten), plus that tuple's ctid.  It doesn't seem like this is necessary; it should only need to write the ctid and the join key (and perhaps any quals?).  Since it has to visit the old row anyway to set its cmax, it can pull out the rest of the data to make the new tuple while it is there.  If it wrote a lot less data to the temp tables it could make a lot fewer batches for the same work_mem, and here the cost is directly proportional to the number of batches.  (Also, for the table not being updated, it writes the ctid to temp space when there seems to be no use for it.)

2) Should the planner account for the scattered reads needed to join to the original table on ctid for update from whatever materialized version of the table is created?  Of course all other plans would also need to be similarly instrumented.  I think most of them would have the same problem as the hash_join.  The one type I can think of that doesn't would be a merge join in which there is strong correlation between the merge key and the ctid order on the table to be updated.

3) It seems like the truly efficient way to run such an update on a very large data set would be to join the two tables (hash or merge), then sort the result on the ctid of the updated table, then do a "merge join" between that sorted result and the physical table.  I don't think such a method is currently known to the planner, is it?  How hard would it be to make it?

The example I have been using is:


alter table pgbench_accounts add filler2 text;
create table foo as select aid, md5(aid::text) from pgbench_accounts;
analyze;
explain (verbose) update pgbench_accounts set filler2 = md5 from foo where pgbench_accounts.aid = foo.aid;

Cheers,

Jeff

Re: Adding new field to big table

From:
RichmondDyes@monroehosp.org
Date:
Jeff, I think adding the new table is the best way to handle this issue.




From:        Jeff Janes <jeff.janes@gmail.com>
To:        acanada <acanada@cnio.es>,
Cc:        postgres performance list <pgsql-performance@postgresql.org>
Date:        03/18/2014 02:05 AM
Subject:        Re: [PERFORM] Adding new field to big table
Sent by:        pgsql-performance-owner@postgresql.org




On Monday, March 17, 2014, acanada <acanada@cnio.es> wrote:
Hello,

Jeff and Jeffrey thank you for your tips.
This is the explain of the query:
x=> explain update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;
                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 Update on document d  (cost=4204242.82..61669685.86 rows=124515592 width=284)
   ->  Hash Join  (cost=4204242.82..61669685.86 rows=124515592 width=284)
         Hash Cond: (d.id = st.id)
         ->  Seq Scan on document d  (cost=0.00..8579122.97 rows=203066697 width=270)
         ->  Hash  (cost=1918213.92..1918213.92 rows=124515592 width=18)
               ->  Seq Scan on svm_confidence_id_tmp st  (cost=0.00..1918213.92 rows=124515592 width=18)
(6 rows)

It's not using the index; most of the rows are being updated.
I'm trying the CTAS solution.

Once this hash join spills to disk, the performance is going to get very bad.  The problem is that the outer table is going to get split into batches and written to disk.  If this were just a select, that would not be a problem, because when it reads each batch back in that is all it needs to do: the temp file contains all the necessary info.  But with an update, for each batch that it reads back in and matches to the inner side, it then needs to go back to the physical table to do the update, using the ctid saved in the batches to find the table tuple.  So in effect this adds a nested loop from the hashed copy of the table to the real copy, and the locality of reference between those is poor when there are many batches.  I'm pretty sure that the extra cost of doing this lookup is not taken into account by the planner.  But if it chooses a different plan than a hash join, that other plan might also have the same problem.

Some things for you to consider, other than CTAS:

1) Are you analyzing your temporary table before you do the update?  That might switch it to a different plan.

2) Make work_mem as large as you can stand, just for the one session that runs the update, to try to avoid spilling to disk.

3) If you set enable_hashjoin off temporarily in this session, what plan do you get?

0) Why are you creating the temporary table?  You must have some process that comes up with the value for the new column to put in the temporary table; why not just stick it directly into the original table?

Some things for the PostgreSQL hackers to consider:

1) When the hash spills to disk, it seems to write to disk the entire row that is going to be updated (except for the one column which is going to be overwritten), plus that tuple's ctid.  It doesn't seem like this is necessary; it should only need to write the ctid and the join key (and perhaps any quals?).  Since it has to visit the old row anyway to set its cmax, it can pull out the rest of the data to make the new tuple while it is there.  If it wrote a lot less data to the temp tables it could make a lot fewer batches for the same work_mem, and here the cost is directly proportional to the number of batches.  (Also, for the table not being updated, it writes the ctid to temp space when there seems to be no use for it.)

2) Should the planner account for the scattered reads needed to join to the original table on ctid for update from whatever materialized version of the table is created?  Of course all other plans would also need to be similarly instrumented.  I think most of them would have the same problem as the hash_join.  The one type I can think of that doesn't would be a merge join in which there is strong correlation between the merge key and the ctid order on the table to be updated.

3) It seems like the truly efficient way to run such an update on a very large data set would be to join the two tables (hash or merge), then sort the result on the ctid of the updated table, then do a "merge join" between that sorted result and the physical table.  I don't think such a method is currently known to the planner, is it?  How hard would it be to make it?

The example I have been using is:


alter table pgbench_accounts add filler2 text;
create table foo as select aid, md5(aid::text) from pgbench_accounts;
analyze;
explain (verbose) update pgbench_accounts set filler2 = md5 from foo where pgbench_accounts.aid = foo.aid;

Cheers,

Jeff
