Thread: Re: Possible to improve query plan?


Re: Possible to improve query plan?

From:
"Ing. Marcos Ortiz Valmaseda"
Date:
What type of application is it? You can see the categories in Josh Berkus's "Performance Whack-a-Mole" presentation from PGCon 2009:
- Web application [W]
- Online Transaction Processing (OLTP) [O]
- Data Warehousing (DW) [D]

Based on the application type, you can then tune postgresql.conf to get better performance from your PostgreSQL server.
PostgreSQL postgresql.conf baseline:
 shared_buffers = 25% of RAM
 work_mem = 512 KB [W], 2 MB [O], 128 MB [D]
 - but no more than RAM / number of connections
 maintenance_work_mem = 1/16 of RAM
 checkpoint_segments = 8 [W], 16-64 [O]/[D]
 wal_buffers = 1 MB [W], 8 MB [O]/[D]
 effective_cache_size = 2/3 of RAM
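
For instance, on a hypothetical dedicated 16 GB data-warehousing box those ratios would translate to something like the following (illustrative rounding, not a recommendation):

 shared_buffers = 4GB            # 25% of RAM
 work_mem = 128MB                # [D] starting point; cap at RAM / number of connections
 maintenance_work_mem = 1GB      # 1/16 of RAM
 checkpoint_segments = 64        # upper end of the [O]/[D] range
 wal_buffers = 8MB
 effective_cache_size = 11GB     # roughly 2/3 of RAM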

Regards


Ing. Marcos Luís Ortíz Valmaseda
Linux User # 418229 && PostgreSQL DBA
Centro de Tecnologías Gestión de Datos (DATEC)
http://postgresql.uci.cu
http://www.postgresql.org
http://it.toolbox.com/blogs/sql-apprentice

----- Original Message -----
From: "Jeremy Palmer" <JPalmer@linz.govt.nz>
To: "Andy Colson" <andy@squeakycode.net>
Cc: pgsql-performance@postgresql.org
Sent: Monday, 17 January 2011 0:13:25 GMT -05:00 US/Canada Eastern
Subject: Re: [PERFORM] Possible to improve query plan?

Hi Andy,

Yes, important omissions:

Server version: 8.4.6
OS: Windows Server 2003 Standard Ed :(
work_mem is 50 MB.

I tried setting work_mem to 500 MB, but it didn't make a huge difference in query execution time. But then again the OS disk caching is probably taking over here.

OK, here's the new plan with work_mem = 50 MB:

http://explain.depesz.com/s/xwv

And here is another plan with work_mem = 500 MB:

http://explain.depesz.com/s/VmO

Thanks,
Jeremy

-----Original Message-----
From: Andy Colson [mailto:andy@squeakycode.net]
Sent: Monday, 17 January 2011 5:57 p.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan?


Hum.. yeah, it looks like it takes no time at all to pull data from the individual indexes and then bitmap them.  I'm not sure what the bitmap heap scan is, or why it's slow.  Hopefully someone smarter will come along.

Also it's weird that explain.depesz.com didn't parse and show your entire plan.  Hum.. you seem to have ending quotes on some of the lines?

One other thought: quicksort  Memory: 23960kB
It needs over 20 MB to sort... It could be that your sort is swapping to disk.

What sort of PG version is this?
What are you using for work_mem?  You could try to bump it up a little (it's possible to set it for the session only, no need for a server restart) and see if that'd help.
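
As a sketch, in a psql session that would be:

 -- session-local only; reverts when the connection closes
 SET work_mem = '500MB';
 -- then re-run the query under test, e.g.:
 EXPLAIN ANALYZE SELECT ...;   -- your actual query here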

And sorry, but it's my bedtime; good luck though.

-Andy



Re: Possible to improve query plan?

From:
"Kevin Grittner"
Date:
Jeremy Palmer  wrote:

>   WHERE (
>       (_revision_created <= 16
>        AND _revision_expired > 16
>        AND _revision_expired <= 40)
>    OR (_revision_created > 16
>        AND _revision_created <= 40))

> -> Bitmap Heap Scan on version_crs_coordinate_revision
>      (actual time=70.925..13531.720 rows=149557 loops=1)

> -> BitmapOr (actual time=53.650..53.650 rows=0 loops=1)

This plan actually looks pretty good for what you're doing.  The
Bitmap Index Scans and BitmapOr determine which tuples in the heap
need to be visited.  The Bitmap Heap Scan then visits the heap pages
in physical order (to avoid repeated fetches of the same page and to
possibly edge toward sequential access speeds).  You don't seem to
have a lot of bloat, which could be a killer on this type of query,
since the rowcounts from the index scans aren't that much higher than
the counts after you check the heap.

The only thing I can think of which might help is to CLUSTER the
table on whichever of the two indexes used in the plan is
typically more selective for such queries.  (In the example query
that seems to be idx_crs_coordinate_revision_created.)  That might
reduce the number of heap pages which need to be accessed and/or
place them close enough together that you'll get some sequential
readahead.

I guess you could also try adjusting effective_io_concurrency upward
to see if that helps.
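
A minimal sketch of both suggestions, using the table and index names quoted from the plan (note that effective_io_concurrency requires posix_fadvise support, so it has no effect on Windows):

 -- rewrite the table in index order; takes an exclusive lock (8.4 syntax)
 CLUSTER version_crs_coordinate_revision USING idx_crs_coordinate_revision_created;
 -- refresh planner statistics after the physical reorder
 ANALYZE version_crs_coordinate_revision;
 -- on platforms that support it, allow overlapping reads during bitmap heap scans
 SET effective_io_concurrency = 2;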

-Kevin

Re: Possible to improve query plan?

From:
Jeremy Palmer
Date:
It fits a Data Warehousing type application.

Apart from work_mem, my other parameters are pretty close to these numbers. I had work_mem down a little because I noticed some clients were getting out-of-memory errors with large queries that involved lots of sorting.

Thanks
Jeremy

-----Original Message-----
From: Ing. Marcos Ortiz Valmaseda [mailto:mlortiz@uci.cu]
Sent: Tuesday, 18 January 2011 2:38 a.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org; Andy Colson
Subject: Re: [PERFORM] Possible to improve query plan?



Re: Possible to improve query plan?

From:
Robert Haas
Date:
On Mon, Jan 17, 2011 at 11:48 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Jeremy Palmer  wrote:
>
>>   WHERE (
>>       (_revision_created <= 16
>>        AND _revision_expired > 16
>>        AND _revision_expired <= 40)
>>    OR (_revision_created > 16
>>        AND _revision_created <= 40))
>
>> -> Bitmap Heap Scan on version_crs_coordinate_revision
>>      (actual time=70.925..13531.720 rows=149557 loops=1)
>
>> -> BitmapOr (actual time=53.650..53.650 rows=0 loops=1)
>
> This plan actually looks pretty good for what you're doing.  The
> Bitmap Index Scans and BitmapOr determine which tuples in the heap
> need to be visited.  The Bitmap Heap Scan then visits the heap pages
> in physical order (to avoid repeated fetches of the same page and to
> possibly edge toward sequential access speeds).  You don't seem to
> have a lot of bloat, which could be a killer on this type of query,
> since the rowcounts from the index scans aren't that much higher than
> the counts after you check the heap.

But isn't 13.5 seconds awfully slow to scan 149557 rows?  The sort is
sorting 23960kB.  Dividing that by 149557 rows gives ~169 bytes per
row, or roughly 49 rows per block, which works out to ~3k blocks, or
about 24MB of data.  Clearly we must be hitting a LOT more data than
that, or this would be much faster than it is, I would think.
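
(Spelling out that estimate, assuming PostgreSQL's default 8 kB block size; the per-row figure includes sort-tuple overhead:)

  23960 kB / 149557 rows ≈ 169 bytes per row
  8192 bytes per block / 169 ≈ 49 rows per block
  149557 rows / 49 ≈ 3050 blocks × 8 kB ≈ 24 MB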

Any chance this is 9.0.X?  It'd be interesting to see the EXPLAIN
(ANALYZE, BUFFERS) output for this query.
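
For reference, on 9.0 that would be a sketch like (the SELECT stands in for Jeremy's actual query):

 EXPLAIN (ANALYZE, BUFFERS)   -- BUFFERS adds shared blocks hit/read per plan node
 SELECT ...;                  -- the query under discussion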

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Possible to improve query plan?

From:
"Kevin Grittner"
Date:
Robert Haas <robertmhaas@gmail.com> wrote:

> But isn't 13.5 seconds awfully slow to scan 149557 rows?  The sort
> is sorting 23960kB.  Dividing that by 149557 rows gives ~169
> bytes per row

You're right.  I would expect 9 ms per tuple as a worst case if
it doesn't need to go to TOAST data.  Caching, multiple rows per
page, or adjacent pages should all tend to bring it down from there.
How does it get to 90 ms per row with rows that narrow?

Is the table perhaps horribly bloated?  Jeremy, did you try my
suggestion of using CLUSTER on the index which will tend to be more
selective?

-Kevin

Re: Possible to improve query plan?

From:
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> But isn't 13.5 seconds awfully slow to scan 149557 rows?

Depends on how many physical blocks they're scattered across, which
is hard to tell from this printout.  And on how many of the blocks
are already in cache, and what sort of disk hardware he's got, etc.

> Any chance this is 9.0.X?  It'd be interesting to see the EXPLAIN
> (ANALYZE, BUFFERS) output for this query.

Yeah.

            regards, tom lane

Re: Possible to improve query plan?

From:
Tom Lane
Date:
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> Robert Haas <robertmhaas@gmail.com> wrote:
>> But isn't 13.5 seconds awfully slow to scan 149557 rows?  The sort
>> is sorting 23960kB.  Dividing that by 149557 rows gives ~169
>> bytes per row

> You're right.  I would expect 9 ms per tuple as a worst case if
> it doesn't need to go to TOAST data.  Caching, multiple rows per
> page, or adjacent pages should all tend to bring it down from there.
> How does it get to 90 ms per row with rows that narrow?

Um, that looks like 90 usec per row, not msec.

            regards, tom lane

Re: Possible to improve query plan?

From:
"Kevin Grittner"
Date:
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Um, that looks like 90 usec per row, not msec.

Oh, right.  Well, having to do a random heap access for 1% of the
rows would pretty much explain the run time, then.

-Kevin

Re: Possible to improve query plan?

From:
Jeremy Palmer
Date:
Might be a chance to try 9.0 in a couple of weeks, when I do an upgrade on one of our dev boxes.

Kevin, I've now clustered the table, and the performance did increase quite a bit. My only question is how often I will need to re-cluster the table, because it comes at quite a cost. The setup I'm running means that 10,000 new rows will be inserted, and 2,500 rows will be updated, on this table each day.

Here is the new EXPLAIN output now that I have clustered the table on the idx_crs_coordinate_revision_created index:


Subquery Scan t  (cost=168227.04..173053.88 rows=743 width=205) (actual time=392.586..946.879 rows=106299 loops=1)
  Output: t.row_number, t._revision_created, t._revision_expired, t.id, t.cos_id, t.nod_id, t.ort_type_1, t.ort_type_2, t.ort_type_3, t.status, t.sdc_status, t.source, t.value1, t.value2, t.value3, t.wrk_id_created, t.cor_id, t.audit_id
  Filter: (t.row_number = 1)
  ->  WindowAgg  (cost=168227.04..171197.40 rows=148518 width=86) (actual time=392.577..834.477 rows=149557 loops=1)
        Output: row_number() OVER (?), table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id
        ->  Sort  (cost=168227.04..168598.34 rows=148518 width=86) (actual time=392.550..457.460 rows=149557 loops=1)
              Output: table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id
              Sort Key: table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision._revision_created
              Sort Method:  quicksort  Memory: 23960kB
              ->  Bitmap Heap Scan on table_version_crs_coordinate_revision  (cost=3215.29..155469.14 rows=148518 width=86) (actual time=38.808..196.993 rows=149557 loops=1)
                    Output: table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id
                    Recheck Cond: (((_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))
                    Filter: (((_revision_created <= 16) AND (_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))
                    ->  BitmapOr  (cost=3215.29..3215.29 rows=149432 width=0) (actual time=27.330..27.330 rows=0 loops=1)
                          ->  Bitmap Index Scan on idx_crs_coordinate_revision_expired  (cost=0.00..2225.36 rows=106001 width=0) (actual time=21.596..21.596 rows=110326 loops=1)
                                Index Cond: ((_revision_expired > 16) AND (_revision_expired <= 40))
                          ->  Bitmap Index Scan on idx_crs_coordinate_revision_created  (cost=0.00..915.67 rows=43432 width=0) (actual time=5.728..5.728 rows=43258 loops=1)
                                Index Cond: ((_revision_created > 16) AND (_revision_created <= 40))
Total runtime: 985.671 ms

Thanks heaps,
Jeremy

Re: Possible to improve query plan?

From:
"Kevin Grittner"
Date:
Jeremy Palmer <JPalmer@linz.govt.nz> wrote:

> Kevin I've now clustered the table. And the performance did
> increase quite a bit.

Yeah, that's enough to notice the difference.

> My only question is how often will I need to re-cluster the table,
> because it comes at quite a cost. The setup I'm running will mean
> that 10,000 new rows will be inserted, and 2,500 rows will be
> updated on this table each day.

You're going to see performance drop off as the data fragments.
You'll need to balance the performance against maintenance
down-time.  I would guess, though, that if you have a weekly
maintenance window big enough to handle the CLUSTER, it might be
worth doing it that often.
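
As a sketch, a weekly maintenance job could be as small as this, since a bare CLUSTER re-clusters every table that already has a clustering index recorded:

 CLUSTER;   -- rewrites previously-clustered tables in index order
 ANALYZE;   -- refresh planner statistics afterwards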

-Kevin

Re: Possible to improve query plan?

From:
"Kevin Grittner"
Date:
Jeremy Palmer <JPalmer@linz.govt.nz> wrote:

> My only question is how often will I need to re-cluster the
> table, because it comes at quite a cost.

I probably should have mentioned that the CLUSTER will run faster if
the data is already mostly in the right sequence.  You'll be doing a
nearly sequential pass over the heap, which should minimize seek
time, especially if the OS notices the pattern and starts doing
sequential read-ahead.

-Kevin

Re: Possible to improve query plan?

From:
Jeremy Palmer
Date:
Thanks heaps for the advice. I will do some benchmarks to see how long it takes to cluster all of the database tables.

Cheers,
Jeremy

-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: Tuesday, 25 January 2011 1:02 p.m.
To: Jeremy Palmer; Tom Lane
Cc: Robert Haas; pgsql-performance@postgresql.org; andy@squeakycode.net
Subject: RE: [PERFORM] Possible to improve query plan?


Re: Possible to improve query plan?

From:
Cédric Villemain
Date:
2011/1/25 Kevin Grittner <Kevin.Grittner@wicourts.gov>:
> Jeremy Palmer <JPalmer@linz.govt.nz> wrote:
>
>> Kevin I've now clustered the table. And the performance did
>> increase quite a bit.
>
> Yeah, that's enough to notice the difference.
>
>> My only question is how often will I need to re-cluster the table,
>> because it comes at quite a cost. The setup I'm running will mean
>> that 10,000 new rows will be inserted, and 2,500 rows will be
>> updated on this table each day.
>
> You're going to see performance drop off as the data fragments.
> You'll need to balance the performance against maintenance
> down-time.  I would guess, though, that if you have a weekly
> maintenance window big enough to handle the CLUSTER, it might be
> worth doing it that often.

Was FILLFACTOR already suggested, given the daily mix of INSERTs vs. UPDATEs?

http://www.postgresql.org/docs/9.0/static/sql-altertable.html (it applies to indexes too, but btree indexes already default to a fillfactor of 90%)
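
As an illustrative sketch of that idea (the table name is taken from Jeremy's plan, and 90 is just an example value, not a recommendation):

 -- leave 10% free space in each heap page so daily UPDATEs can stay on-page
 ALTER TABLE table_version_crs_coordinate_revision SET (fillfactor = 90);
 -- fillfactor only affects newly written pages; CLUSTER rewrites the table
 -- (reusing the previously recorded index) and applies it throughout
 CLUSTER table_version_crs_coordinate_revision;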

>
> -Kevin



--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support