Thread: Deterioration in performance when query executed in multi threads

From:
Anne Rosset
Date:

Hi all,

We are running a stress test that executes one select query with multiple threads.

The query executes very fast (10 ms) and returns 100 rows. I see a deterioration in performance when multiple threads execute the query: with 100 threads, the query takes between 3 s and 8 s.

I suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffers? wal_buffers?)

Thanks for your help,

Anne

From:
"ktm@rice.edu"
Date:

On Wed, May 01, 2013 at 02:05:06PM +0000, Anne Rosset wrote:
> Hi all,
> We are running a stress test that executes one select query with multiple threads.
> The query executes very fast (10ms). It returns 100 rows.  I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.
>
> I suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffers? wal_buffers?)
>
> Thanks for your help,
> Anne

Try a connection pooler like pgbouncer to keep the number of simultaneous queries
bounded to a reasonable number. You will actually get better performance.
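For reference, a minimal pgbouncer setup of the kind Ken suggests might look like this sketch (the database name, port, and pool size are illustrative placeholders, not values from Anne's environment):

```ini
; pgbouncer.ini -- minimal sketch
[databases]
; forward client connections to the local Postgres instance
psrdb = host=127.0.0.1 port=5432 dbname=psrdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many client connections share few server connections
pool_mode = transaction
; bound the number of simultaneous server-side queries, e.g. ~2 x CPU cores
default_pool_size = 16
max_client_conn = 400
```

The application then connects to port 6432 instead of 5432, and pgbouncer queues excess clients instead of letting hundreds of queries run concurrently.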

Regards,
Ken


From:
Anne Rosset
Date:

Hi Ken,
Thanks for your answer. My test is actually running with JBoss 7/JDBC and the connection pool is defined with min-pool-size=10 and max-pool-size=400.

Why would you think it is an issue with the connection pool?

Thanks,
Anne


-----Original Message-----
From:  [mailto:]
Sent: Wednesday, May 01, 2013 7:13 AM
To: Anne Rosset
Cc: 
Subject: Re: [PERFORM] Deterioration in performance when query executed in multi threads

[quoted message trimmed]


From:
"ktm@rice.edu"
Date:

On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:
> Hi Ken,
> Thanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined with min-pool-size=10 and max-pool-size=400.
>
> Why would you think it is an issue with the connection pool?
>
> Thanks,
> Anne
>

Hi Anne,

You want to be able to run as many jobs productively at once as your hardware is
capable of supporting. Usually something starting at 2 x the number of CPUs is best.
If you make several runs, increasing the size of the pool each time, you will
see a maximum throughput somewhere near there, and then performance will
decrease as you add more and more connections. You can then use that sweet spot.
Your test harness should make that pretty easy to find.
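The sweep Ken describes is easy to script. A dry-run sketch using pgbench (DB_URL, the thread count, and the run duration are placeholders, not values from Anne's setup; remove the `echo` to actually execute each step):

```shell
#!/bin/sh
# Sweep client counts with pgbench and watch where throughput peaks.
# DB_URL is a placeholder -- point it at your own test database.
DB_URL="postgres:///psrdb"

for clients in 8 16 32 64 128 256; do
    # -c: concurrent clients; -j: pgbench worker threads; -T: seconds per step
    cmd="pgbench -c $clients -j 4 -T 60 $DB_URL"
    echo "$cmd"   # dry run: print the command instead of running it
done
```

Recording the "tps = ..." line from each real run and plotting it against the client count makes the knee of the curve obvious.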

Regards,
Ken


From:
Scott Marlowe
Date:

On Wed, May 1, 2013 at 10:26 AM,  <> wrote:
> On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:
>> Hi Ken,
>> Thanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined  with
min-pool-size=10 and max-pool-size=400. 
>>
>> Why would you think it is an issue with the connection pool?
>>
>> Thanks,
>> Anne
>>
>
> Hi Anne,
>
> You want to be able to run as many jobs productively at once as your hardware is
> capable of supporting. Usually something starting at 2 x the number of CPUs is best.
> If you make several runs increasing the size of the pool each time, you will
> see a maximum throughput somewhere near there and then the performance will
> decrease as you add more and more connections. You can then use that sweet spot.
> Your test harness should make that pretty easy to find.

Here's a graph of tps from pgbench on a 48 core / 32 drive battery
backed cache RAID machine:
https://plus.google.com/u/0/photos/117090950881008682691/albums/5537418842370875697/5537418902326245874
Note that on that machine, the peak is between 40 and 50 clients at once.
Note also the asymptote levelling off at 2800tps. This is a good
indication of how the machine will behave if overloaded / connection
pooling goes crazy etc.
So yeah I suggest Anne do what you're saying and chart it. It should
be obvious where the sweet spot is.


From:
Anne Rosset
Date:

Thanks Ken. I am going to test with different pool sizes and see if I see any improvements.
Are there other configuration options I should look at? I was thinking of playing with shared_buffers.

Thanks,
Anne

-----Original Message-----
From:  [mailto:]
Sent: Wednesday, May 01, 2013 9:27 AM
To: Anne Rosset
Cc: 
Subject: Re: [PERFORM] Deterioration in performance when query executed in multi threads

[quoted message trimmed]


From:
Igor Neyman
Date:


> [quoted messages trimmed]

Anne,

Before expecting advice on specific changes to Postgres configuration parameters,
you should provide this list with your hardware configuration, Postgres version, and your current Postgres configuration parameters (at least those that changed from defaults).
And, if you are testing with a specific query, it would be nice if you provided the results of:

Explain analyze <your_select>;

along with the definition of database objects (tables, indexes) involved in this select.
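A quick way to produce the non-default parameter list Igor asks for is to query the standard pg_settings view (a sketch; the `source` filter distinguishes values set in postgresql.conf or elsewhere from built-in defaults):

```sql
-- Settings changed from their defaults, with where each was set
SELECT name, setting, unit, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;

-- Server version
SELECT version();
```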

Also, you mention a client-side connection pooler.  In my experience, server-side poolers, such as PgBouncer mentioned earlier, are much more effective.

Regards,
Igor Neyman



From:
Anne Rosset
Date:

We saw a little bit of improvement by increasing the min_pool_size, but again I see a big variation in the time the query is executed. Here is the query:

srdb=> explain analyze SELECT
psrdb->        artifact.id AS id,
psrdb->        artifact.priority AS priority,
psrdb->        project.path AS projectPathString,
psrdb->        project.title AS projectTitle,
psrdb->        folder.project_id AS projectId,
psrdb->        folder.title AS folderTitle,
psrdb->        item.folder_id AS folderId,
psrdb->        item.title AS title,
psrdb->        item.name AS name,
psrdb->        field_value2.value AS status,
psrdb->        field_value3.value AS category,
psrdb->        sfuser.username AS submittedByUsername,
psrdb->        sfuser.full_name AS submittedByFullname,
psrdb->        sfuser2.username AS assignedToUsername,
psrdb->        sfuser2.full_name AS assignedToFullname,
psrdb->        item.version AS version,
psrdb->        CASE when ((SELECT
psrdb(>        mntr_subscription.user_id AS userId
psrdb(> FROM
psrdb(>        mntr_subscription mntr_subscription
psrdb(> WHERE
psrdb(>        artifact.id=mntr_subscription.object_key
psrdb(>         AND mntr_subscription.user_id='user1439'
psrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS monitoringUserId,
psrdb->        tracker.icon AS trackerIcon,
psrdb->        tracker.remaining_effort_disabled AS remainingEffortDisabled,
psrdb->        tracker.actual_effort_disabled AS actualEffortDisabled,
psrdb->        tracker.estimated_effort_disabled AS estimatedEffortDisabled
psrdb-> FROM
psrdb->        field_value field_value2,
psrdb->        field_value field_value,
psrdb->        sfuser sfuser2,
psrdb->        field_value field_value3,
psrdb->        field_value field_value4,
psrdb->        item item,
psrdb->        project project,
psrdb->        relationship relationship,
psrdb->        tracker tracker,
psrdb->        artifact artifact,
psrdb->        sfuser sfuser,
psrdb->        folder folder
psrdb-> WHERE
psrdb->        artifact.id=item.id
psrdb->         AND item.folder_id=folder.id
psrdb->         AND folder.project_id=project.id
psrdb->         AND artifact.group_fv=field_value.id
psrdb->         AND artifact.status_fv=field_value2.id
psrdb->         AND artifact.category_fv=field_value3.id
psrdb->         AND artifact.customer_fv=field_value4.id
psrdb->         AND item.created_by_id=sfuser.id
psrdb->         AND relationship.is_deleted=false
psrdb->         AND relationship.relationship_type_name='ArtifactAssignment'
psrdb->         AND relationship.origin_id=sfuser2.id
psrdb->         AND artifact.id=relationship.target_id
psrdb->         AND item.is_deleted=false
psrdb->         AND ((artifact.priority=3))
psrdb->         AND (project.path='projects.psr-pub-13')
psrdb->         AND item.folder_id=tracker.id
psrdb-> ;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..272.62 rows=1 width=181) (actual time=805.934..1792.596 rows=177 loops=1)
   ->  Nested Loop  (cost=0.00..263.87 rows=1 width=167) (actual time=707.739..1553.348 rows=177 loops=1)
         ->  Nested Loop  (cost=0.00..263.58 rows=1 width=153) (actual time=653.053..1496.839 rows=177 loops=1)
               ->  Nested Loop  (cost=0.00..262.50 rows=1 width=154) (actual time=565.627..1385.667 rows=177 loops=1)
                     ->  Nested Loop  (cost=0.00..262.08 rows=1 width=163) (actual time=565.605..1383.686 rows=177 loops=1)
                           ->  Nested Loop  (cost=0.00..261.67 rows=1 width=166) (actual time=530.928..1347.053 rows=177 loops=1)
                                 ->  Nested Loop  (cost=0.00..261.26 rows=1 width=175) (actual time=530.866..1345.032 rows=177 loops=1)
                                       ->  Nested Loop  (cost=0.00..260.84 rows=1 width=178) (actual time=372.825..1184.668 rows=177 loops=1)
                                             ->  Nested Loop  (cost=0.00..250.33 rows=29 width=128) (actual time=317.897..534.645 rows=1011 loops=1)
                                                   ->  Nested Loop  (cost=0.00..207.56 rows=3 width=92) (actual time=251.014..408.868 rows=10 loops=1)
                                                         ->  Nested Loop  (cost=0.00..163.54 rows=155 width=65) (actual time=146.176..382.023 rows=615 loops=1)
                                                               ->  Index Scan using project_path on project  (cost=0.00..8.27 rows=1 width=42) (actual time=76.581..76.583 rows=1 loops=1)
                                                                     Index Cond: ((path)::text = 'projects.psr-pub-13'::text)
                                                               ->  Index Scan using folder_project on folder  (cost=0.00..153.26 rows=161 width=32) (actual time=69.564..305.083 rows=615 loops=1)
                                                                     Index Cond: ((folder.project_id)::text = (project.id)::text)
                                                         ->  Index Scan using tracker_pk on tracker  (cost=0.00..0.27 rows=1 width=27) (actual time=0.043..0.043 rows=0 loops=615)
                                                               Index Cond: ((tracker.id)::text = (folder.id)::text)
                                                   ->  Index Scan using item_folder on item  (cost=0.00..14.11 rows=12 width=58) (actual time=7.603..12.532 rows=101 loops=10)
                                                         Index Cond: ((item.folder_id)::text = (folder.id)::text)
                                                         Filter: (NOT item.is_deleted)
                                             ->  Index Scan using artifact_pk on artifact  (cost=0.00..0.35 rows=1 width=50) (actual time=0.642..0.642 rows=0 loops=1011)
                                                   Index Cond: ((artifact.id)::text = (item.id)::text)
                                                   Filter: (artifact.priority = 3)
                                       ->  Index Scan using field_value_pk on field_value field_value2  (cost=0.00..0.40 rows=1 width=15) (actual time=0.904..0.905 rows=1 loops=177)
                                             Index Cond: ((field_value2.id)::text = (artifact.status_fv)::text)
                                 ->  Index Scan using field_value_pk on field_value  (cost=0.00..0.40 rows=1 width=9) (actual time=0.010..0.010 rows=1 loops=177)
                                       Index Cond: ((field_value.id)::text = (artifact.group_fv)::text)
                           ->  Index Scan using field_value_pk on field_value field_value3  (cost=0.00..0.40 rows=1 width=15) (actual time=0.205..0.206 rows=1 loops=177)
                                 Index Cond: ((field_value3.id)::text = (artifact.category_fv)::text)
                     ->  Index Scan using field_value_pk on field_value field_value4  (cost=0.00..0.40 rows=1 width=9) (actual time=0.010..0.010 rows=1 loops=177)
                           Index Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)
               ->  Index Scan using relation_target on relationship  (cost=0.00..1.07 rows=1 width=19) (actual time=0.627..0.627 rows=1 loops=177)
                     Index Cond: ((relationship.target_id)::text = (artifact.id)::text)
                     Filter: ((NOT relationship.is_deleted) AND ((relationship.relationship_type_name)::text = 'ArtifactAssignment'::text))
         ->  Index Scan using sfuser_pk on sfuser sfuser2  (cost=0.00..0.28 rows=1 width=32) (actual time=0.318..0.318 rows=1 loops=177)
               Index Cond: ((sfuser2.id)::text = (relationship.origin_id)::text)
   ->  Index Scan using sfuser_pk on sfuser  (cost=0.00..0.27 rows=1 width=32) (actual time=0.178..0.179 rows=1 loops=177)
         Index Cond: ((sfuser.id)::text = (item.created_by_id)::text)
   SubPlan 1
     ->  Index Scan using mntr_subscr_user on mntr_subscription  (cost=0.00..8.47 rows=1 width=9) (actual time=1.170..1.170 rows=0 loops=177)
           Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))
 Total runtime: 1793.203 ms
(42 rows)


work_mem is set to 64MB
shared_buffers to 240MB
segment_size is 1GB
wal_buffers is 10MB
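For reference, expressed as postgresql.conf entries the reported values would be the following (segment_size is a compile-time preset, not a settable parameter, so it is omitted):

```ini
# postgresql.conf excerpt matching the values reported above
work_mem = 64MB         # per-sort/hash memory, per operation
shared_buffers = 240MB  # shared page cache
wal_buffers = 10MB      # WAL staging area
```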

If you can give me some pointers, I would really appreciate it.
Thanks,
Anne


-----Original Message-----
From: Igor Neyman [mailto:]
Sent: Wednesday, May 01, 2013 10:26 AM
To: Anne Rosset; 
Cc: 
Subject: RE: [PERFORM] Deterioration in performance when query executed in multi threads



[quoted messages trimmed]



From:
Igor Neyman
Date:


> -----Original Message-----
> From: Anne Rosset [mailto:]
> Sent: Friday, May 03, 2013 4:52 PM
> To: Igor Neyman; 
> Cc: 
> Subject: RE: [PERFORM] Deterioration in performance when query executed
> in multi threads
>
> We saw a little bit of improvement by increasing the min_pool_size but
> again I see a big variation in the time the query is executed. Here is
> the query:
> [quoted query, EXPLAIN ANALYZE output, and settings trimmed]

Anne,

So, the results of "explain analyze" that you provided - is this the case when the query is considered "slow" (when you have many threads running)?

Looks like the optimizer clearly favors "nested loops" (never hash joins).  What are the sizes of the tables involved in this query?
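The on-disk sizes (as opposed to the row counts alone) can be pulled with a catalog query along these lines (the table names listed are the ones appearing in Anne's query):

```sql
-- Approximate row counts and total on-disk size (table + indexes + toast)
SELECT relname,
       reltuples::bigint AS approx_rows,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relname IN ('artifact', 'item', 'folder', 'project',
                  'relationship', 'field_value', 'sfuser', 'mntr_subscription')
ORDER BY pg_total_relation_size(oid) DESC;
```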

You never told us about your server hardware configuration: number of CPUs, RAM size? And which version of Postgres are you using?

And, (again) did you consider switching from client-side pooling to using PgBouncer for pooling purposes?  It is a very "light-weight" tool and very easy to install/configure.

Regards,
Igor Neyman




From:
Merlin Moncure
Date:

On Fri, May 3, 2013 at 3:52 PM, Anne Rosset <> wrote:
> We saw a little bit of improvement by increasing the min_pool_size but again I see a big variation in the time the query is executed. Here is the query:
>
> [quoted query trimmed]


(*please* stop top-posting).

What is the cpu profile of the machine while you are threading the
query out?  If all cpus are pegged at or near 100%, you may be seeing
spinlock contention on some of the key index buffers -- but that's a
long shot.  More likely it's planner malfeasance.  Are you running
this *exact* query across all threads, or are the specific parameters
changing (and if so, maybe the problem is instead that specific
argument sets are producing bad plans)?
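A quick way to answer the CPU question is to sample /proc/stat while the stress test runs (a sketch assuming Linux; this script is not from the original mail):

```shell
#!/bin/sh
# Measure overall CPU busy % over a short window, from /proc/stat.
# Run while the stress test is active: ~100% busy suggests CPU/spinlock
# contention; a low number means time is spent waiting (locks, I/O) instead.
cpu_busy_pct() {
    interval="${1:-1}"
    read -r _ u1 n1 s1 i1 _ < /proc/stat
    sleep "$interval"
    read -r _ u2 n2 s2 i2 _ < /proc/stat
    busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
    idle=$(( i2 - i1 ))
    echo $(( 100 * busy / (busy + idle) ))
}

cpu_busy_pct 1
```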

This is a classic case of surrogate key design run amok, leading to
bad performance via difficult to plan queries and/or poorly utilized
indexes.

merlin


From:
Anne Rosset
Date:

Hi Igor,
The explain analyze is from when there was no load.

Artifact table: 251831 rows
Field_value table: 77378 rows
Mntr_subscription: 929071 rows
Relationship: 270478 row
Folder: 280356 rows
Item: 716465 rows
Sfuser: 5733 rows
Project: 1817 rows

8CPUs
RAM: 8GB

Postgres version: 9.0.13

 And no we haven't switched or tested yet  with pgbouncer. We would like to do a bit more analysis before trying this.

Thanks for your help,
Anne


-----Original Message-----
From: Igor Neyman [mailto:]
Sent: Monday, May 06, 2013 7:06 AM
To: Anne Rosset; 
Cc: 
Subject: RE: [PERFORM] Deterioration in performance when query executed in multi threads



> -----Original Message-----
> From: Anne Rosset [mailto:]
> Sent: Friday, May 03, 2013 4:52 PM
> To: Igor Neyman; 
> Cc: 
> Subject: RE: [PERFORM] Deterioration in performance when query
> executed in multi threads
>
> We saw a little bit of improvement by increasing the min_pool_size but
> again I see a big variation in the time the query is executed. Here is
> the query:
>
> srdb=> explain analyze SELECT
> psrdb->        artifact.id AS id,
> psrdb->        artifact.priority AS priority,
> psrdb->        project.path AS projectPathString,
> psrdb->        project.title AS projectTitle,
> psrdb->        folder.project_id AS projectId,
> psrdb->        folder.title AS folderTitle,
> psrdb->        item.folder_id AS folderId,
> psrdb->        item.title AS title,
> psrdb->        item.name AS name,
> psrdb->        field_value2.value AS status,
> psrdb->        field_value3.value AS category,
> psrdb->        sfuser.username AS submittedByUsername,
> psrdb->        sfuser.full_name AS submittedByFullname,
> psrdb->        sfuser2.username AS assignedToUsername,
> psrdb->        sfuser2.full_name AS assignedToFullname,
> psrdb->        item.version AS version,
> psrdb->        CASE when ((SELECT
> psrdb(>        mntr_subscription.user_id AS userId
> psrdb(> FROM
> psrdb(>        mntr_subscription mntr_subscription
> psrdb(> WHERE
> psrdb(>        artifact.id=mntr_subscription.object_key
> psrdb(>         AND mntr_subscription.user_id='user1439'
> psrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS
> monitoringUserId,
> psrdb->        tracker.icon AS trackerIcon,
> psrdb->        tracker.remaining_effort_disabled AS
> remainingEffortDisabled,
> psrdb->        tracker.actual_effort_disabled AS actualEffortDisabled,
> psrdb->        tracker.estimated_effort_disabled AS
> estimatedEffortDisabled
> psrdb-> FROM
> psrdb->        field_value field_value2,
> psrdb->        field_value field_value,
> psrdb->        sfuser sfuser2,
> psrdb->        field_value field_value3,
> psrdb->        field_value field_value4,
> psrdb->        item item,
> psrdb->        project project,
> psrdb->        relationship relationship,
> psrdb->        tracker tracker,
> psrdb->        artifact artifact,
> psrdb->        sfuser sfuser,
> psrdb->        folder folder
> psrdb-> WHERE
> psrdb->        artifact.id=item.id
> psrdb->         AND item.folder_id=folder.id
> psrdb->         AND folder.project_id=project.id
> psrdb->         AND artifact.group_fv=field_value.id
> psrdb->         AND artifact.status_fv=field_value2.id
> psrdb->         AND artifact.category_fv=field_value3.id
> psrdb->         AND artifact.customer_fv=field_value4.id
> psrdb->         AND item.created_by_id=sfuser.id
> psrdb->         AND relationship.is_deleted=false
> psrdb->         AND
> relationship.relationship_type_name='ArtifactAssignment'
> psrdb->         AND relationship.origin_id=sfuser2.id
> psrdb->         AND artifact.id=relationship.target_id
> psrdb->         AND item.is_deleted=false
> psrdb->         AND ((artifact.priority=3))
> psrdb->         AND (project.path='projects.psr-pub-13')
> psrdb->         AND item.folder_id=tracker.id ;
>
> QUERY PLAN
>
> ----------------------------------------------------------------------
> -
> -------------------------------------------------
> ----------------------------------------------------------------------
> -
> ----
>  Nested Loop  (cost=0.00..272.62 rows=1 width=181) (actual
> time=805.934..1792.596 rows=177 loops=1)
>
>    ->  Nested Loop  (cost=0.00..263.87 rows=1 width=167) (actual
> time=707.739..1553.348 rows=177 loops=1)
>
>          ->  Nested Loop  (cost=0.00..263.58 rows=1 width=153) (actual
> time=653.053..1496.839 rows=177 loops=1)
>
>                ->  Nested Loop  (cost=0.00..262.50 rows=1 width=154)
> (actual time=565.627..1385.667 rows=177 loops=1)
>
>                      ->  Nested Loop  (cost=0.00..262.08 rows=1
> width=163) (actual time=565.605..1383.686 rows=177 loops
> =1)
>                            ->  Nested Loop  (cost=0.00..261.67 rows=1
> width=166) (actual time=530.928..1347.053 rows=177
>  loops=1)
>                                  ->  Nested Loop  (cost=0.00..261.26
> rows=1 width=175) (actual time=530.866..1345.032
> rows=177 loops=1)
>                                        ->  Nested Loop
> (cost=0.00..260.84 rows=1 width=178) (actual time=372.825..1184.
> 668 rows=177 loops=1)
>                                              ->  Nested Loop
> (cost=0.00..250.33 rows=29 width=128) (actual time=317.897
> ..534.645 rows=1011 loops=1)
>                                                    ->  Nested Loop
> (cost=0.00..207.56 rows=3 width=92) (actual time=251
> .014..408.868 rows=10 loops=1)
>                                                          ->  Nested
> Loop  (cost=0.00..163.54 rows=155 width=65) (actual
> time=146.176..382.023 rows=615 loops=1)
>                                                                ->
> Index Scan using project_path on project  (cost=0.00.
> .8.27 rows=1 width=42) (actual time=76.581..76.583 rows=1 loops=1)
>
> Index Cond: ((path)::text = 'projects.psr-pub-13'::
> text)
>                                                                ->
> Index Scan using folder_project on folder  (cost=0.00
> ..153.26 rows=161 width=32) (actual time=69.564..305.083 rows=615
> loops=1)
>
> Index Cond: ((folder.project_id)::text = (project.
> id)::text)
>                                                          ->  Index
> Scan using tracker_pk on tracker  (cost=0.00..0.27
> rows=1 width=27) (actual time=0.043..0.043 rows=0 loops=615)
>                                                                Index
> Cond: ((tracker.id)::text = (folder.id)::text)
>                                                    ->  Index Scan
> using item_folder on item  (cost=0.00..14.11 rows=12
> width=58) (actual time=7.603..12.532 rows=101 loops=10)
>                                                          Index Cond:
> ((item.folder_id)::text = (folder.id)::text)
>                                                          Filter: (NOT
> item.is_deleted)
>                                              ->  Index Scan using
> artifact_pk on artifact  (cost=0.00..0.35 rows=1 width
> =50) (actual time=0.642..0.642 rows=0 loops=1011)
>                                                    Index Cond:
> ((artifact.id)::text = (item.id)::text)
>                                                    Filter:
> (artifact.priority = 3)
>                                        ->  Index Scan using
> field_value_pk on field_value field_value2  (cost=0.00..0.40
>  rows=1 width=15) (actual time=0.904..0.905 rows=1 loops=177)
>                                              Index Cond:
> ((field_value2.id)::text = (artifact.status_fv)::text)
>                                  ->  Index Scan using field_value_pk
> on field_value  (cost=0.00..0.40 rows=1 width=9) (actual
> time=0.010..0.010 rows=1 loops=177)
>                                        Index Cond:
> ((field_value.id)::text = (artifact.group_fv)::text)
>                            ->  Index Scan using field_value_pk on
> field_value field_value3  (cost=0.00..0.40 rows=1
> width=15) (actual time=0.205..0.206 rows=1 loops=177)
>                                  Index Cond: ((field_value3.id)::text
> =
> (artifact.category_fv)::text)
>                      ->  Index Scan using field_value_pk on
> field_value
> field_value4  (cost=0.00..0.40 rows=1 width=9) (actual
> time=0.010..0.010 rows=1 loops=177)
>                            Index Cond: ((field_value4.id)::text =
> (artifact.customer_fv)::text)
>                ->  Index Scan using relation_target on relationship
> (cost=0.00..1.07 rows=1 width=19) (actual time=0.
> 627..0.627 rows=1 loops=177)
>                      Index Cond: ((relationship.target_id)::text =
> (artifact.id)::text)
>                      Filter: ((NOT relationship.is_deleted) AND
> ((relationship.relationship_type_name)::text =
> 'ArtifactAssignment'::text))
>          ->  Index Scan using sfuser_pk on sfuser sfuser2
> (cost=0.00..0.28 rows=1 width=32) (actual time=0.318..0.318
> rows=1 loops=177)
>                Index Cond: ((sfuser2.id)::text =
> (relationship.origin_id)::text)
>    ->  Index Scan using sfuser_pk on sfuser  (cost=0.00..0.27 rows=1
> width=32) (actual time=0.178..0.179 rows=1 loops=
> 177)
>          Index Cond: ((sfuser.id)::text = (item.created_by_id)::text)
>    SubPlan 1
>      ->  Index Scan using mntr_subscr_user on mntr_subscription
> (cost=0.00..8.47 rows=1 width=9) (actual time=1.170..1.
> 170 rows=0 loops=177)
>            Index Cond: ((($0)::text = (object_key)::text) AND
> ((user_id)::text = 'user1439'::text))  Total runtime: 1793.203 ms
> (42 rows)
>
>
> Work_mem is set to 64MB
> Shared_buffer to 240MB
> Segment_size is 1GB
> Wal_buffer is 10MB
>
> If you can give me some pointers, I would really appreciate.
> Thanks,
> Anne
>
>

Anne,

So, the "explain analyze" results you provided -- are these from the case when the query is considered "slow" (when you
have many threads running)?

Looks like the optimizer clearly favors "nested loops" (never hash joins).  What are the sizes of the tables involved
in this query?

You never told us about your server hardware configuration: # of CPUs, RAM size?  And the version of Postgres that you
are using?

And, (again) did you consider switching from "client-side pooling" to using PgBouncer for pooling purposes?  It is a
very "light-weight" tool and very easy to install/configure.

Regards,
Igor Neyman
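For anyone wanting to try this, a minimal pgbouncer setup is a single config file. The values below are illustrative assumptions (database name, port, pool size), not settings taken from this thread:

```ini
; pgbouncer.ini -- sketch only
[databases]
psrdb = host=127.0.0.1 port=5432 dbname=psrdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 400
```

The application then connects to port 6432 instead of 5432; pgbouncer accepts up to max_client_conn clients while holding only default_pool_size server connections open.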




From:
Thomas Kellerer
Date:

Anne Rosset, 06.05.2013 19:00:
> Postgres version: 9.0.13
>
>> Work_mem is set to 64MB
>> Shared_buffer to 240MB
>> Segment_size is 1GB
>> Wal_buffer is 10MB
>
> Artifact table: 251831 rows
> Field_value table: 77378 rows
> Mntr_subscription: 929071 rows
> Relationship: 270478 row
> Folder: 280356 rows
> Item: 716465 rows
> Sfuser: 5733 rows
> Project: 1817 rows
>
> 8CPUs
> RAM: 8GB
>

With 8GB RAM you should be able to increase shared_buffer to 1GB or maybe even higher especially if this is a dedicated
server.
240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on
that box)
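Concretely, that would be a postgresql.conf change along these lines (illustrative values only, not a tuned recommendation; on 9.0, changing shared_buffers requires a server restart):

```ini
# postgresql.conf -- sketch only
shared_buffers = 1GB             # was 240MB
effective_cache_size = 4GB       # rough guess at RAM available for OS cache
```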

Also what are the values for

cpu_tuple_cost
seq_page_cost
random_page_cost
effective_cache_size

What kind of harddisk is in the server? SSD? Regular ones (spinning disks)?
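All of these settings can be read with one query against the pg_settings view (available in 9.0):

```sql
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('cpu_tuple_cost', 'seq_page_cost', 'random_page_cost',
               'effective_cache_size', 'shared_buffers', 'work_mem');
```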



From:
Igor Neyman
Date:


> -----Original Message-----
> From: Anne Rosset [mailto:]
> Sent: Monday, May 06, 2013 1:01 PM
> To: Igor Neyman; 
> Cc: 
> Subject: RE: [PERFORM] Deterioration in performance when query executed
> in multi threads
>
> Hi Igor,
> The explain analyze is from when there was no load.
>
> Artifact table: 251831 rows
> Field_value table: 77378 rows
> Mntr_subscription: 929071 rows
> Relationship: 270478 row
> Folder: 280356 rows
> Item: 716465 rows
> Sfuser: 5733 rows
> Project: 1817 rows
>
> 8CPUs
> RAM: 8GB
>
> Postgres version: 9.0.13
>
>  And no we haven't switched or tested yet  with pgbouncer. We would
> like to do a bit more analysis before trying this.
>
> Thanks for your help,
> Anne
>
>


Anne,

Just as a quick test, try in the psql session/connection locally change enable_nestloop setting and run your query:

set enable_nestloop = off;
explain analyze <your_query>;

just to see if a different execution plan will be better, and whether the optimizer needs to be "convinced" to use this
different plan.
Please post what you get with the modified setting.

Also, what is the setting for effective_cache_size in postgresql.conf?

Regards,
Igor Neyman



From:
Igor Neyman
Date:


> -----Original Message-----
> From:  [mailto:pgsql-performance-
> ] On Behalf Of Thomas Kellerer
> Sent: Monday, May 06, 2013 1:12 PM
> To: 
> Subject: Re: [PERFORM] Deterioration in performance when query executed
> in multi threads
> 
> Anne Rosset, 06.05.2013 19:00:
> > Postgres version: 9.0.13
> >
> >> Work_mem is set to 64MB
> >> Shared_buffer to 240MB
> >> Segment_size is 1GB
> >> Wal_buffer is 10MB
> >
> > Artifact table: 251831 rows
> > Field_value table: 77378 rows
> > Mntr_subscription: 929071 rows
> > Relationship: 270478 row
> > Folder: 280356 rows
> > Item: 716465 rows
> > Sfuser: 5733 rows
> > Project: 1817 rows
> >
> > 8CPUs
> > RAM: 8GB
> >
> 
> With 8GB RAM you should be able to increase shared_buffer to 1GB or
> maybe even higher especially if this is a dedicated server.
> 240MB is pretty conservative for a server with that amount of RAM
> (unless you have many other applications running on that box)
> 
> Also what are the values for
> 
> cpu_tuple_cost
> seq_page_cost
> random_page_cost
> effective_cache_size
> 
> What kind of harddisk is in the server? SSD? Regular ones (spinning
> disks)?
> 
> 
> 


Also, with 8 CPUs, your max connection_pool size shouldn't be much bigger than 20.

Igor Neyman
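That number matches the widely cited rule of thumb pool_size = (cores * 2) + effective_spindle_count (a general heuristic, not something derived in this thread):

```shell
#!/bin/sh
# Rule-of-thumb connection pool size: cores * 2 + effective spindles.
# $1 = core count, $2 = effective spindle count (defaults to 1).
suggested_pool_size() {
    echo $(( $1 * 2 + ${2:-1} ))
}

# 8 CPUs, a single effective disk (e.g. one NFS mount):
suggested_pool_size 8
```

With 8 cores and one effective spindle this gives 17 -- the same ballpark as Igor's 20, and far below the max-pool-size=400 configured in JBoss.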

From:
Anne Rosset
Date:

Hi Thomas,
It is not a dedicated box (we have Jboss running too).

cpu_tuple_cost                  | 0.01
seq_page_cost                   | 1
random_page_cost                | 4
effective_cache_size            | 512MB

We have the data directory on nfs (rw,intr,hard,tcp,rsize=32768,wsize=32768,nfsvers=3,tcp). Note that we have also
tested putting the data directory on local disk and didn't find a big improvement.
 

Thanks,
Anne



-----Original Message-----
From:  [mailto:] On Behalf Of Thomas
Kellerer
Sent: Monday, May 06, 2013 10:12 AM
To: 
Subject: Re: [PERFORM] Deterioration in performance when query executed in multi threads

Anne Rosset, 06.05.2013 19:00:
> Postgres version: 9.0.13
>
>> Work_mem is set to 64MB
>> Shared_buffer to 240MB
>> Segment_size is 1GB
>> Wal_buffer is 10MB
>
> Artifact table: 251831 rows
> Field_value table: 77378 rows
> Mntr_subscription: 929071 rows
> Relationship: 270478 row
> Folder: 280356 rows
> Item: 716465 rows
> Sfuser: 5733 rows
> Project: 1817 rows
>
> 8CPUs
> RAM: 8GB
>

With 8GB RAM you should be able to increase shared_buffer to 1GB or maybe even higher especially if this is a dedicated
server.
240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on
that box)
 

Also what are the values for

cpu_tuple_cost
seq_page_cost
random_page_cost
effective_cache_size

What kind of harddisk is in the server? SSD? Regular ones (spinning disks)?





-- 
Sent via pgsql-performance mailing list ()
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

From:
Anne Rosset
Date:

Hi Igor,
Result with enable_nestloop off:




------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------
 Hash Join  (cost=49946.49..58830.02 rows=1 width=181) (actual time=2189.474..2664.888 rows=180 loops=1)
   Hash Cond: ((item.created_by_id)::text = (sfuser.id)::text)
   ->  Hash Join  (cost=49470.50..58345.53 rows=1 width=167) (actual time=1931.870..2404.745 rows=180 loops=1)
         Hash Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)
         ->  Hash Join  (cost=48994.51..57869.52 rows=1 width=153) (actual time=1927.603..2400.334 rows=180 loops=1)
               Hash Cond: ((relationship.target_id)::text = (artifact.id)::text)
               ->  Seq Scan on relationship  (cost=0.00..7973.38 rows=240435 width=19) (actual time=0.036..492.442
rows=241285 loops=1)
                     Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))
               ->  Hash  (cost=48994.49..48994.49 rows=1 width=154) (actual time=1858.350..1858.350 rows=180 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 34kB
                     ->  Hash Join  (cost=47260.54..48994.49 rows=1 width=154) (actual time=1836.495..1858.151 rows=180
loops=1)
                           Hash Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)
                           ->  Seq Scan on field_value field_value4  (cost=0.00..1443.78 rows=77378 width=9) (actual
time=22.104..30.694 rows=77378 loops=1)
                           ->  Hash  (cost=47260.52..47260.52 rows=1 width=163) (actual time=1814.005..1814.005
rows=180 loops=1)
                                 Buckets: 1024  Batches: 1  Memory Usage: 35kB
                                 ->  Hash Join  (cost=45526.57..47260.52 rows=1 width=163) (actual
time=1790.908..1813.780 rows=180 loops=1)
                                       Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)
                                       ->  Seq Scan on field_value field_value3  (cost=0.00..1443.78 rows=77378
width=15) (actual time=0.002..9.262 rows=77378 loops=1)
                                       ->  Hash  (cost=45526.55..45526.55 rows=1 width=166) (actual
time=1790.505..1790.505 rows=180 loops=1)
                                             Buckets: 1024  Batches: 1  Memory Usage: 36kB
                                             ->  Hash Join  (cost=43792.60..45526.55 rows=1 width=166) (actual
time=1768.362..1790.304 rows=180 loops=1)
                                                   Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)
                                                   ->  Seq Scan on field_value  (cost=0.00..1443.78 rows=77378 width=9)
(actual time=0.002..8.687 rows=77378 loops=1)
                                                   ->  Hash  (cost=43792.58..43792.58 rows=1 width=175) (actual
time=1767.928..1767.928 rows=180 loops=1)
                                                         Buckets: 1024  Batches: 1  Memory Usage: 38kB
                                                         ->  Hash Join  (cost=42058.63..43792.58 rows=1 width=175)
(actual time=1499.822..1767.734 rows=180 loops=1)
                                                               Hash Cond: ((field_value2.id)::text =
(artifact.status_fv)::text)
                                                               ->  Seq Scan on field_value field_value2
(cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)
                                                               ->  Hash  (cost=42058.61..42058.61 rows=1 width=178)
(actual time=1492.707..1492.707 rows=180 loops=1)
                                                                     Buckets: 1024  Batches: 1  Memory Usage: 38kB
                                                                     ->  Hash Join  (cost=18039.59..42058.61 rows=1
width=178) (actual time=1175.659..1492.482 rows=180 loops=1)
                                                                           Hash Cond: ((item.id)::text =
(artifact.id)::text)
                                                                           ->  Hash Join  (cost=12112.31..36130.95
rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)
                                                                                 Hash Cond: ((item.folder_id)::text =
(folder.id)::text)
                                                                                 ->  Seq Scan on item
(cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)
                                                                                       Filter: (NOT is_deleted)
                                                                                 ->  Hash  (cost=12112.27..12112.27
rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)
                                                                                       Buckets: 1024  Batches: 1
Memory Usage: 2kB
                                                                                       ->  Hash Join
(cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)
                                                                                             Hash Cond:
((folder.id)::text = (tracker.id)::text)
                                                                                             ->  Hash Join
(cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)
                                                                                                   Hash Cond:
((folder.project_id)::text = (project.id)::text)
                                                                                                   ->  Seq Scan on
folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1) 
                                                                                                   ->  Hash
(cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)
                                                                                                         Buckets: 1024
Batches: 1  Memory Usage: 1kB
                                                                                                         ->  Index Scan
using project_path on project  (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=
1 loops=1)
                                                                                                               Index
Cond: ((path)::text = 'projects.psr-pub-13'::text)
                                                                                             ->  Hash
(cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)
                                                                                                   Buckets: 1024
Batches: 1  Memory Usage: 267kB
                                                                                                   ->  Seq Scan on
tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1) 
                                                                           ->  Hash  (cost=5527.28..5527.28 rows=32000
width=50) (actual time=788.470..788.470 rows=31183 loops=1)
                                                                                 Buckets: 4096  Batches: 1  Memory
Usage: 2662kB
                                                                                 ->  Bitmap Heap Scan on artifact
(cost=628.28..5527.28 rows=32000 width=50) (actual time=83.568..771.651 rows=31183 loops=1)
                                                                                       Recheck Cond: (priority = 3)
                                                                                       ->  Bitmap Index Scan on
artifact_priority (cost=0.00..620.28 rows=32000 width=0) (actual time=82.877..82.877 rows=31260 loops=1 
)
                                                                                             Index Cond: (priority = 3)
         ->  Hash  (cost=404.33..404.33 rows=5733 width=32) (actual time=4.232..4.232 rows=5733 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 364kB
               ->  Seq Scan on sfuser sfuser2  (cost=0.00..404.33 rows=5733 width=32) (actual time=0.006..1.941
rows=5733 loops=1)
   ->  Hash  (cost=404.33..404.33 rows=5733 width=32) (actual time=257.485..257.485 rows=5733 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 364kB
         ->  Seq Scan on sfuser  (cost=0.00..404.33 rows=5733 width=32) (actual time=9.555..255.085 rows=5733 loops=1)
   SubPlan 1
     ->  Index Scan using mntr_subscr_user on mntr_subscription  (cost=0.00..8.47 rows=1 width=9) (actual
time=0.013..0.013 rows=0 loops=180)
           Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))
 Total runtime: 2666.011 ms


effective_cache_size            | 512MB

Thanks,
Anne

-----Original Message-----
From: Igor Neyman [mailto:]
Sent: Monday, May 06, 2013 10:13 AM
To: Anne Rosset; 
Cc: 
Subject: RE: [PERFORM] Deterioration in performance when query executed in multi threads



> -----Original Message-----
> From: Anne Rosset [mailto:]
> Sent: Monday, May 06, 2013 1:01 PM
> To: Igor Neyman; 
> Cc: 
> Subject: RE: [PERFORM] Deterioration in performance when query
> executed in multi threads
>
> Hi Igor,
> The explain analyze is from when there was no load.
>
> Artifact table: 251831 rows
> Field_value table: 77378 rows
> Mntr_subscription: 929071 rows
> Relationship: 270478 row
> Folder: 280356 rows
> Item: 716465 rows
> Sfuser: 5733 rows
> Project: 1817 rows
>
> 8CPUs
> RAM: 8GB
>
> Postgres version: 9.0.13
>
>  And no we haven't switched or tested yet  with pgbouncer. We would
> like to do a bit more analysis before trying this.
>
> Thanks for your help,
> Anne
>
>


Anne,

Just as a quick test, try in the psql session/connection locally change enable_nestloop setting and run your query:

set enable_nestloop = off;
explain analyze <your_query>;

just to see if a different execution plan will be better, and whether the optimizer needs to be "convinced" to use this
different plan.
Please post what you get with the modified setting.

Also, what is the setting for effective_cache_size in postgresql.conf?

Regards,
Igor Neyman



From:
Gavin Flower
Date:

Anne, please read the comment at the bottom of this post!


On 07/05/13 09:46, Anne Rosset wrote:
> Hi Thomas,
> It is not a dedicated box (we have Jboss running too).
>
> cpu_tuple_cost                  | 0.01
> seq_page_cost                   | 1
> random_page_cost                | 4
> effective_cache_size            | 512MB
>
> We have the data directory on nfs (rw,intr,hard,tcp,rsize=32768,wsize=32768,nfsvers=3,tcp). Note that we have also tested putting the data directory on local disk and didn't find a big improvement.
>
> Thanks,
> Anne
>
>
>
> -----Original Message-----
> From:  [mailto:] On Behalf Of Thomas
Kellerer
> Sent: Monday, May 06, 2013 10:12 AM
> To: 
> Subject: Re: [PERFORM] Deterioration in performance when query executed in multi threads
>
> Anne Rosset, 06.05.2013 19:00:
>> Postgres version: 9.0.13
>>
>>> Work_mem is set to 64MB
>>> Shared_buffer to 240MB
>>> Segment_size is 1GB
>>> Wal_buffer is 10MB
>> Artifact table: 251831 rows
>> Field_value table: 77378 rows
>> Mntr_subscription: 929071 rows
>> Relationship: 270478 row
>> Folder: 280356 rows
>> Item: 716465 rows
>> Sfuser: 5733 rows
>> Project: 1817 rows
>>
>> 8CPUs
>> RAM: 8GB
>>
With 8GB RAM you should be able to increase shared_buffers to 1GB or maybe even higher, especially if this is a dedicated server.
240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on that box).
>
> Also what are the values for
>
> cpu_tuple_cost
> seq_page_cost
> random_page_cost
> effective_cache_size
>
> What kind of harddisk is in the server? SSD? Regular ones (spinning disks)?
>
>
The policy on this list is to add comments at the bottom, so people can
first read what you are replying to.

Though you can intersperse comments where that is appropriate.


Cheers,
Gavin


From:
Igor Neyman
Date:

________________________________________
From: Anne Rosset []
Sent: Monday, May 06, 2013 5:51 PM
To: Igor Neyman; 
Cc: 
Subject: RE: [PERFORM] Deterioration in performance when query executed in  multi threads

Hi Igor,
Result with enable_nestloop off:




------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=49946.49..58830.02 rows=1 width=181) (actual time=2189.474..2664.888 rows=180 loops=1)
   Hash Cond: ((item.created_by_id)::text = (sfuser.id)::text)
   ->  Hash Join  (cost=49470.50..58345.53 rows=1 width=167) (actual time=1931.870..2404.745 rows=180 loops=1)
         Hash Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)
         ->  Hash Join  (cost=48994.51..57869.52 rows=1 width=153) (actual time=1927.603..2400.334 rows=180 loops=1)
               Hash Cond: ((relationship.target_id)::text = (artifact.id)::text)
               ->  Seq Scan on relationship  (cost=0.00..7973.38 rows=240435 width=19) (actual time=0.036..492.442 rows=241285 loops=1)
                     Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))
               ->  Hash  (cost=48994.49..48994.49 rows=1 width=154) (actual time=1858.350..1858.350 rows=180 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 34kB
                     ->  Hash Join  (cost=47260.54..48994.49 rows=1 width=154) (actual time=1836.495..1858.151 rows=180 loops=1)
                           Hash Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)
                           ->  Seq Scan on field_value field_value4  (cost=0.00..1443.78 rows=77378 width=9) (actual time=22.104..30.694 rows=77378 loops=1)
                           ->  Hash  (cost=47260.52..47260.52 rows=1 width=163) (actual time=1814.005..1814.005 rows=180 loops=1)
                                 Buckets: 1024  Batches: 1  Memory Usage: 35kB
                                 ->  Hash Join  (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)
                                       Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)
                                       ->  Seq Scan on field_value field_value3  (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)
                                       ->  Hash  (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)
                                             Buckets: 1024  Batches: 1  Memory Usage: 36kB
                                             ->  Hash Join  (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)
                                                   Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)
                                                   ->  Seq Scan on field_value  (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)
                                                   ->  Hash  (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)
                                                         Buckets: 1024  Batches: 1  Memory Usage: 38kB
                                                         ->  Hash Join  (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)
                                                               Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)
                                                               ->  Seq Scan on field_value field_value2  (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)
                                                               ->  Hash  (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)
                                                                     Buckets: 1024  Batches: 1  Memory Usage: 38kB
                                                                     ->  Hash Join  (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)
                                                                           Hash Cond: ((item.id)::text = (artifact.id)::text)
                                                                           ->  Hash Join  (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)
                                                                                 Hash Cond: ((item.folder_id)::text = (folder.id)::text)
                                                                                 ->  Seq Scan on item  (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)
                                                                                       Filter: (NOT is_deleted)
                                                                                 ->  Hash  (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)
                                                                                       Buckets: 1024  Batches: 1  Memory Usage: 2kB
                                                                                       ->  Hash Join  (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)
                                                                                             Hash Cond: ((folder.id)::text = (tracker.id)::text)
                                                                                             ->  Hash Join  (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)
                                                                                                   Hash Cond: ((folder.project_id)::text = (project.id)::text)
                                                                                                   ->  Seq Scan on folder  (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)
                                                                                                   ->  Hash  (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)
                                                                                                         Buckets: 1024  Batches: 1  Memory Usage: 1kB
                                                                                                         ->  Index Scan using project_path on project  (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=1 loops=1)
                                                                                                               Index Cond: ((path)::text = 'projects.psr-pub-13'::text)
                                                                                             ->  Hash  (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)
                                                                                                   Buckets: 1024  Batches: 1  Memory Usage: 267kB
                                                                                                   ->  Seq Scan on tracker  (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)
                                                                           ->  Hash  (cost=5527.28..5527.28 rows=32000 width=50) (actual time=788.470..788.470 rows=31183 loops=1)
                                                                                 Buckets: 4096  Batches: 1  Memory Usage: 2662kB
                                                                                 ->  Bitmap Heap Scan on artifact  (cost=628.28..5527.28 rows=32000 width=50) (actual time=83.568..771.651 rows=31183 loops=1)
                                                                                       Recheck Cond: (priority = 3)
                                                                                       ->  Bitmap Index Scan on artifact_priority  (cost=0.00..620.28 rows=32000 width=0) (actual time=82.877..82.877 rows=31260 loops=1)
                                                                                             Index Cond: (priority = 3)
         ->  Hash  (cost=404.33..404.33 rows=5733 width=32) (actual time=4.232..4.232 rows=5733 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 364kB
               ->  Seq Scan on sfuser sfuser2  (cost=0.00..404.33 rows=5733 width=32) (actual time=0.006..1.941 rows=5733 loops=1)
   ->  Hash  (cost=404.33..404.33 rows=5733 width=32) (actual time=257.485..257.485 rows=5733 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 364kB
         ->  Seq Scan on sfuser  (cost=0.00..404.33 rows=5733 width=32) (actual time=9.555..255.085 rows=5733 loops=1)
   SubPlan 1
     ->  Index Scan using mntr_subscr_user on mntr_subscription  (cost=0.00..8.47 rows=1 width=9) (actual time=0.013..0.013 rows=0 loops=180)
           Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))
 Total runtime: 2666.011 ms


effective_cache_size            | 512MB

Thanks,
Anne

--------------------------------------------------------------

Anne,

So, this shows that the original execution plan (with nested loops) was not that bad.

Still, considering your hardware config (and I think you mentioned it's a "dedicated" database server), I'd set:

shared_buffers = 3GB
effective_cache_size = 7GB

shared_buffers is a "global" setting and requires a Postgres restart after being changed in postgresql.conf.

But besides the Postgres config parameters, it seems your particular problem is in the correct configuration of the connection pooler.

Igor Neyman
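In postgresql.conf terms the suggestion amounts to something like the following (note: the actual PostgreSQL parameter is shared_buffers; the values are Igor's suggestions for this 8GB box, not measured optima):

```ini
# postgresql.conf -- sketch of the suggested settings for an 8 GB server.
# shared_buffers requires a server restart; effective_cache_size only a reload.
shared_buffers = 3GB          # global buffer cache (was 240MB in this thread)
effective_cache_size = 7GB    # planner's estimate of OS cache + shared_buffers (was 512MB)
```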


From:
Anne Rosset
Date:

Thanks Igor.
I am going to test with pgbouncer. Will let you know.

Thanks,
Anne
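For reference, a minimal pgbouncer setup that bounds concurrency the way Ken suggested might look like this (host, database name, file paths, and pool sizes are illustrative assumptions, not values from this thread):

```ini
; pgbouncer.ini -- illustrative sketch, not values taken from this thread
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; release the server connection at transaction end
default_pool_size = 16       ; ~2x the 8 CPU cores; far below the 400-connection JDBC max
max_client_conn = 400        ; extra clients queue in pgbouncer instead of piling onto Postgres
```

The JDBC pool would then point at port 6432 instead of the Postgres port, so the 100 test threads queue in pgbouncer while only a CPU-sized number of queries run concurrently.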
