Discussion: Sync Scan update

Sync Scan update

From:
Jeff Davis
Date:
I have updated my Synchronized Scan patch and have had more time for
testing.

Go to http://j-davis.com/postgresql/syncscan-results10.html
where you can download the patch, and see the benchmarks that I've run.

The results are very promising. I did not see any significant slowdown
for non-concurrent scans or for scans that fit into memory, although I
do need more testing in this area.

The benchmarks that I ran tested the concurrent performance, and the
results were excellent.

I also added two new simple features to the patch (they're just
#define'd tunables in heapam.h):
(1) If the table is smaller than
effective_cache_size*SYNC_SCAN_THRESHOLD then the patch doesn't do
anything different from current behavior.
(2) The scans can start earlier than the hint implies by setting
SYNC_SCAN_START_OFFSET between 0 and 1. This is helpful because it makes
the scan start in a place where the cache trail is likely to be
continuous between the starting point and the location of an existing scan.
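
To make (1) and (2) concrete, here is a rough sketch of the idea. This is
illustrative only, not code from the patch: the helper name, the example
constants, and the assumption that the offset is a fraction of the table
size are all made up for the sake of the example.

    #define SYNC_SCAN_THRESHOLD     0.25    /* fraction of effective_cache_size */
    #define SYNC_SCAN_START_OFFSET  0.15    /* fraction of the table to back off */

    /*
     * Pick the starting block for a new scan of a table 'nblocks' blocks
     * long, given the block 'hint' reported by a scan already in progress.
     * effective_cache_size and nblocks are both measured in blocks, so the
     * comparison is apples to apples.  (Assumes the usual heapam.c context
     * for BlockNumber and effective_cache_size.)
     */
    static BlockNumber
    sync_scan_start_block(BlockNumber nblocks, BlockNumber hint)
    {
        BlockNumber offset;

        /* (1) Small table: keep the current behavior and start at block 0. */
        if (nblocks < (BlockNumber) (effective_cache_size * SYNC_SCAN_THRESHOLD))
            return 0;

        /*
         * (2) Start somewhat before the reported position, so that the
         * cache trail between our starting point and the existing scan is
         * likely to be continuous.  Wrap around if the hint is near the
         * start of the table.
         */
        offset = (BlockNumber) (nblocks * SYNC_SCAN_START_OFFSET);
        if (hint >= offset)
            return hint - offset;
        else
            return nblocks - (offset - hint);
    }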

I'd like any feedback, particularly any results that show a slowdown
from current behavior. I think I fixed Luke's problem (actually, it was
a fluke that it was even working at all), but I haven't heard back. Some
new feedback would be very helpful.

Thanks.

Regards,
    Jeff Davis


Re: Sync Scan update

From:
"Simon Riggs"
Date:
On Tue, 2006-12-19 at 09:07 -0800, Jeff Davis wrote:
> I have updated my Synchronized Scan patch and have had more time for
> testing.
> 
> Go to http://j-davis.com/postgresql/syncscan-results10.html
> where you can download the patch, and see the benchmarks that I've run.
> 
> The results are very promising. I did not see any significant slowdown
> for non-concurrent scans or for scans that fit into memory, although I
> do need more testing in this area.

Yes, very promising.

I'd like to see some tests with 2 parallel threads, since that is the most
common case. I'd also like to see some tests with varying queries,
rather than all of them using select count(*). My worry is that these tests
all progress along their scans at exactly the same rate, so they are likely
to stay in touch. What happens when we have significantly more CPU work to
do on one scan - does it fall behind?

I'd like to see all testing use log_executor_stats=on for those
sessions. I would like to know whether the blocks are being hit while
still in shared_buffers or whether we rely on the use of the full
filesystem buffer cache to provide performance.

It would be very cool to run a background performance test also, say a
pgbench run with a -S 100. That would show us what it's like to try to
run multiple queries when most of the cache is full with something else.

It would be better to have a GUC to control the scanning
e.g.
    synch_scan_threshold = 256MB

rather than link it to effective_cache_size always, since that is
related to index scan tuning.
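
Something along these lines is what I have in mind -- only a sketch, with a
hypothetical variable and the guc.c boilerplate for registering the setting
left out:

    /* hypothetical GUC, measured in 8kB blocks; 32768 blocks = 256MB */
    int     synch_scan_threshold = 32768;

    /*
     * Synchronize only on tables big enough to be worth it, based on the
     * GUC above rather than on effective_cache_size.
     */
    static bool
    synch_scan_enabled(BlockNumber nblocks)
    {
        return nblocks >= (BlockNumber) synch_scan_threshold;
    }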

-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com




Re: Sync Scan update

From:
Jeff Davis
Date:
On Tue, 2006-12-19 at 17:43 +0000, Simon Riggs wrote:
> On Tue, 2006-12-19 at 09:07 -0800, Jeff Davis wrote:
> > I have updated my Synchronized Scan patch and have had more time for
> > testing.
> > 
> > Go to http://j-davis.com/postgresql/syncscan-results10.html
> > where you can download the patch, and see the benchmarks that I've run.
> > 
> > The results are very promising. I did not see any significant slowdown
> > for non-concurrent scans or for scans that fit into memory, although I
> > do need more testing in this area.
> 
> Yes, very promising.
> 
> Like to see some tests with 2 parallel threads, since that is the most
> common case. I'd also like to see some tests with varying queries,
> rather than all use select count(*). My worry is that these tests all
> progress along their scans at exactly the same rate, so are likely to
> stay in touch. What happens when we have significantly more CPU work to
> do on one scan - does it fall behind??

Right, that's important. Hopefully the test you describe below sheds
some light on that.

> I'd like to see all testing use log_executor_stats=on for those
> sessions. I would like to know whether the blocks are being hit while
> still in shared_buffers or whether we rely on the use of the full
> filesystem buffer cache to provide performance.

Ok, will do.

> It would be very cool to run a background performance test also, say a
> pgbench run with a -S 100. That would show us what its like to try to
> run multiple queries when most of the cache is full with something else.

Do you mean '-S -s 100' or '-s 100'? Reading the pgbench docs it doesn't
look like '-S' takes an argument.

> It would be better to have a GUC to control the scanning
> e.g.
>     synch_scan_threshold = 256MB
> 
> rather than link it to effective_cache_size always, since that is
> related to index scan tuning.

I will make it completely unrelated to effective_cache_size. I'll do the
same with "sync_scan_start_offset" (by the way, does someone have a
better name for that?).

Regards,
    Jeff Davis



Re: Sync Scan update

From:
Gregory Stark
Date:
"Simon Riggs" <simon@2ndquadrant.com> writes:

> Like to see some tests with 2 parallel threads, since that is the most
> common case. I'd also like to see some tests with varying queries,
> rather than all use select count(*). My worry is that these tests all
> progress along their scans at exactly the same rate, so are likely to
> stay in touch. What happens when we have significantly more CPU work to
> do on one scan - does it fall behind??

If it's just CPU then I would expect the cache to help the followers keep up
pretty easily. What concerns me is queries that involve more I/O. For example
if the leader is doing a straight sequential scan and the follower is doing a
nested loop join driven by the sequential scan. Or worse, what happens if the
leader is doing a nested loop and the follower which is just doing a straight
sequential scan is being held back?

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


Re: Sync Scan update

From:
Jeff Davis
Date:
On Tue, 2006-12-19 at 18:05 +0000, Gregory Stark wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> 
> > Like to see some tests with 2 parallel threads, since that is the most
> > common case. I'd also like to see some tests with varying queries,
> > rather than all use select count(*). My worry is that these tests all
> > progress along their scans at exactly the same rate, so are likely to
> > stay in touch. What happens when we have significantly more CPU work to
> > do on one scan - does it fall behind??
> 
> If it's just CPU then I would expect the cache to help the followers keep up
> pretty easily. What concerns me is queries that involve more I/O. For example
> if the leader is doing a straight sequential scan and the follower is doing a
> nested loop join driven by the sequential scan. Or worse, what happens if the

That would be one painful query: scanning two tables in a nested loop,
neither of which fit into physical memory! ;)

If one table does fit into memory, it's likely to stay there since a
nested loop will keep the pages so hot.

I can't think of a way to test two big tables in a nested loop because
it would take so long. However, it would be worth trying it with an
index, because that would cause random I/O during the scan.

> leader is doing a nested loop and the follower which is just doing a straight
> sequential scan is being held back?
> 

The follower will never be held back in my current implementation.

My current implementation relies on the scans to stay close together
once they start close together. If one falls seriously behind, it will
fall outside of the main "cache trail" and cause the performance to
degrade due to disk seeking and lower cache efficiency.
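
Roughly, the mechanism looks like this (a sketch only -- the names below are
made up and locking is ignored; the point is that scans only publish and
read a hint, they never wait on one another):

    /* One hint per relation, kept in shared memory. */
    typedef struct ss_hint
    {
        Oid         relid;      /* table being scanned */
        BlockNumber cur_block;  /* last block some scan reported */
    } ss_hint;

    /* Every scan calls this as it advances: just overwrite the hint. */
    static void
    ss_report_location(ss_hint *hint, Oid relid, BlockNumber blkno)
    {
        hint->relid = relid;
        hint->cur_block = blkno;
    }

    /*
     * A new scan calls this once at startup: read the hint and begin
     * there.  A scan that later falls behind is not blocked by anything;
     * it simply drifts out of the shared cache trail and pays for it in
     * seeks and cache misses.
     */
    static BlockNumber
    ss_get_start_block(ss_hint *hint, Oid relid)
    {
        if (hint->relid == relid)
            return hint->cur_block;
        return 0;               /* no known concurrent scan: start at 0 */
    }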

I think Simon is concerned about CPU because that will be a common case:
if one scan is CPU bound and another is I/O bound, they will progress at
different rates. That's bound to cause seeking and poor cache
efficiency.

Although I don't think either of these cases will be worse than current
behavior, it warrants more testing.

Regards,
    Jeff Davis



Re: Sync Scan update

From:
"Jim C. Nasby"
Date:
On Tue, Dec 19, 2006 at 10:37:21AM -0800, Jeff Davis wrote:
> > leader is doing a nested loop and the follower which is just doing a straight
> > sequential scan is being held back?
> > 
> 
> The follower will never be held back in my current implementation.
> 
> My current implementation relies on the scans to stay close together
> once they start close together. If one falls seriously behind, it will
> fall outside of the main "cache trail" and cause the performance to
> degrade due to disk seeking and lower cache efficiency.

That's something else that it would be really good to have data for; in
some cases it will be better for the slow case to just fall behind, but
in other cases the added seeking will slow everything down enough that
it would have been faster to just stay at the speed of the slow scan.
The question is where those two thresholds are...
-- 
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)


Re: Sync Scan update

From:
Jeff Davis
Date:
On Sat, 2006-12-30 at 13:35 -0600, Jim C. Nasby wrote:
> > My current implementation relies on the scans to stay close together
> > once they start close together. If one falls seriously behind, it will
> > fall outside of the main "cache trail" and cause the performance to
> > degrade due to disk seeking and lower cache efficiency.
> 
> That's something else that it would be really good to have data for; in
> some cases it will be better for the slow case to just fall behind, but
> in other cases the added seeking will slow everything down enough that
> it would have been faster to just stay at the speed of the slow scan.
> The question is where those two thresholds are...

Right. I will do more testing for my basic patch soon, but a lot of
testing is required to characterize when the scans should move apart and
when they should stay together. The problem is that there are a lot of
variables. If you have a few scans that use a moderate amount of CPU,
the scans might all stay together (I/O bound). But as soon as you get
more scans, those scans could all become CPU bound (and could be mixed
with other types of scans on the same table).

If you have some ideas for tests I can run, I'll get back to you with the
results. However, this kind of test would probably need to be run on a
variety of hardware.

Regards,
    Jeff Davis



Re: Sync Scan update

From:
"Jim C. Nasby"
Date:
On Tue, Jan 02, 2007 at 09:48:22AM -0800, Jeff Davis wrote:
> On Sat, 2006-12-30 at 13:35 -0600, Jim C. Nasby wrote:
> > > My current implementation relies on the scans to stay close together
> > > once they start close together. If one falls seriously behind, it will
> > > fall outside of the main "cache trail" and cause the performance to
> > > degrade due to disk seeking and lower cache efficiency.
> > 
> > That's something else that it would be really good to have data for; in
> > some cases it will be better for the slow case to just fall behind, but
> > in other cases the added seeking will slow everything down enough that
> > it would have been faster to just stay at the speed of the slow scan.
> > The question is where those two thresholds are...
> 
> Right. I will do more testing for my basic patch soon, but a lot of
> testing is required to characterize when the scans should move apart and
> when they should stay together. The problem is that there are a lot of
> variables. If you have a few scans that uses a moderate amount of CPU,
> the scans might all stay together (I/0 bound). But as soon as you get
> more scans, those scans could all become CPU bound (and could be mixed
> with other types of scans on the same table).
> 
> If you have some ideas for tests I can run I'll get back to you with the
> results. However, this kind of test would probably need to be run on a
> variety of hardware.

Well, that's the real trick: ideally, syncscan would be designed in such
a way that you wouldn't have to manually tune at what point scans should
diverge instead of converge; the system should just figure it out.
-- 
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)


Re: Sync Scan update

From:
Bruce Momjian
Date:
Thread added to TODO for item:

* Allow sequential scans to take advantage of other concurrent sequential scans, also called "Synchronised Scanning"


---------------------------------------------------------------------------

Jeff Davis wrote:
> I have updated my Synchronized Scan patch and have had more time for
> testing.
> 
> Go to http://j-davis.com/postgresql/syncscan-results10.html
> where you can download the patch, and see the benchmarks that I've run.
> 
> The results are very promising. I did not see any significant slowdown
> for non-concurrent scans or for scans that fit into memory, although I
> do need more testing in this area.
> 
> The benchmarks that I ran tested the concurrent performance, and the
> results were excellent.
> 
> I also added two new simple features to the patch (they're just
> #define'd tunables in heapam.h):
> (1) If the table is smaller than
> effective_cache_size*SYNC_SCAN_THRESHOLD then the patch doesn't do
> anything different from current behavior.
> (2) The scans can start earlier than the hint implies by setting
> SYNC_SCAN_START_OFFSET between 0 and 1. This is helpful because it makes
> the scan start in a place where the cache trail is likely to be
> continuous between the starting point and the location of an existing scan.
> 
> I'd like any feedback, particularly any results that show a slowdown
> from current behavior. I think I fixed Luke's problem (actually, it was
> a fluke that it was even working at all), but I haven't heard back. Some
> new feedback would be very helpful.
> 
> Thanks.
> 
> Regards,
>     Jeff Davis
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Have you searched our list archives?
> 
>                http://archives.postgresql.org

-- 
  Bruce Momjian   bruce@momjian.us
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +