Discussion: effective_io_concurrency on EBS/gp2


effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
Hi,

I've tried to run a benchmark, similar to this one:


https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com

CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';

pgbench -i -s 1000 --tablespace=test pgbench

echo "" >test.txt
for i in 0 1 2 4 8 16 32 64 128 256 ; do
   sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart
   echo "effective_io_concurrency=$i" >>test.txt
   psql pgbench -c "set effective_io_concurrency=$i; set enable_indexscan=off; explain (analyze, buffers) select * from pgbench_accounts where aid between 1000 and 10000000 and abalance != 0;" >>test.txt
done

I get the following results:

effective_io_concurrency=0
  Execution time: 40262.781 ms
effective_io_concurrency=1
  Execution time: 98125.987 ms
effective_io_concurrency=2
  Execution time: 55343.776 ms
effective_io_concurrency=4
  Execution time: 52505.638 ms
effective_io_concurrency=8
  Execution time: 54954.024 ms
effective_io_concurrency=16
  Execution time: 54346.455 ms
effective_io_concurrency=32
  Execution time: 55196.626 ms
effective_io_concurrency=64
  Execution time: 55057.956 ms
effective_io_concurrency=128
  Execution time: 54963.510 ms
effective_io_concurrency=256
  Execution time: 54339.258 ms

The test used a 100 GB gp2 SSD EBS volume. More detailed query plans are 
attached.

PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

The results look really confusing to me in two ways. The first one is 
that I've seen recommendations to set effective_io_concurrency=256 (or 
more) on EBS. The other one is that effective_io_concurrency=1 (the 
worst case) is actually the default for PostgreSQL on Linux.
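
For reference, the SET in the script above only changes the value for that 
one session; a cluster-wide change would look roughly like this (just a 
sketch; the value 2 is an arbitrary example, not a recommendation):

ALTER SYSTEM SET effective_io_concurrency = 2;
SELECT pg_reload_conf();
SHOW effective_io_concurrency;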

Thoughts?

Regards,
Vitaliy


Attachments

Re: effective_io_concurrency on EBS/gp2

From: Rick Otten
Date:
We moved our stuff out of AWS a little over a year ago because the performance was crazy inconsistent and unpredictable.  I think they do a lot of oversubscribing so you get strange sawtooth performance patterns depending on who else is sharing your infrastructure and what they are doing at the time.

The same unit of work would take 20 minutes each for several hours, and then take 2 1/2 hours each for a day, and then back to 20 minutes, and sometimes anywhere in between for hours or days at a stretch.  I could never tell the business when the processing would be done, which made it hard for them to set expectations with customers, promise deliverables, or manage the business.  Smaller nodes seemed to be worse than larger nodes; I only have theories as to why.  I never got good support from AWS to help me figure out what was happening.

My first thought is to run the same test on different days of the week and different times of day to see if the numbers change radically.  Maybe spin up a node in another data center and availability zone and try the test there too.

My real suggestion is to move to Google Cloud or Rackspace or Digital Ocean or somewhere other than AWS.   (We moved to Google Cloud and have been very happy there.  The performance is much more consistent, the management UI is more intuitive, AND the cost for equivalent infrastructure is lower too.)



Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
I've tried to re-run the test for some specific values of effective_io_concurrency. The results were the same.

That's why I don't think the order of tests or variability in "hardware" performance affected the results.

Regards,
Vitaliy



Re: effective_io_concurrency on EBS/gp2

From: Pavel Stehule
Date:


2018-01-31 14:15 GMT+01:00 Vitaliy Garnashevich <vgarnashevich@gmail.com>:
I've tried to re-run the test for some specific values of effective_io_concurrency. The results were the same.

That's why I don't think the order of tests or variability in "hardware" performance affected the results.

AWS uses some intelligent throttling, so it can be related to hardware.






RE: effective_io_concurrency on EBS/gp2

From: "Gary Doades"
Date:

 

 

> I've tried to re-run the test for some specific values of effective_io_concurrency. The results were the same.

> That's why I don't think the order of tests or variability in "hardware" performance affected the results.

We run many MS SQL server VMs in AWS with more than adequate performance.

 

AWS EBS performance is variable and depends on various factors, mainly the size of the volume and the size of the VM it is attached to. The bigger the VM, the more EBS “bandwidth” is available, especially if the VM is EBS Optimised.

 

The size of the disk determines the IOPS available, with smaller disks naturally getting less. However, even a small disk with (say) 300 IOPS is allowed to burst up to 3000 IOPS for a while and then gets clobbered. If you want predictable performance then get a bigger disk! If you really want maximum, predictable performance get an EBS Optimised VM and use Provisioned IOPS EBS volumes…. At a price!
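
To put rough numbers on that (using the gp2 formula as published at the time: a baseline of 3 IOPS per GiB with a 100 IOPS floor, burstable to 3,000 IOPS on smaller volumes; worth re-checking against the current AWS docs), a quick sketch in SQL:

-- assumed gp2 arithmetic, not measured: baseline = greatest(100, 3 * size_gib), burst cap 3000
SELECT size_gib,
       greatest(100, size_gib * 3) AS baseline_iops,
       3000                        AS burst_iops
FROM (VALUES (100), (334), (1000)) AS v(size_gib);

So a 100 GB volume sits at roughly 300 IOPS baseline, and only around 1 TB does the baseline catch up with the 3,000 IOPS burst ceiling.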

 

Cheers,

Gary.


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
More tests:

io1, 100 GB:

effective_io_concurrency=0
 Execution time: 40333.626 ms
effective_io_concurrency=1
 Execution time: 163840.500 ms
effective_io_concurrency=2
 Execution time: 162606.330 ms
effective_io_concurrency=4
 Execution time: 163670.405 ms
effective_io_concurrency=8
 Execution time: 161800.478 ms
effective_io_concurrency=16
 Execution time: 161962.319 ms
effective_io_concurrency=32
 Execution time: 160451.435 ms
effective_io_concurrency=64
 Execution time: 161763.632 ms
effective_io_concurrency=128
 Execution time: 161687.398 ms
effective_io_concurrency=256
 Execution time: 160945.066 ms

effective_io_concurrency=256
 Execution time: 161226.440 ms
effective_io_concurrency=128
 Execution time: 161977.954 ms
effective_io_concurrency=64
 Execution time: 159122.006 ms
effective_io_concurrency=32
 Execution time: 154923.569 ms
effective_io_concurrency=16
 Execution time: 160922.819 ms
effective_io_concurrency=8
 Execution time: 160577.122 ms
effective_io_concurrency=4
 Execution time: 157509.481 ms
effective_io_concurrency=2
 Execution time: 161806.713 ms
effective_io_concurrency=1
 Execution time: 164026.708 ms
effective_io_concurrency=0
 Execution time: 40196.182 ms


st1, 500 GB:

effective_io_concurrency=0
 Execution time: 40542.583 ms
effective_io_concurrency=1
 Execution time: 119996.892 ms
effective_io_concurrency=2
 Execution time: 51137.998 ms
effective_io_concurrency=4
 Execution time: 42301.922 ms
effective_io_concurrency=8
 Execution time: 42081.877 ms
effective_io_concurrency=16
 Execution time: 42253.782 ms
effective_io_concurrency=32
 Execution time: 42087.216 ms
effective_io_concurrency=64
 Execution time: 42112.105 ms
effective_io_concurrency=128
 Execution time: 42271.850 ms
effective_io_concurrency=256
 Execution time: 42213.074 ms

effective_io_concurrency=256
 Execution time: 42255.568 ms
effective_io_concurrency=128
 Execution time: 42030.515 ms
effective_io_concurrency=64
 Execution time: 41713.753 ms
effective_io_concurrency=32
 Execution time: 42035.436 ms
effective_io_concurrency=16
 Execution time: 42221.581 ms
effective_io_concurrency=8
 Execution time: 42203.730 ms
effective_io_concurrency=4
 Execution time: 42236.082 ms
effective_io_concurrency=2
 Execution time: 49531.558 ms
effective_io_concurrency=1
 Execution time: 117160.222 ms
effective_io_concurrency=0
 Execution time: 40059.259 ms

Regards,
Vitaliy


Re: effective_io_concurrency on EBS/gp2

From: Claudio Freire
Date:
On Wed, Jan 31, 2018 at 1:57 PM, Vitaliy Garnashevich
<vgarnashevich@gmail.com> wrote:
> More tests:
>
> io1, 100 GB:
>
> effective_io_concurrency=0
>  Execution time: 40333.626 ms
> effective_io_concurrency=1
>  Execution time: 163840.500 ms

In my experience playing with prefetch, e_i_c>0 interferes with kernel
read-ahead. What you've got there would make sense if what postgres
thinks will be random I/O ends up being sequential. With e_i_c=0, the
kernel will optimize the hell out of it, because it's a predictable
pattern. But with e_i_c=1, the kernel's optimization gets disabled but
postgres isn't reading much ahead, so you get the worst possible case.


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
I've done some more tests. Here they all are:

io1, 100 GB SSD, 1000 IOPS
effective_io_concurrency=0 Execution time: 40333.626 ms
effective_io_concurrency=1 Execution time: 163840.500 ms
effective_io_concurrency=2 Execution time: 162606.330 ms
effective_io_concurrency=4 Execution time: 163670.405 ms
effective_io_concurrency=8 Execution time: 161800.478 ms
effective_io_concurrency=16 Execution time: 161962.319 ms
effective_io_concurrency=32 Execution time: 160451.435 ms
effective_io_concurrency=64 Execution time: 161763.632 ms
effective_io_concurrency=128 Execution time: 161687.398 ms
effective_io_concurrency=256 Execution time: 160945.066 ms
effective_io_concurrency=256 Execution time: 161226.440 ms
effective_io_concurrency=128 Execution time: 161977.954 ms
effective_io_concurrency=64 Execution time: 159122.006 ms
effective_io_concurrency=32 Execution time: 154923.569 ms
effective_io_concurrency=16 Execution time: 160922.819 ms
effective_io_concurrency=8 Execution time: 160577.122 ms
effective_io_concurrency=4 Execution time: 157509.481 ms
effective_io_concurrency=2 Execution time: 161806.713 ms
effective_io_concurrency=1 Execution time: 164026.708 ms
effective_io_concurrency=0 Execution time: 40196.182 ms

gp2, 100 GB SSD
effective_io_concurrency=0 Execution time: 40262.781 ms
effective_io_concurrency=1 Execution time: 98125.987 ms
effective_io_concurrency=2 Execution time: 55343.776 ms
effective_io_concurrency=4 Execution time: 52505.638 ms
effective_io_concurrency=8 Execution time: 54954.024 ms
effective_io_concurrency=16 Execution time: 54346.455 ms
effective_io_concurrency=32 Execution time: 55196.626 ms
effective_io_concurrency=64 Execution time: 55057.956 ms
effective_io_concurrency=128 Execution time: 54963.510 ms
effective_io_concurrency=256 Execution time: 54339.258 ms

io1, 1 TB SSD, 3000 IOPS
effective_io_concurrency=0 Execution time: 40691.396 ms
effective_io_concurrency=1 Execution time: 87524.939 ms
effective_io_concurrency=2 Execution time: 54197.982 ms
effective_io_concurrency=4 Execution time: 55082.740 ms
effective_io_concurrency=8 Execution time: 54838.161 ms
effective_io_concurrency=16 Execution time: 52561.553 ms
effective_io_concurrency=32 Execution time: 54266.847 ms
effective_io_concurrency=64 Execution time: 54683.102 ms
effective_io_concurrency=128 Execution time: 54643.874 ms
effective_io_concurrency=256 Execution time: 42944.938 ms

gp2, 1 TB SSD
effective_io_concurrency=0 Execution time: 40072.880 ms
effective_io_concurrency=1 Execution time: 83528.679 ms
effective_io_concurrency=2 Execution time: 55706.941 ms
effective_io_concurrency=4 Execution time: 55664.646 ms
effective_io_concurrency=8 Execution time: 54699.658 ms
effective_io_concurrency=16 Execution time: 54632.291 ms
effective_io_concurrency=32 Execution time: 54793.305 ms
effective_io_concurrency=64 Execution time: 55227.875 ms
effective_io_concurrency=128 Execution time: 54638.744 ms
effective_io_concurrency=256 Execution time: 54869.761 ms

st1, 500 GB HDD
effective_io_concurrency=0 Execution time: 40542.583 ms
effective_io_concurrency=1 Execution time: 119996.892 ms
effective_io_concurrency=2 Execution time: 51137.998 ms
effective_io_concurrency=4 Execution time: 42301.922 ms
effective_io_concurrency=8 Execution time: 42081.877 ms
effective_io_concurrency=16 Execution time: 42253.782 ms
effective_io_concurrency=32 Execution time: 42087.216 ms
effective_io_concurrency=64 Execution time: 42112.105 ms
effective_io_concurrency=128 Execution time: 42271.850 ms
effective_io_concurrency=256 Execution time: 42213.074 ms

Regards,
Vitaliy



Re: effective_io_concurrency on EBS/gp2

From: Jeff Janes
Date:
On Wed, Jan 31, 2018 at 4:03 AM, Vitaliy Garnashevich <vgarnashevich@gmail.com> wrote:

The results look really confusing to me in two ways. The first one is that I've seen recommendations to set effective_io_concurrency=256 (or more) on EBS.

I would not expect this to make much of a difference on a table which is perfectly correlated with the index.  You would have to create an accounts table which is randomly ordered to have a meaningful benchmark of the eic parameter.
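
A minimal SQL sketch of building such a randomly ordered copy (the new table name is made up; ORDER BY random() is the point):

-- copy pgbench_accounts with rows written to the heap in random order,
-- so that index order no longer matches heap order
CREATE TABLE pgbench_accounts_random AS
  SELECT * FROM pgbench_accounts ORDER BY random();
ALTER TABLE pgbench_accounts_random ADD PRIMARY KEY (aid);
ANALYZE pgbench_accounts_random;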

I don't know why the default for eic is 1.  It seems like that just turns on the eic mechanism, without any hope of benefiting from it.

Cheers,

Jeff

Re: effective_io_concurrency on EBS/gp2

From: Claudio Freire
Date:
On Wed, Jan 31, 2018 at 11:21 PM, hzzhangjiazhi
<hzzhangjiazhi@corp.netease.com> wrote:
> HI
>
> I think this parameter will be useful when the storage uses RAID
> striping; otherwise, turning up this parameter is meaningless when there
> is only one device.

Not at all. Especially on EBS, where keeping a relatively full queue
is necessary to get max throughput out of the drive.

Problem is, if you're scanning a highly correlated index, the
mechanism is counterproductive. I had worked on some POC patches for
correcting that, I guess I could work something out, but it's
low-priority for me. Especially since it's actually a kernel "bug" (or
shortcoming), that could be fixed in the kernel rather than worked
around by postgres.


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
I did some more tests. I made an SQL dump of the table, used the head/tail 
commands to cut out the data part, used the shuf command to shuffle the 
rows, and then joined the pieces back together and restored the table into 
the DB.

Before:
select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;
{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}

effective_io_concurrency=0 Execution time: 1455.336 ms
effective_io_concurrency=1 Execution time: 8365.070 ms
effective_io_concurrency=2 Execution time: 4791.961 ms
effective_io_concurrency=4 Execution time: 4113.713 ms
effective_io_concurrency=8 Execution time: 1584.862 ms
effective_io_concurrency=16 Execution time: 1533.096 ms
effective_io_concurrency=8 Execution time: 1494.494 ms
effective_io_concurrency=4 Execution time: 3235.892 ms
effective_io_concurrency=2 Execution time: 4624.334 ms
effective_io_concurrency=1 Execution time: 7831.310 ms
effective_io_concurrency=0 Execution time: 1422.203 ms

After:
select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;

{6861090,18316007,2361004,11880097,5079470,9859942,13776329,12687163,3793362,18312052,15912971,9928864,10179242,9307499,2737986,13911147,5337329,12582498,3019085,4631617}

effective_io_concurrency=0 Execution time: 71321.723 ms
effective_io_concurrency=1 Execution time: 180230.742 ms
effective_io_concurrency=2 Execution time: 98635.566 ms
effective_io_concurrency=4 Execution time: 91464.375 ms
effective_io_concurrency=8 Execution time: 91048.939 ms
effective_io_concurrency=16 Execution time: 97682.475 ms
effective_io_concurrency=8 Execution time: 91262.404 ms
effective_io_concurrency=4 Execution time: 90945.560 ms
effective_io_concurrency=2 Execution time: 97019.504 ms
effective_io_concurrency=1 Execution time: 180331.474 ms
effective_io_concurrency=0 Execution time: 71469.484 ms

The numbers are not directly comparable with the previous tests, because 
this time I used scale factor 200.

Regards,
Vitaliy



Attachments

Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
> Problem is, if you're scanning a highly correlated index, the
> mechanism is counterproductive.

> I would not expect this to make much of a difference on a table which 
> is perfectly correlated with the index.  You would have to create an 
> accounts table which is randomly ordered to have a meaningful 
> benchmark of the eic parameter.

If I read the postgres source code correctly, the pages are sorted in 
tbm_begin_iterate() before being iterated, so I don't think index 
correlation should matter. The tests on shuffled records show the same 
trend in execution time for different eic values.

I did some more tests, this time on DigitalOcean/SSD. I also tried 
different kernel versions (3.13 and 4.4). I've run each test several times.

Ubuntu 16.04.3 LTS
Linux ubuntu-s-2vcpu-4gb-ams3-01 4.4.0-112-generic #135-Ubuntu SMP Fri 
Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;
                       array_agg
------------------------------------------------------
  {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
(1 row)

effective_io_concurrency=0 Execution time: 3910.770 ms
effective_io_concurrency=1 Execution time: 10754.483 ms
effective_io_concurrency=2 Execution time: 5347.845 ms
effective_io_concurrency=4 Execution time: 5737.166 ms
effective_io_concurrency=8 Execution time: 4904.962 ms
effective_io_concurrency=16 Execution time: 4947.941 ms
effective_io_concurrency=8 Execution time: 4737.117 ms
effective_io_concurrency=4 Execution time: 4749.065 ms
effective_io_concurrency=2 Execution time: 5031.390 ms
effective_io_concurrency=1 Execution time: 10117.927 ms
effective_io_concurrency=0 Execution time: 3769.260 ms

select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;
array_agg

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 {14845391,12121312,18579380,9075771,7602183,762831,8485877,1035607,4451695,4686093,1925254,3462677,9634221,14144638,17894662,8247722,17996891,14842493,13832379,2052647}
(1 row)

effective_io_concurrency=0 Execution time: 6801.229 ms
effective_io_concurrency=1 Execution time: 14217.719 ms
effective_io_concurrency=2 Execution time: 9126.216 ms
effective_io_concurrency=4 Execution time: 8797.717 ms
effective_io_concurrency=8 Execution time: 8759.317 ms
effective_io_concurrency=16 Execution time: 8431.835 ms
effective_io_concurrency=8 Execution time: 9387.119 ms
effective_io_concurrency=4 Execution time: 9064.808 ms
effective_io_concurrency=2 Execution time: 9359.062 ms
effective_io_concurrency=1 Execution time: 16639.386 ms
effective_io_concurrency=0 Execution time: 6560.935 ms


Ubuntu 14.04.5 LTS
Linux ubuntu-s-2vcpu-4gb-ams3-02 3.13.0-139-generic #188-Ubuntu SMP Tue 
Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit

select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;
                       array_agg
------------------------------------------------------
  {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
(1 row)

effective_io_concurrency=0 Execution time: 3760.865 ms
effective_io_concurrency=1 Execution time: 11092.846 ms
effective_io_concurrency=2 Execution time: 4933.662 ms
effective_io_concurrency=4 Execution time: 4733.713 ms
effective_io_concurrency=8 Execution time: 4860.886 ms
effective_io_concurrency=16 Execution time: 5063.696 ms
effective_io_concurrency=8 Execution time: 4670.155 ms
effective_io_concurrency=4 Execution time: 5049.901 ms
effective_io_concurrency=2 Execution time: 4785.219 ms
effective_io_concurrency=1 Execution time: 11106.143 ms
effective_io_concurrency=0 Execution time: 3779.058 ms

select array_agg(aid) from (select aid from pgbench_accounts order by 
ctid limit 20)_;
array_agg

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 {8089611,788082,3477731,10034640,9256860,15432349,2412452,10087114,10386959,7199759,17253672,7798185,160908,1960920,13287370,14970792,18578221,13892448,3532901,3560583}
(1 row)

effective_io_concurrency=0 Execution time: 6243.600 ms
effective_io_concurrency=1 Execution time: 14613.348 ms
effective_io_concurrency=2 Execution time: 8250.552 ms
effective_io_concurrency=4 Execution time: 8286.333 ms
effective_io_concurrency=8 Execution time: 8167.817 ms
effective_io_concurrency=16 Execution time: 8193.186 ms
effective_io_concurrency=8 Execution time: 8206.614 ms
effective_io_concurrency=4 Execution time: 8375.153 ms
effective_io_concurrency=2 Execution time: 8354.106 ms
effective_io_concurrency=1 Execution time: 14139.712 ms
effective_io_concurrency=0 Execution time: 6409.229 ms


Looks like this behavior is not caused by, and does not depend on:
- variable performance in the cloud
- order of rows in the table
- whether the disk is EBS (backed by SSD or HDD), or ordinary SSD
- kernel version

Does this mean that the default setting for eic on Linux is just 
inadequate for how modern kernels behave? Or am I missing something else 
in the tests?

Regards,
Vitaliy

Attachments

Re: effective_io_concurrency on EBS/gp2

From: Claudio Freire
Date:
On Sat, Feb 3, 2018 at 8:05 PM, Vitaliy Garnashevich
<vgarnashevich@gmail.com> wrote:
> Looks like this behavior is not caused by, and does not depend on:
> - variable performance in the cloud
> - order of rows in the table
> - whether the disk is EBS (backed by SSD or HDD), or ordinary SSD
> - kernel version
>
> Does this mean that the default setting for eic on Linux is just inadequate
> for how the modern kernels behave? Or am I missing something else in the
> tests?
>
> Regards,
> Vitaliy

I have analyzed this issue quite extensively in the past, and I can
say with high confidence that your analysis on point 2 is most
likely wrong.

Now, I don't have all the information to make that a categorical
assertion; you might have a point, but I believe you're
misinterpreting the data.

I mean, that the issue is indeed affected by the order of rows in the
table. Random heap access patterns result in sparse bitmap heap scans,
whereas less random heap access patterns result in denser bitmap heap
scans. Dense scans have large portions of contiguous fetches, a
pattern that is quite adversely affected by the current prefetch
mechanism in linux.

This analysis does point to the fact that I should probably revisit
this issue. There's a rather simple workaround for this, pg should
just avoid issuing prefetch orders for sequential block patterns,
since those are already much better handled by the kernel itself.
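
One hedged way to check which regime a given case falls into is the planner's own correlation statistic (values near 1.0 or -1.0 mean heap order matches the column's order, values near 0 mean essentially random; the table/column names below are just the pgbench example):

-- correlation between heap order and column order, as gathered by ANALYZE
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'pgbench_accounts'
  AND attname = 'aid';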


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
> I mean, that the issue is indeed affected by the order of rows in the
> table. Random heap access patterns result in sparse bitmap heap scans,
> whereas less random heap access patterns result in denser bitmap heap
> scans. Dense scans have large portions of contiguous fetches, a
> pattern that is quite adversely affected by the current prefetch
> mechanism in linux.
>

Thanks for your input.

How can I test a sparse bitmap scan? Can you think of any SQL commands 
which would generate data and run such scans?

Would a bitmap scan over expression index ((aid%1000)=0) do a sparse 
bitmap scan?

Regards,
Vitaliy


Re: effective_io_concurrency on EBS/gp2

From: Claudio Freire
Date:
On Mon, Feb 5, 2018 at 8:26 AM, Vitaliy Garnashevich
<vgarnashevich@gmail.com> wrote:
>> I mean, that the issue is indeed affected by the order of rows in the
>> table. Random heap access patterns result in sparse bitmap heap scans,
>> whereas less random heap access patterns result in denser bitmap heap
>> scans. Dense scans have large portions of contiguous fetches, a
>> pattern that is quite adversely affected by the current prefetch
>> mechanism in linux.
>>
>
> Thanks for your input.
>
> How can I test a sparse bitmap scan? Can you think of any SQL commands which
> would generate data and run such scans?
>
> Would a bitmap scan over expression index ((aid%1000)=0) do a sparse bitmap
> scan?

If you have a minimally correlated index (ie: totally random order),
and suppose you have N tuples per page, you need to select less (much
less) than 1/Nth of the table.
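
As a rough way to estimate N for a given table (relpages/reltuples are only the estimates maintained by VACUUM/ANALYZE, so treat this as approximate):

-- approximate tuples per page; for a sparse bitmap heap scan,
-- select well under (1 / tuples_per_page) of the rows
SELECT relname,
       reltuples,
       relpages,
       reltuples / nullif(relpages, 0) AS tuples_per_page
FROM pg_class
WHERE relname = 'pgbench_accounts';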


Re: effective_io_concurrency on EBS/gp2

From: Justin Pryzby
Date:
On Wed, Jan 31, 2018 at 04:34:18PM -0300, Claudio Freire wrote:
> In my experience playing with prefetch, e_i_c>0 interferes with kernel
> read-ahead. What you've got there would make sense if what postgres
> thinks will be random I/O ends up being sequential. With e_i_c=0, the
> kernel will optimize the hell out of it, because it's a predictable
> pattern. But with e_i_c=1, the kernel's optimization gets disabled but
> postgres isn't reading much ahead, so you get the worst possible case.

On Thu, Feb 01, 2018 at 03:39:07PM -0300, Claudio Freire wrote:
> Problem is, if you're scanning a highly correlated index, the
> mechanism is counterproductive. I had worked on some POC patches for
> correcting that, I guess I could work something out, but it's
> low-priority for me. Especially since it's actually a kernel "bug" (or
> shortcoming), that could be fixed in the kernel rather than worked
> around by postgres.

On Sun, Feb 04, 2018 at 11:27:25PM -0300, Claudio Freire wrote:
> ... Dense scans have large portions of contiguous fetches, a pattern that is
> quite adversely affected by the current prefetch mechanism in linux.
> 
> ... There's a rather simple workaround for this, pg should just avoid issuing
> prefetch orders for sequential block patterns, since those are already much
> better handled by the kernel itself.

Thinking out loud... if prefetch were a separate process, I imagine this
wouldn't be an issue; is it possible the parallel worker code could take on
responsibility for prefetching?

Justin


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
>> Would a bitmap scan over expression index ((aid%1000)=0) do a sparse bitmap
>> scan?
> If you have a minimally correlated index (ie: totally random order),
> and suppose you have N tuples per page, you need to select less (much
> less) than 1/Nth of the table.
>

I've done a test with a sparse bitmap scan. The positive effect of 
effective_io_concurrency is visible in that case.

In the test, I create a table with 100k rows, 10 tuples per page. Then I 
create an index on the expression ((id%100)=0), and query the table using 
a bitmap scan over this index. Before each query, I also restart the 
postgresql service and clear the OS caches, so that all reads happen from 
disk.

create table test as select generate_series(1, 100000) id, repeat('x', 750) val;
create index sparse_idx on test (((id%100)=0));

explain (analyze, buffers) select * from test where ((id%100)=0) and val != '';

effective_io_concurrency=0 Execution time: 3258.220 ms
effective_io_concurrency=1 Execution time: 3345.689 ms
effective_io_concurrency=2 Execution time: 2516.558 ms
effective_io_concurrency=4 Execution time: 1816.150 ms
effective_io_concurrency=8 Execution time: 1083.018 ms
effective_io_concurrency=16 Execution time: 2349.064 ms
effective_io_concurrency=32 Execution time: 771.776 ms
effective_io_concurrency=64 Execution time: 1536.146 ms
effective_io_concurrency=128 Execution time: 560.471 ms
effective_io_concurrency=256 Execution time: 404.113 ms
effective_io_concurrency=512 Execution time: 318.271 ms
effective_io_concurrency=1000 Execution time: 411.978 ms

effective_io_concurrency=0 Execution time: 3655.124 ms
effective_io_concurrency=1 Execution time: 3337.614 ms
effective_io_concurrency=2 Execution time: 2914.609 ms
effective_io_concurrency=4 Execution time: 2133.285 ms
effective_io_concurrency=8 Execution time: 1326.740 ms
effective_io_concurrency=16 Execution time: 1765.848 ms
effective_io_concurrency=32 Execution time: 583.176 ms
effective_io_concurrency=64 Execution time: 541.667 ms
effective_io_concurrency=128 Execution time: 362.409 ms
effective_io_concurrency=256 Execution time: 446.026 ms
effective_io_concurrency=512 Execution time: 416.469 ms
effective_io_concurrency=1000 Execution time: 301.295 ms

effective_io_concurrency=0 Execution time: 4611.075 ms
effective_io_concurrency=1 Execution time: 3583.286 ms
effective_io_concurrency=2 Execution time: 2404.817 ms
effective_io_concurrency=4 Execution time: 1602.766 ms
effective_io_concurrency=8 Execution time: 1811.409 ms
effective_io_concurrency=16 Execution time: 1688.752 ms
effective_io_concurrency=32 Execution time: 613.454 ms
effective_io_concurrency=64 Execution time: 686.325 ms
effective_io_concurrency=128 Execution time: 425.590 ms
effective_io_concurrency=256 Execution time: 1394.318 ms
effective_io_concurrency=512 Execution time: 1579.458 ms
effective_io_concurrency=1000 Execution time: 414.184 ms

Regards,
Vitaliy



Attachments

Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
Anyway, there are still some strange things happening when 
effective_io_concurrency is non-zero.

I've found that the real reason for the poor Bitmap Scan performance was 
related not only to the sparsity of the rows/pages to be rechecked, but 
also to the starting ID from which the scan begins:

create table test as select generate_series(1, 100000) id, repeat('x', 90) val;
alter table test add constraint test_pkey primary key (id);

select count(*) tup_per_page from test group by (ctid::text::point)[0] order by count(*) desc limit 5;
  tup_per_page
--------------
            65
            65
            65
            65
            65
(5 rows)

select * from test where id between X and 100000 and val != ''


effective_io_concurrency=0; id between 0 and 100000; Execution time: 
524.671 ms
effective_io_concurrency=1; id between 0 and 100000; Execution time: 
420.000 ms
effective_io_concurrency=0; id between 0 and 100000; Execution time: 
441.813 ms
effective_io_concurrency=1; id between 0 and 100000; Execution time: 
498.591 ms
effective_io_concurrency=0; id between 0 and 100000; Execution time: 
662.838 ms
effective_io_concurrency=1; id between 0 and 100000; Execution time: 
431.503 ms

effective_io_concurrency=0; id between 10 and 100000; Execution time: 
1210.436 ms
effective_io_concurrency=1; id between 10 and 100000; Execution time: 
1056.646 ms
effective_io_concurrency=0; id between 10 and 100000; Execution time: 
578.102 ms
effective_io_concurrency=1; id between 10 and 100000; Execution time: 
396.996 ms
effective_io_concurrency=0; id between 10 and 100000; Execution time: 
598.842 ms
effective_io_concurrency=1; id between 10 and 100000; Execution time: 
555.258 ms

effective_io_concurrency=0; id between 50 and 100000; Execution time: 
4017.999 ms
effective_io_concurrency=1; id between 50 and 100000; Execution time: 
383.694 ms
effective_io_concurrency=0; id between 50 and 100000; Execution time: 
535.686 ms
effective_io_concurrency=1; id between 50 and 100000; Execution time: 
570.221 ms
effective_io_concurrency=0; id between 50 and 100000; Execution time: 
852.960 ms
effective_io_concurrency=1; id between 50 and 100000; Execution time: 
656.097 ms

effective_io_concurrency=0; id between 64 and 100000; Execution time: 
385.628 ms
effective_io_concurrency=1; id between 64 and 100000; Execution time: 
712.261 ms
effective_io_concurrency=0; id between 64 and 100000; Execution time: 
1610.618 ms
effective_io_concurrency=1; id between 64 and 100000; Execution time: 
438.211 ms
effective_io_concurrency=0; id between 64 and 100000; Execution time: 
393.341 ms
effective_io_concurrency=1; id between 64 and 100000; Execution time: 
744.768 ms

effective_io_concurrency=0; id between 65 and 100000; Execution time: 
846.759 ms
effective_io_concurrency=1; id between 65 and 100000; Execution time: 
514.668 ms
effective_io_concurrency=0; id between 65 and 100000; Execution time: 
536.640 ms
effective_io_concurrency=1; id between 65 and 100000; Execution time: 
461.966 ms
effective_io_concurrency=0; id between 65 and 100000; Execution time: 
1810.677 ms
effective_io_concurrency=1; id between 65 and 100000; Execution time: 
545.359 ms

effective_io_concurrency=0; id between 66 and 100000; Execution time: 
663.920 ms
effective_io_concurrency=1; id between 66 and 100000; Execution time: 
5571.118 ms
effective_io_concurrency=0; id between 66 and 100000; Execution time: 
683.056 ms
effective_io_concurrency=1; id between 66 and 100000; Execution time: 
5883.359 ms
effective_io_concurrency=0; id between 66 and 100000; Execution time: 
472.809 ms
effective_io_concurrency=1; id between 66 and 100000; Execution time: 
5461.794 ms

effective_io_concurrency=0; id between 100 and 100000; Execution time: 
647.292 ms
effective_io_concurrency=1; id between 100 and 100000; Execution time: 
7810.344 ms
effective_io_concurrency=0; id between 100 and 100000; Execution time: 
773.750 ms
effective_io_concurrency=1; id between 100 and 100000; Execution time: 
5637.014 ms
effective_io_concurrency=0; id between 100 and 100000; Execution time: 
726.111 ms
effective_io_concurrency=1; id between 100 and 100000; Execution time: 
7740.607 ms

effective_io_concurrency=0; id between 200 and 100000; Execution time: 
549.281 ms
effective_io_concurrency=1; id between 200 and 100000; Execution time: 
5032.522 ms
effective_io_concurrency=0; id between 200 and 100000; Execution time: 
692.631 ms
effective_io_concurrency=1; id between 200 and 100000; Execution time: 
5138.669 ms
effective_io_concurrency=0; id between 200 and 100000; Execution time: 
793.342 ms
effective_io_concurrency=1; id between 200 and 100000; Execution time: 
5375.822 ms

effective_io_concurrency=0; id between 1000 and 100000; Execution time: 
596.754 ms
effective_io_concurrency=1; id between 1000 and 100000; Execution time: 
5278.683 ms
effective_io_concurrency=0; id between 1000 and 100000; Execution time: 
638.706 ms
effective_io_concurrency=1; id between 1000 and 100000; Execution time: 
5404.002 ms
effective_io_concurrency=0; id between 1000 and 100000; Execution time: 
730.667 ms
effective_io_concurrency=1; id between 1000 and 100000; Execution time: 
5761.312 ms

effective_io_concurrency=0; id between 2000 and 100000; Execution time: 
656.086 ms
effective_io_concurrency=1; id between 2000 and 100000; Execution time: 
6156.003 ms
effective_io_concurrency=0; id between 2000 and 100000; Execution time: 
768.288 ms
effective_io_concurrency=1; id between 2000 and 100000; Execution time: 
4917.423 ms
effective_io_concurrency=0; id between 2000 and 100000; Execution time: 
500.931 ms
effective_io_concurrency=1; id between 2000 and 100000; Execution time: 
5659.255 ms

effective_io_concurrency=0; id between 5000 and 100000; Execution time: 
755.440 ms
effective_io_concurrency=1; id between 5000 and 100000; Execution time: 
5141.671 ms
effective_io_concurrency=0; id between 5000 and 100000; Execution time: 
542.174 ms
effective_io_concurrency=1; id between 5000 and 100000; Execution time: 
6074.953 ms
effective_io_concurrency=0; id between 5000 and 100000; Execution time: 
570.615 ms
effective_io_concurrency=1; id between 5000 and 100000; Execution time: 
6922.402 ms

effective_io_concurrency=0; id between 10000 and 100000; Execution time: 
469.544 ms
effective_io_concurrency=1; id between 10000 and 100000; Execution time: 
6083.361 ms
effective_io_concurrency=0; id between 10000 and 100000; Execution time: 
706.078 ms
effective_io_concurrency=1; id between 10000 and 100000; Execution time: 
4069.171 ms
effective_io_concurrency=0; id between 10000 and 100000; Execution time: 
526.792 ms
effective_io_concurrency=1; id between 10000 and 100000; Execution time: 
5289.984 ms

effective_io_concurrency=0; id between 20000 and 100000; Execution time: 
435.503 ms
effective_io_concurrency=1; id between 20000 and 100000; Execution time: 
5460.730 ms
effective_io_concurrency=0; id between 20000 and 100000; Execution time: 
454.323 ms
effective_io_concurrency=1; id between 20000 and 100000; Execution time: 
4163.030 ms
effective_io_concurrency=0; id between 20000 and 100000; Execution time: 
674.382 ms
effective_io_concurrency=1; id between 20000 and 100000; Execution time: 
3703.045 ms

effective_io_concurrency=0; id between 50000 and 100000; Execution time: 
226.094 ms
effective_io_concurrency=1; id between 50000 and 100000; Execution time: 
2584.720 ms
effective_io_concurrency=0; id between 50000 and 100000; Execution time: 
1431.037 ms
effective_io_concurrency=1; id between 50000 and 100000; Execution time: 
2651.834 ms
effective_io_concurrency=0; id between 50000 and 100000; Execution time: 
345.194 ms
effective_io_concurrency=1; id between 50000 and 100000; Execution time: 
2328.844 ms

effective_io_concurrency=0; id between 75000 and 100000; Execution time: 
120.121 ms
effective_io_concurrency=1; id between 75000 and 100000; Execution time: 
2125.927 ms
effective_io_concurrency=0; id between 75000 and 100000; Execution time: 
115.865 ms
effective_io_concurrency=1; id between 75000 and 100000; Execution time: 
1616.534 ms
effective_io_concurrency=0; id between 75000 and 100000; Execution time: 
138.005 ms
effective_io_concurrency=1; id between 75000 and 100000; Execution time: 
1651.880 ms

effective_io_concurrency=0; id between 90000 and 100000; Execution time: 
66.322 ms
effective_io_concurrency=1; id between 90000 and 100000; Execution time: 
443.317 ms
effective_io_concurrency=0; id between 90000 and 100000; Execution time: 
53.138 ms
effective_io_concurrency=1; id between 90000 and 100000; Execution time: 
566.945 ms
effective_io_concurrency=0; id between 90000 and 100000; Execution time: 
57.441 ms
effective_io_concurrency=1; id between 90000 and 100000; Execution time: 
525.749 ms

For some reason, with dense bitmap scans, when the Bitmap Heap Scan / 
Recheck does not start from the first page of the table, 
effective_io_concurrency=0 consistently and significantly outperforms 
effective_io_concurrency=1.
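
For what it's worth, the heap position at which a given lower bound starts can be checked directly (this assumes the same 65-tuples-per-page layout as above, so e.g. id=66 should land at the start of the second heap page):

-- first heap tuple (page, line pointer) satisfying the lower bound
select ctid, id from test where id >= 66 order by ctid limit 1;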

Regards,
Vitaliy


Attachments

Re: effective_io_concurrency on EBS/gp2

From: Rick Otten
Date:


On Thu, Feb 8, 2018 at 11:40 AM, Vitaliy Garnashevich <vgarnashevich@gmail.com> wrote:
Anyway, there are still some strange things happening when effective_io_concurrency is non-zero.

...
 
Vitaliy


I was researching whether I could optimize a concatenated lvm2 volume when I have disks of different speeds (concatenated - not striped - and I think I can if I concatenate them in the right order - still testing on that front), when I came across this article from a few years ago:

In the article he talks about the performance of parallel I/O on different file systems.

Since I am already running XFS, that led me to this tunable:

Which brought me back to this discussion about effective_io_concurrency from a couple of weeks ago.  I noticed that the recent round of tests being discussed never mentioned the file system used.  Was it XFS?  Does changing the agcount change the behaviour?


Re: effective_io_concurrency on EBS/gp2

From: Vitaliy Garnashevich
Date:
> I noticed that the recent round of tests being discussed never 
> mentioned the file system used.  Was it XFS?  Does changing the 
> agcount change the behaviour?

It was ext4.

Regards,
Vitaliy