Thread: 7 hrs for a pg_restore?


7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
I spent a whopping seven hours restoring a database late Fri nite for a
client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to 8.3. I then
did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.
I'll grant you that it's a 5.1G tar file, but 7 hours seems excessive.

Is that kind of timeframe 'abnormal' or am I just impatient? :) If the former,
I can provide whatever you need, just ask for it.
Thanks!
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

I've been dying to hit something since I pressed "1" to join your conference.

Re: 7 hrs for a pg_restore?

From: "Joshua D. Drake"
Date:

On Tue, 19 Feb 2008 13:03:58 -0500
Douglas J Hunley <doug@hunley.homeip.net> wrote:

> I spent a whopping seven hours restoring a database late Fri nite for
> a client. We stopped the application, ran pg_dump -v -Ft -b -o $db > 
> ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to
> 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it
> positively crawl. I'll grant you that it's a 5.1G tar file, but 7
> hours seems excessive. 
> 
> Is that kind of timeframe 'abnormal' or am I just impatient? :) If
> the former, I can provide whatever you need, just ask for it. 
> Thanks!

7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It
would be helpful if we knew what the machine was doing. Was it IO
bound? How much ram does it have? Is it just a single HD drive? What
are your settings for postgresql?

Joshua D. Drake


--
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL SPI Liaison | SPI Director |  PostgreSQL political pundit


Re: 7 hrs for a pg_restore?

From: Richard Huxton
Date:
Douglas J Hunley wrote:
> I spent a whopping seven hours restoring a database late Fri nite for a
> client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to 8.3. I then
> did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.
> I'll grant you that it's a 5.1G tar file, but 7 hours seems excessive.

Depends, both on the machine and the database.

What sort of disk i/o are you seeing, what's the cpu(s) doing, and
what's the restore taking so long over (since you have -v)?

Oh, and have you tweaked the configuration settings for the restore?
Lots of work_mem, turn fsync off, that sort of thing.
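
For example, a minimal restore-only sketch in postgresql.conf terms (values
are illustrative, not tuned for this box; fsync = off is only safe because a
failed restore can simply be rerun):

maintenance_work_mem = 512MB    # bigger in-memory sorts for CREATE INDEX
checkpoint_segments = 30        # fewer forced checkpoints during the bulk load
fsync = off                     # restore only -- turn back on before production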

--
   Richard Huxton
   Archonet Ltd

Re: 7 hrs for a pg_restore?

From: Dave Cramer
Date:
On 19-Feb-08, at 1:12 PM, Joshua D. Drake wrote:

> On Tue, 19 Feb 2008 13:03:58 -0500
> Douglas J Hunley <doug@hunley.homeip.net> wrote:
>
>> I spent a whopping seven hours restoring a database late Fri nite for
>> a client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
>> ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to
>> 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it
>> positively crawl. I'll grant you that it's a 5.1G tar file, but 7
>> hours seems excessive.
>>
>> Is that kind of timeframe 'abnormal' or am I just impatient? :) If
>> the former, I can provide whatever you need, just ask for it.
>> Thanks!
>
> 7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It
> would be helpful if we knew what the machine was doing. Was it IO
> bound? How much ram does it have? Is it just a single HD drive? What
> are your settings for postgresql?
>
Yeah, I did a 9G restore in about 20 min. Did you optimize the new one?


Re: 7 hrs for a pg_restore?

From: Tom Lane
Date:
Richard Huxton <dev@archonet.com> writes:
> Douglas J Hunley wrote:
>> I spent a whopping seven hours restoring a database late Fri nite for a

> Oh, and have you tweaked the configuration settings for the restore?
> Lots of work_mem, turn fsync off, that sort of thing.

maintenance_work_mem, to be more specific.  If that's too small it will
definitely cripple restore speed.  I'm not sure fsync would make much
difference, but checkpoint_segments would.  See
http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP

Also: why did you choose -o ... was there a real need to?  I can see
that being pretty expensive.

            regards, tom lane

Re: 7 hrs for a pg_restore?

From: Jeff Davis
Date:
On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:
> I spent a whopping seven hours restoring a database late Fri nite for a
> client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to 8.3. I then
> did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.
> I'll grant you that it's a 5.1G tar file, but 7 hours seems excessive.
>

Are there lots of indexes on localized text attributes? If you have a
big table with localized text (e.g. en_US.UTF-8), it can take a long
time to build the indexes. If the file is 5GB compressed, I wouldn't be
surprised if it took a long time to restore.

Keep in mind, if you have several GB worth of indexes, they take up
basically no space in the logical dump (just the "CREATE INDEX" command,
and that's it). But they can take a lot of processor time to build up
again, especially with localized text.
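
As a rough gauge of how much index volume a restore will have to rebuild, you
can total it up on the live cluster first; a sketch, assuming 8.1+ for
pg_relation_size():

SELECT pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint)
  FROM pg_index;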

Regards,
    Jeff Davis


Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 13:12:54 Joshua D. Drake wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for
> > a client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to
> > 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it
> > positively crawl. I'll grant you that it's a 5.1G tar file, but 7
> > hours seems excessive.
> >
> > Is that kind of timeframe 'abnormal' or am I just impatient? :) If
> > the former, I can provide whatever you need, just ask for it.
> > Thanks!
>
> 7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It
> would be helpful if we knew what the machine was doing. Was it IO
> bound? How much ram does it have? Is it just a single HD drive? What
> are your settings for postgresql?

It wasn't doing anything but the restore. Dedicated DB box.

postgresql.conf attached

system specs:
Intel(R) Xeon(TM) CPU 3.40GHz (dual, so shows 4 in Linux)

MemTotal:      8245524 kB

The db resides on an HP Modular Storage Array 500 G2: 4x72.8GB 15k RPM disks in
1 RAID 6 logical volume, on a Compaq Smart Array 6404 controller.

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

We do nothing *FOR* users. We do things *TO* users. It's a fine distinction,
but an important one all the same.

Attachments

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 13:13:37 Richard Huxton wrote:
> Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to 8.3. I
> > then did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively
> > crawl. I'll grant you that it's a 5.1G tar file, but 7 hours seems
> > excessive.
>
> Depends, both on the machine and the database.
>
> What sort of disk i/o are you seeing, what's the cpu(s) doing, and
> what's the restore taking so long over (since you have -v)?

The I/O didn't seem abnormal to me for this customer, so I didn't record it.
It wasn't excessive, though. The restore took the longest on a couple of our
highest-volume tables; index creation took by far the longest part of the
entire process.

>
> Oh, and have you tweaked the configuration settings for the restore?
> Lots of work_mem, turn fsync off, that sort of thing.

I didn't tweak anything for the restore specifically. I used the postgresql.conf
as attached in another reply.


--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

One item could not be deleted because it was missing.  -- Mac System 7.0b1
error message

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 13:22:58 Tom Lane wrote:
> Richard Huxton <dev@archonet.com> writes:
> > Douglas J Hunley wrote:
> >> I spent a whopping seven hours restoring a database late Fri nite for a
> >
> > Oh, and have you tweaked the configuration settings for the restore?
> > Lots of work_mem, turn fsync off, that sort of thing.
>
> maintenance_work_mem, to be more specific.  If that's too small it will
> definitely cripple restore speed.  I'm not sure fsync would make much
> difference, but checkpoint_segments would.  See
> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP

From the postgresql.conf I posted:
~ $ grep maint postgresql.conf
maintenance_work_mem = 256MB            # min 1MB

Thanks for the pointer to the URL. I've made note of the recommendations
therein for next time.

>
> Also: why did you choose -o ... was there a real need to?  I can see
> that being pretty expensive.
>

I was under the impression our application made reference to OIDs. I'm now
doubting that heavily <g> and am seeking confirmation.

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

I've got trouble with the wife again - she came into the bar looking for me
and I asked her for her number.

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 13:23:23 Jeff Davis wrote:
> On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre_8.3.tar on the 8.2.x db, and then upgraded the software to 8.3. I
> > then did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively
> > crawl. I'll grant you that it's a 5.1G tar file, but 7 hours seems
> > excessive.
>
> Are there lots of indexes on localized text attributes? If you have a
> big table with localized text (e.g. en_US.UTF-8), it can take a long
> time to build the indexes. If the file is 5GB compressed, I wouldn't be
> surprised if it took a long time to restore.
>
> Keep in mind, if you have several GB worth of indexes, they take up
> basically no space in the logical dump (just the "CREATE INDEX" command,
> and that's it). But they can take a lot of processor time to build up
> again, especially with localized text.
>

That could be a factor here. It is a UNICODE db, and we do a lot of text-based
indexing for the application.

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

Be courteous to everyone, friendly to no one.

Re: 7 hrs for a pg_restore?

From: Dave Cramer
Date:
shared_buffers is *way* too small, as is effective_cache_size.
Set them to 2GB and 6GB respectively.
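
In postgresql.conf terms that would be something like (a sketch; 8.2+ accepts
unit suffixes, and shared_buffers requires a server restart to change):

shared_buffers = 2GB            # far larger than the posted config
effective_cache_size = 6GB      # planner's estimate of OS cache + shared buffers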

Dave


Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> shared_buffers is *way* too small, as is effective_cache_size.
> Set them to 2GB and 6GB respectively.
>
> Dave

Pardon my ignorance, but is this in the context of a restore only, or 'in
general'?

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

Don't let Kirk show you what he affectionately calls the "Captain's Log"

Re: 7 hrs for a pg_restore?

From: Jeff Davis
Date:
On Tue, 2008-02-19 at 14:20 -0500, Douglas J Hunley wrote:
> > Keep in mind, if you have several GB worth of indexes, they take up
> > basically no space in the logical dump (just the "CREATE INDEX" command,
> > and that's it). But they can take a lot of processor time to build up
> > again, especially with localized text.
> >
>
> That could be a factor here. It is a UNICODE db, and we do a lot of
> text-based indexing for the application.

I assume you're _not_ talking about full text indexes here.

These factors:
* unicode (i.e. non-C locale)
* low I/O utilization
* indexes taking up most of the 7 hours

mean that we've probably found the problem.

Localized text uses sorting rules that are not the same as binary sort
order; the comparisons take much more CPU power, and sorts are already
processor-intensive operations.

Unfortunately PostgreSQL does not parallelize this sorting/indexing at
all, so you're only using one core.

I'd recommend restoring everything except the indexes, and then you can
restore the indexes concurrently in several different sessions so that
it uses all of your cores. Build your primary key/unique indexes first,
and then after those are built you can start using the database while
the rest of the indexes are building (use "CREATE INDEX CONCURRENTLY").
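
A sketch of that split using pg_restore's TOC list support (-l / -L); the file
names and the grep pattern are illustrative, and constraint-backed unique
indexes show up as CONSTRAINT entries rather than INDEX entries:

pg_restore -l pre_8.3.tar > full.toc
grep -v ' INDEX ' full.toc > data.toc    # everything except plain index builds
grep ' INDEX ' full.toc > indexes.toc    # just the index builds
pg_restore -v -d $db -L data.toc pre_8.3.tar
# then split indexes.toc per table and feed the pieces to concurrent sessions:
pg_restore -d $db -L indexes.toc pre_8.3.tar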

Regards,
    Jeff Davis


Re: 7 hrs for a pg_restore?

From: Jeff Davis
Date:
On Tue, 2008-02-19 at 14:28 -0500, Dave Cramer wrote:
> shared_buffers is *way* too small, as is effective_cache_size.
> Set them to 2GB and 6GB respectively.

They are way too small, but I don't think that explains the index
creation time.

effective_cache_size is only used by the planner, and this problem is
not caused by a poorly chosen plan.

It's important to set shared_buffers higher as well, but he has so much
RAM compared with his dataset that he's certainly not going to disk. I
don't think this explains it either.

I think it's just the result of building a lot of indexes on localized
text using only one core at a time.

Regards,
    Jeff Davis


Re: 7 hrs for a pg_restore?

From: Jeff
Date:
On Feb 19, 2008, at 1:22 PM, Tom Lane wrote:

>
> maintenance_work_mem, to be more specific.  If that's too small it
> will
> definitely cripple restore speed.  I'm not sure fsync would make much
> difference, but checkpoint_segments would.  See
> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP
>

I wonder if it would be worthwhile if pg_restore could emit a warning
if maint_work_mem is "low" (start flamewar on what "low" is).

And as an addition to that - allow a cmd line arg to have pg_restore
bump it before doing its work?  On several occasions I was moving a
largish table and the COPY part went plenty fast, but when it hit
index creation it slowed down to a crawl due to low maint_work_mem..

--
Jeff Trout <jeff@jefftrout.com>
www.dellsmartexitin.com
www.stuarthamm.net

Re: 7 hrs for a pg_restore?

From: Dave Cramer
Date:
On 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:

> On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
>> shared_buffers is *way* too small, as is effective_cache_size.
>> Set them to 2GB and 6GB respectively.
>>
>> Dave
>
> Pardon my ignorance, but is this in the context of a restore only, or 'in
> general'?

This is the "generally accepted" starting point for a pg db for
production.



Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 15:16:42 Dave Cramer wrote:
> On 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:
> > On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> >> shared_buffers is *way* too small, as is effective_cache_size.
> >> Set them to 2GB and 6GB respectively.
> >>
> >> Dave
> >
> > Pardon my ignorance, but is this in the context of a restore only, or 'in
> > general'?
>
> This is the "generally accepted" starting point for a pg db for  
> production.

Fair enough. I have scheduled this change for the next outage.

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

"The internet has had no impact on my life whatsoever.com" - anon

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 15:07:30 Jeff wrote:
> On Feb 19, 2008, at 1:22 PM, Tom Lane wrote:
> > maintenance_work_mem, to be more specific.  If that's too small it
> > will
> > definitely cripple restore speed.  I'm not sure fsync would make much
> > difference, but checkpoint_segments would.  See
> > http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP
>
> I wonder if it would be worthwhile if pg_restore could emit a warning
> if maint_work_mem is "low" (start flamewar on what "low" is).
>
> And as an addition to that - allow a cmd line arg to have pg_restore
> bump it before doing its work?  On several occasions I was moving a
> largish table and the COPY part went plenty fast, but when it hit
> index creation it slowed down to a crawl due to low maint_work_mem..

fwiw, I +1 this

now that I have a (minor) understanding of what's going on, I'd love to do
something like:
pg_restore -WM $large_value <normal options>


--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

There are no dead students here. This week.

Re: 7 hrs for a pg_restore?

From: Erik Jones
Date:
On Feb 19, 2008, at 2:55 PM, Douglas J Hunley wrote:

> On Tuesday 19 February 2008 15:07:30 Jeff wrote:
>> On Feb 19, 2008, at 1:22 PM, Tom Lane wrote:
>>> maintenance_work_mem, to be more specific.  If that's too small it
>>> will
>>> definitely cripple restore speed.  I'm not sure fsync would make
>>> much
>>> difference, but checkpoint_segments would.  See
>>> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP
>>
>> I wonder if it would be worthwhile if pg_restore could emit a warning
>> if maint_work_mem is "low" (start flamewar on what "low" is).
>>
>> And as an addition to that - allow a cmd line arg to have pg_restore
>> bump it before doing its work?  On several occasions I was moving a
>> largish table and the COPY part went plenty fast, but when it hit
>> index creation it slowed down to a crawl due to low maint_work_mem..
>
> fwiw, I +1 this
>
> now that I have a (minor) understanding of what's going on, I'd
> love to do
> something like:
> pg_restore -WM $large_value <normal options>

pg_restore is a postgres client app that uses libpq to connect and,
thus, will pick up anything in your $PGOPTIONS env variable.  So,

PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore ....

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

Re: 7 hrs for a pg_restore?

From: Greg Smith
Date:
On Tue, 19 Feb 2008, Douglas J Hunley wrote:

> The db resides on an HP Modular Storage Array 500 G2: 4x72.8GB 15k RPM disks in
> 1 RAID 6 logical volume, on a Compaq Smart Array 6404 controller.

You might consider doing some simple disk tests on the array just to prove
it's working well.  Reports here suggest the HP/Compaq arrays have been
somewhat inconsistent in performance, and it would be helpful to know if
you've got a good or a bad setup.  Some hints here are at
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
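
As a quick first pass, a dd-based sequential test along those lines (the path
is an assumption; size the file at roughly twice RAM so the 8GB of cache can't
flatter the numbers):

time dd if=/dev/zero of=/path/on/array/ddtest bs=8k count=2000000 && sync
time dd if=/path/on/array/ddtest of=/dev/null bs=8k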

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: 7 hrs for a pg_restore?

From: "Pavan Deolasee"
Date:
On Feb 19, 2008 11:53 PM, Jeff Davis <pgsql@j-davis.com> wrote:
>
>
> Keep in mind, if you have several GB worth of indexes, they take up
> basically no space in the logical dump (just the "CREATE INDEX" command,
> and that's it). But they can take a lot of processor time to build up
> again, especially with localized text.
>
>

I think it would be interesting if we could build these indexes in parallel.
Each index build requires a seq scan on the table. If the table does
not fit in shared buffers, each index build would most likely result
in lots of IO.

One option would be to add this facility to the backend so that multiple
indexes can be built with a single seq scan of the table. In theory, it
should be possible, but might be tricky given the way index build works
(it calls respective ambuild method to build the index which internally
does the seq scan).

The other option is to make pg_restore multi-threaded/processed. The
synchronized_scans facility would then synchronize the multiple heap
scans. ISTM that if we can make pg_restore multi-processed, then
we can possibly add more parallelism to the restore process.

My two cents.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

Re: 7 hrs for a pg_restore?

From: ohp@pyrenet.fr
Date:
On Wed, 20 Feb 2008, Pavan Deolasee wrote:

> On Feb 19, 2008 11:53 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> >
> >
> > Keep in mind, if you have several GB worth of indexes, they take up
> > basically no space in the logical dump (just the "CREATE INDEX" command,
> > and that's it). But they can take a lot of processor time to build up
> > again, especially with localized text.
> >
> >
>
> I think it would be interesting if we could build these indexes in parallel.
> Each index build requires a seq scan on the table. If the table does
> not fit in shared buffers, each index build would most likely result
> in lots of IO.
>
> One option would be to add this facility to the backend so that multiple
> indexes can be built with a single seq scan of the table. In theory, it
> should be possible, but might be tricky given the way index build works
> (it calls respective ambuild method to build the index which internally
> does the seq scan).
>
> The other option is to make pg_restore multi-threaded/processed. The
> synchronized_scans facility would then synchronize the multiple heap
> scans. ISTM that if we can make pg_restore multi-processed, then
> we can possibly add more parallelism to the restore process.
>
> My two cents.
>
> Thanks,
> Pavan
>
>
That'd be great! Maybe an option for pg_restore to spawn AT MOST n
processes (1 per CPU).
My .02 Euro.
--
Olivier PRENANT                    Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges                +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                       +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:
> pg_restore is a postgres client app that uses libpq to connect and,  
> thus, will pick up anything in your $PGOPTIONS env variable.  So,
>
> PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore ....

now that's just plain cool

/me updates our wiki

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

Drugs may lead to nowhere, but at least it's the scenic route.

Re: 7 hrs for a pg_restore?

From: Douglas J Hunley
Date:
On Tuesday 19 February 2008 17:53:45 Greg Smith wrote:
> On Tue, 19 Feb 2008, Douglas J Hunley wrote:
> > The db resides on an HP Modular Storage Array 500 G2: 4x72.8GB 15k RPM
> > disks in 1 RAID 6 logical volume, on a Compaq Smart Array 6404 controller.
>
> You might consider doing some simple disk tests on the array just to prove
> it's working well.  Reports here suggest the HP/Compaq arrays have been
> somewhat inconsistent in performance, and it would be helpful to know if
> you've got a good or a bad setup.  Some hints here are at
> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm

Excellent! I'll look into doing this. Thanks!

--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net

Illiterate?  Write for help!

Re: 7 hrs for a pg_restore?

From: Gregory Stark
Date:
"Douglas J Hunley" <doug@hunley.homeip.net> writes:

> On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:
>> pg_restore is a postgres client app that uses libpq to connect and,  
>> thus, will pick up anything in your $PGOPTIONS env variable.  So,
>>
>> PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore ....
>
> now that's just plain cool
>
> /me updates our wiki

I would suggest leaving out the &&, which only obfuscates what's going on here.

PGOPTIONS=... pg_restore ...

would work just as well and be clearer about what's going on.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

Re: 7 hrs for a pg_restore?

From: Erik Jones
Date:
On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:

> "Douglas J Hunley" <doug@hunley.homeip.net> writes:
>
>> On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:
>>> pg_restore is a postgres client app that uses libpq to connect and,
>>> thus, will pick up anything in your $PGOPTIONS env variable.  So,
>>>
>>> PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore ....
>>
>> now that's just plain cool
>>
>> /me updates our wiki
>
> I would suggest leaving out the &&, which only obfuscates what's
> going on here.
>
> PGOPTIONS=... pg_restore ...
>
> would work just as well and be clearer about what's going on.

Right, that's just an unnecessary habit of mine.

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

Re: 7 hrs for a pg_restore?

From: Tom Lane
Date:
Erik Jones <erik@myemma.com> writes:
> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
>> I would suggest leaving out the &&, which only obfuscates what's
>> going on here.
>>
>> PGOPTIONS=... pg_restore ...
>>
>> would work just as well and be clearer about what's going on.

> Right, that's just an unnecessary habit of mine.

Isn't that habit outright wrong?  ISTM that with the && in there,
what you're doing is equivalent to

    PGOPTIONS=whatever
    pg_restore ...

This syntax will set PGOPTIONS for the remainder of the shell session,
causing it to also affect (say) a subsequent psql invocation.  Which is
exactly not what is wanted.

            regards, tom lane

Re: 7 hrs for a pg_restore?

From: Matthew
Date:
On Wed, 20 Feb 2008, Tom Lane wrote:
> Erik Jones <erik@myemma.com> writes:
>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
>>> I would suggest leaving out the &&, which only obfuscates what's
>>> going on here.
>>>
>>> PGOPTIONS=... pg_restore ...
>>>
>>> would work just as well and be clearer about what's going on.
>
>> Right, that's just an unnecessary habit of mine.
>
> Isn't that habit outright wrong?  ISTM that with the && in there,
> what you're doing is equivalent to
>
>     PGOPTIONS=whatever
>     pg_restore ...
>
> This syntax will set PGOPTIONS for the remainder of the shell session,
> causing it to also affect (say) a subsequent psql invocation.  Which is
> exactly not what is wanted.

It's even better than that. I don't see an "export" there, so it won't
take effect at all!
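
A quick sketch of the three behaviours in a Bourne-style shell:

PGOPTIONS="-c maintenance_work_mem=512MB" pg_restore ...    # passed to pg_restore only
PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore ... # set in the shell, never exported
export PGOPTIONS="-c maintenance_work_mem=512MB"            # exported to every command that follows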

Matthew

--
Failure is not an option. It comes bundled with your Microsoft product.
                                                 -- Ferenc Mantfeld

Re: 7 hrs for a pg_restore?

From: Erik Jones
Date:
On Feb 20, 2008, at 10:54 AM, Tom Lane wrote:

> Erik Jones <erik@myemma.com> writes:
>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
>>> I would suggest leaving out the &&, which only obfuscates what's
>>> going on here.
>>>
>>> PGOPTIONS=... pg_restore ...
>>>
>>> would work just as well and be clearer about what's going on.
>
>> Right, that's just an unnecessary habit of mine.
>
> Isn't that habit outright wrong?  ISTM that with the && in there,
> what you're doing is equivalent to
>
>     PGOPTIONS=whatever
>     pg_restore ...
>
> This syntax will set PGOPTIONS for the remainder of the shell session,
> causing it to also affect (say) a subsequent psql invocation.
> Which is
> exactly not what is wanted.

Yes.

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

Re: 7 hrs for a pg_restore?

From: Jeff Davis
Date:
On Wed, 2008-02-20 at 14:31 +0530, Pavan Deolasee wrote:
> I think it would be interesting if we can build these indexes in parallel.
> Each index build requires a seq scan on the table. If the table does
> not fit in shared buffers, each index build would most likely result
> in lots of IO.

He's already said that his I/O usage was not the problem. For one thing,
he has 8GB of memory for a 5GB dataset.

Even when the table is much larger than memory, what percentage of the
time is spent on the table scan? A table scan is O(N), whereas an index
build is O(N log N). If you combine that with expensive comparisons, e.g.
for localized text, then I would guess that the index building itself
was much more expensive than the scans themselves.

However, building indexes in parallel would allow better CPU
utilization.

> One option would be to add this facility to the backend so that multiple
> indexes can be built with a single seq scan of the table. In theory, it
> should be possible, but might be tricky given the way index build works
> (it calls respective ambuild method to build the index which internally
> does the seq scan).

I don't think that this would be necessary, because (as you say below)
the synchronized scan facility should already handle this.

> Other option is to make pg_restore multi-threaded/processed. The
> synchronized_scans facility would then synchronize the multiple heap
> scans. ISTM that if we can make pg_restore mult-processed, then
> we can possibly add more parallelism to the restore process.

I like this approach more. I think that pg_restore is the right place to
do this, if we can make the options reasonably simple enough to use.

See:

http://archives.postgresql.org/pgsql-hackers/2008-02/msg00699.php

Regards,
    Jeff Davis


Re: 7 hrs for a pg_restore?

From: Matthew
Date:
On Wed, 20 Feb 2008, Jeff Davis wrote:
> However, building indexes in parallel would allow better CPU
> utilization.

We have a process here that dumps a large quantity of data into an empty
database, much like pg_restore, and then creates all the indexes at the
end. In order to speed up that bit, I initially made it spawn off several
threads, and make each thread run a CREATE INDEX operation in parallel.
However, this resulted in random errors from Postgres - something to do
with locked tables. So I changed it so that no two threads create indexes
for the same table at once, and that solved it.
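
An equivalent pattern in shell rather than threads (a sketch; the per-table
files of CREATE INDEX statements are illustrative):

for f in indexes_tableA.sql indexes_tableB.sql indexes_tableC.sql; do
    psql -d "$db" -f "$f" &     # one session per table, tables in parallel
done
wait                            # block until every index build finishes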

Obviously creating several indexes for the same table in parallel is
better from a performance point of view, but you may have to fix that
error if you haven't already.

Matthew

--
for a in past present future; do
  for b in clients employers associates relatives neighbours pets; do
  echo "The opinions here in no way reflect the opinions of my $a $b."
done; done

Re: 7 hrs for a pg_restore?

From: Jeff Davis
Date:
On Wed, 2008-02-20 at 18:18 +0000, Matthew wrote:
> On Wed, 20 Feb 2008, Jeff Davis wrote:
> > However, building indexes in parallel would allow better CPU
> > utilization.
>
> We have a process here that dumps a large quantity of data into an empty
> database, much like pg_restore, and then creates all the indexes at the
> end. In order to speed up that bit, I initially made it spawn off several
> threads, and make each thread run a CREATE INDEX operation in parallel.
> However, this resulted in random errors from Postgres - something to do
> with locked tables. So I changed it so that no two threads create indexes
> for the same table at once, and that solved it.

What was the specific problem? Were they UNIQUE indexes? Were you trying
to write to the tables while indexing? Did you use "CONCURRENTLY"?

Regards,
    Jeff Davis


Re: 7 hrs for a pg_restore?

From: Tom Lane
Date:
Matthew <matthew@flymine.org> writes:
> We have a process here that dumps a large quantity of data into an empty
> database, much like pg_restore, and then creates all the indexes at the
> end. In order to speed up that bit, I initially made it spawn off several
> threads, and make each thread run a CREATE INDEX operation in parallel.
> However, this resulted in random errors from Postgres - something to do
> with locked tables. So I changed it so that no two threads create indexes
> for the same table at once, and that solved it.

How long ago was that?  There used to be some issues with two CREATE
INDEXes both trying to update the pg_class row, but I thought we'd fixed
it.

            regards, tom lane

Re: 7 hrs for a pg_restore?

From: Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Erik Jones <erik@myemma.com> writes:
>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
>>> I would suggest leaving out the &&, which only obfuscates what's
>>> going on here.
>>>
>>> PGOPTIONS=... pg_restore ...
>>>
>>> would work just as well and be clearer about what's going on.
>
>> Right, that's just an unnecessary habit of mine.
>
> Isn't that habit outright wrong?  ISTM that with the && in there,
> what you're doing is equivalent to
>
>     PGOPTIONS=whatever
>     pg_restore ...
>
> This syntax will set PGOPTIONS for the remainder of the shell session,
> causing it to also affect (say) a subsequent psql invocation.  Which is
> exactly not what is wanted.

When I said "obfuscating" I meant it. I'm pretty familiar with sh scripting
and I'm not even sure what the && behaviour would do. On at least some shells
I think the && will introduce a subshell, in which case the variable would
not persist. In bash I think it would, because bash avoids a lot of subshells
that would otherwise be necessary.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!

Re: 7 hrs for a pg_restore?

From: Chris
Date:
> When I said "obfuscating" I meant it. I'm pretty familiar with sh scripting
> and I'm not even sure what the && behaviour would do.

It chains commands together so if the first fails the second doesn't happen.

$ echo 1 && echo 2
1
2

$ echo '1234' > /etc/file_that_doesnt_exist && echo 2
-bash: /etc/file_that_doesnt_exist: Permission denied


--
Postgresql & php tutorials
http://www.designmagick.com/

Re: 7 hrs for a pg_restore?

From: Gregory Stark
Date:
"Chris" <dmagick@gmail.com> writes:

>> When I said "obfuscating" I meant it. I'm pretty familiar with sh scripting
>> and I'm not even sure what the && behaviour would do.
>
> It chains commands together so if the first fails the second doesn't happen.

I meant in this case, not in general. That is, does it introduce a subshell?

Sh traditionally has to introduce subshells to implement some of the logical
control and pipe operators. I'm not sure if a simple && is enough, but it's
often surprising how quickly that happens.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

Re: 7 hrs for a pg_restore?

From: Chris
Date:
Gregory Stark wrote:
> "Chris" <dmagick@gmail.com> writes:
>
>>> When I said "obfuscating" I meant it. I'm pretty familiar with sh scripting
>>> and I'm not even sure what the && behaviour would do.
>> It chains commands together so if the first fails the second doesn't happen.
>
> I meant in this case, not in general. That is, does it introduce a subshell?

Ah - my misunderstanding then. No idea about that one.

--
Postgresql & php tutorials
http://www.designmagick.com/

Re: 7 hrs for a pg_restore?

From: Matthew
Date:
On Wed, 20 Feb 2008, Tom Lane wrote:
>> However, this resulted in random errors from Postgres - something to do
>> with locked tables. So I changed it so that no two threads create indexes
>> for the same table at once, and that solved it.
>
> How long ago was that?  There used to be some issues with two CREATE
> INDEXes both trying to update the pg_class row, but I thought we'd fixed
> it.

It was a while back, and that sounds like exactly the error it returned.
It sounds like you have fixed it.

Matthew

--
Software suppliers are trying to make their software packages more
'user-friendly'.... Their best approach, so far, has been to take all
the old brochures, and stamp the words, 'user-friendly' on the cover.
-- Bill Gates

Re: 7 hrs for a pg_restore?

From: Guillaume Cottenceau
Date:
Jeff <threshar 'at' torgo.978.org> writes:

> I wonder if it would be worthwhile if pg_restore could emit a warning
> if maint_work_mem is "low" (start flamewar on what "low" is).
>
> And as an addition to that - allow a cmd line arg to have pg_restore
> bump it before doing its work?  On several occasions I was moving a
> largish table and the COPY part went plenty fast, but when it hit
> index creation it slowed down to a crawl due to low maint_work_mem..

I have made a comparison restoring a production dump with default
and large maintenance_work_mem. The speedup here is
only about 5% (12'30 => 11'50).

Apparently, on the restored database, data is 1337 MB[1] and
indexes 644 MB[2]. Pg is 8.2.3, checkpoint_segments 3,
maintenance_work_mem default (16MB) then 512MB, shared_buffers
384MB. The disks are rather slow (Dell's LSI Logic RAID1); hdparm
reports 82 MB/sec for reads.

Ref:
[1] db=# SELECT sum(relpages)*8/1024 FROM pg_class, pg_namespace
         WHERE pg_namespace.oid = pg_class.relnamespace
           AND relkind = 'r' AND nspname = 'public';
 ?column?
----------
     1337

    (query run after ANALYZE)

    notice there are quite few toast pages to account for:

    db=# SELECT relname, relpages FROM pg_class
         WHERE relname LIKE '%toast%' ORDER BY relpages DESC;
       relname        | relpages
----------------------+----------
 pg_toast_2618        |       17
 pg_toast_2618_index  |        2
 pg_toast_87570_index |        1
 pg_toast_87582_index |        1
 (...)

[2] db=# SELECT sum(relpages)*8/1024 FROM pg_class, pg_namespace
         WHERE pg_namespace.oid = pg_class.relnamespace
           AND relkind = 'i' AND nspname = 'public';
 ?column?
----------
      644

--
Guillaume Cottenceau

Re: 7 hrs for a pg_restore?

From: Tom Lane
Date:
Guillaume Cottenceau <gc@mnc.ch> writes:
> I have made a comparison restoring a production dump with default
> and large maintenance_work_mem. The speedup here is
> only about 5% (12'30 => 11'50).

> Apparently, on the restored database, data is 1337 MB[1] and
> indexes 644 MB[2]. Pg is 8.2.3, checkpoint_segments 3,
> maintenance_work_mem default (16MB) then 512MB, shared_buffers
> 384MB. The disks are rather slow (Dell's LSI Logic RAID1); hdparm
> reports 82 MB/sec for reads.

The main thing that jumps out at me is that boosting checkpoint_segments
would probably help.  I tend to set it to 30 or so (note that this
corresponds to about 1GB taken up by pg_xlog).
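
(The arithmetic, assuming the usual 16MB segments and the 8.x rule of thumb of
about 2 * checkpoint_segments + 1 files in pg_xlog: (2 * 30 + 1) * 16MB =
976MB, i.e. roughly 1GB.)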

            regards, tom lane

Re: 7 hrs for a pg_restore?

From: Vivek Khera
Date:
On Feb 21, 2008, at 12:28 PM, Guillaume Cottenceau wrote:

> I have made a comparison restoring a production dump with default
> and large maintenance_work_mem. The speedup here is
> only about 5% (12'30 => 11'50).

At one point I was evaluating several server vendors and did a bunch
of DB restores.  The one thing that gave me the biggest benefit was to
bump the number of checkpoint segments to a high number, like 128 or
256.  Everything else was mostly minor increases in speed.

Re: 7 hrs for a pg_restore?

From: Guillaume Cottenceau
Date:
Tom Lane <tgl 'at' sss.pgh.pa.us> writes:

> Guillaume Cottenceau <gc@mnc.ch> writes:
>> I have made a comparison restoring a production dump with default
>> and large maintenance_work_mem. The speedup here is
>> only about 5% (12'30 => 11'50).
>
>> Apparently, on the restored database, data is 1337 MB[1] and
>> indexes 644 MB[2]. Pg is 8.2.3, checkpoint_segments 3,
>> maintenance_work_mem default (16MB) then 512MB, shared_buffers
>> 384MB. The disks are rather slow (Dell's LSI Logic RAID1); hdparm
>> reports 82 MB/sec for reads.
>
> The main thing that jumps out at me is that boosting checkpoint_segments
> would probably help.  I tend to set it to 30 or so (note that this
> corresponds to about 1GB taken up by pg_xlog).

Interestingly, from a bzipped dump, there is no win; however,
from an uncompressed dump, increasing checkpoint_segments from 3
to 30 decreases clock time from 9'50 to 8'30 (about a 14%
reduction).

--
Guillaume Cottenceau