Discussion: UPDATE slow
I have about 11000 rows and this query is much slower than I expected:
update stats set clicks = 123;
stats is basically just: id, clicks; with indexes on each. Right now, the query takes about 10 seconds to execute. It's also on windows 2000/cygwin (if that contributes to slowness). Is it normal for updates like this on that many rows to take a while?
Just tried it on Linux - still takes 5-10 seconds. Any speed-up tips? Also tried it on MySQL and the update is nearly instantaneous :(.
John
If this helps :)
db=# explain analyze update stats set clicks = 3344 where link_id=1;
QUERY PLAN
---------------------------------------------------------------------------------------
Index Scan using idx_link_id on stats
(cost=0.00..149.42 rows=50 width=30) (actual time=0.58..727.45 rows=9994 loops=1)
Index Cond: (link_id = 1)
Total runtime: 8361.93 msec
(3 rows)
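One detail worth noting in that plan: the planner estimated rows=50 but the scan actually touched rows=9994, which usually means the table's statistics are stale. Refreshing them is a one-liner (table name taken from the thread):

```sql
-- Refresh planner statistics for stats so the row estimate (rows=50
-- above) lines up with reality (rows=9994).
ANALYZE stats;
```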
On Tue, 4 Feb 2003, John Smith wrote:

> Just tried it on Linux - still takes 5-10 seconds. Any speed-up tips?
> Also tried it on MySQL and the update is nearly instantaneous :(.

Hmm, does stats reference another table or is it referenced by another table?
It has two foreign key references to two other tables. No other tables reference stats though.
John
I should also probably note that neither of the foreign key columns is being updated.
John
On Tue, 4 Feb 2003 14:37:48 -0800 (PST), John Smith wrote:

> If this helps :)
> db=# explain analyze update stats set clicks = 3344 where link_id=1;
> QUERY PLAN
> ---------------------------------------------------------------------------------------
> Index Scan using idx_link_id on stats
> (cost=0.00..149.42 rows=50 width=30) (actual time=0.58..727.45 rows=9994 loops=1)
> Index Cond: (link_id = 1)
> Total runtime: 8361.93 msec
> (3 rows)

Hello John,

try:

update stats set clicks=3344 where link_id=1::int8;

I suppose that link_id is an int8 datatype? This bug is already known and will hopefully be fixed in 7.4.

Patric
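For context, the bug Patric refers to is the pre-7.4 planner quirk where an unadorned integer literal is typed int4 and therefore cannot match an index on an int8 column. (Note it may not apply here, since the EXPLAIN above already shows an index scan.) The workarounds of the era, sketched assuming link_id is int8:

```sql
-- Pre-7.4: the literal 1 is int4, so an index on an int8 column
-- may be ignored and the update falls back to a sequential scan.
update stats set clicks = 3344 where link_id = 1;

-- Workarounds: an explicit cast, or a quoted (initially untyped) literal.
update stats set clicks = 3344 where link_id = 1::int8;
update stats set clicks = 3344 where link_id = '1';
```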
On Tue, 4 Feb 2003, John Smith wrote:

> I should also probably note that neither of the foreign key columns is
> being updated.

It doesn't matter, unfortunately. It currently checks even in that case due to ON * SET DEFAULT having a failure case if you don't check when no keys are changed (if you remove the default - and you can't necessarily check this beforehand either if the defaults are not stable). Among the various changes I've been working with is making it only do the check if one of the set default cases is given or if the key changed.
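Since the RI triggers fire per row even when the key columns are untouched, one period-typical workaround for a bulk update was to drop the constraints for the duration of the statement and re-add them afterwards. A sketch only - the constraint name and the referenced table here are hypothetical:

```sql
begin;
-- Constraint name is hypothetical; \d stats would show the real one.
alter table stats drop constraint stats_link_id_fkey;
update stats set clicks = 123;
-- Re-adding the constraint re-validates the existing rows.
alter table stats add constraint stats_link_id_fkey
    foreign key (link_id) references links (id);
commit;
```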
On Tue, 4 Feb 2003, Stephan Szabo wrote:

> On Tue, 4 Feb 2003, John Smith wrote:
>
> > I should also probably note that neither of the foreign key columns is
> > being updated.
>
> It doesn't matter, unfortunately. It currently checks even in that case
> due to ON * SET DEFAULT having a failure case if you don't check when no
> keys are changed (if you remove the default - and you can't necessarily
> check this beforehand either if the defaults are not stable). Among the
> various changes I've been working with is making it only do the check if
> one of the set default cases is given or if the key changed.

I've actually got most of that done as part of the other patches I've been working on. I could probably try to do the same to 7.3 (as opposed to 7.4 devel) and make a separate patch out of it.
Think that makes sense ;). However, I just dropped all the foreign key constraints on stats and there wasn't any speed increase in the query. I've also tried dropping the index on clicks (no speed-up there either).
John
On Tue, 4 Feb 2003, John Smith wrote:

> Think that makes sense ;). However, I just dropped all the foreign key
> constraints on stats and there wasn't any speed increase in the query.
> I've also tried dropping the index on clicks (no speed-up there
> either).

Can you duplicate it with a new table? Something like:

create table foo as select * from stats;
update foo set clicks=3344;
That works - updates on foo take about 1.4 seconds. I dropped all the indexes and fk's on stats and updates there take about 2.8 seconds. These are on the cygwin machine.
Looking at the tables, foo doesn't have any modifiers for the columns (stats does - not null default 0).
John
On Tue, 4 Feb 2003, John Smith wrote:

> That works - updates on foo take about 1.4 seconds. I dropped all the
> indexes and fk's on stats and updates there take about 2.8 seconds.
> These are on the cygwin machine.
> Looking at the tables, foo doesn't have any modifiers for the columns
> (stats does - not null default 0).

Is this a table that got a lot of updates en masse? If it was regularly vacuumed but you still had a problem, it might be index bloat. Keep an eye on it, and if the table starts to get slow, try reindexing the indexes on that table and see if that happens.

The "fix" for this problem is to crank up max fsm settings, and run vacuum more often, but that doesn't always actually fix things.
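The maintenance routine being suggested boils down to two commands (table name from the thread):

```sql
vacuum analyze stats;   -- reclaim dead row versions, refresh statistics
reindex table stats;    -- rebuild possibly bloated indexes
```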
On Tue, 4 Feb 2003, John Smith wrote:

> That works - updates on foo take about 1.4 seconds. I dropped all the
> indexes and fk's on stats and updates there take about 2.8 seconds.
> These are on the cygwin machine.

The 2.8 seconds is on stats after dropping the fks and indexes? But it didn't help on the linux box? Have you done a vacuum/reindex recently?
On Tue, 4 Feb 2003, Stephan Szabo wrote:

> On Tue, 4 Feb 2003, John Smith wrote:
>
> > That works - updates on foo take about 1.4 seconds. I dropped all the
> > indexes and fk's on stats and updates there take about 2.8 seconds.
> > These are on the cygwin machine.
>
> The 2.8 seconds is on stats after dropping the fks and indexes? But
> it didn't help on the linux box?

I tried this earlier on just a plain:

create table testme (
    id integer unique,
    clicks integer unique
);

This is my explain:

desticorp=> explain analyze select count(1) from testme;
NOTICE: QUERY PLAN:

Aggregate (cost=22.50..22.50 rows=1 width=0) (actual time=78.52..78.52 rows=1 loops=1)
  -> Seq Scan on testme (cost=0.00..20.00 rows=1000 width=0) (actual time=0.11..51.74 rows=11999 loops=1)
Total runtime: 78.67 msec

EXPLAIN
desticorp=> explain analyze update testme set clicks = clicks + 123;
NOTICE: QUERY PLAN:

Seq Scan on testme (cost=0.00..20.00 rows=1000 width=14) (actual time=0.03..256.21 rows=11999 loops=1)
Total runtime: 2060.41 msec

EXPLAIN

As you can see, this took 2 seconds after already having been given the chance to cache the table. It's also possible to see that I used 12000 rows in my table and that the sequential scan part of the operation is a helluva lot slower when writing.

This is on a dual P-III 550MHz system. Memory settings probably aren't tuned too much, though I don't think that would impact too much on this quick test. Although loaded, with usually noticeable delays in window refresh when flicking through my screens, this system seems to have a very low CPU utilisation and plenty of memory usable (for a change). Although I am wondering what I've run in the last couple of weeks that's pushed me to use 150MB of swap (768MB physical, so not an insignificant amount).

Therefore John's 2.8s seems a reasonable time to me.

--
Nigel J. Andrews
On Wed, 5 Feb 2003, Nigel J. Andrews wrote:

> desticorp=> explain analyze update testme set clicks = clicks + 123;
> NOTICE: QUERY PLAN:
> Seq Scan on testme (cost=0.00..20.00 rows=1000 width=14) (actual time=0.03..256.21 rows=11999 loops=1)
> Total runtime: 2060.41 msec
>
> Therefore John's 2.8s seems a reasonable time to me.

Yeah, but I thought he'd said that on the linux box, even after dropping indexes and fks it was taking 5-10 seconds.

I'm also a bit confused because I'm not sure he's getting 2.8 seconds to update all the records or just a single record.
On Tue, 4 Feb 2003, Stephan Szabo wrote:

> Yeah, but I thought he'd said that on the linux box, even after dropping
> indexes and fks it was taking 5-10 seconds.

You misremembered :)

> I'm also a bit confused because I'm not sure he's getting 2.8 seconds to
> update all the records or just a single record.

I'm pretty sure the command originally quoted was an unconstrained update setting a constant value, i.e. all the rows. I had to change my test because I'd set up unique indexes so couldn't do the constant value bit.

--
Nigel Andrews
On Wed, 5 Feb 2003, Nigel J. Andrews wrote:

> > Yeah, but I thought he'd said that on the linux box, even after dropping
> > indexes and fks it was taking 5-10 seconds.
>
> You misremembered :)

In any case that's 3x slower than my development box, which is not terribly powerful and not set up as a database server for real, using pretty much all default configuration settings with the database on the same partition as / (/usr, etc...).

> > I'm also a bit confused because I'm not sure he's getting 2.8 seconds to
> > update all the records or just a single record.
>
> I'm pretty sure the command originally quoted was an unconstrained update
> setting a constant value, i.e. all the rows. I had to change my test
> because I'd set up unique indexes so couldn't do the constant value bit.

That's what the first message was, but his first followup with an explain analyze output used:

explain analyze update stats set clicks = 3344 where link_id=1;
No, it's still in dev. I've tried vacuum/full/analyze and reindex stats - none causes any noticeable speed improvement.
Here are my postgresql.conf settings (I just uncommented the mentioned vars and restarted):
#
# Shared Memory Size
#
shared_buffers = 64 # 2*max_connections, min 16, typically 8KB each
max_fsm_relations = 100 # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64 # min 10
#wal_buffers = 8 # min 4, typically 8KB each
#
# Non-shared Memory Sizes
#
sort_mem = 1024 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB
After restarting and before I added the indexes, the update query was nearly instantaneous. But after adding the indexes, it's back to about the same speed (5-15+ seconds) - on both the cygwin and linux boxes.
John
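Since each updated row also has to maintain every index entry on the table, the classic bulk-update pattern of the time was to drop the index, run the update, then recreate it. A sketch; the index name here is assumed from the schema described in the thread:

```sql
drop index idx_clicks;                       -- hypothetical index name
update stats set clicks = 123;
create index idx_clicks on stats (clicks);   -- one bulk build instead of 11000 updates
vacuum analyze stats;
```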
On the linux box it's 2.8 secs with only one index (on clicks) and no fk's. W/O, it's 1.8 secs. Both were done after 'vacuum analyze' and 'reindex table stats'.
John
That seems to help :). The query takes about 2-5 seconds now (on the linux box with all indexes and fk's). Any tips on further improvement (to make it as fast as MySQL ;) )?
John
Justin Clift <justin@postgresql.org> wrote:
Hi John,
Noticed that you said you hadn't tuned the memory a whole lot.
Have you raised the sort_mem and shared_buffers variables?
For a "quick fix" amount, raising sort_mem to 8192 and shared_buffers to
say 500 should help. Not optimal, but better than the defaults.
You'll need to restart PostgreSQL after changing those for them to
become effective too.
Hope that helps.
Regards and best wishes,
Justin Clift
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
John Smith <john_smith_45678@yahoo.com> writes:

> Here are my postgresql.conf settings (I just uncommented the mentioned
> vars and restarted):
> shared_buffers = 64 # 2*max_connections, min 16, typically 8KB each

There's your problem, or part of it anyway. That's not enough buffers to manipulate a bunch of indexes efficiently. Production-grade settings for shared_buffers are more like 1 to 10 thousand.

> max_fsm_relations = 100 # min 10, fsm is free space map, ~40 bytes
> max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes

You probably will want to jack those up too.

> sort_mem = 1024 # min 64, size in KB

And possibly that, though I don't think it affects this particular query.

regards, tom lane
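Translated into postgresql.conf terms, Tom's numbers look roughly like this - illustrative values for a 7.3-era server, not a tuned configuration:

```
shared_buffers = 2000        # ~16 MB at 8 kB pages; Tom suggests 1000-10000
max_fsm_relations = 1000     # up from the default 100
max_fsm_pages = 20000        # up from the default 10000
sort_mem = 8192              # KB; probably irrelevant to this query
```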