Discussion: trivial refactoring of WaitOnLock
This patch refactors some code in WaitOnLock slightly. The old code was slow, and I believe it was off-by-one (it allocates one byte of memory more than needed).

Barring any objections I'll apply this to HEAD later today.

-Neil

Index: src/backend/storage/lmgr/lock.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/storage/lmgr/lock.c,v
retrieving revision 1.147
diff -c -r1.147 lock.c
*** src/backend/storage/lmgr/lock.c	1 Mar 2005 21:14:59 -0000	1.147
--- src/backend/storage/lmgr/lock.c	8 Mar 2005 05:42:06 -0000
***************
*** 1076,1081 ****
--- 1076,1082 ----
  	LockMethod	lockMethodTable = LockMethods[lockmethodid];
  	char	   *new_status,
  			   *old_status;
+ 	size_t		len;
  
  	Assert(lockmethodid < NumLockMethods);
  
***************
*** 1083,1091 ****
  				locallock->lock, locallock->tag.mode);
  
  	old_status = pstrdup(get_ps_display());
! 	new_status = (char *) palloc(strlen(old_status) + 10);
! 	strcpy(new_status, old_status);
! 	strcat(new_status, " waiting");
  	set_ps_display(new_status);
  
  	awaitedLock = locallock;
--- 1084,1092 ----
  				locallock->lock, locallock->tag.mode);
  
  	old_status = pstrdup(get_ps_display());
! 	len = strlen(old_status);
! 	new_status = (char *) palloc(len + 8 + 1);
! 	sprintf(new_status, "%s waiting", old_status);
  	set_ps_display(new_status);
  
  	awaitedLock = locallock;
off-by-one is true, but I am not sure the revised code is faster: sprintf() has to do the extra work of parsing the format string. On Win32, I am sure it is much slower.

"Neil Conway" <neilc@samurai.com> wrote in message news:422E3EAC.9000403@samurai.com...
> This patch refactors some code in WaitOnLock slightly. The old code was
> slow, and I believe it was off-by-one (it allocates one byte of memory
> more than needed).
>
> Barring any objections I'll apply this to HEAD later today.
>
> -Neil

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
    (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
If the problem is speed, then this may be faster.

Index: src/backend/storage/lmgr/lock.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/lmgr/lock.c,v
retrieving revision 1.147
diff -c -r1.147 lock.c
*** src/backend/storage/lmgr/lock.c	1 Mar 2005 21:14:59 -0000	1.147
--- src/backend/storage/lmgr/lock.c	9 Mar 2005 08:17:33 -0000
***************
*** 1074,1079 ****
--- 1074,1080 ----
  				 ResourceOwner owner)
  {
  	LockMethod	lockMethodTable = LockMethods[lockmethodid];
+ 	int			len;
  	char	   *new_status,
  			   *old_status;
  
***************
*** 1083,1091 ****
  				locallock->lock, locallock->tag.mode);
  
  	old_status = pstrdup(get_ps_display());
! 	new_status = (char *) palloc(strlen(old_status) + 10);
  	strcpy(new_status, old_status);
! 	strcat(new_status, " waiting");
  	set_ps_display(new_status);
  
  	awaitedLock = locallock;
--- 1084,1093 ----
  				locallock->lock, locallock->tag.mode);
  
  	old_status = pstrdup(get_ps_display());
! 	len = strlen(old_status);
! 	new_status = (char *) palloc(len + 9);
  	strcpy(new_status, old_status);
! 	strcpy(&new_status[len], " waiting");
  	set_ps_display(new_status);
  
  	awaitedLock = locallock;

On Wed, 9 Mar 2005 06:42 pm, Qingqing Zhou wrote:
> off-by-one is true, but I am not sure the revised code is faster:
> sprintf() has to do the extra work of parsing the format string. On
> Win32, I am sure it is much slower.
>
> "Neil Conway" <neilc@samurai.com> wrote in message news:422E3EAC.9000403@samurai.com...
> > This patch refactors some code in WaitOnLock slightly. The old code was
> > slow, and I believe it was off-by-one (it allocates one byte of memory
> > more than needed).
> >
> > Barring any objections I'll apply this to HEAD later today.
> >
> > -Neil
Yes, that saves one extra pass over the string to count the length of new_status.

"Russell Smith" <mr-russ@pws.com.au> wrote:
> If the problem is speed, then this may be faster.
>
> 	old_status = pstrdup(get_ps_display());
> 	len = strlen(old_status);
> 	new_status = (char *) palloc(len + 9);
> 	strcpy(new_status, old_status);
> 	strcpy(&new_status[len], " waiting");
> 	set_ps_display(new_status);
Russell Smith wrote:
> *** 1083,1091 ****
>   	locallock->lock, locallock->tag.mode);
> 
>   	old_status = pstrdup(get_ps_display());
> ! 	new_status = (char *) palloc(strlen(old_status) + 10);
>   	strcpy(new_status, old_status);
> ! 	strcat(new_status, " waiting");
>   	set_ps_display(new_status);
> 
>   	awaitedLock = locallock;
> --- 1084,1093 ----
>   	locallock->lock, locallock->tag.mode);
> 
>   	old_status = pstrdup(get_ps_display());
> ! 	len = strlen(old_status);
> ! 	new_status = (char *) palloc(len + 9);
>   	strcpy(new_status, old_status);
> ! 	strcpy(&new_status[len], " waiting");
>   	set_ps_display(new_status);

memcpy(new_status, old_status, len) would be faster yet. Which is what I originally implemented, and then decided the sprintf() was clearer (since performance isn't very important here). On reflection, memcpy() + strcpy() should be fine; I'll commit this tomorrow.

-Neil
Neil Conway wrote:
> memcpy(new_status, old_status, len) would be faster yet. Which is what I
> originally implemented, and then decided the sprintf() was clearer
> (since performance isn't very important here). On reflection, memcpy() +
> strcpy() should be fine; I'll commit this tomorrow.

Applied.

-Neil