Discussion: Postmaster hangs

Postmaster hangs

From: Karen Pease
Date:
I've been dealing with a postgres (8.3.8.1.fc10) problem for a while
that has become quite frustrating in recent weeks.  I have a set of
tables, one of which (geonames) is very large.  This contains the
geonames geographical information database -- about 7 million records:

CREATE TABLE geonames (
  geonameid BIGINT PRIMARY KEY,
  name VARCHAR(200),
  asciiname VARCHAR(200),
  alternatenames VARCHAR(4000),
  latitude DOUBLE PRECISION,
  longitude DOUBLE PRECISION,
  geoblock0 SMALLINT,
  geoblock1 SMALLINT,
  geoblock2 INTEGER,
  geoblock3 INTEGER,
  geoblock0_latitude SMALLINT,
  geoblock0_longitude SMALLINT,
  geoblock1_latitude SMALLINT,
  geoblock1_longitude SMALLINT,
  geoblock2_latitude SMALLINT,
  geoblock2_longitude SMALLINT,
  geoblock3_latitude SMALLINT,
  geoblock3_longitude SMALLINT,
  feature_class CHAR(1),
  feature_code VARCHAR(10),
  country_code CHAR(2),
  cc2 VARCHAR(60),
  admin1_code VARCHAR(20),
  admin2_code VARCHAR(80),
  admin3_code VARCHAR(20),
  admin4_code VARCHAR(20),
  name_admin1_code VARCHAR(221),
  name_admin1_name VARCHAR(601),
  population BIGINT,
  elevation INTEGER,
  gtopo30 INTEGER,
  timezone VARCHAR(255),
  modification_date TIMESTAMP,

  FOREIGN KEY (feature_class) REFERENCES feature_classes(class),
  FOREIGN KEY (feature_class, feature_code) REFERENCES feature_codes(class, code),
  FOREIGN KEY (country_code) REFERENCES countries(iso),
  FOREIGN KEY (country_code, admin1_code) REFERENCES admin1_codes(country, admin1_code),
  FOREIGN KEY (country_code, admin1_code, admin2_code) REFERENCES admin2_codes(country, admin1_code, admin2_code),
  FOREIGN KEY (country_code, admin1_code, admin2_code, admin3_code) REFERENCES admin3_codes(country, admin1_code, admin2_code, admin3_code),
  FOREIGN KEY (country_code, admin1_code, admin2_code, admin3_code, admin4_code) REFERENCES admin4_codes(country, admin1_code, admin2_code, admin3_code, admin4_code),
  FOREIGN KEY (timezone) REFERENCES timezones(name)
);

I'm in the middle of trying to set the "name_admin1_code" and
"name_admin1_name" fields.  Ideally, I'd like to use the following
command:

update geonames
   set name_admin1_code = upper(name || ',' || admin1_code),
       name_admin1_name = upper(name || ',' ||
         (select description from admin1_codes
           where admin1_codes.admin1_code = geonames.admin1_code
             and admin1_codes.country = geonames.country_code));

However, this command is *guaranteed* to lock up postmaster (more on
that in just a second).  As a consequence, I'm forced to break up the
command into smaller units, such as:

update geonames
   set name_admin1_code = upper(name || ',' || admin1_code),
       name_admin1_name = upper(name || ',' ||
         (select description from admin1_codes
           where admin1_codes.admin1_code = geonames.admin1_code
             and admin1_codes.country = geonames.country_code))
 where admin1_code = 'WV';
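
For illustration, a shell loop along these lines is one way such batching
can be driven (the database name "geodb" and passwordless local access are
placeholders, and the quoting of admin1_code values is deliberately naive):

#!/bin/sh
# Sketch only: run the UPDATE once per distinct admin1_code so each
# statement touches a smaller slice of the table.  "geodb" is a
# placeholder database name.
for code in $(psql -At geodb -c "select distinct admin1_code from geonames where admin1_code is not null"); do
  psql geodb -c "update geonames
                    set name_admin1_code = upper(name || ',' || admin1_code),
                        name_admin1_name = upper(name || ',' ||
                          (select description from admin1_codes
                            where admin1_codes.admin1_code = geonames.admin1_code
                              and admin1_codes.country = geonames.country_code))
                  where admin1_code = '$code'"
done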

It'll get through about three or four of them (out of hundreds) before
it locks up.  Now, before lockup, postmaster is very active.  It shows
up on top.  The computer's hard drives clack nonstop.  Etc.  But once it
locks up (without warning), all of that stops.  Postmaster does nothing.
The computer goes silent.  I can't ctrl-break the psql process.  If I
try to start a new psql process, it won't get past the password prompt
-- psql will hang.  All Apache processes involving postgres queries
hang.  The postgres server cannot be restarted by any normal means (the
only solution I've found that works is a reboot).  And so forth.

This isn't the first time I've had this problem.  When setting my
geoblock fields, I had the exact same problem -- and this was before I
had any apache cgi scripts that would access the database.  So whatever
is jamming postgres seems to be entirely internal to the single update
process.

I've just been living with this problem until now, but tonight I got
sick enough of it that I felt I had to report it.  Is this a known bug?
Are there workarounds?

    - Karen

Re: Postmaster hangs

From: Tom Lane
Date:
Karen Pease <meme@daughtersoftiresias.org> writes:
> It'll get through about three or four of them (out of hundreds) before
> it locks up.  Now, before lockup, postmaster is very active.  It shows
> up on top.  The computer's hard drives clack nonstop.  Etc.  But once it
> locks up (without warning), all of that stop.  Postmaster does nothing.
> The computer goes silent.  I can't ctrl-break the psql process.  If I
> try to start a new psql process, it won't get past the password prompt
> -- psql will hang.  All Apache processes involving postgres queries
> hang.  The postgres server cannot be restarted by any normal means (the
> only solution I've found that works is a reboot).  And so forth.

This sounds to me like it's a kernel problem, possibly triggered by
misbehaving disk hardware.  What you might try to confirm is a kill -9
on whichever postgres backend seems to be stuck.  If that fails to
remove the process, then it's definitely a kernel issue --- try googling
"uninterruptible disk wait" and similar phrases.

The cases that I've run into personally have been due to poor error
handling for a disk failure condition in a kernel-level disk driver.
If that's what it is for you, the bottom-level problem might be an
unreadable disk block somewhere.  Or it might just be a garden variety
kernel bug.  What's the platform?

            regards, tom lane

Re: Postmaster hangs

From: Karen Pease
Date:
kill -9 does kill postmaster (or at least seems to).  But I can't figure
out a way to get it restarted without a reboot -- I don't know what I'm
missing.  The Fedora postgres restart scripts don't do the trick, and I
couldn't get it to work with pg_ctl either.

kill -9 doesn't work on the locked up httpd processes.  So the system
has to be restarted to clear them.

[meme@chmmr]$ cat /proc/version
Linux version 2.6.27.37-170.2.104.fc10.i686
(mockbuild@xenbuilder4.fedora.phx.redhat.com) (gcc version 4.3.2
20081105 (Red Hat 4.3.2-7) (GCC) ) #1 SMP Mon Oct 12 22:01:53 EDT 2009

Postgres is by default in /var/lib/pgsql.  When / started running out of
space, I moved it to /scratch and symlinked:

lrwxrwxrwx 1 root       root         15 2009-09-11 16:57 pgsql -> /scratch/pgsql//

/ is on md0 and is RAID-1.  /scratch is on md1 and is RAID-6:

[meme@chmmr]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               64G   42G   18G  71% /
/dev/md1              2.5T  2.2T  239G  91% /scratch
/dev/sdb1             190M   38M  143M  21% /boot
/dev/sde1             190M   86M   95M  48% /boot2
/dev/sdd1             190M   86M   95M  48% /boot3
/dev/sda1             190M   86M   95M  48% /boot4
/dev/sdc1             190M   86M   95M  48% /boot5
tmpfs                1000M     0 1000M   0% /dev/shm
[meme@chmmr]$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid6 sde4[0] sdc4[4] sda4[3] sdb4[2] sdd4[1]
      2722005120 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]

md0 : active raid1 sde3[0] sdc3[4] sda3[3] sdb3[2] sdd3[1]
      67119488 blocks [5/5] [UUUUU]

unused devices: <none>

Both filesystems are EXT-4.

Thanks for your help!

    - Karen

Re: Postmaster hangs

From: Craig Ringer
Date:
Karen Pease wrote:
> kill -9 does kill postmaster (or at least seems to).  But I can't figure
> out a way to get it restarted without a reboot -- I don't know what I'm
> missing.  The Fedora postgres restart scripts don't do the trick, and I
> couldn't get it to work with pg_ctl either.

It'd help to know where the postmaster was stuck, and if possible where
the backend you were using is stuck.

A backtrace from gdb can be handy for this.

http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
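
A rough one-liner for that (the PID is hypothetical -- substitute the PID
of the stuck backend) is:

  gdb --batch -ex bt -p 12345    # attach, print a backtrace, then exit/detach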

> kill -9 doesn't work on the locked up httpd processes.  So that has to
> have the system restarted.

If `kill -9' isn't working they're probably in uninterruptible sleep in
the kernel.

You can find out what they're sleeping in with `ps':

  ps ax -o pid,ppid,stat,wchan:50,cmd

(Filter for just the postmaster and postgres processes if you want)

> Both filesystems are EXT-4.

That's interesting given the issues you're having...

--
Craig Ringer

Re: Postmaster hangs

From: Karen Pease
Date:
I did my best to follow the gdb instructions.  I ran:

gdb -p 2852

Then, once connected, I entered the logging statements, ran "cont", and
ctrl-c'ed it a couple of times.  I got:

Program received signal SIGINT, Interrupt.
0x001e6416 in __kernel_vsyscall ()
(gdb) bt
#0  0x001e6416 in __kernel_vsyscall ()
#1  0x00c7939d in ___newselect_nocancel () from /lib/libc.so.6
#2  0x081dbaf9 in ?? ()
#3  0x081dd20a in PostmasterMain ()
#4  0x08190f96 in main ()
(gdb) cont
Continuing.
^C
Program received signal SIGINT, Interrupt.
0x001e6416 in __kernel_vsyscall ()
(gdb) bt
#0  0x001e6416 in __kernel_vsyscall ()
#1  0x00c7939d in ___newselect_nocancel () from /lib/libc.so.6
#2  0x081dbaf9 in ?? ()
#3  0x081dd20a in PostmasterMain ()
#4  0x08190f96 in main ()
(gdb) quit

The jammed httpd processes, per the command line you gave, are:

[root@chmmr dbscripts]# ps ax -o pid,ppid,stat,wchan:50,cmd | grep -i http
 3376     1 D    start_this_handle    /usr/sbin/httpd
 3379     1 D    start_this_handle    /usr/sbin/httpd
 3381     1 D    start_this_handle    /usr/sbin/httpd
 4147     1 D    start_this_handle    /usr/sbin/httpd
 4539     1 D    start_this_handle    /usr/sbin/httpd
 5484     1 D    start_this_handle    /usr/sbin/httpd
11100     1 D    start_this_handle    /usr/sbin/httpd
14882     1 D    start_this_handle    /usr/sbin/httpd

These cannot be killed by kill -9.  Example:

[root@chmmr dbscripts]# kill -9 3376
[root@chmmr dbscripts]# ps ax -o pid,ppid,stat,wchan:50,cmd | grep -i http
 3376     1 D    start_this_handle    /usr/sbin/httpd
 3379     1 D    start_this_handle    /usr/sbin/httpd
 3381     1 D    start_this_handle    /usr/sbin/httpd
 4147     1 D    start_this_handle    /usr/sbin/httpd
 4539     1 D    start_this_handle    /usr/sbin/httpd
 5484     1 D    start_this_handle    /usr/sbin/httpd
11100     1 D    start_this_handle    /usr/sbin/httpd
14882     1 D    start_this_handle    /usr/sbin/httpd

As mentioned, I can kill postmaster.  But I can't restart it without a
reboot; it hangs:

[root@chmmr dbscripts]# ps -ef | grep -i postm
postgres  2852     1  0 Oct25 ?        00:00:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
root     15115 14844  0 04:23 pts/0    00:00:00 grep -i postm
[root@chmmr dbscripts]# /etc/init.d/postgresql stop
Stopping postgresql service: ^C^C                          [FAILED]
[root@chmmr dbscripts]#
[root@chmmr dbscripts]# killall -9 postmaster
[root@chmmr dbscripts]# ps -ef | grep -i postm
root     15183 14844  0 04:24 pts/0    00:00:00 grep -i postm
[root@chmmr dbscripts]# /etc/init.d/postgresql restart
Stopping postgresql service: ^C^C                          [FAILED]
^C
[root@chmmr dbscripts]# /etc/init.d/postgresql start
^C

I have no better luck using pg_ctl directly than with the postgresql
control script.

Again I hope this helps.  Thanks!

    - Karen

Re: Postmaster hangs

From: Craig Ringer
Date:
On 26/10/2009 5:28 PM, Karen Pease wrote:
> I did my best to follow the gdb instructions.  I ran:
>
> gdb -p 2852
>
> Then connected entered the logging statements, then ran "cont", then
> ctrl-c'ed it a couple times.  I got:

OK, so there's nothing shriekingly obviously wrong with what the
postmaster is up to. But what about the backend that's stopped
responding? Try connecting gdb to that "postgres" process once it's
stopped responding and get a backtrace from that.

> [root@chmmr dbscripts]# ps ax -o pid,ppid,stat,wchan:50,cmd | grep -i http
>  3376     1 D    start_this_handle    /usr/sbin/httpd

start_this_handle appears in common ext4 call paths, and in several lkml
issue reports over time:

http://lkml.org/lkml/2009/3/11/253

http://www.google.com.au/search?q=%22start_this_handle%22+ext4

Smells like a kernel bug. When two extremely stable pieces of software
(Pg and Apache) both have issues on a well-tested kernel (Linux) with a
new and fairly immature file system (ext4) in use, that's probably not
an unreasonable assumption.

You can find out a bit more about what the kernel is doing using the
"magic" keyboard sequence "ALT-SysRQ-T" from a vconsole (not under X).
If the results scroll past too fast you can page through them with
"less" on /var/log/kern.log (or /var/log/dmesg depending on your distro)
or using the "dmesg" command.

I won't be too surprised if you see a kernel stack trace for your httpd
process(es) starting something like this:

schedule+0x18/0x40
start_this_handle+0x374/0x508
jbd2_journal_start+0xbc/0x11c
ext4_journal_start_sb+0x5c/0x84
ext4_dirty_inode+0xd4/0xf0

--
Craig Ringer

Re: Postmaster hangs

From: Tom Lane
Date:
Karen Pease <meme@daughtersoftiresias.org> writes:
> Postgres is by default in /var/lib/pgsql.  When / started running out of
> space, I moved it to /scratch and symlinked:
> lrwxrwxrwx 1 root       root         15 2009-09-11 16:57 pgsql -> /scratch/pgsql//

Hmm, that could be a problem right there.  Do you have SELinux running?
It will normally try to prevent the postmaster from accessing files
outside /var/lib/pgsql.  Depending on how you moved the files, they
might have had the right security labels anyway, but I suspect some of
your symptoms might have to do with some files under /scratch/pgsql
not having the right labels.  Try poking around with ls -lZ.  If you
find some that have generic labels, you can fix them manually with
chcon, but a better long-term solution would be to teach selinux
that stuff under /scratch/pgsql should be treated the same as stuff
under /var/lib/pgsql would be --- that way a stray restorecon won't
mess up your work.  Last I heard, the relevant policy rules are

/var/lib/pgsql/data(/.*)?        gen_context(system_u:object_r:postgresql_db_t,s0)
/var/lib/pgsql/logfile(/.*)?        gen_context(system_u:object_r:postgresql_log_t,s0)
/var/lib/pgsql/pgstartup\.log        gen_context(system_u:object_r:postgresql_log_t,s0)

Unfortunately I don't know selinux well enough to know where to
add custom rules :-(, but a bit of manual-reading ought to tell you.
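
For reference, the sort of thing that manual-reading points at (untested
here) would be roughly:

  # quick manual fix -- labels won't survive a restorecon:
  chcon -R -t postgresql_db_t /scratch/pgsql/data

  # longer-term: teach the policy about the new location, then relabel
  # (assumes the semanage tool from policycoreutils-python is installed):
  semanage fcontext -a -t postgresql_db_t "/scratch/pgsql/data(/.*)?"
  restorecon -R -v /scratch/pgsql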

If it's *not* a permissions issue, then I would expect postgres to
be logging something relevant ... have you checked into the log
files?

            regards, tom lane

Re: Postmaster hangs

From: Karen Pease
Date:
No -- one of the first things I do is shut off SELinux, as it's always
a pain.

[root@chmmr meme]# more /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
SELINUXTYPE=targeted

    - Karen


Re: Postmaster hangs

From: Karen Pease
Date:
> OK, so there's nothing shrieklingly obviously wrong with what the
> postmaster is up to. But what about the backend that's stopped
> responding? Try connecting gdb to that "postgres" process once it's
> stopped responding and get a backtrace from that.
>

Okay -- I started up a psql instance, which immediately locks up.  I
then attached gdb to it and got this:

(gdb) cont
Continuing.
^C
Program received signal SIGINT, Interrupt.
0x00fe2416 in __kernel_vsyscall ()
(gdb) cont
Continuing.
^C
Program received signal SIGINT, Interrupt.
0x00fe2416 in __kernel_vsyscall ()

I also tried it on the psql instance that ran the update that caused the
lockup:

(gdb) cont
Continuing.
^C
Program received signal SIGINT, Interrupt.
0x00a09416 in __kernel_vsyscall ()
(gdb) cont
Continuing.
^C
Program received signal SIGINT, Interrupt.
0x00a09416 in __kernel_vsyscall ()
(gdb)


> You can find out a bit more about what the kernel is doing using the
> "magic" keyboard sequence "ALT-SysRQ-T" from a vconsole (not under X).

Nothing happened.  Nothing useful in dmesg -- certainly no stacktraces.

    - Karen

Re: Postmaster hangs

From: Craig Ringer
Date:
On Tue, 2009-10-27 at 00:50 -0500, Karen Pease wrote:
> > OK, so there's nothing shrieklingly obviously wrong with what the
> > postmaster is up to. But what about the backend that's stopped
> > responding? Try connecting gdb to that "postgres" process once it's
> > stopped responding and get a backtrace from that.
> >
>
> Okay -- I started up a psql instance, which immediately locks up.  I
> then attached gdb to it and got this:
>
> (gdb) cont
> Continuing.
> ^C
> Program received signal SIGINT, Interrupt.
> 0x00fe2416 in __kernel_vsyscall ()

You didn't actually request a backtrace (bt), so all it shows is the top
stack frame. That doesn't tell us anything except that it's busy in a
system call in the kernel.

> > You can find out a bit more about what the kernel is doing using the
> > "magic" keyboard sequence "ALT-SysRQ-T" from a vconsole (not under X).
>
> Nothing happened.  Nothing useful in dmesg -- certainly no stacktraces.

Your kernel might not have the "magic sysrq key" enabled. Run:

  sudo sysctl -w kernel.sysrq=1

and try again. Note that on some systems with weird keyboards you might
have to hold the "Fn" key (if you have one) or disable "F-Lock" (if you
have it) to get SysRq to be recognised. The print screen / PrtScn key is
usually shared with SysRq even if it's not marked as such.
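
If that gets SysRq working and you want the setting to survive a reboot,
the usual approach (assuming your distro reads /etc/sysctl.conf at boot,
as Fedora does) is:

  echo 'kernel.sysrq = 1' | sudo tee -a /etc/sysctl.conf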

Hmm, it looks like the SysRq magic key sequences even work under X. I
didn't think they did, but on my system here hitting alt-sysrq-t under
X11 dumps a bunch of task trace data into /var/log/kern.log (Ubuntu
system), including task info and some general info on the CPU states.

Oct 27 14:20:19 wallace kernel: [13668207.501781] postgres      S 28e389e2     0  3152  31105
Oct 27 14:20:19 wallace kernel: [13668207.501781]  f352bcac 00200086 c3634358 28e389e2 00029346 00000000 c069a340
c07d2180
Oct 27 14:20:19 wallace kernel: [13668207.501781]  c51fe480 c51fe6f8 c3634300 ffffffff c3634300 00000000 c3634300
c51fe6f8
Oct 27 14:20:19 wallace kernel: [13668207.501781]  f352bca8 c01398c8 c90e0000 7fffffff c90e01a8 f352bcf8 c050ec7d
f352bcc8
Oct 27 14:20:19 wallace kernel: [13668207.501781] Call Trace:
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c01398c8>] ? enqueue_task_fair+0x68/0x70
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c050ec7d>] schedule_timeout+0xad/0xe0
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0156aea>] ? prepare_to_wait+0x3a/0x70
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c04aaf38>] unix_stream_data_wait+0x88/0xe0
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0156890>] ? autoremove_wake_function+0x0/0x50
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c04ac631>] unix_stream_recvmsg+0x311/0x490
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c02b0990>] ? apparmor_socket_recvmsg+0x10/0x20
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c04356e4>] sock_recvmsg+0xf4/0x120
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0156890>] ? autoremove_wake_function+0x0/0x50
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c01c3bb7>] ? __mem_cgroup_uncharge_common+0x137/0x170
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c01a39b8>] ? __dec_zone_page_state+0x18/0x20
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c01b0411>] ? page_remove_rmap+0x61/0x130
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c012eb00>] ? kunmap_atomic+0x50/0x60
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c043599c>] sys_recvfrom+0x7c/0xd0
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c019c305>] ? __pagevec_free+0x25/0x30
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c019eee0>] ? release_pages+0x160/0x1a0
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c02d2bfd>] ? rb_erase+0xcd/0x150
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0435a26>] sys_recv+0x36/0x40
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0435be7>] sys_socketcall+0x1b7/0x2b0
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c014617b>] ? sys_gettimeofday+0x2b/0x70
Oct 27 14:20:19 wallace kernel: [13668207.501781]  [<c0109eef>] sysenter_do_call+0x12/0x2f


BTW, if you do get kernel task trace information and you decide to
redact it, it'd be good to include at least all your `postgres'
instances, all `httpd' instances, and the summary information at the
end.

--
Craig Ringer

Re: Postmaster hangs

From: Karen Pease
Date:
Sorry for the delay in responding, and thanks for your help.


> You didn't actually request a backtrace (bt), so all it shows is the top
> stack frame. That doesn't tell us anything except that it's busy in a
> system call in the kernel.

For a newly-started psql process that locks as soon as the password is
entered:

#0  0x00fbe416 in __kernel_vsyscall ()
#1  0x00c7673b in poll () from /lib/libc.so.6
#2  0x00d20eaa in ?? () from /usr/lib/libpq.so.5
#3  0x00d20faa in ?? () from /usr/lib/libpq.so.5
#4  0x00d1b344 in ?? () from /usr/lib/libpq.so.5
#5  0x00d1d571 in PQsetdbLogin () from /usr/lib/libpq.so.5
#6  0x08055662 in geteuid ()
#7  0x00bb36e5 in __libc_start_main () from /lib/libc.so.6
#8  0x0804a9c1 in geteuid ()

And for the process that ran the update commands that locked it:

#0  0x00e12416 in __kernel_vsyscall ()
#1  0x00c7673b in poll () from /lib/libc.so.6
#2  0x00d20eaa in ?? () from /usr/lib/libpq.so.5
#3  0x00d20faa in ?? () from /usr/lib/libpq.so.5
#4  0x00d21023 in ?? () from /usr/lib/libpq.so.5
#5  0x00d2062a in PQgetResult () from /usr/lib/libpq.so.5
#6  0x00d20853 in ?? () from /usr/lib/libpq.so.5
#7  0x0804e97f in geteuid ()
#8  0x080528bb in geteuid ()
#9  0x0804b37c in geteuid ()
#10 0x08055856 in geteuid ()
#11 0x00bb36e5 in __libc_start_main () from /lib/libc.so.6
#12 0x0804a9c1 in geteuid ()


> > > You can find out a bit more about what the kernel is doing using the
> > > "magic" keyboard sequence "ALT-SysRQ-T" from a vconsole (not under X).
> >
> > Nothing happened.  Nothing useful in dmesg -- certainly no stacktraces.
>
> Your kernel might not have the "magic sysrq key" enabled. Run:
>
>   sudo sysctl -w kernel.sysrq=1
>
> and try again. Note that on some systems with weird keyboards you might
> have to hold the "Fn" key (if you have one) or disable "F-Lock" (if you
> have it) to get SysRq to be recognised. The print screen / PrtScn key is
> usually shared with SysRq even if it's not marked as such.

That sysctl command did the trick.  Here are the relevant sections:

psql          S c1ff7c00     0 13794  13755
       eb383bcc 00200082 f11b1c00 c1ff7c00 eb383bdc c087c67c c087fc00
c087fc00
       c087fc00 c24ecce0 c24ecf54 c1ff7c00 00000002 c1ff7c00 c24ecf54
c1ff7c00
       00000002 c1ff7c00 e7f75b40 c24ecf54 05f35f1a 7cc8a1e3 00005a28
c24ecf54
Call Trace:
 [<c06aaccb>] schedule_timeout+0x17/0xbc
 [<c049e1ef>] ? __pollwait+0xad/0xb6
 [<c0691aa2>] ? unix_poll+0x17/0x8a
 [<c049d41c>] do_sys_poll+0x292/0x348
 [<c049e142>] ? __pollwait+0x0/0xb6
 [<c0428230>] ? default_wake_function+0x0/0xd
 [<c051b115>] ? __next_cpu+0x15/0x25
 [<c0425b31>] ? tg_shares_up+0x165/0x178
 [<c04259cc>] ? tg_shares_up+0x0/0x178
 [<c041fa2f>] ? walk_tg_tree+0x62/0x72
 [<c041fa91>] ? tg_nop+0x0/0x5
 [<c04149bb>] ? apic_wait_icr_idle+0x17/0x21
 [<c04221dc>] ? task_rq_lock+0x49/0x73
 [<c041445e>] ? __send_IPI_dest_field+0x49/0x4e
 [<c0414595>] ? send_IPI_mask_bitmask+0x5e/0x79
 [<c041fc04>] ? update_curr+0x8d/0xf0
 [<c04226d4>] ? dequeue_entity+0x15/0x13a
 [<c0422bd8>] ? dequeue_task_fair+0x3d/0x42
 [<c04036bf>] ? __switch_to+0xb9/0x139
 [<c042b045>] ? finish_task_switch+0x2f/0xb0
 [<c06aa9bf>] ? schedule+0x6ee/0x70d
 [<c045a5c0>] ? utrace_reset+0x1e7/0x1ef
 [<c045af80>] ? utrace_stop+0x19b/0x1ac
 [<c045b00f>] ? finish_resume_report+0x7e/0x8f
 [<c045b85c>] ? utrace_get_signal+0x4a8/0x5b1
 [<c04a2d5b>] ? touch_atime+0xa0/0xba
 [<c0434ef2>] ? ptrace_report_signal+0x0/0x253
 [<c04393da>] ? dequeue_signal+0x92/0x105
 [<c043970f>] ? get_signal_to_deliver+0x2c2/0x2d6
 [<c0403f10>] ? do_notify_resume+0x71/0x679
 [<c042262a>] ? set_next_entity+0x8b/0xf7
 [<c04036bf>] ? __switch_to+0xb9/0x139
 [<c042b045>] ? finish_task_switch+0x2f/0xb0
 [<c06aa9bf>] ? schedule+0x6ee/0x70d
 [<c0461e8f>] ? audit_syscall_entry+0xf9/0x123
 [<c049d74d>] do_restart_poll+0x2f/0x52
 [<c0437017>] sys_restart_syscall+0x11/0x13
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
psql          S c087fc00     0 18118  12388
       ee7a0bdc 00200082 f7836680 c087fc00 c087fc00 c087c67c c087fc00
c087fc00
       c087fc00 c44bc010 c44bc284 c1fecc00 00000001 c1fecc00 cdb9adb4
c44bc284
       05f2d5b0 001280d2 c07fc90c c44bc284 05f53997 ee7a0bc8 ee7a0bc8
00000000
Call Trace:
 [<c06aaccb>] schedule_timeout+0x17/0xbc
 [<c049e1ef>] ? __pollwait+0xad/0xb6
 [<c0691aa2>] ? unix_poll+0x17/0x8a
 [<c049d41c>] do_sys_poll+0x292/0x348
 [<c049e142>] ? __pollwait+0x0/0xb6
 [<c0428230>] ? default_wake_function+0x0/0xd
 [<c051b115>] ? __next_cpu+0x15/0x25
 [<c0425b31>] ? tg_shares_up+0x165/0x178
 [<c04259cc>] ? tg_shares_up+0x0/0x178
 [<c041fa2f>] ? walk_tg_tree+0x62/0x72
 [<c041fa91>] ? tg_nop+0x0/0x5
 [<c06abe09>] ? _spin_lock+0x8/0xb
 [<c04221dc>] ? task_rq_lock+0x49/0x73
 [<c041fc04>] ? update_curr+0x8d/0xf0
 [<c04226d4>] ? dequeue_entity+0x15/0x13a
 [<c0422bd8>] ? dequeue_task_fair+0x3d/0x42
 [<c04036bf>] ? __switch_to+0xb9/0x139
 [<c042b045>] ? finish_task_switch+0x2f/0xb0
 [<c06aa9bf>] ? schedule+0x6ee/0x70d
 [<c040a793>] ? save_i387+0x103/0x19e
 [<c048f651>] ? kmem_cache_free+0x8c/0xa7
 [<c0403e30>] ? setup_sigcontext+0xfa/0x169
 [<c040aaf5>] ? restore_i387+0xdd/0x12f
 [<c040467c>] ? restore_sigcontext+0x151/0x1a8
 [<c0461e8f>] ? audit_syscall_entry+0xf9/0x123
 [<c049d6ee>] sys_poll+0x3a/0x6a
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
postmaster    D c1fc8a00     0 18125   2877
       eb1eeda8 00200082 eb1eed4c c1fc8a00 eb1eed4c c087c67c c087fc00
c087fc00
       c087fc00 c44be680 c44be8f4 c1fecc00 00000001 c1fecc00 f750ae38
eb1eed88
       c041f325 00000000 c1fc8a00 c44be8f4 05edcf8f 00200246 eb1eeddc
eb1eeda8
Call Trace:
 [<c041f325>] ? kunmap_atomic+0x87/0xa7
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c04718ab>] generic_file_aio_read+0x4ff/0x53d
 [<c0492b62>] do_sync_read+0xab/0xe9
 [<c0480500>] ? __vma_link+0x6e/0x73
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<c04f683e>] ? security_file_permission+0xf/0x11
 [<c0492ab7>] ? do_sync_read+0x0/0xe9
 [<c0493418>] vfs_read+0x81/0xdc
 [<c049350c>] sys_read+0x3b/0x60
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
postmaster    D 00000002     0 13813   2877
       eb199da8 00200086 c44b9cd8 00000002 eb199d58 c087c67c c087fc00
c087fc00
       c087fc00 c44b99a0 c44b9c14 c1ff7c00 00000002 c1ff7c00 ef2d1900
c087c67c
       c087fc00 ec018600 eb199d94 c44b9c14 05e4138d 00200246 eb199ddc
eb199da8
Call Trace:
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c04718ab>] generic_file_aio_read+0x4ff/0x53d
 [<c0492b62>] do_sync_read+0xab/0xe9
 [<c0446572>] ? clockevents_program_event+0xd4/0xe3
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<c041f842>] ? need_resched+0x18/0x22
 [<c06aaab2>] ? _cond_resched+0x8/0x32
 [<c04f683e>] ? security_file_permission+0xf/0x11
 [<c0492ab7>] ? do_sync_read+0x0/0xe9
 [<c0493418>] vfs_read+0x81/0xdc
 [<c049350c>] sys_read+0x3b/0x60
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D e8859ec0     0 16604   3285
       e8859f24 00000082 00000060 e8859ec0 c041f842 c087c67c c087fc00
c087fc00
       c087fc00 eaaf0000 eaaf0274 c2002c00 00000003 c2002c00 c4e99d2e
00005b61
       00000901 00000000 00000000 eaaf0274 000041fd 00000002 00000284
000001f4
Call Trace:
 [<c041f842>] ? need_resched+0x18/0x22
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab12f>] __mutex_lock_killable_slowpath+0x12/0x14
 [<c06ab15e>] ? mutex_lock_killable+0x2d/0x34
 [<c06ab15e>] mutex_lock_killable+0x2d/0x34
 [<c049ce61>] vfs_readdir+0x41/0x8f
 [<c049cbd0>] ? filldir64+0x0/0xcd
 [<c049cf12>] sys_getdents64+0x63/0xa1
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D e7db6ec0     0 16605   3285
       e7db6f24 00000082 00000060 e7db6ec0 c041f842 c087c67c c087fc00
c087fc00
       c087fc00 eaaf0cd0 eaaf0f44 c2002c00 00000003 c2002c00 bff09b8c
e7db6f60
       00000901 00000000 00000000 eaaf0f44 05f7ea2e 00000002 00000284
000001f4
Call Trace:
 [<c041f842>] ? need_resched+0x18/0x22
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab12f>] __mutex_lock_killable_slowpath+0x12/0x14
 [<c06ab15e>] ? mutex_lock_killable+0x2d/0x34
 [<c06ab15e>] mutex_lock_killable+0x2d/0x34
 [<c049ce61>] vfs_readdir+0x41/0x8f
 [<c049cbd0>] ? filldir64+0x0/0xcd
 [<c049cf12>] sys_getdents64+0x63/0xa1
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D ea951d80     0 16606   3285
       ea951ddc 00000086 ea951da8 ea951d80 c0498e59 c087c67c c087fc00
c087fc00
       c087fc00 eaaf6680 eaaf68f4 c2002c00 00000003 c2002c00 c04a1bec
ea951e04
       f740c088 00000006 94f29ee3 eaaf68f4 05f8a91b ea951e10 f740c088
ea951e10
Call Trace:
 [<c0498e59>] ? path_to_nameidata+0x11/0x2a
 [<c04a1bec>] ? __d_lookup+0xb3/0xe7
 [<c04a22b2>] ? dput+0x34/0x107
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab1bf>] __mutex_lock_slowpath+0x12/0x15
 [<c06ab040>] ? mutex_lock+0x29/0x2d
 [<c06ab040>] mutex_lock+0x29/0x2d
 [<c0499133>] do_lookup+0x6b/0x11e
 [<c049a5a1>] __link_path_walk+0x57e/0x6b5
 [<c049aac4>] path_walk+0x4c/0x9b
 [<c049acc0>] do_path_lookup+0x12d/0x175
 [<c049bdbb>] user_path_at+0x3c/0x62
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<c0495b75>] vfs_stat_fd+0x19/0x40
 [<c0495c4b>] vfs_stat+0x11/0x13
 [<c0495c61>] sys_stat64+0x14/0x2b
 [<c0493694>] ? generic_file_llseek+0x3a/0x46
 [<c0461e8f>] ? audit_syscall_entry+0xf9/0x123
 [<c040b098>] ? syscall_trace_enter+0xb5/0xc9
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D db54cec0     0 18182   3285
       db54cf24 00000082 00000060 db54cec0 c041f842 c087c67c c087fc00
c087fc00
       c087fc00 c44bb340 c44bb5b4 c2002c00 00000003 c2002c00 e2abbec3
00005b5c
       00000901 00000000 00000000 c44bb5b4 000041fd 00000002 00000284
000001f4
Call Trace:
 [<c041f842>] ? need_resched+0x18/0x22
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab12f>] __mutex_lock_killable_slowpath+0x12/0x14
 [<c06ab15e>] ? mutex_lock_killable+0x2d/0x34
 [<c06ab15e>] mutex_lock_killable+0x2d/0x34
 [<c049ce61>] vfs_readdir+0x41/0x8f
 [<c049cbd0>] ? filldir64+0x0/0xcd
 [<c049cf12>] sys_getdents64+0x63/0xa1
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D db755d80     0 21298   3285
       db755ddc 00000082 db755da8 db755d80 c0498e59 c087c67c c087fc00
c087fc00
       c087fc00 ec4d2670 ec4d28e4 c2002c00 00000003 c2002c00 3c45a442
00005b4e
       f740c088 00000006 94f29ee3 ec4d28e4 ea5cfcc0 db755e10 f740c088
db755e10
Call Trace:
 [<c0498e59>] ? path_to_nameidata+0x11/0x2a
 [<c04a22b2>] ? dput+0x34/0x107
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab1bf>] __mutex_lock_slowpath+0x12/0x15
 [<c06ab040>] ? mutex_lock+0x29/0x2d
 [<c06ab040>] mutex_lock+0x29/0x2d
 [<c0499133>] do_lookup+0x6b/0x11e
 [<c049a5a1>] __link_path_walk+0x57e/0x6b5
 [<c049aac4>] path_walk+0x4c/0x9b
 [<c049acc0>] do_path_lookup+0x12d/0x175
 [<c049bdbb>] user_path_at+0x3c/0x62
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<c0495b75>] vfs_stat_fd+0x19/0x40
 [<c0495c4b>] vfs_stat+0x11/0x13
 [<c0495c61>] sys_stat64+0x14/0x2b
 [<c0493694>] ? generic_file_llseek+0x3a/0x46
 [<c0461e8f>] ? audit_syscall_entry+0xf9/0x123
 [<c040b098>] ? syscall_trace_enter+0xb5/0xc9
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D cc21c700     0 21309   3285
       ee7afeb8 00000086 f893f20d cc21c700 ca80d110 c087c67c c087fc00
c087fc00
       c087fc00 ec4d6680 ec4d68f4 c1fe1c00 00000000 c1fe1c00 00000000
b557abd0
       ee7aff94 ee7afea4 c049cc67 ec4d68f4 05f3a48d 00000246 ee7afeec
ee7afeb8
Call Trace:
 [<f893f20d>] ? ext4_htree_store_dirent+0xdb/0xe6 [ext4]
 [<c049cc67>] ? filldir64+0x97/0xcd
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c049ce96>] vfs_readdir+0x76/0x8f
 [<c049cbd0>] ? filldir64+0x0/0xcd
 [<c049cf12>] sys_getdents64+0x63/0xa1
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D c17cf780     0 21310   3285
       ee7c0e14 00000082 c065fe70 c17cf780 ee7c0dc8 c087c67c c087fc00
c087fc00
       c087fc00 ec4d19a0 ec4d1c14 c1fe1c00 00000000 c1fe1c00 00000040
c06d9348
       ee7c0e54 ee7c0e0c c04acb17 ec4d1c14 05f6ab0b 00000246 ee7c0e48
ee7c0e14
Call Trace:
 [<c065fe70>] ? tcp_sendpage+0x0/0x39b
 [<c04acb17>] ? pipe_to_sendpage+0x56/0x61
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c04ad5fe>] splice_direct_to_actor+0x144/0x187
 [<c04ad156>] ? direct_splice_actor+0x0/0x1c
 [<c04ad6a8>] do_splice_direct+0x67/0x84
 [<c0492dd5>] do_sendfile+0x188/0x237
 [<c0492ebe>] sys_sendfile64+0x3a/0x7a
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D c1867000     0 21311   3285
       f12b8e14 00000082 c065fe70 c1867000 f12b8dc8 c087c67c c087fc00
c087fc00
       c087fc00 d4da0000 d4da0274 c1fe1c00 00000000 c1fe1c00 64fb23f0
00005a99
       f12b8e54 f12b8e0c c04acb17 d4da0274 f12b8df8 00000246 f12b8e48
f12b8e14
Call Trace:
 [<c065fe70>] ? tcp_sendpage+0x0/0x39b
 [<c04acb17>] ? pipe_to_sendpage+0x56/0x61
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c04ad5fe>] splice_direct_to_actor+0x144/0x187
 [<c04ad156>] ? direct_splice_actor+0x0/0x1c
 [<c04ad6a8>] do_splice_direct+0x67/0x84
 [<c0492dd5>] do_sendfile+0x188/0x237
 [<c0492ebe>] sys_sendfile64+0x3a/0x7a
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         D c1867000     0   681   3285
       e7df7e14 00000086 c065fe70 c1867000 e7df7dc8 c087c67c c087fc00
c087fc00
       c087fc00 cc328cd0 cc328f44 c2002c00 00000003 c2002c00 a29652c3
00005aa0
       e7df7e54 e7df7e0c c04acb17 cc328f44 e7df7df8 00000246 e7df7e48
e7df7e14
Call Trace:
 [<c065fe70>] ? tcp_sendpage+0x0/0x39b
 [<c04acb17>] ? pipe_to_sendpage+0x56/0x61
 [<c043f1e8>] ? prepare_to_wait+0x51/0x58
 [<f8853904>] start_this_handle+0x2cc/0x3dd [jbd2]
 [<c043efe2>] ? autoremove_wake_function+0x0/0x33
 [<f8853b7f>] jbd2_journal_start+0x8c/0xb9 [jbd2]
 [<f894de9b>] ext4_journal_start_sb+0x40/0x42 [ext4]
 [<f89420f8>] ext4_dirty_inode+0x24/0x97 [ext4]
 [<c04ac266>] __mark_inode_dirty+0x29/0x143
 [<c04a2d6f>] touch_atime+0xb4/0xba
 [<c04ad5fe>] splice_direct_to_actor+0x144/0x187
 [<c04ad156>] ? direct_splice_actor+0x0/0x1c
 [<c04ad6a8>] do_splice_direct+0x67/0x84
 [<c0492dd5>] do_sendfile+0x188/0x237
 [<c0492ebe>] sys_sendfile64+0x3a/0x7a
 [<c0404c8a>] syscall_call+0x7/0xb
httpd         D e9287ec0     0 14218   3285
       e9287f24 00000082 00000060 e9287ec0 c041f842 c087c67c c087fc00
c087fc00
       c087fc00 eb2c99a0 eb2c9c14 c2002c00 00000003 c2002c00 92108055
00005a55
       00000901 00000000 00000000 eb2c9c14 000041ff 00000009 00000284
000001f4
Call Trace:
 [<c041f842>] ? need_resched+0x18/0x22
 [<c06ab0ce>] __mutex_lock_common+0x8a/0xd9
 [<c06ab12f>] __mutex_lock_killable_slowpath+0x12/0x14
 [<c06ab15e>] ? mutex_lock_killable+0x2d/0x34
 [<c06ab15e>] mutex_lock_killable+0x2d/0x34
 [<c049ce61>] vfs_readdir+0x41/0x8f
 [<c049cbd0>] ? filldir64+0x0/0xcd
 [<c049cf12>] sys_getdents64+0x63/0xa1
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
httpd         S c06aaab2     0 18474   3285
       ef333d94 00000082 ef333d2c c06aaab2 ef333d38 c087c67c c087fc00
c087fc00
       c087fc00 ec6319a0 ec631c14 c1ff7c00 00000002 c1ff7c00 c087c8e8
ef333dc4
       ef333d80 c041f4f5 ef333d6c ec631c14 05f8a33e c04ec457 ef333dc8
ee32dea8
Call Trace:
 [<c06aaab2>] ? _cond_resched+0x8/0x32
 [<c041f4f5>] ? kmap_atomic_prot+0x1b0/0x1da
 [<c04ec457>] ? ipc_lock+0x2b/0x45
 [<c04ee244>] sys_semtimedop+0x4ca/0x579
 [<c047d8fb>] ? handle_mm_fault+0x2e8/0x6ec
 [<c041f325>] ? kunmap_atomic+0x87/0xa7
 [<c06adcde>] ? do_page_fault+0x3ac/0x710
 [<c041f8b4>] ? resched_task+0x3a/0x6e
 [<c06abfcb>] ? _spin_unlock_irqrestore+0x22/0x38
 [<c0422597>] ? __dequeue_entity+0x73/0x7b
 [<c048edad>] ? __slab_alloc+0x56/0x45b
 [<c04036bf>] ? __switch_to+0xb9/0x139
 [<c06abe09>] ? _spin_lock+0x8/0xb
 [<c043f21a>] ? add_wait_queue+0x2b/0x30
 [<c04b99b5>] ? ep_ptable_queue_proc+0x58/0x69
 [<c0407c02>] sys_ipc+0x48/0x149
 [<c040b098>] ? syscall_trace_enter+0xb5/0xc9
 [<c0404c8a>] syscall_call+0x7/0xb
 =======================
Sched Debug Version: v0.07, 2.6.27.37-170.2.104.fc10.i686 #1
now at 100508767.613095 msecs
  .sysctl_sched_latency                    : 60.000000
  .sysctl_sched_min_granularity            : 12.000000
  .sysctl_sched_wakeup_granularity         : 15.000000
  .sysctl_sched_child_runs_first           : 0.000001
  .sysctl_sched_features                   : 15999

cpu#0, 2397.603 MHz
  .nr_running                    : 3
  .load                          : 5169
  .nr_switches                   : 39955626
  .nr_load_updates               : 31722464
  .nr_uninterruptible            : 4294962483
  .jiffies                       : 100208693
  .next_balance                  : 100.207487
  .curr->pid                     : 1419
  .clock                         : 100508693.131789
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0

cfs_rq[0]:/
  .exec_clock                    : 7190027.411929
  .MIN_vruntime                  : 9629754.177543
  .min_vruntime                  : 9630117.494520
  .max_vruntime                  : 9630057.048647
  .spread                        : 302.871104
  .spread0                       : 0.000000
  .nr_running                    : 3
  .load                          : 5169
  .yld_exp_empty                 : 0
  .yld_act_empty                 : 0
  .yld_both_empty                : 0
  .yld_count                     : 5798
  .sched_switch                  : 0
  .sched_count                   : 57208443
  .sched_goidle                  : 33335034
  .ttwu_count                    : 21522783
  .ttwu_local                    : 19234927
  .bkl_count                     : 257158
  .nr_spread_over                : 227460
  .shares                        : 0

rt_rq[0]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 950.000000

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime
-------------------------------------------------------------------------------
        events/0    15   9630057.048647    166379   115   9630057.048647
6
           httpd  3387   9629754.177543     11379   120   9629754.177543
6
R nspluginviewer  1419   9632731.819953   3910675   120   9632731.819953
782

cpu#1, 2397.603 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 44557404
  .nr_load_updates               : 26043206
  .nr_uninterruptible            : 4294951028
  .jiffies                       : 100208693
  .next_balance                  : 100.208709
  .curr->pid                     : 0
  .clock                         : 100508772.127200
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0

cfs_rq[1]:/
  .exec_clock                    : 10760814.641896
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 16030451.392284
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : 6400333.897764
  .nr_running                    : 0
  .load                          : 0
  .yld_exp_empty                 : 0
  .yld_act_empty                 : 0
  .yld_both_empty                : 0
  .yld_count                     : 6012
  .sched_switch                  : 0
  .sched_count                   : 46573288
  .sched_goidle                  : 17609300
  .ttwu_count                    : 24786657
  .ttwu_local                    : 23332714
  .bkl_count                     : 329871
  .nr_spread_over                : 592231
  .shares                        : 0

rt_rq[1]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 950.000000

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime
-------------------------------------------------------------------------------

cpu#2, 2397.603 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 37194792
  .nr_load_updates               : 18136776
  .nr_uninterruptible            : 4294963669
  .jiffies                       : 100208693
  .next_balance                  : 100.208694
  .curr->pid                     : 0
  .clock                         : 100508693.131564
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 521
  .cpu_load[2]                   : 654
  .cpu_load[3]                   : 654
  .cpu_load[4]                   : 694

cfs_rq[2]:/
  .exec_clock                    : 4935114.000916
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 7676888.540213
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -1953228.954307
  .nr_running                    : 0
  .load                          : 0
  .yld_exp_empty                 : 0
  .yld_act_empty                 : 0
  .yld_both_empty                : 0
  .yld_count                     : 8319
  .sched_switch                  : 0
  .sched_count                   : 38119479
  .sched_goidle                  : 15526926
  .ttwu_count                    : 21012904
  .ttwu_local                    : 19121428
  .bkl_count                     : 108453
  .nr_spread_over                : 356716
  .shares                        : 0

rt_rq[2]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 950.000000

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime
-------------------------------------------------------------------------------

cpu#3, 2397.603 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 39364990
  .nr_load_updates               : 19956577
  .nr_uninterruptible            : 24732
  .jiffies                       : 100208693
  .next_balance                  : 100.208698
  .curr->pid                     : 0
  .clock                         : 100508778.377187
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0

cfs_rq[3]:/
  .exec_clock                    : 7390465.685135
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 8633311.798439
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -996805.696081
  .nr_running                    : 0
  .load                          : 0
  .yld_exp_empty                 : 0
  .yld_act_empty                 : 0
  .yld_both_empty                : 0
  .yld_count                     : 4123
  .sched_switch                  : 0
  .sched_count                   : 39982547
  .sched_goidle                  : 15706527
  .ttwu_count                    : 21587076
  .ttwu_local                    : 20679894
  .bkl_count                     : 314888
  .nr_spread_over                : 391794
  .shares                        : 0

rt_rq[3]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 950.000000

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime
-------------------------------------------------------------------------------

It's Greek to me, but I hope this helps.  Thanks again!

    - Karen

Re: Postmaster hangs

From: Craig Ringer
Date:
Karen Pease wrote:
> Sorry for the delay in responding, and thanks for your help.

Sorry I have to keep on asking things. Unfortunately it's sometimes
quite hard to tell what part of a system is misbehaving, and it can
require the collection of quite a bit of diagnostic data :S

From what I've seen below (see inline comments), it looks like most of
the tasks you've provided information for are busy doing atime updates on
files after reading them. This doesn't necessarily mean the file system
and the atime updates are part of the problem, but given that you're
observing problems and the problem processes seem to be busy doing
atime updates, it's at least interesting.

Possible explanations that spring to mind:

- Your disk subsystem is hopelessly overloaded and is trying to service
too many requests, so all you're really seeing is processes stuck in
synchronous read requests in the kernel while they wait for I/O
operations to complete. The atime writes add lots of small writes in
amongst the reads, which can really kill the performance of some RAID
systems in particular. However, if this was the problem then you'd
expect that after killing the problem processes with a SIGTERM (kill
-15) they'd eventually exit, even if it took a few minutes. Also, if
you ran "vmstat 1" you'd see a fair bit of disk activity.

- There's something wrong with your storage subsystem, so that it's
failing to return from some read or write attempts. I've seen this
before when bad blocks on a disk cause almost endless retries, with the
reading process stuck waiting for a read that'll take hours to return if
it ever does. You'd usually see signs of this sort of issue in the
kernel log (dmesg) for direct disk access, in hardware RAID controller
monitoring tools if it's hardware RAID, in `mdadm --detail' for Linux
software RAID, etc. smartctl from smartmontools can check individual ATA
disks for bad sectors and can report their internal error logs; this
won't help for hardware RAID arrays or SCSI/FC disks. (A few example
commands for these checks are sketched just after this list.)

- You've hit a kernel bug, probably related to ext4.

- ... something I haven't thought of ...
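
A few concrete commands for the storage checks above (the device names
are taken from your earlier df/mdstat output -- adjust as needed):

  vmstat 1                   # is there any ongoing disk activity at all?
  mdadm --detail /dev/md1    # state of the software RAID array under /scratch
  smartctl -a /dev/sda       # SMART health and error log for one member disk
  dmesg | tail -n 50         # recent kernel messages about I/O errors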


Anyway, first, turn off atime updates, at least for any file system you
use PostgreSQL on. It's a performance killer. The "noatime" mount option
will disable atime updates for ext2/3/4; add it to the options section
in /etc/fstab, then "mount -o remount" the file system(s) of interest.

Second, consider updating to the latest Linux kernel and seeing if the
problem goes away.

Details below:

> For a newly-started psql process that locks as soon as the password is
> entered:
>
> #0  0x00fbe416 in __kernel_vsyscall ()
> #1  0x00c7673b in poll () from /lib/libc.so.6
> #2  0x00d20eaa in ?? () from /usr/lib/libpq.so.5
> #3  0x00d20faa in ?? () from /usr/lib/libpq.so.5
> #4  0x00d1b344 in ?? () from /usr/lib/libpq.so.5
> #5  0x00d1d571 in PQsetdbLogin () from /usr/lib/libpq.so.5
> #6  0x08055662 in geteuid ()
> #7  0x00bb36e5 in __libc_start_main () from /lib/libc.so.6
> #8  0x0804a9c1 in geteuid ()

You're missing debug symbols for libpq, so gdb can't show line numbers
in libpq, and it can't even show function names unless the function is a
public export.  Your Linux distro clearly separates debug info out of
packages, and you'll need to install some debug symbol packages. For
Fedora, you can find instructions at:

https://fedoraproject.org/wiki/StackTraces#debuginfo

In this case it's probably not important since it's pretty clear any
issue is on the backend side, but in general it's a good idea to make
sure you have appropriate debug info packages installed, as noted in the
Pg wiki article on getting a stack trace.
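
On Fedora that usually comes down to something like (assuming the
debuginfo-install helper from yum-utils is available):

  # pulls in postgresql-debuginfo, which should also cover libpq since
  # libpq is built from the same source package:
  sudo debuginfo-install postgresql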

> And for the process that ran the update commands that locked it:
>
> #0  0x00e12416 in __kernel_vsyscall ()
> #1  0x00c7673b in poll () from /lib/libc.so.6
> #2  0x00d20eaa in ?? () from /usr/lib/libpq.so.5
> #3  0x00d20faa in ?? () from /usr/lib/libpq.so.5
> #4  0x00d21023 in ?? () from /usr/lib/libpq.so.5
> #5  0x00d2062a in PQgetResult () from /usr/lib/libpq.so.5
> #6  0x00d20853 in ?? () from /usr/lib/libpq.so.5
> #7  0x0804e97f in geteuid ()
> #8  0x080528bb in geteuid ()
> #9  0x0804b37c in geteuid ()
> #10 0x08055856 in geteuid ()
> #11 0x00bb36e5 in __libc_start_main () from /lib/libc.so.6
> #12 0x0804a9c1 in geteuid ()

OK, so nothing special - it's waiting for something to read on a socket,
in this case presumably waiting for the Pg backend to respond.

> That sysctl command did the trick.  Here's the relevant sections:

OK, so both postgresql instances (it's hard to tell from here which is a
backend and which is the postmaster) appear to be doing atime updates
for files touched in disk reads, or at least that's how it looks with my
very minimal kernel knowledge.

Second: From this one snapshot in time it's not possible to tell what
(if anything) is going wrong with the postgresql backends. Everything
looks pretty sane. The kernel stack trace doesn't tell us if PostgreSQL
is somehow stuck doing this atime update or whether it's just busy
touching a lot of files. There's nothing inconsistent with a big, busy
query.

This is when I start wanting to be able to play with the machine. It's
hard to do interactive debugging remotely, and that's what I really want
to do to get a handle on this. I don't think I'm smart enough to figure
this one out otherwise, though others here on this list probably are.


As for Apache: about half of your httpd instances appear to be waiting
on a lock. The other half are updating atime metadata on files after
performing reads. And this one, I'm not sure what it's doing:

> httpd         S c06aaab2     0 18474   3285
>        ef333d94 00000082 ef333d2c c06aaab2 ef333d38 c087c67c c087fc00
> c087fc00
>        c087fc00 ec6319a0 ec631c14 c1ff7c00 00000002 c1ff7c00 c087c8e8
> ef333dc4
>        ef333d80 c041f4f5 ef333d6c ec631c14 05f8a33e c04ec457 ef333dc8
> ee32dea8
> Call Trace:
>  [<c06aaab2>] ? _cond_resched+0x8/0x32
>  [<c041f4f5>] ? kmap_atomic_prot+0x1b0/0x1da
>  [<c04ec457>] ? ipc_lock+0x2b/0x45
>  [<c04ee244>] sys_semtimedop+0x4ca/0x579
>  [<c047d8fb>] ? handle_mm_fault+0x2e8/0x6ec
>  [<c041f325>] ? kunmap_atomic+0x87/0xa7
>  [<c06adcde>] ? do_page_fault+0x3ac/0x710
>  [<c041f8b4>] ? resched_task+0x3a/0x6e
>  [<c06abfcb>] ? _spin_unlock_irqrestore+0x22/0x38
>  [<c0422597>] ? __dequeue_entity+0x73/0x7b
>  [<c048edad>] ? __slab_alloc+0x56/0x45b
>  [<c04036bf>] ? __switch_to+0xb9/0x139
>  [<c06abe09>] ? _spin_lock+0x8/0xb
>  [<c043f21a>] ? add_wait_queue+0x2b/0x30
>  [<c04b99b5>] ? ep_ptable_queue_proc+0x58/0x69
>  [<c0407c02>] sys_ipc+0x48/0x149
>  [<c040b098>] ? syscall_trace_enter+0xb5/0xc9
>  [<c0404c8a>] syscall_call+0x7/0xb

--
Craig Ringer

Re: Postmaster hangs

From: Craig Ringer
Date:
On 31/10/2009 4:32 PM, Karen Pease wrote:
> Sorry for the delay in responding, and thanks for your help.

By the way, you may also be able to learn some more using the 'blktrace'
tool. This gathers low-level data about what processes are performing
I/O on your system.

# mount -t debugfs none /sys/kernel/debug
# btrace /dev/YOUR_DISK

E.g. on my workstation, Pg is busy on /dev/md1, so:

# btrace /dev/md1
>   9,1    0    13496     2.514811417 31109  A   W 114959520 + 8 <- (252,3) 31073056
>   9,1    0    13497     2.514811732 31109  Q   W 114959520 + 8 [postgres]
>   9,1    0    13498     2.514816413 31109  A   W 114959528 + 8 <- (252,3) 31073064
>   9,1    0    13499     2.514816732 31109  Q   W 114959528 + 8 [postgres]
>   9,1    0    13500     2.514819960 31109  A   W 114959536 + 8 <- (252,3) 31073072
>   9,1    0    13501     2.514820280 31109  Q   W 114959536 + 8 [postgres]
>   9,1    0    13502     2.514823399 31109  A   W 114959544 + 8 <- (252,3) 31073080
>   9,1    0    13503     2.514823737 31109  Q   W 114959544 + 8 [postgres]
>   9,1    0    13504     2.514826792 31109  A   W 114959552 + 8 <- (252,3) 31073088
>   9,1    0    13505     2.514827112 31109  Q   W 114959552 + 8 [postgres]
>   9,1    0    13506     2.514830715 31109  A   W 114959560 + 8 <- (252,3) 31073096
>   9,1    0    13507     2.514831034 31109  Q   W 114959560 + 8 [postgres]
>   9,1    0    13508     2.514834278 31109  A   W 114959568 + 8 <- (252,3) 31073104
>   9,1    0    13509     2.514834593 31109  Q   W 114959568 + 8 [postgres]
>   9,1    0    13510     2.514838977 31109  U   N [postgres] 0
>   9,1    0    13511     2.515773952 31109  U   N [postgres] 0
>   9,1    0    13512     2.515778889    16  U   N [kblockd/0] 0
>   9,1    0    13513     2.519146885 31109  U   N [postgres] 0
>   9,1    0    13514     2.520511306 31109  U   N [postgres] 0
> CPU0 (9,1):
>  Reads Queued:           1,        4KiB  Writes Queued:        6718,    26872KiB
>  Read Dispatches:        0,        0KiB  Write Dispatches:        0,        0KiB
>  Reads Requeued:         0               Writes Requeued:         0
>  Reads Completed:        0,        0KiB  Writes Completed:        0,        0KiB
>  Read Merges:            0,        0KiB  Write Merges:            0,        0KiB
>  Read depth:             0               Write depth:             0
>  IO unplugs:            53               Timer unplugs:           6

(ignore the quotes; they work around a stupid Thunderbird limitation)

--
Craig Ringer