Discussion: Core dump


Core dump

From: Dan Moschuk
Date:
Sparc solaris 2.7 with postgres 7.0.2

It seems to be reproducible, the server crashes on us at a rate of about
every few hours.

Any ideas?

GNU gdb 4.17
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.7"...

warning: core file may not match specified executable file.
Core was generated by `postmaster -i -N 128 -B 512'.
Program terminated with signal 10, Bus Error.
Reading symbols from /usr/lib/libgen.so.1...done.
Reading symbols from /usr/lib/libcrypt_i.so.1...done.
Reading symbols from /usr/lib/libnsl.so.1...done.
Reading symbols from /usr/lib/libsocket.so.1...done.
Reading symbols from /usr/lib/libdl.so.1...done.
Reading symbols from /usr/lib/libm.so.1...done.
Reading symbols from /usr/lib/libcurses.so.1...done.
Reading symbols from /usr/lib/libc.so.1...done.
Reading symbols from /usr/lib/libmp.so.2...done.
Reading symbols from /usr/platform/SUNW,Ultra-2/lib/libc_psr.so.1...done.
Reading symbols from /usr/lib/nss_files.so.1...done.
#0  0xff145fa0 in _morecore ()
(gdb) bt
#0  0xff145fa0 in _morecore ()
#1  0xff1457c8 in _malloc_unlocked ()
#2  0xff1455bc in malloc ()
#3  0x1dd170 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:263
#4  0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#5  <signal handler called>
#6  0xff145c04 in realfree ()
#7  0xff14581c in _malloc_unlocked ()
#8  0xff1455bc in malloc ()
#9  0x1dce4c in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:176
#10 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#11 <signal handler called>
#12 0xff19814c in _libc_write ()
#13 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#14 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#15 <signal handler called>
#16 0xff19814c in _libc_write ()
#17 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#18 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#19 <signal handler called>
#20 0x1dcf7c in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:205
#21 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#22 <signal handler called>
#23 0xff19814c in _libc_write ()
#24 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#25 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#26 <signal handler called>
#27 0xff19814c in _libc_write ()
#28 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#29 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#30 <signal handler called>
#31 0xff17e1c0 in _doprnt ()
#32 0xff181d0c in vsnprintf ()
#33 0x1dd100 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:249
#34 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#35 <signal handler called>
#36 0xff19814c in _libc_write ()
#37 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#38 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#39 <signal handler called>
#40 0xff19814c in _libc_write ()
#41 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#42 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#43 <signal handler called>
#44 0xff177d34 in dcgettext_u ()
#45 0xff177cc4 in dgettext ()
#46 0x1dcd84 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:159
#47 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#48 <signal handler called>
#49 0xff19814c in _libc_write ()
#50 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#51 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#52 <signal handler called>
#53 0xff136df0 in strlen ()
#54 0x1dcddc in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:172
#55 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#56 <signal handler called>
#57 0xff19814c in _libc_write ()
#58 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#59 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#60 <signal handler called>
#61 0xff19814c in _libc_write ()
#62 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#63 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#64 <signal handler called>
#65 0xff19814c in _libc_write ()
#66 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#67 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#68 <signal handler called>
#69 0xff19814c in _libc_write ()
#70 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...)
#71 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#72 <signal handler called>
#73 0xff19814c in _libc_write ()
#74 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#75 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#76 <signal handler called>
#77 0xff19814c in _libc_write ()
#78 0x1dd210 in elog (lev=0, fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...) at elog.c:312
#79 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
#80 <signal handler called>
#81 0xff195dd4 in _poll ()
#82 0xff14e79c in select ()
#83 0x14df58 in s_lock_sleep (spin=18) at s_lock.c:62
#84 0x14dfa0 in s_lock (lock=0xff270011 "ÿ", file=0x2197c8 "spin.c", line=127)   at s_lock.c:76
#85 0x154620 in SpinAcquire (lockid=0) at spin.c:127
#86 0x149100 in ReadBufferWithBufferLock (reln=0x2ce4e8, blockNum=4323,    bufferLockHeld=1 '\001') at bufmgr.c:297
#87 0x14a130 in ReleaseAndReadBuffer (buffer=360, relation=0x2ce4e8,    blockNum=4323) at bufmgr.c:900
#88 0x4d5c4 in heapgettup (relation=0x2ce4e8, tuple=0x2d7a60, dir=1, buffer=0x2d7a8c, snapshot=0x2d7648, nkeys=0, key=0x0) at heapam.c:488
#89 0x4ee00 in heap_getnext (scandesc=0x2d7a48, backw=0) at heapam.c:973
#90 0xc5c40 in SeqNext (node=0x2d6120) at nodeSeqscan.c:101
#91 0xbb674 in ExecScan (node=0x2d6120, accessMtd=0xc5afc <SeqNext>)   at execScan.c:103
#92 0xc5ccc in ExecSeqScan (node=0x2d6120) at nodeSeqscan.c:150
#93 0xb7a3c in ExecProcNode (node=0x2d6120, parent=0x2d6120)   at execProcnode.c:268
#94 0xb589c in ExecutePlan (estate=0x2d7858, plan=0x2d6120, operation=CMD_SELECT, offsetTuples=0, numberTuples=0, direction=ForwardScanDirection, destfunc=0x2d7698) at execMain.c:1052
#95 0xb47bc in ExecutorRun (queryDesc=0x2d7a30, estate=0x2d7858, feature=3, limoffset=0x0, limcount=0x0) at execMain.c:291
#96 0x165bd8 in ProcessQueryDesc (queryDesc=0x2d7a30, limoffset=0x0, limcount=0x0) at pquery.c:310
#97 0x165c90 in ProcessQuery (parsetree=0x2d5470, plan=0x2d6120, dest=Remote) at pquery.c:353
#98 0x163650 in pg_exec_query_dest (query_string=0x26b3d8 "SELECT campid, login, pass FROM ppc_campaigns WHERE login = 'xxx' AND pass = 'xxx'", dest=Remote, aclOverride=0 '\000') at postgres.c:663
#99 0x163404 in pg_exec_query (query_string=0x26b3d8 "SELECT campid, login, pass FROM ppc_campaigns WHERE login = 'xxx' AND pass = 'xxx'") at postgres.c:562
#100 0x1650ac in PostgresMain (argc=6, argv=0xffbef198, real_argc=6,    real_argv=0xffbefd9c) at postgres.c:1590
#101 0x1319cc in DoBackend (port=0x279360) at postmaster.c:2009
#102 0x13117c in BackendStartup (port=0x279360) at postmaster.c:1776
#103 0x12f6c4 in ServerLoop () at postmaster.c:1037
#104 0x12ec34 in PostmasterMain (argc=6, argv=0xffbefd9c) at postmaster.c:725
#105 0xd8abc in main (argc=6, argv=0xffbefd9c) at main.c:93


-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.               -- Oscar Wilde


Re: Core dump

From: Alfred Perlstein
Date:
* Dan Moschuk <dan@freebsd.org> [001012 09:47] wrote:
> 
> Sparc solaris 2.7 with postgres 7.0.2
> 
> It seems to be reproducible, the server crashes on us at a rate of about
> every few hours.
> 
> Any ideas?
> 
> GNU gdb 4.17
> Copyright 1998 Free Software Foundation, Inc.

[snip]

> #78 0x1dd210 in elog (lev=0, 
>     fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going "...)
>     at elog.c:312
> #79 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
> #80 <signal handler called>
> #81 0xff195dd4 in _poll ()
> #82 0xff14e79c in select ()
> #83 0x14df58 in s_lock_sleep (spin=18) at s_lock.c:62
> #84 0x14dfa0 in s_lock (lock=0xff270011 "ÿ", file=0x2197c8 "spin.c", line=127)
>     at s_lock.c:76
> #85 0x154620 in SpinAcquire (lockid=0) at spin.c:127
> #86 0x149100 in ReadBufferWithBufferLock (reln=0x2ce4e8, blockNum=4323, 
>     bufferLockHeld=1 '\001') at bufmgr.c:297

% uname -sr
SunOS 5.7

from sys/signal.h:

#define SIGUSR1 16      /* user defined signal 1 */

Are you sure you don't have any application running amok sending
signals to processes it shouldn't?  Getting a superfluous signal
seems out of place; this doesn't look like a crash or anything
because USR1 isn't delivered by the kernel afaik.
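
One quick way to check (a throwaway sketch, nothing from the actual setup
here -- the handler name and the wait loop are made up for illustration) is
to install the handler with SA_SIGINFO and log who sent the signal:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical diagnostic: report which pid sent us SIGUSR1. */
static void
usr1_report(int sig, siginfo_t *info, void *ctx)
{
    /* si_pid/si_uid are filled in for signals sent with kill().
     * (fprintf from a handler isn't strictly safe, but it will do
     * for a one-off diagnostic.) */
    fprintf(stderr, "SIGUSR1 from pid %ld (uid %ld)\n",
            (long) info->si_pid, (long) info->si_uid);
}

int
main(void)
{
    struct sigaction sa;

    sa.sa_sigaction = usr1_report;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_SIGINFO;          /* ask the kernel for a siginfo_t */
    sigaction(SIGUSR1, &sa, NULL);

    for (;;)
        pause();                       /* sit and wait for signals */
}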

And why are you using solaris?  *smack*

And why isn't postmaster either blocking these signals or shutting
down cleanly on receipt of them?

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."


Re: Core dump

From: Dan Moschuk
Date:
| % uname -sr
| SunOS 5.7
| 
| from sys/signal.h:
| 
| #define SIGUSR1 16      /* user defined signal 1 */
| 
| Are you sure you don't have any application running amok sending
| signals to processes it shouldn't?  Getting a superfluous signal
| seems out of place; this doesn't look like a crash or anything
| because USR1 isn't delivered by the kernel afaik.

None of the applications running on that server use SIGUSR1.  I haven't
looked through the code yet, but I figure postgres itself was sending
the SIGUSR1.

| And why are you using solaris?  *smack*

Well, because our main database server is a sparc, and _someone_ never
got around to finishing his sparc port. :-)

-Dan
-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.               -- Oscar Wilde


Re: Core dump

From: Tom Lane
Date:
Dan Moschuk <dan@freebsd.org> writes:
> Sparc solaris 2.7 with postgres 7.0.2
> It seems to be reproducible, the server crashes on us at a rate of about
> every few hours.

That's a very bizarre backtrace.  Why the multiple levels of recursive
entry to the quickdie() signal handler?  I wonder if you aren't looking
at some kind of Solaris bug --- perhaps it's not able to cope with a
signal handler turning around and issuing new kernel calls.

The core file you are looking at is probably *not* from the original
failure, whatever that is.  The sequence is probably

1. Some backend crashes for unknown reason, dumping core.

2. Postmaster observes messy death of a child, decides that mass suicide
   followed by restart is called for.  Postmaster sends SIGUSR1 to all
   remaining backends to make them commit hara-kiri.

3. One or more other backends crash trying to obey postmaster's command.
   The corefile left for you to examine comes from whichever crashed
   last.

So there are at least two problems here, but we only have evidence of
the second one.

Since the problem is fairly reproducible, I'd suggest you temporarily
dike out the elog(NOTICE) call in quickdie() (in
src/backend/tcop/postgres.c), which will probably allow the backends
to honor SIGUSR1 without dumping core.  Then you have a shot at seeing
the core from the original failure.
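
For concreteness, the change amounts to something like this (just a sketch
pieced together from the quickdie() fragment quoted later in this thread,
not the verbatim 7.0.2 source):

void
quickdie(SIGNAL_ARGS)
{
    PG_SETMASK(&BlockSig);
#if 0   /* temporarily diked out: elog() does malloc()/write(), which is
         * exactly where the recursive frames in your core are stuck */
    elog(NOTICE, "Message from PostgreSQL backend:"
         /* ...rest of the NOTICE text... */ );
#endif
    /* ...rest of quickdie() unchanged... */
}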

Assuming that this works (ie, you find a core that's not got anything
to do with quickdie()), I'd suggest an inquiry to Sun about whether
their signal handler logic hasn't got a problem with write() issued
from inside a signal handler.  Meanwhile let us know what the new
backtrace shows.
        regards, tom lane


Re: Core dump

From: Dan Moschuk
Date:
| > Sparc solaris 2.7 with postgres 7.0.2
| > It seems to be reproducible, the server crashes on us at a rate of about
| > every few hours.
| 
| That's a very bizarre backtrace.  Why the multiple levels of recursive
| entry to the quickdie() signal handler?  I wonder if you aren't looking
| at some kind of Solaris bug --- perhaps it's not able to cope with a
| signal handler turning around and issuing new kernel calls.

I'm not sure that is the issue, see below.

| The core file you are looking at is probably *not* from the original
| failure, whatever that is.  The sequence is probably
| 
| 1. Some backend crashes for unknown reason, dumping core.
| 
| 2. Postmaster observes messy death of a child, decides that mass suicide
|    followed by restart is called for.  Postmaster sends SIGUSR1 to all
|    remaining backends to make them commit hara-kiri.
| 
| 3. One or more other backends crash trying to obey postmaster's command.
|    The corefile left for you to examine comes from whichever crashed
|    last.
| 
| So there are at least two problems here, but we only have evidence of
| the second one.
| 
| Since the problem is fairly reproducible, I'd suggest you temporarily
| dike out the elog(NOTICE) call in quickdie() (in
| src/backend/tcop/postgres.c), which will probably allow the backends
| to honor SIGUSR1 without dumping core.  Then you have a shot at seeing
| the core from the original failure.

I will try this, however the database is currently running under light load.
Only under high load does postgres start to choke, and eventually die.

| Assuming that this works (ie, you find a core that's not got anything
| to do with quickdie()), I'd suggest an inquiry to Sun about whether
| their signal handler logic hasn't got a problem with write() issued
| from inside a signal handler.  Meanwhile let us know what the new
| backtrace shows.

I wrote a quick test program to test this theory.  Below is the code and the
output.

#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>

static void moo (int);

int
main (void)
{
        signal(SIGUSR1, moo);
        raise(SIGUSR1);
}

static void
moo (cow)
        int cow;
{
        printf("Getting ready for write()\n");
        write(STDOUT_FILENO, "Hello!\n", 7);
        printf("Done.\n");
}

eclipse% ./x
Getting ready for write()
Hello!
Done.
eclipse% 

It would appear from that very rough test program that solaris doesn't mind
system calls from within a signal handler.
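
Of course that only exercises a single, non-nested delivery, while the
backtrace shows the handler being re-entered over and over.  A test closer
to that case (hypothetical, written here for illustration -- the handler
re-raises the signal itself, with SA_NODEFER so SIGUSR1 isn't blocked while
the handler runs) would be:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t depth = 0;

static void
moo2(int sig)
{
    write(STDOUT_FILENO, "entered handler\n", 16);
    if (++depth < 5)
        raise(SIGUSR1);            /* re-enter the handler, as in the core */
    write(STDOUT_FILENO, "leaving handler\n", 16);
}

int
main(void)
{
    struct sigaction sa;

    sa.sa_handler = moo2;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_NODEFER;      /* don't block SIGUSR1 during the handler */
    sigaction(SIGUSR1, &sa, NULL);
    raise(SIGUSR1);
    return 0;
}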

-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.               -- Oscar Wilde


Re: Core dump

From: Tom Lane
Date:
Dan Moschuk <dan@freebsd.org> writes:
> It would appear from that very rough test program that solaris doesn't mind
> system calls from within a signal handler.

Still, it's a mighty peculiar backtrace.

After looking at postmaster.c, I see that the postmaster will issue
SIGUSR1 to all remaining backends *each* time it sees a child exit
with nonzero status.  And it just so happens that quickdie() chooses
to exit with exit(1) not exit(0).  So a new theory is

1. Some backend crashes.

2. Postmaster issues SIGUSR1 to all remaining backends.

3. As each backend gives up the ghost, postmaster gets another wait()
   response and issues another SIGUSR1 to the ones that are left.

4. Last remaining backend has been SIGUSR1'd enough times to overrun
   stack memory, leading to coredump.

I'm not too enamored of this theory because it doesn't explain the
perfect repeatability shown in your backtrace.  It seems unlikely that
each recursive quickdie() call would get just as far as elog's write()
and no farther before the postmaster is able to issue another signal.
Still, it's a possibility.

We should probably tweak the postmaster to be less enthusiastic about
signaling its children repeatedly.

Meanwhile, have you tried looking in the postmaster log?  The postmaster
should have logged at least the exit status for the first backend to
fail.
        regards, tom lane


Re: Core dump

From: Dan Moschuk
Date:
| Still, it's a mighty peculiar backtrace.

Indeed.

| After looking at postmaster.c, I see that the postmaster will issue
| SIGUSR1 to all remaining backends *each* time it sees a child exit
| with nonzero status.  And it just so happens that quickdie() chooses
| to exit with exit(1) not exit(0).  So a new theory is
| 
| 1. Some backend crashes.
| 
| 2. Postmaster issues SIGUSR1 to all remaining backends.
| 
| 3. As each backend gives up the ghost, postmaster gets another wait()
|    response and issues another SIGUSR1 to the ones that are left.
| 
| 4. Last remaining backend has been SIGUSR1'd enough times to overrun
|    stack memory, leading to coredump.

This theory might make a little more sense with the explanation below.

| I'm not too enamored of this theory because it doesn't explain the
| perfect repeatability shown in your backtrace.  It seems unlikely that
| each recursive quickdie() call would get just as far as elog's write()
| and no farther before the postmaster is able to issue another signal.
| Still, it's a possibility.

Well, when this happens the machine is _heavily_ loaded.  It could be that
the write()s are just taking longer than they should, leaving enough time
for another SIGUSR1 to arrive.  That may also explain why so many SIGUSR1s
are being sent, as the heavily loaded machine tends not to clean up its
children as fast as expected.

| We should probably tweak the postmaster to be less enthusiastic about
| signaling its children repeatedly.

Perhaps have postgres ignore SIGUSR1 after it has already received one?
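
Something along these lines, say (a standalone sketch, not a patch against
the postgres source -- the handler installs SIG_IGN for SIGUSR1 as its very
first action, so any further deliveries become no-ops):

#include <signal.h>
#include <unistd.h>

static void
die_once(int sig)
{
    signal(SIGUSR1, SIG_IGN);              /* further SIGUSR1s are ignored */
    write(STDOUT_FILENO, "shutting down\n", 14);
    /* ...cleanup and exit would go here... */
}

int
main(void)
{
    signal(SIGUSR1, die_once);
    raise(SIGUSR1);
    raise(SIGUSR1);                        /* second one is now a no-op */
    return 0;
}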

Regards,
-Dan
-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.               -- Oscar Wilde


Re: Core dump

From: Tom Lane
Date:
Dan Moschuk <dan@freebsd.org> writes:
> | We should probably tweak the postmaster to be less enthusiastic about
> | signaling its children repeatedly.

> Perhaps have postgres ignore SIGUSR1 after it has already received one?

Now that you mention it, it tries to do exactly that:

void
quickdie(SIGNAL_ARGS)
{
    PG_SETMASK(&BlockSig);
    elog(NOTICE, "Message from PostgreSQL backend:"...

BlockSig includes SIGUSR1.  So why is the quickdie() routine entered
again?  I'm back to suspecting something funny in Solaris' signal
handling...
        regards, tom lane


Re: Core dump

From: Tom Lane
Date:
I said:
> BlockSig includes SIGUSR1.

Oh, wait, I take that back.  It's initialized that way, but then
postmaster.c removes SIGUSR1 from the set.
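
In other words (an illustrative sketch only, not the actual postmaster.c
code -- the function name here is made up), the sequence is roughly:

#include <signal.h>

sigset_t BlockSig;

void
init_block_sig(void)
{
    sigfillset(&BlockSig);              /* start out blocking everything... */
    sigdelset(&BlockSig, SIGUSR1);      /* ...except SIGUSR1, per postmaster.c */
    /* quickdie()'s PG_SETMASK(&BlockSig) then leaves SIGUSR1 deliverable,
     * so a second SIGUSR1 can re-enter the handler -- which is exactly
     * what the nested frames in the backtrace show. */
}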
        regards, tom lane


Re: Core dump

From: Dan Moschuk
Date:
| I said:
| > BlockSig includes SIGUSR1.
| 
| Oh, wait, I take that back.  It's initialized that way, but then
| postmaster.c removes SIGUSR1 from the set.
| 
|             regards, tom lane

So, back to my initial question, why not make each backend SIG_IGN
SIGUSR1 after it receives one?

-Dan
-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.               -- Oscar Wilde