pg_dump not in very good shape
From | Tom Lane
---|---
Subject | pg_dump not in very good shape
Date |
Msg-id | 17930.947996172@sss.pgh.pa.us
Responses | Re: [HACKERS] pg_dump not in very good shape (Bruce Momjian <pgman@candle.pha.pa.us>)
 | Re: [HACKERS] pg_dump not in very good shape (Peter Eisentraut <peter_e@gmx.net>)
List | pgsql-hackers
I have repaired the most recently introduced coredump in pg_dump, but it still crashes on the regression test database.

Issue 1: The "most recently introduced coredump" came from the change to oidvector/int2vector to suppress trailing zeroes in the output routine. pg_dump was assuming that it would see exactly the right number of zeroes, and wasn't bothering to initialize any leftover array locations --- but it would happily try to dereference those locations later on. Ugh.

Although cleaning up pg_dump's code is clearly good practice, maybe this should raise a flag about whether suppressing the zeroes is a good idea. Are there any client applications that will break because of this change? I'm not sure...

Issue 2: The reason it's still broken is that the pqexpbuffer.c code I added to libpq doesn't support adding more than 1K characters to an "expansible string" in any one appendPQExpBuffer() call. pg_dump tries to use that routine to format function definitions, which can easily be over 1K. (Very likely there are other places in pg_dump that have similar problems, but this is the one you hit first when trying to pg_dump the regression DB.)

That 1K limitation was OK when the module was just used internally in libpq, but if we're going to allow pg_dump to use it, we probably ought to relax the limitation.

The equivalent backend code has already solved this problem, but it solved it by using vsnprintf(), which isn't available everywhere. We have a vsnprintf() emulation in backend/port, so in theory we could link that routine into libpq if we are on a platform that hasn't got vsnprintf. The thing that bothers me about that is that if libpq exports a vsnprintf routine that's different from the system version, we could find ourselves changing the behavior of applications that thought they were calling the local system's vsnprintf.
(The backend/port module would get linked if either snprintf() or vsnprintf() is missing --- there are machines that have only one --- and we'd effectively replace the system definition of the one that the local system did have.) That's not good. However, the alternative of hacking pg_dump so it doesn't try to format more than 1K at a time is mighty unattractive as well.

I am inclined to go ahead and insert vsnprintf into libpq. The risk of problems seems pretty small (and it's zero on any machine with a reasonably recent libc, since then vsnprintf will be in libc and we won't link our version). The risk of missing a buffer-overrun condition in pg_dump, and shipping a pg_dump that will fail on someone's database, seems worse.

Comments? Better ideas?

			regards, tom lane