Thread: Problem reloading regression database


Problem reloading regression database

From: Bruce Momjian
Date:
I am testing pg_upgrade.  I successfully did a pg_upgrade of a 7.2
regression database into a fresh 7.2 install.  I compared the output of
pg_dump from both copies and found that c_star dump caused a crash.  I
then started doing more testing of the regression database and found
that the regression database does not load in cleanly.  These failures
cause pg_upgrade files not to match the loaded schema.

Looks like there is a problem with inheritance, patch attached listing
the pg_dump load failures. I also see what looks like a crash in the
server logs:

    DEBUG:  pq_flush: send() failed: Broken pipe
    FATAL 1:  Socket command type 1 unknown

Looks like it should be fixed before final.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
-- TOC Entry ID 2 (OID 18411)
--
-- Name: "widget_in" (opaque) Type: FUNCTION Owner: postgres
--
CREATE FUNCTION "widget_in" (opaque) RETURNS widget AS
'/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/regress.so','widget_in' LANGUAGE 'C'; 
NOTICE:  ProcedureCreate: type widget is not yet defined
CREATE
--
-- TOC Entry ID 3 (OID 18412)
--
-- Name: "widget_out" (opaque) Type: FUNCTION Owner: postgres
--
--
CREATE TABLE "stud_emp" (
    "percent" integer
)
INHERITS ("emp", "student");
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "name"
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "age"
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "location"
CREATE
--
-- TOC Entry ID 61 (OID 18465)
--
-- Name: city Type: TABLE Owner: postgres
--
--
CREATE TABLE "d_star" (
    "dd" double precision
)
INHERITS ("b_star", "c_star");
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "class"
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "aa"
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "a"
CREATE
--
-- TOC Entry ID 73 (OID 18510)
--
-- Name: e_star Type: TABLE Owner: postgres
--
--
CREATE TABLE "d" (
    "dd" text
)
INHERITS ("b", "c", "a");
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "aa"
NOTICE:  CREATE TABLE: merging multiple inherited definitions of attribute "aa"
CREATE
--
-- TOC Entry ID 100 (OID 135278)
--
-- Name: street Type: VIEW Owner: postgres
-- Data for TOC Entry ID 325 (OID 18512)
--
-- Name: f_star Type: TABLE DATA Owner: postgres
--
COPY "f_star" FROM stdin;
ERROR:  copy: line 1, pg_atoi: error in "((1,3),(2,4))": can't parse "((1,3),(2,4))"
lost synchronization with server, resetting connection
--
-- Data for TOC Entry ID 326 (OID 18517)
--
-- Name: aggtest Type: TABLE DATA Owner: postgres

Re: Problem reloading regression database

From: Bruce Momjian
Date:
pgman wrote:
> I am testing pg_upgrade.  I successfully did a pg_upgrade of a 7.2
> regression database into a fresh 7.2 install.  I compared the output of
> pg_dump from both copies and found that c_star dump caused a crash.  I
> then started doing more testing of the regression database and found
> that the regression database does not load in cleanly.  These failures
> cause pg_upgrade files not to match the loaded schema.
> 
> Looks like there is a problem with inheritance, patch attached listing
> the pg_dump load failures. I also see what looks like a crash in the
> server logs:
>     
>     DEBUG:  pq_flush: send() failed: Broken pipe
>     FATAL 1:  Socket command type 1 unknown
> 
> Looks like it should be fixed before final.

I should have been clearer how to reproduce this:

1) run regression tests
2) pg_dump regression > /tmp/dump
3) dropdb regression
4) createdb regression
5) psql regression < /tmp/dump > out 2> err

Look at the err file.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Problem reloading regression database

From: Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > I am testing pg_upgrade.  I successfully did a pg_upgrade of a 7.2
> > regression database into a fresh 7.2 install.  I compared the output of
> > pg_dump from both copies and found that c_star dump caused a crash.  I
> > then started doing more testing of the regression database and found
> > that the regression database does not load in cleanly.
> 
> No kidding.  That's been a known issue for *years*, Bruce.  Without a
> way to reorder the columns in COPY, it can't be fixed.  That's the main
> reason why we have a TODO item to allow column specification in COPY.
> 
> > I also see what looks like a crash in the server logs:
>     
> >     DEBUG:  pq_flush: send() failed: Broken pipe
> >     FATAL 1:  Socket command type 1 unknown
> 
> No, that's just the COPY failing (and resetting the connection).  That's
> not going to be fixed before final either, unless you'd like us to
> develop a new frontend COPY protocol before final...

I used to test regression dumps a long time ago.  It seems I haven't
done so recently;  guess this is a non-problem or at least a known,
minor one.

It also means my pg_upgrade is working pretty well if the rest of it
worked fine.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Problem reloading regression database

From: Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> I am testing pg_upgrade.  I successfully did a pg_upgrade of a 7.2
> regression database into a fresh 7.2 install.  I compared the output of
> pg_dump from both copies and found that c_star dump caused a crash.  I
> then started doing more testing of the regression database and found
> that the regression database does not load in cleanly.

No kidding.  That's been a known issue for *years*, Bruce.  Without a
way to reorder the columns in COPY, it can't be fixed.  That's the main
reason why we have a TODO item to allow column specification in COPY.

> I also see what looks like a crash in the server logs:
>     DEBUG:  pq_flush: send() failed: Broken pipe
>     FATAL 1:  Socket command type 1 unknown

No, that's just the COPY failing (and resetting the connection).  That's
not going to be fixed before final either, unless you'd like us to
develop a new frontend COPY protocol before final...
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-12 23:45] Tom Lane said:
| Bruce Momjian <pgman@candle.pha.pa.us> writes:
| > I am testing pg_upgrade.  I successfully did a pg_upgrade of a 7.2
| > regression database into a fresh 7.2 install.  I compared the output of
| > pg_dump from both copies and found that c_star dump caused a crash.  I
| > then started doing more testing of the regression database and found
| > that the regression database does not load in cleanly.
|
| No kidding.  That's been a known issue for *years*, Bruce.  Without a
| way to reorder the columns in COPY, it can't be fixed.  That's the main
| reason why we have a TODO item to allow column specification in COPY.

The attached patch is a first-cut at implementing column specification
in COPY FROM with the following syntax.

  COPY atable (col1,col2,col3,col4) FROM ...

The details:
  Add "List* attlist" member to CopyStmt parse node.
  Adds <please supply term ;-)>  to gram.y allowing opt_column_list
    in COPY FROM Node.
  In CopyFrom, if attlist present, create Form_pg_attribute* ordered
    same as attlist.
  If attlist does not name every column in the table, elog(ERROR).
  Continue normal operation.

Regression tests all still pass.  There is still a problem where
duplicate columns in the list will allow the operation to succeed,
but I believe this is the only problem.  If this approach is sane,
I'll clean it up later today.

comments?

cheers.
  brent

--
"Develop your talent, man, and leave the world something. Records are
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman

Attachments

Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
>   In CopyFrom, if attlist present, create Form_pg_attribute* ordered
>     same as attlist.

Doesn't seem like this can possibly work as-is.  The eventual call to
heap_formtuple must supply the column values in the order expected
by the table, but I don't see you remapping from input-column indices to
table-column indices anywhere in the data processing loop.

Also, a reasonable version of this capability would allow the input
column list to be just a subset of the table column list; with the
column default expressions, if any, being evaluated to fill the missing
columns.  This would answer the requests we keep having for COPY to be
able to load a table containing a serial-number column.

Don't forget that if the syntax allows COPY (collist) TO file, people
will expect that to work too.
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 11:41] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| >   In CopyFrom, if attlist present, create Form_pg_attribute* ordered
| >     same as attlist.
| 
| Doesn't seem like this can possibly work as-is.  The eventual call to
| heap_formtuple must supply the column values in the order expected
| by the table, but I don't see you remapping from input-column indices to
| table-column indices anywhere in the data processing loop.

yup.  back to the drawing board ;-)

| Also, a reasonable version of this capability would allow the input
| column list to be just a subset of the table column list; with the
| column default expressions, if any, being evaluated to fill the missing
| columns.  This would answer the requests we keep having for COPY to be
| able to load a table containing a serial-number column.

right.

| Don't forget that if the syntax allows COPY (collist) TO file, people
| will expect that to work too.

;-)  darnit!

thanks. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 12:33] Brent Verner said:
| [2002-01-13 11:41] Tom Lane said:
| | Brent Verner <brent@rcfile.org> writes:
| | >   In CopyFrom, if attlist present, create Form_pg_attribute* ordered
| | >     same as attlist.
| | 
| | Doesn't seem like this can possibly work as-is.  The eventual call to
| | heap_formtuple must supply the column values in the order expected
| | by the table, but I don't see you remapping from input-column indices to
| | table-column indices anywhere in the data processing loop.
| 
| yup.  back to the drawing board ;-)

I fixed this by making an int* mapping from specified collist 
position to actual rd_att->attrs position.

| | Also, a reasonable version of this capability would allow the input
| | column list to be just a subset of the table column list; with the
| | column default expressions, if any, being evaluated to fill the missing
| | columns.  This would answer the requests we keep having for COPY to be
| | able to load a table containing a serial-number column.
| 
| right.

I'm still a bit^W^W lost as hell on how the column default magic 
happens.  It appears that in the INSERT case, the query goes thru
the planner and picks up the necessary Node* representing the
default(s) for a relation, then later evaluates those nodes if
not attisset.

Should I be looking to call  ExecEvalFunc(stringToNode(adbin),ec,&rvnull,NULL);
when an attr is not specified and it has a default?  Or is there
a more straightforward way of getting the default for an att? 
(I sure hope there is ;-)

thanks. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> I fixed this by making an int* mapping from specified collist 
> position to actual rd_att->attrs position.

Sounds better.

> I'm still a bit^W^W lost as hell on how the column default magic 
> happens.

I'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c
to set up a default expression (possibly just NULL) for every column
that's not supplied by the input.  That routine's not exported now, but
it could be, or perhaps it should be moved somewhere else.  (Suggestions
anyone?  Someplace in src/backend/catalog might be a more appropriate
place for it.)

Then in the per-tuple loop you use ExecEvalExpr, or more likely
ExecEvalExprSwitchContext, to execute the default expressions.
The econtext wanted by ExecEvalExpr can be had from the estate
that CopyFrom already creates; use GetPerTupleExprContext(estate).

You'll need to verify that you have got the memory context business
right, ie, no memory leak across rows.  I think the above sketch is
sufficient, but check it with a memory-eating default expression
evaluated for a few million input rows ... and you are doing your
testing with --enable-cassert, I trust, to catch any dangling pointers.
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 15:17] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > I fixed this by making an int* mapping from specified collist 
| > position to actual rd_att->attrs position.
| 
| Sounds better.
| 
| > I'm still a bit^W^W lost as hell on how the column default magic 
| > happens.
| 
| I'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c
| to set up a default expression (possibly just NULL) for every column
| that's not supplied by the input.  That routine's not exported now, but
| it could be, or perhaps it should be moved somewhere else.  (Suggestions
| anyone?  Someplace in src/backend/catalog might be a more appropriate
| place for it.)

gotcha.

| Then in the per-tuple loop you use ExecEvalExpr, or more likely
| ExecEvalExprSwitchContext, to execute the default expressions.
| The econtext wanted by ExecEvalExpr can be had from the estate
| that CopyFrom already creates; use GetPerTupleExprContext(estate).

many, many thanks!

| You'll need to verify that you have got the memory context business
| right, ie, no memory leak across rows.  I think the above sketch is
| sufficient, but check it with a memory-eating default expression
| evaluated for a few million input rows ... 

Yes, the above info should get me through.

| and you are doing your
| testing with --enable-cassert, I trust, to catch any dangling pointers.

<ducks>
I am now :-o

thank you. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 16:42] Brent Verner said:
| [2002-01-13 15:17] Tom Lane said:
| | Brent Verner <brent@rcfile.org> writes:
| | > I fixed this by making an int* mapping from specified collist
| | > position to actual rd_att->attrs position.
| |
| | Sounds better.
| |
| | > I'm still a bit^W^W lost as hell on how the column default magic
| | > happens.
| |
| | I'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c
| | to set up a default expression (possibly just NULL) for every column
| | that's not supplied by the input.  That routine's not exported now, but
| | it could be, or perhaps it should be moved somewhere else.  (Suggestions
| | anyone?  Someplace in src/backend/catalog might be a more appropriate
| | place for it.)
|
| gotcha.
|
| | Then in the per-tuple loop you use ExecEvalExpr, or more likely
| | ExecEvalExprSwitchContext, to execute the default expressions.
| | The econtext wanted by ExecEvalExpr can be had from the estate
| | that CopyFrom already creates; use GetPerTupleExprContext(estate).
|
| many, many thanks!
|
| | You'll need to verify that you have got the memory context business
| | right, ie, no memory leak across rows.  I think the above sketch is
| | sufficient, but check it with a memory-eating default expression
| | evaluated for a few million input rows ...
|
| Yes, the above info should get me through.

round two...

  1) I (kludgily) exported build_column_default() from its current
     location.
  2) default expressions are now tried if a column is not in the
     COPY attlist specification.

There are still some kinks... (probably more than I've thought of)
  1) a column in attlist that is not in the table will cause a segv
     in the backend.
  2) duplicate names in attlist are still not treated as an error.

I believe the memory context issues are handled correctly, but I've
not run the few million copy tests yet, and I probably won't be able
to until late(r) tomorrow.  No strangeness running compiled with
--enable-cassert.  Regression tests still pass.

Sanity checks much appreciated.

cheers.
  brent

--
"Develop your talent, man, and leave the world something. Records are
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman

Attachments

Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> I must retract this assertion.  As posted, this patch dies on the
> second line of a COPY file...  argh.  What did I break?

First guess is that you allocated some data structure in the per-tuple
context that needs to be in the per-query context (ie, needs to live
throughout the copy).
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 21:39] Brent Verner said:
|
| I believe the memory context issues are handled correctly, but I've

I must retract this assertion.  As posted, this patch dies on the
second line of a COPY file...  argh.  What did I break?
 b
-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-13 22:51] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > I must retract this assertion.  As posted, this patch dies on the
| > second line of a COPY file...  argh.  What did I break?
| 
| First guess is that you allocated some data structure in the per-tuple
| context that needs to be in the per-query context (ie, needs to live
| throughout the copy).

yup.  The problem sneaks up when I get a default value for a "text"
column via ExecEvalExprSwitchContext.  Commenting out the pfree above 
heap_formtuple makes the error go away, but I know that's not the
right answer.  Should I avoid freeing the !attbyval items when they've
come from ExecEvalExpr -- I don't see any other examples of freeing
returns from this fn.
 b

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> yup.  The problem sneaks up when I get a default value for a "text"
> column via ExecEvalExprSwitchContext.  Commenting out the pfree above 
> heap_formtuple makes the error go away, but I know that's not the
> right answer.

Oh, the pfree for the attribute values?  Ah so.  I knew that would
bite us someday.  See, the way this code presently works is that all of
copy.c runs in the per-query memory context.  It calls all of the
datatype conversion routines in that same context.  It assumes that
the routines that return pass-by-ref datatypes will return palloc'd
values (and not, say, pointers to constant values) --- which is not
a good assumption IMHO, even though I think it's true at the moment.
This assumption is what's needed to justify the pfree's at the bottom of
the loop.  What's even worse is that it assumes that the conversion
routines leak no other memory; if any conversion routine palloc's
something it doesn't pfree, then over the course of a long enough copy
we run out of memory.

In the case of ExecEvalExpr, if the expression is just a T_Const node
then what you get back (for a pass-by-ref datatype) is a pointer to
the value sitting in the Const node.  pfreeing this is bad juju.

> Should I avoid freeing the !attbyval items when they've
> come from ExecEvalExpr -- I don't see any other examples of freeing
> returns from this fn.

I believe the correct solution is to get rid of the retail pfree's
altogether.  The clean way to run this code would be to switch to
the per-tuple context at the head of the per-tuple loop (say, right
after ResetPerTupleExprContext), run all the datatype conversion
routines *and* ExecEvalExpr in this context, and then switch back
to per-query context just before heap_formtuple.  Then at the
loop bottom the only explicit free you need is the heap_freetuple.
The individual attribute values are inside the per-tuple context
and they'll be freed by the ResetPerTupleExprContext at the start
of the next loop.  Fewer cycles, works right whether the values are
palloc'd or not, and positively prevents any problems with leaks
inside the datatype conversion routines --- since any leaked pallocs
will also be inside the per-tuple context.

An even more radical approach would be to try to run the whole loop in
per-tuple context, but I think that will probably break things; the
index insertion code, at least, expects to be called in per-query
context because it sometimes makes allocations that must live across
calls.  (Cleaning that up is on my long-term to-do list; I'd prefer
to see almost all of the executor run in per-tuple contexts, so as
to avoid potential memory leaks very similar to the situation here.)

You'll need to make sure that the code isn't expecting to palloc
anything first-time-through and re-use it on later loops, but I
think that will be okay.  (The attribute_buf is the most obvious
risk, but that's all right, see stringinfo.c.)
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-14 00:03] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > yup.  The problem sneaks up when I get a default value for a "text"
| > column via ExecEvalExprSwitchContext.  Commenting out the pfree above 
| > heap_formtuple makes the error go away, but I know that's not the
| > right answer.
| 
| Oh, the pfree for the attribute values?  Ah so.  I knew that would
| bite us someday.  See, the way this code presently works is that all of
| copy.c runs in the per-query memory context.  It calls all of the
| datatype conversion routines in that same context.  It assumes that
| the routines that return pass-by-ref datatypes will return palloc'd
| values (and not, say, pointers to constant values) --- which is not
| a good assumption IMHO, even though I think it's true at the moment.
| This assumption is what's needed to justify the pfree's at the bottom of
| the loop.  What's even worse is that it assumes that the conversion
| routines leak no other memory; if any conversion routine palloc's
| something it doesn't pfree, then over the course of a long enough copy
| we run out of memory.

check.  I just loaded 3mil records (with my hacked copy.c), and had
ended up around 36M... <phew!!>  I'm gonna load a similar file 
with a clean copy.c, just to see if that leak is present without
my changes -- I suspect it's not, but I'd like to see the empirical
effect of my change(s)....

Thanks for the commentary.  It really helps glue together the
thoughts I had from reading over the memory context code.

| In the case of ExecEvalExpr, if the expression is just a T_Const node
| then what you get back (for a pass-by-ref datatype) is a pointer to
| the value sitting in the Const node.  pfreeing this is bad juju.

yup, that seems like it'd explain my symptom.

| > Should I avoid freeing the !attbyval items when they've
| > come from ExecEvalExpr -- I don't see any other examples of freeing
| > returns from this fn.
| 
| I believe the correct solution is to get rid of the retail pfree's
| altogether.  The clean way to run this code would be to switch to
| the per-tuple context at the head of the per-tuple loop (say, right
| after ResetPerTupleExprContext), run all the datatype conversion
| routines *and* ExecEvalExpr in this context, and then switch back
| to per-query context just before heap_formtuple.  Then at the
| loop bottom the only explicit free you need is the heap_freetuple.
| The individual attribute values are inside the per-tuple context
| and they'll be freed by the ResetPerTupleExprContext at the start
| of the next loop.  Fewer cycles, works right whether the values are
| palloc'd or not, and positively prevents any problems with leaks
| inside the datatype conversion routines --- since any leaked pallocs
| will also be inside the per-tuple context.

Gotcha.  This certainly sounds like it will alleviate my pfree 
problem.  I'll get back to this tomorrow evening.

| An even more radical approach would be to try to run the whole loop in
| per-tuple context, but I think that will probably break things; the
| index insertion code, at least, expects to be called in per-query
| context because it sometimes makes allocations that must live across
| calls.  (Cleaning that up is on my long-term to-do list; I'd prefer
| to see almost all of the executor run in per-tuple contexts, so as
| to avoid potential memory leaks very similar to the situation here.)
| 
| You'll need to make sure that the code isn't expecting to palloc
| anything first-time-through and re-use it on later loops, but I
| think that will be okay.  (The attribute_buf is the most obvious
| risk, but that's all right, see stringinfo.c.)

So I /can't/ palloc some things /before/ switching context to 
per-tuple-context?  I ask because I'm palloc'ing a couple of 
arrays, that would have to be MaxHeapAttributeNumber long to 
make sure we've enough space.  Though, thinking about it, an
additional 13k of static storage in the binary is not all that
much.

thanks. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> | You'll need to make sure that the code isn't expecting to palloc
> | anything first-time-through and re-use it on later loops, but I
> | think that will be okay.  (The attribute_buf is the most obvious
> | risk, but that's all right, see stringinfo.c.)

> So I /can't/ palloc some things /before/ switching context to 
> per-tuple-context?

Oh, sure you can.  That's the point of having a per-query context.
What I was wondering was whether there were any pallocs executed
*after* entering the loop that the code expected to live across
loop cycles.  I don't think so, I'm just mentioning the risk as
part of your education ;-)
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-14 00:41] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > | You'll need to make sure that the code isn't expecting to palloc
| > | anything first-time-through and re-use it on later loops, but I
| > | think that will be okay.  (The attribute_buf is the most obvious
| > | risk, but that's all right, see stringinfo.c.)
| 
| > So I /can't/ palloc some things /before/ switching context to 
| > per-tuple-context?
| 
| Oh, sure you can.  That's the point of having a per-query context.
| What I was wondering was whether there were any pallocs executed
| *after* entering the loop that the code expected to live across
| loop cycles.  I don't think so, I'm just mentioning the risk as
| part of your education ;-)

gotcha.  No, I don't think anything inside that loop expects to 
persist across iterations.  The attribute_buf is static to the
file, and initialized in DoCopy.

What I ended up doing is switching to per-tuple-context prior to 
the input loop, then switching back to the (saved) query-context
after exiting the loop.  I followed ResetPerTupleExprContext back, and
it doesn't seem to do anything that would require a switch per loop.
Are there any problems this might cause that I'm not seeing  with 
my test case?

Memory use is now under control, and things look good (stable around 
2.8M).

sleepy:/usr/local/pg-7.2/bin
brent$ ./psql -c '\d yyy'
                             Table "yyy"
 Column |  Type   |                   Modifiers
--------+---------+------------------------------------------------
 id     | integer | not null default nextval('"yyy_id_seq"'::text)
 a      | integer | not null default 1
 b      | text    | not null default 'test'
 c      | integer |
Unique keys: yyy_id_key

sleepy:/usr/local/pg-7.2/bin
brent$ wc -l mmm
3200386 mmm
sleepy:/usr/local/pg-7.2/bin
brent$ head -10 mmm
\N
\N
\N
20
10
20
20
40
50
20
sleepy:/usr/local/pg-7.2/bin
brent$ ./psql -c 'copy yyy(c) from stdin' < mmm
sleepy:/usr/local/pg-7.2/bin


thanks. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From: Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> gotcha.  No, I don't think anything inside that loop expects to 
> persist across iterations.  The attribute_buf is static to the
> file, and initialized in DoCopy.

There is more to attribute_buf than meets the eye ;-)

> What I ended up doing is switching to per-tuple-context prior to 
> the input loop, then switching back to the (saved) query-context
> after exiting the loop.  I followed ResetPerTupleExprContext back, and
> it doesn't seem to do anything that would require a switch per loop.
> Are there any problems this might cause that I'm not seeing  with 
> my test case?

I really don't feel comfortable with running heap_insert or the
subsequent operations in a per-tuple context.  Have you tried any
test cases that involve triggers or indexes?
        regards, tom lane


Re: Problem reloading regression database

From: Brent Verner
Date:
[2002-01-14 21:52] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > gotcha.  No, I don't think anything inside that loop expects to 
| > persist across iterations.  The attribute_buf is static to the
| > file, and initialized in DoCopy.
| 
| There is more to attribute_buf than meets the eye ;-)

I certainly don't doubt that, especially when it's my eye :-O

| > What I ended up doing is switching to per-tuple-context prior to 
| > the input loop, then switching back to the (saved) query-context
| > after exiting the loop.  I followed ResetPerTupleExprContext back, and
| > it doesn't seem to do anything that would require a switch per loop.
| > Are there any problems this might cause that I'm not seeing  with 
| > my test case?
| 
| I really don't feel comfortable with running heap_insert or the
| subsequent operations in a per-tuple context.  Have you tried any
| test cases that involve triggers or indexes?

No, I dropped the index for the 3-million-row COPY.  I will rerun with
some triggers and indexes on the table.

cheers. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From
Brent Verner
Date:
[2002-01-14 21:52] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > gotcha.  No, I don't think anything inside that loop expects to 
| > persist across iterations.  The attribute_buf is static to the
| > file, and initialized in DoCopy.
| 
| There is more to attribute_buf than meets the eye ;-)
| 
| > What I ended up doing is switching to per-tuple-context prior to 
| > the input loop, then switching back to the (saved) query-context
| > after exiting the loop.  I followed ResetTupleExprContext back, and
| > it doesn't seem to do anything that would require a switch per loop.
| > Are there any problems this might cause that I'm not seeing  with 
| > my test case?
| 
| I really don't feel comfortable with running heap_insert or the
| subsequent operations in a per-tuple context.  Have you tried any
| test cases that involve triggers or indexes?

Yes, and I'm seeing no new problems (so far), but there is a problem 
in the current copy.c.  Running the following on unmodified 7.2b5 
causes the backend to consume 17-18MB of memory.  Removing the 
REFERENCES on yyy.b causes memory use to be normal.

bash$ cat copy.sql 
DROP table yyy;
DROP SEQUENCE yyy_id_seq ;
DROP TABLE zzz;
DROP SEQUENCE zzz_id_seq ;
CREATE TABLE zzz (
    id SERIAL,
    a INT,
    b TEXT NOT NULL DEFAULT 'test' PRIMARY KEY,
    c INT NOT NULL DEFAULT 1
);
CREATE TABLE yyy (
    id SERIAL,
    a INT,
    b TEXT NOT NULL DEFAULT 'test' REFERENCES zzz(b),
    c INT NOT NULL DEFAULT 1
);
-- make sure there is a 'test' value in zzz.b
INSERT INTO zzz (a) VALUES (10);
COPY yyy FROM '/tmp/sometmpfilehuh';

bash$ for i in `seq 1 200000`; do echo "$i     $i      test    $i" >> /tmp/sometmpfilehuh; done

bash$ head -1 /tmp/sometmpfilehuh; tail -1 /tmp/sometmpfilehuh
1 1 test  1
200000  200000  test  200000

bash$ ./psql < copy.sql


Any ideas?  I'm looking around ExecBRInsertTriggers() to see what 
might need to be freed around that call.

thanks. brent


-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From
Tom Lane
Date:
Brent Verner <brent@rcfile.org> writes:
> Yes, and I'm seeing no new problems (so far), but there is a problem 
> in the current copy.c.  Running the following on unmodified 7.2b5 
> causes the backend to consume 17-18Mb of memory.

Probably that's just the space consumed for the pending-trigger events
created by the AFTER trigger that implements the foreign key check.
There should be a provision for shoving that list out to disk when
it gets too large ... but it ain't happening for 7.2.
        regards, tom lane


Re: Problem reloading regression database

From
Brent Verner
Date:
[2002-01-15 00:44] Brent Verner said:

| I'm looking around ExecBRInsertTriggers() to see what 
| might need to be freed around that call.

Scratch this idea; this bit is not even hit in my test case... sorry.
 b
-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From
Brent Verner
Date:
[2002-01-15 01:07] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > Yes, and I'm seeing no new problems (so far), but there is a problem 
| > in the current copy.c.  Running the following on unmodified 7.2b5 
| > causes the backend to consume 17-18Mb of memory.
| 
| Probably that's just the space consumed for the pending-trigger events
| created by the AFTER trigger that implements the foreign key check.
| There should be a provision for shoving that list out to disk when
| it gets too large ... but it ain't happening for 7.2.

gotcha.  I'll move on along then...

thanks. brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


Re: Problem reloading regression database

From
Brent Verner
Date:
[2002-01-14 21:52] Tom Lane said:
| Brent Verner <brent@rcfile.org> writes:
| > gotcha.  No, I don't think anything inside that loop expects to
| > persist across iterations.  The attribute_buf is static to the
| > file, and initialized in DoCopy.
|
| There is more to attribute_buf than meets the eye ;-)
|
| > What I ended up doing is switching to per-tuple-context prior to
| > the input loop, then switching back to the (saved) query-context
| > after exiting the loop.  I followed ResetTupleExprContext back, and
| > it doesn't seem to do anything that would require a switch per loop.
| > Are there any problems this might cause that I'm not seeing  with
| > my test case?
|
| I really don't feel comfortable with running heap_insert or the
| subsequent operations in a per-tuple context.  Have you tried any
| test cases that involve triggers or indexes?

Yes.  The attached patch appears to do the right thing with all
indexes and triggers (RI) that I've tested.  I'm still doing the
MemoryContextSwitchTo() outside the main loop, and have added some
more sanity checking for column name input.

If anyone could test this (with non-critical data ;-) or otherwise
give feedback, I'd appreciate it; especially if someone could test
with a BEFORE INSERT trigger.

cheers.
  brent

--
"Develop your talent, man, and leave the world something. Records are
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman

Attachments

Re: Problem reloading regression database

From
Bruce Momjian
Date:
This has been saved for the 7.3 release:
http://candle.pha.pa.us/cgi-bin/pgpatches2

---------------------------------------------------------------------------

Brent Verner wrote:
> [2002-01-14 21:52] Tom Lane said:
> | Brent Verner <brent@rcfile.org> writes:
> | > gotcha.  No, I don't think anything inside that loop expects to 
> | > persist across iterations.  The attribute_buf is static to the
> | > file, and initialized in DoCopy.
> | 
> | There is more to attribute_buf than meets the eye ;-)
> | 
> | > What I ended up doing is switching to per-tuple-context prior to 
> | > the input loop, then switching back to the (saved) query-context
> | > after exiting the loop.  I followed ResetTupleExprContext back, and
> | > it doesn't seem to do anything that would require a switch per loop.
> | > Are there any problems this might cause that I'm not seeing  with 
> | > my test case?
> | 
> | I really don't feel comfortable with running heap_insert or the
> | subsequent operations in a per-tuple context.  Have you tried any
> | test cases that involve triggers or indexes?
> 
> Yes.  The attached patch appears to do the right thing with all 
> indexes and triggers (RI) that I've tested.  I'm still doing the
> MemoryContextSwitchTo() outside the main loop, and have added some 
> more sanity checking for column name input.
> 
> If anyone could test this (with non-critical data ;-) or otherwise 
> give feedback, I'd appreciate it; especially if someone could test
> with a BEFORE INSERT trigger.
> 
> cheers.
>   brent
> 
> -- 
> "Develop your talent, man, and leave the world something. Records are 
> really gifts from people. To think that an artist would love you enough
> to share his music with anyone is a beautiful thing."  -- Duane Allman

[ Attachment, skipping... ]

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026