Discussion: Allow SQL/plpgsql functions to accept record


Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
Is there a fundamental reason SQL/plpgsql functions won't accept record 
as an input type? If not, can someone point me at a patch that might 
show how much work would be involved in adding support?

My particular use case is a generic function that will count how many 
fields in a record are NULL. I can do it in pure SQL (below), but was 
hoping to wrap the entire thing in a function. Right now, I have to add 
a call to row_to_json() to the function call.

SELECT count(*)
  FROM json_each_text( row_to_json($1) ) a
  WHERE value IS NULL
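
(For illustration only, not part of the original message: a minimal sketch of the status quo being described, with hypothetical names - a json-taking wrapper and pg_class as a stand-in table - where every caller has to add row_to_json() itself:)

    CREATE FUNCTION count_nulls(j json) RETURNS bigint LANGUAGE sql AS $$
        SELECT count(*) FROM json_each_text($1) a WHERE value IS NULL
    $$;

    -- every call site must remember the row_to_json() step:
    SELECT count_nulls(row_to_json(t)) FROM pg_class t;
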
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: Allow SQL/plpgsql functions to accept record

From
"David G. Johnston"
Date:
On Sun, Apr 19, 2015 at 3:02 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
Is there a fundamental reason SQL/plpgsql functions won't accept record as an input type? If not, can someone point me at a patch that might show how much work would be involved in adding support?

My particular use case is a generic function that will count how many fields in a record are NULL. I can do it in pure SQL (below), but was hoping to wrap the entire thing in a function. Right now, I have to add a call to row_to_json() to the function call.

SELECT count(*)
  FROM json_each_text( row_to_json($1) ) a
  WHERE value IS NULL

See also:

SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
ERROR: record type has not been registered
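
(An aside, not from the original message: the error goes away once the row type is registered explicitly, for example by casting to a declared composite type - "trio" here is purely hypothetical:)

    CREATE TYPE trio AS (a integer, b integer, c integer);
    SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)::trio) ) src (v);
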

While it may not be necessary to solve both problems I suspect they have the same underlying root cause - specifically the separation of concerns between the planner and the executor.

ISTM that the planner needs to be able to create arbitrarily named composite types and leave them "registered" in the session somewhere for the executor to find.  Session because:

PREPARE prep_rec AS SELECT record_input_func(v) FROM ( VALUES (ROW($1::integer, $2::boolean, $3::text)) ) src (v);
EXECUTE prep_rec (1, true, 'hi!');

If it requires additional smarts in the executor to make this all work I suspect the cost-benefit equations end up supporting the somewhat more verbose but workable status-quo.

I'm not sure how { row_to_json(record) } works, but SQL (including pl/pgsql) needs some source of definition for what the record type should be in reality - and that source currently is the catalogs, whose rows are locked by the planner and injected, I think, into a session cache.

The source query in pl/pgsql defines the type for fully embedded use of the record placeholder, while the caller's function alias provides that information for RETURNS record. The calling query needs to provide the same information for "CREATE FUNCTION func( arg1 record )", since the body of the pl/pgsql function needs to instantiate "arg1" with a known type as soon as the function is entered.

It is theoretically possible to impute the needed anonymous type from the query definition - the problem is how and where to register that information for execution.

At least for pl/pgsql, something like "func( arg1 packed_record_bytes )" - with pl/pgsql understanding how to unpack those bytes into an anonymous but structured record (as it does with SELECT ... INTO record_var) - seems plausible.  I would not expect pl/SQL to allow anything of the sort, as it doesn't seem compatible with the idea of inline-ability.

Maybe the "C" code for "row_to_json" (or libpq in general) can provide inspiration (particularly for the "pack/unpack bytes") but as I do not know "C" I'm going to have to leave that to others.

David J.

Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/20/15 2:04 PM, David G. Johnston wrote:
>
> SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
> ERROR: record type has not been registered
>
> While it may not be necessary to solve both problems I suspect they have
> the same underlying root cause - specifically the separation of concerns
> between the planner and the executor.

I don't think they're related at all. C functions have the ability to 
accept a record, so the executor must be able to support it. It's just 
that SQL and plpgsql functions don't have that support. I suspect that's 
just because no one has gotten around to it.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: Allow SQL/plpgsql functions to accept record

From
Andrew Dunstan
Date:
On 04/22/2015 11:29 AM, Jim Nasby wrote:
> On 4/20/15 2:04 PM, David G. Johnston wrote:
>>
>> SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
>> ERROR: record type has not been registered
>>
>> While it may not be necessary to solve both problems I suspect they have
>> the same underlying root cause - specifically the separation of concerns
>> between the planner and the executor.
>
> I don't think they're related at all. C functions have the ability to
> accept a record, so the executor must be able to support it. It's just
> that SQL and plpgsql functions don't have that support. I suspect
> that's just because no one has gotten around to it.


Well, that's assuming everyone else thinks it would be a good idea.
Maybe they do, but I wouldn't assume it.

The answer in the past has been to use more dynamically typed languages
such as perl for which this problem is well suited.

There are actually several problems: first, given an arbitrary record
plpgsql has no easy and efficient way of finding out what field names it
has. Second, even if it has such knowledge it has no way of using it -
it's not like JavaScript where you can use a text value as a field name.
And third, it has no way of creating variables of the right type to hold
extracted values.

All of these could possibly be overcome, but it would not be a small
piece of work, I suspect. Given that plperl buys you all of that
already  (just try this, for example) people might think it not worth
the extra trouble.
    create function rkeys(record) returns text[] language plperl as $$
        my $rec = shift; return [ keys %$rec ];
    $$;

    select unnest(rkeys(r)) from (select * from pg_class limit 1) r;
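
(A sketch, not part of Andrew's message: the json route discussed elsewhere in this thread can get at the field names too, at the cost of a conversion at the call site - the function name is hypothetical:)

    create function rkeys_json(j json) returns text[] language sql as $$
        select array(select json_object_keys($1))
    $$;

    select unnest(rkeys_json(row_to_json(r))) from (select * from pg_class limit 1) r;
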


cheers

andrew



Re: Allow SQL/plpgsql functions to accept record

From
Merlin Moncure
Date:
On Wed, Apr 22, 2015 at 11:20 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
>
> On 04/22/2015 11:29 AM, Jim Nasby wrote:
>>
>> On 4/20/15 2:04 PM, David G. Johnston wrote:
>>>
>>>
>>> SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
>>> ERROR: record type has not been registered
>>>
>>> While it may not be necessary to solve both problems I suspect they have
>>> the same underlying root cause - specifically the separation of concerns
>>> between the planner and the executor.
>>
>> I don't think they're related at all. C functions have the ability to
>> accept a record, so the executor must be able to support it. It's just that
>> SQL and plpgsql functions don't have that support. I suspect that's just
>> because no one has gotten around to it.
>
> Well, that's assuming everyone else thinks it would be a good idea. Maybe
> they do, but I wouldn't assume it.
>
> The answer in the past has been to use more dynamically typed languages such
> as perl for which this problem is well suited.

I've never really been satisfied with this answer.  The two languages
with really good core support are perl and python, neither of which
are my cup of tea.   Also, there is no chance of inlining any of the
dynamic languages which has serious performance ramifications.  In a
perfect world, pl/v8 would be a good choice for a general purpose
database support language as javascript has a number of properties
that make it attractive for integration.  Even if we had that though
(and it's unlikely), a large percentage of postgres devs, including
myself, dislike coding in any language except sql plus extensions.

That being said, I think json types with their associated API, given
that they are core types, will ultimately handle these types of
problems.  That way, at least, we can avoid adding syntax and
functionality that will basically do the same thing.  This reminds me
a little bit of the json_build() vs enhanced row() syntax we discussed
some time back.  I didn't say so at the time, but for posterity, I
think you were right...json_build() is working fine for building
arbitrary record types and moving a record to json and deconstructing
it should work just as well.
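
(Illustration only, with made-up keys and values: the build/deconstruct round trip Merlin refers to looks roughly like this:)

    -- build an arbitrary record-like value:
    SELECT json_build_object('a', 1, 'b', NULL, 'c', 'x') AS j;

    -- deconstruct a row the same way; anonymous ROW() fields come back as f1, f2, f3:
    SELECT key, value FROM json_each_text(row_to_json(ROW(1, NULL, 'x')));
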

merlin



Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/22/15 2:12 PM, Merlin Moncure wrote:
> That being said, I think json types with their associated API, given
> that they are core types, will ultimately handle these types of
> problems.  That way, at least, we can avoid adding syntax and
> functionality that will basically do the same thing.  This reminds me
> a little bit of the json_build() vs enhanced row() syntax we discussed
> some time back.  I didn't say so at the time, but for posterity, I
> think you were right...json_build() is working fine for building
> arbitrary record types and moving a record to json and deconstructing
> it should work just as well.

The one part I don't care for in that is it seems rather inefficient to 
cast something to JSON just so we can do things we really should be able 
to do with a record. But perhaps it's not all that costly.

As for allowing SQL and plpgsql functions to accept a record, I think 
our JSON functionality just provided plenty of reason we should allow 
accepting them, even if you can't do much with it: you *can* hand it to 
row_to_json(), which does allow you to do something useful with it. So 
it seems reasonable to me that we should at least accept it as a 
function argument.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: Allow SQL/plpgsql functions to accept record

From
Robert Haas
Date:
On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 4/22/15 2:12 PM, Merlin Moncure wrote:
>> That being said, I think json types with their associated API, given
>> that they are core types, will ultimately handle these types of
>> problems.  That way, at least, we can avoid adding syntax and
>> functionality that will basically do the same thing.  This reminds me
>> a little bit of the json_build() vs enhanced row() syntax we discussed
>> some time back.  I didn't say so at the time, but for posterity, I
>> think you were right...json_build() is working fine for building
>> arbitrary record types and moving a record to json and deconstructing
>> it should work just as well.
>
> The one part I don't care for in that is it seems rather inefficient to cast
> something to JSON just so we can do things we really should be able to do
> with a record. But perhaps it's not all that costly.
>
> As for allowing SQL and plpgsql functions to accept a record, I think our
> JSON functionality just provided plenty of reason we should allow accepting
> them, even if you can't do much with it: you *can* hand it to row_to_json(),
> which does allow you to do something useful with it. So it seems reasonable
> to me that we should at least accept it as a function argument.

I agree that that would be useful.  I think the problem with an
expression like rowvar.something is that PL/pgsql cannot infer the
type of the result, and nothing else works without that.  I doubt that
it's practical to lift that restriction.  PL/pgsql could introduce
dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
colname, resulttype) or something, but that's going to be clunky at
best.  Whether we eventually do that or not, if we can allow rows to
be passed in and then let people use json or hstore operators on them,
that would be a helpful step forward, IMHO.  I'm not sure if that's
practical either, but maybe...
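
(Not from the original message - what "hstore operators on a row" already looks like at the call site today, assuming the hstore extension is available; the conversion still has to happen in the caller because record arguments aren't accepted yet:)

    CREATE EXTENSION IF NOT EXISTS hstore;
    SELECT hstore(t) -> 'relname' FROM pg_class t LIMIT 1;
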

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Allow SQL/plpgsql functions to accept record

From
Pavel Stehule
Date:


2015-04-23 15:19 GMT+02:00 Robert Haas <robertmhaas@gmail.com>:
On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 4/22/15 2:12 PM, Merlin Moncure wrote:
>> That being said, I think json types with their associated API, given
>> that they are core types, will ultimately handle these types of
>> problems.  That way, at least, we can avoid adding syntax and
>> functionality that will basically do the same thing.  This reminds me
>> a little bit of the json_build() vs enhanced row() syntax we discussed
>> some time back.  I didn't say so at the time, but for posterity, I
>> think you were right...json_build() is working fine for building
>> arbitrary record types and moving a record to json and deconstructing
>> it should work just as well.
>
> The one part I don't care for in that is it seems rather inefficient to cast
> something to JSON just so we can do things we really should be able to do
> with a record. But perhaps it's not all that costly.
>
> As for allowing SQL and plpgsql functions to accept a record, I think our
> JSON functionality just provided plenty of reason we should allow accepting
> them, even if you can't do much with it: you *can* hand it to row_to_json(),
> which does allow you to do something useful with it. So it seems reasonable
> to me that we should at least accept it as a function argument.

I agree that that would be useful.  I think the problem with an
expression like rowvar.something is that PL/pgsql cannot infer the
type of the result, and nothing else works without that.  I doubt that
it's practical to lift that restriction.  PL/pgsql could introduce
dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
colname, resulttype) or something, but that's going to be clunky at
best.  Whether we eventually do that or not, if we can allow rows to
be passed in and then let people use json or hstore operators on them,
that would be a helpful step forward, IMHO.  I'm not sure if that's
practical either, but maybe...

This needs significant changes in plpgsql - it can force expressions to work with different types from one iteration (or call) to the next - so we would either have to stop basing expressions on cached plans, or introduce alternative plans for the different input types.

Pavel


 


Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/23/15 8:33 AM, Pavel Stehule wrote:
>     I agree that that would be useful.  I think the problem with an
>     expression like rowvar.something is that PL/pgsql cannot infer the
>     type of the result, and nothing else works without that.  I doubt that
>     it's practical to lift that restriction.  PL/pgsql could introduce
>     dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
>     colname, resulttype) or something, but that's going to be clunky at
>     best.  Whether we eventually do that or not, if we can allow rows to
>     be passed in and then let people use json or hstore operators on them,
>     that would be a helpful step forward, IMHO.  I'm not sure if that's
>     practical either, but maybe...
>
>
> This needs significant changes in plpgsql - it can force expressions to
> work with different types from one iteration (or call) to the next - so
> we would either have to stop basing expressions on cached plans, or
> introduce alternative plans for the different input types.

Is this fundamentally an issue of not knowing what we're being handed 
when we compile the function? Perhaps a way around this limitation would 
be to recompile during execution if any record arguments get a different 
base type. In reality, I suspect that won't happen during a single query.

I'll take a look at at least allowing passing a record in so you can 
hand it to some other function. Any pointers on how to do that would be 
welcome; I've never hacked on plpgsql or SQL function code before.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: Allow SQL/plpgsql functions to accept record

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
>> As for allowing SQL and plpgsql functions to accept a record, I think our
>> JSON functionality just provided plenty of reason we should allow accepting
>> them, even if you can't do much with it: you *can* hand it to row_to_json(),
>> which does allow you to do something useful with it. So it seems reasonable
>> to me that we should at least accept it as a function argument.

> I agree that that would be useful.  I think the problem with an
> expression like rowvar.something is that PL/pgsql cannot infer the
> type of the result, and nothing else works without that.  I doubt that
> it's practical to lift that restriction.

Well, we already support local variables of type RECORD in plpgsql, so
it's not immediately clear to me that function arguments would be much
worse.  There are a lot of deficiencies with the RECORD-local-variable
implementation: if you try to change the actual RECORD type from one call
to the next you'll probably have problems.  But it seems like we could
avoid that for function arguments by treating RECORD as a polymorphic
argument type, and thereby generating a separate set of plan trees for
each actual record type passed to the function within a given session.
So in principle it ought to work better than the local-variable case does
today.

In short I suspect that Jim is right and this has more to do with a
shortage of round tuits than any fundamental problem.

Not sure about the SQL-function case.  That might be even easier because
functions.c doesn't try to cache plans across queries; or maybe not.
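
(A sketch, not part of Tom's message: the closest thing available today to the polymorphic treatment he describes is anyelement, which already lets a SQL function take any composite value - though it will also accept scalars and then fail with a no-such-function error inside row_to_json(); names below are hypothetical:)

    CREATE FUNCTION cn_any(anyelement) RETURNS bigint LANGUAGE sql AS $$
        SELECT count(*) FROM json_each_text(row_to_json($1)) a WHERE value IS NULL
    $$;

    SELECT cn_any(c) FROM pg_class c LIMIT 1;
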
        regards, tom lane



Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/25/15 4:50 PM, Tom Lane wrote:
> Well, we already support local variables of type RECORD in plpgsql, so
> it's not immediately clear to me that function arguments would be much
> worse.  There are a lot of deficiencies with the RECORD-local-variable
> implementation: if you try to change the actual RECORD type from one call
> to the next you'll probably have problems.  But it seems like we could
> avoid that for function arguments by treating RECORD as a polymorphic
> argument type, and thereby generating a separate set of plan trees for
> each actual record type passed to the function within a given session.
> So in principle it ought to work better than the local-variable case does
> today.
>
> In short I suspect that Jim is right and this has more to do with a
> shortage of round tuits than any fundamental problem.

I took a stab at plpgsql and it seems to work ok... but I'm not sure
it's terribly valuable because you end up with an anonymous record
instead of something that points back to what you handed it. The 'good'
news is it doesn't seem to blow up on successive calls with different
arguments...

> Not sure about the SQL-function case.  That might be even easier because
> functions.c doesn't try to cache plans across queries; or maybe not.

This on the other hand was rather easy. It's not horribly useful due to
built-in restrictions on dealing with record, but that's certainly not
plsql's fault, and this satisfies my initial use case of

create function cn(record) returns bigint language sql as $$
SELECT count(*)
   FROM json_each_text( row_to_json($1) ) a
   WHERE value IS NULL $$;
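
(A hypothetical call with the patch applied - pg_class is just a stand-in table - with no row_to_json() needed at the call site:)

    SELECT cn(t) FROM pg_class t;
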

Attached patches both pass make check. The plpgsql is WIP, but I think
the SQL one is OK.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments

Re: Allow SQL/plpgsql functions to accept record

From
Andrew Dunstan
Date:
On 04/27/2015 10:35 PM, Jim Nasby wrote:
> On 4/25/15 4:50 PM, Tom Lane wrote:
>> Well, we already support local variables of type RECORD in plpgsql, so
>> it's not immediately clear to me that function arguments would be much
>> worse.  There are a lot of deficiencies with the RECORD-local-variable
>> implementation: if you try to change the actual RECORD type from one 
>> call
>> to the next you'll probably have problems.  But it seems like we could
>> avoid that for function arguments by treating RECORD as a polymorphic
>> argument type, and thereby generating a separate set of plan trees for
>> each actual record type passed to the function within a given session.
>> So in principle it ought to work better than the local-variable case 
>> does
>> today.
>>
>> In short I suspect that Jim is right and this has more to do with a
>> shortage of round tuits than any fundamental problem.
>
> I took a stab at plpgsql and it seems to work ok... but I'm not sure 
> it's terribly valuable because you end up with an anonymous record 
> instead of something that points back to what you handed it. The 
> 'good' news is it doesn't seem to blow up on successive calls with 
> different arguments...
>
>> Not sure about the SQL-function case. That might be even easier because
>> functions.c doesn't try to cache plans across queries; or maybe not.
>
> This on the other hand was rather easy. It's not horribly useful due 
> to built-in restrictions on dealing with record, but that's certainly 
> not plsql's fault, and this satisfies my initial use case of
>
> create function cn(record) returns bigint language sql as $$
> SELECT count(*)
>   FROM json_each_text( row_to_json($1) ) a
>   WHERE value IS NULL $$;
>
> Attached patches both pass make check. The plpgsql is WIP, but I think 
> the SQL one is OK.


My point remains that we really need methods of a) getting the field 
names from generic records and b) using text values to access fields of 
generic records, both as lvalues and rvalues. Without those this feature 
will be of comparatively little value, IMNSHO. With them it will be much 
more useful and  powerful.

cheers

andrew



Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/27/15 10:06 PM, Andrew Dunstan wrote:
> My point remains that we really need methods of a) getting the field
> names from generic records and b) using text values to access fields of
> generic records, both as lvalues and rvalues. Without those this feature
> will be of comparatively little value, IMNSHO. With them it will be much
> more useful and  powerful.

Sure, and if I had some pointers on what was necessary there I'd take a 
look at it. But I'm not very familiar with plpgsql (let alone what we'd 
need to do this in SQL), so I'd just be fumbling around. As a reminder, 
one of the big issues there seems to be that while plSQL knows what the 
underlying type is, plpgsql has no idea, which seriously limits the use 
of passing it a record.

In the meantime I've got a patch that definitely works for plSQL and 
allows you to handle a record and pass it on to other functions (such as 
json_from_record()). Since that's my original motivation for looking at 
this, I'd like that patch to be considered unless there's a big drawback 
to it that I'm missing. (For 9.6, of course.)
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: Allow SQL/plpgsql functions to accept record

From
Merlin Moncure
Date:
On Tue, Apr 28, 2015 at 12:44 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 4/27/15 10:06 PM, Andrew Dunstan wrote:
>>
>> My point remains that we really need methods of a) getting the field
>> names from generic records and b) using text values to access fields of
>> generic records, both as lvalues and rvalues. Without those this feature
>> will be of comparatively little value, IMNSHO. With them it will be much
>> more useful and  powerful.
>
>
> Sure, and if I had some pointers on what was necessary there I'd take a look
> at it. But I'm not very familiar with plpgsql (let alone what we'd need to
> do this in SQL), so I'd just be fumbling around. As a reminder, one of the
> big issues there seems to be that while plSQL knows what the underlying type
> is, plpgsql has no idea, which seriously limits the use of passing it a
> record.
>
> In the meantime I've got a patch that definitely works for plSQL and allows
> you to handle a record and pass it on to other functions (such as
> json_from_record()). Since that's my original motivation for looking at
> this, I'd like that patch to be considered unless there's a big drawback to
> it that I'm missing. (For 9.6, of course.)

I think it's pretty useful actually.

merlin



Re: Allow SQL/plpgsql functions to accept record

From
Andrew Dunstan
Date:
On 04/28/2015 01:44 PM, Jim Nasby wrote:
> On 4/27/15 10:06 PM, Andrew Dunstan wrote:
>> My point remains that we really need methods of a) getting the field
>> names from generic records and b) using text values to access fields of
>> generic records, both as lvalues and rvalues. Without those this feature
>> will be of comparatively little value, IMNSHO. With them it will be much
>> more useful and  powerful.
>
> Sure, and if I had some pointers on what was necessary there I'd take 
> a look at it. But I'm not very familiar with plpgsql (let alone what 
> we'd need to do this in SQL), so I'd just be fumbling around. As a 
> reminder, one of the big issues there seems to be that while plSQL 
> knows what the underlying type is, plpgsql has no idea, which 
> seriously limits the use of passing it a record.
>
> In the meantime I've got a patch that definitely works for plSQL and 
> allows you to handle a record and pass it on to other functions (such 
> as json_from_record()). Since that's my original motivation for 
> looking at this, I'd like that patch to be considered unless there's a 
> big drawback to it that I'm missing. (For 9.6, of course.)


If you look at composite_to_json() it gives you almost all that you'd 
need to construct an array of field names for an arbitrary record, and a 
lot of what you'd need to extract a value for an arbitrary field. 
populate_record_worker() has a good deal of what you'd need to set a 
value of an arbitrary field. None of that means that there isn't a good 
deal of work to do, but if you want pointers there are some.
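
(Again not from the original message - until record arguments land, a read-only approximation of (a) and (b) is possible today via json, with anyelement standing in for record; the function names are hypothetical, and the lvalue half has no comparable shortcut:)

    CREATE FUNCTION rec_keys(r anyelement) RETURNS text[] LANGUAGE sql AS $$
        SELECT array(SELECT json_object_keys(row_to_json($1)))
    $$;

    CREATE FUNCTION rec_get(r anyelement, k text) RETURNS text LANGUAGE sql AS $$
        SELECT row_to_json($1) ->> $2
    $$;
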

cheers

andrew




Re: Allow SQL/plpgsql functions to accept record

From
Jim Nasby
Date:
On 4/28/15 1:31 PM, Andrew Dunstan wrote:
>
> On 04/28/2015 01:44 PM, Jim Nasby wrote:
>> On 4/27/15 10:06 PM, Andrew Dunstan wrote:
>>> My point remains that we really need methods of a) getting the field
>>> names from generic records and b) using text values to access fields of
>>> generic records, both as lvalues and rvalues. Without those this feature
>>> will be of comparatively little value, IMNSHO. With them it will be much
>>> more useful and  powerful.
>>
>> Sure, and if I had some pointers on what was necessary there I'd take
>> a look at it. But I'm not very familiar with plpgsql (let alone what
>> we'd need to do this in SQL), so I'd just be fumbling around. As a
>> reminder, one of the big issues there seems to be that while plSQL
>> knows what the underlying type is, plpgsql has no idea, which
>> seriously limits the use of passing it a record.
>>
>> In the meantime I've got a patch that definitely works for plSQL and
>> allows you to handle a record and pass it on to other functions (such
>> as json_from_record()). Since that's my original motivation for
>> looking at this, I'd like that patch to be considered unless there's a
>> big drawback to it that I'm missing. (For 9.6, of course.)
>
>
> If you look at composite_to_json() it gives you almost all that you'd
> need to construct an array of field names for an arbitrary record, and a
> lot of what you'd need to extract a value for an arbitrary field.
> populate_record_worker() has a good deal of what you'd need to set a
> value of an arbitrary field. None of that means that there isn't a good
> deal of work to do, but if you want pointers there are some.

Thanks, those are helpful. BTW, I think this is a memory leak in 
populate_record_worker():
    my_extra = (RecordIOData *) fcinfo->flinfo->fn_extra;
    if (my_extra == NULL ||
        my_extra->ncolumns != ncolumns)
    {
        fcinfo->flinfo->fn_extra =
            MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,


The part that I'm still concerned about in plpgsql is how to handle the 
case of having a record that we should be able to associate with a 
specific composite type (such as a table type). That's not currently 
working in my patch, but I'm not sure why. Maybe I need to test for that 
and in that case set the variable up as a PLPGSQL_TTYPE_ROW instead of 
PLPGSQL_TTYPE_REC?
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com