Thread: Make deparsing of column defaults faster


Make deparsing of column defaults faster

From: Jeff Janes
The user docs say about column defaults: "The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed)"

And also say about pg_get_expr "If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient"

Since defaults can't contain Vars, this patch converts the second parameter to zero in places where pg_get_expr is invoked on column defaults.  Doing this drops the time needed to run `pg_dump -s` on a 1600 column table with defaults on all columns by a factor of 40.  So pg_upgrade users with such tables should see a big win (cc Justin).

I also made the change to the other places in the source which I could identify with this issue.  \d is now much faster for such tables as well, but I didn't test the performance of the others.

There are surely more opportunities to speed up pg_get_expr for things other than column defaults, but I wanted to submit this for now.

I have not tested pg_upgrade using source versions <8.4, because I can't get anything that old to pass "make check", so I have nothing to test on.

Cheers,

Jeff
Attachments

Re: Make deparsing of column defaults faster

From: Peter Eisentraut
On 6/4/18 20:55, Jeff Janes wrote:
> The user docs say about column defaults: "The value is any variable-free
> expression (subqueries and cross-references to other columns in the
> current table are not allowed)"
> 
> And also say about pg_get_expr "If the expression might contain Vars,
> specify the OID of the relation they refer to as the second parameter;
> if no Vars are expected, zero is sufficient"
> 
> Since defaults can't contain Vars, this patch converts the second
> parameter to zero in places where pg_get_expr is invoked on column
> defaults.

My in-progress generated columns patch removes that assumption (since a
generated column generally refers to another column of the same table).

> Doing this drops the time needed to run `pg_dump -s` on a
> 1600 column table with defaults on all columns by a factor of 40.  So
> pg_upgrade users with such tables should see a big win (cc Justin).

That's impressive but also a bit disturbing.  Where does this
performance change come from?  Can we harness it in different ways?

> I have not tested the pg_upgrade using source versions <8.4, because I
> can't get anything that old to "make check" correctly, so I have nothing
> to test on.

If you have trouble getting older PostgreSQL versions to run, try
building with -O0.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Make deparsing of column defaults faster

From: Jeff Janes
On Mon, Jun 4, 2018 at 10:00 PM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:
On 6/4/18 20:55, Jeff Janes wrote:
> The user docs say about column defaults: "The value is any variable-free
> expression (subqueries and cross-references to other columns in the
> current table are not allowed)"
>
> And also say about pg_get_expr "If the expression might contain Vars,
> specify the OID of the relation they refer to as the second parameter;
> if no Vars are expected, zero is sufficient"
>
> Since defaults can't contain Vars, this patch converts the second
> parameter to zero in places where pg_get_expr is invoked on column
> defaults.

My in-progress generated columns patch removes that assumption (since a
generated column generally refers to another column of the same table).

Will your patch put the generated columns data into pg_attrdef, or create a new catalog to hold them?  I guess it would make sense to re-use the catalog since you can't have both a default value and be a generated column.


> Doing this drops the time needed to run `pg_dump -s` on a
> 1600 column table with defaults on all columns by a factor of 40.  So
> pg_upgrade users with such tables should see a big win (cc Justin).

That's impressive but also a bit disturbing.  Where does this
performance change come from?  Can we harness it in different ways?

The most fundamental way would be to change make_colname_unique to use a hash table, rather than an N^2 algorithm.  It would probably be similar to Tom's commit 8004953b5a2449c26c4e "Speed up ruleutils' name de-duplication code", but he does warn in the commit message that his method would be trickier to use for column aliases than table aliases.  That would speed up a lot more cases than just DEFAULTS (check constraints, maybe triggers), but would be harder to do.
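To make the complexity difference concrete, here is a minimal Python sketch (not PostgreSQL code; function names and the de-duplication scheme are illustrative) of de-duplicating N column aliases with a linear scan versus a hash set:

```python
# Hypothetical sketch: de-duplicating N names with a per-name list scan
# is O(N^2) overall; a hash set brings membership tests to O(1) expected,
# which is the same idea as commit 8004953b used for table aliases.

def unique_names_quadratic(names):
    """Mimics a make_colname_unique-style loop: scan all prior names."""
    result = []
    for name in names:
        candidate, i = name, 1
        while candidate in result:        # O(N) list scan per column
            candidate = f"{name}_{i}"
            i += 1
        result.append(candidate)
    return result

def unique_names_hashed(names):
    """Same de-duplication, but membership tests hit a hash set."""
    seen, result = set(), []
    for name in names:
        candidate, i = name, 1
        while candidate in seen:          # O(1) expected per probe
            candidate = f"{name}_{i}"
            i += 1
        seen.add(candidate)
        result.append(candidate)
    return result

# Both produce the same aliases; only the cost differs.
assert unique_names_quadratic(["c", "c", "d"]) == unique_names_hashed(["c", "c", "d"])
```

The renaming scheme here (`name_1`, `name_2`, ...) is an assumption for the sketch, not the actual ruleutils suffixing logic.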

Even once we fix this, it still wouldn't make sense to create a deparse context that can never be needed.  Perhaps the call to deparse_context_for could be made lazy, so that it is delayed until a Var is actually found in the expression node tree, rather than being done preemptively.  Then all cases without Vars would be improved, not just cases where it could be proved by static code analysis that Vars are not possible.
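The lazy-construction idea above can be sketched as follows (a hypothetical Python model, not the PostgreSQL implementation; `LazyContext` and the builder callable are invented names):

```python
# Hypothetical sketch: defer building an expensive deparse context until
# a Var is actually encountered.  Expressions with no Vars never pay
# the construction cost at all.

class LazyContext:
    def __init__(self, build):
        self._build = build       # expensive constructor, e.g. deparse_context_for
        self._ctx = None

    def get(self):
        if self._ctx is None:     # built at most once, and only on demand
            self._ctx = self._build()
        return self._ctx

calls = []
lazy = LazyContext(lambda: calls.append("built") or {"aliases": {}})

# A Var-free expression is deparsed without ever calling get():
assert calls == []
# The first Var forces construction; subsequent Vars reuse it:
lazy.get(); lazy.get()
assert calls == ["built"]
```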

When setting up the deparse context for a single table, the column names already need to be unique (I think).  So another possibility is to pass a flag down from set_simple_column_names to set_relation_column_names to indicate no unique checking is needed.

Since pg_dump calls pg_get_expr over and over again on the same table consecutively, perhaps we could cache the column alias assignments in a single-entry cache, so if it is called on the same table as last time it just re-uses the aliases from last time.  I am not planning on working on that, as I don't know where such a cache could be stored such that it is freed and invalidated at the appropriate times.
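The single-entry cache amounts to something like this (a hypothetical Python sketch, not PostgreSQL code; the class name, the example OID, and the builder callable are all invented for illustration):

```python
# Hypothetical sketch: a single-entry cache keyed by table OID, so that
# 1600 consecutive pg_get_expr-style calls on the same table build the
# alias context only once.

class OneEntryCache:
    def __init__(self, build):
        self._build = build        # e.g. deparse_context_for
        self._key = None
        self._value = None

    def get(self, relid):
        if relid != self._key:     # miss: rebuild for the new table
            self._value = self._build(relid)
            self._key = relid
        return self._value         # hit: same table as the last call

builds = []
cache = OneEntryCache(lambda oid: builds.append(oid) or f"ctx-{oid}")

for _ in range(1600):              # 1600 defaults on one table
    cache.get(16384)
assert builds == [16384]           # the context was built only once
```

The hard part, as noted above, is not the cache logic but finding a place to store it so it is freed and invalidated correctly; Peter's follow-up patch uses pg_get_expr's fn_extra for exactly that.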

Cheers,

Jeff

Re: Make deparsing of column defaults faster

From: Peter Eisentraut
On 29.06.18 05:15, Jeff Janes wrote:
> Since pg_dump calls pg_get_expr once over and over again on the same
> table consecutively, perhaps we could cache the column alias assignments
> in a single-entry cache, so if it is called on the same table as last
> time it just re-uses the aliases from last time.  I am not planning on
> working on that, I don't know where such a cache could be stored such
> that is freed and invalidated at the appropriate times.

I looked into that.  deparse_context_for() is actually not that
expensive on its own, well below one second, but it gets somewhat
expensive when you call it 1600 times for one table.  So to address that
case, we can cache the deparse context between calls in the fn_extra
field of pg_get_expr.  The attached patch does that.  This makes the
pg_dump -s times pretty much constant even with 1600 columns with
defaults.  psql \d should benefit similarly.  I haven't seen any other
cases where we'd expect hundreds of related objects to deparse.  (Do
people have hundreds of policies per table?)

(I suppose you could create scenarios with very many such tables to make
the overhead visible again.)

How realistic is this use case?  Is it worth it?

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments

Re: Make deparsing of column defaults faster

From: Justin Pryzby
On Mon, Jun 04, 2018 at 10:00:53PM -0400, Peter Eisentraut wrote:
> On 6/4/18 20:55, Jeff Janes wrote:
> > Since defaults can't contain Vars, this patch converts the second
> > parameter to zero in places where pg_get_expr is invoked on column
> > defaults.
> 
> My in-progress generated columns patch removes that assumption (since a
> generated column generally refers to another column of the same table).

On Thu, Jul 05, 2018 at 04:45:07PM +0200, Peter Eisentraut wrote:
> On 29.06.18 05:15, Jeff Janes wrote:
> > Since pg_dump calls pg_get_expr once over and over again on the same
> > table consecutively, perhaps we could cache the column alias assignments
> > in a single-entry cache, so if it is called on the same table as last
> > time it just re-uses the aliases from last time.  I am not planning on
> 
> I looked into that.  deparse_context_for() is actually not that
> expensive on its own, well below one second, but it gets somewhat
> expensive when you call it 1600 times for one table.  So to address that
> case, we can cache the deparse context between calls in the fn_extra
> field of pg_get_expr.  The attached patch does that.  This makes the
> pg_dump -s times pretty much constant even with 1600 columns with
> defaults.

I checked on one customer running PG10.4, for which pg_dump takes 8 minutes to
pg_dump -s.

I imported existing schema to PG12dev (which itself took 25min) and compared:
patched: 2m33.616s
unpatched: 7m19.578s

Note that I've reduced the number of child tables in this DB recently (by
repartitioning tables from daily to monthly granularity), thereby reducing the
number of columns of the largest tables by a factor of 30, and reducing the
size of pg_dump -s to 51MB from 120MB (6 months ago).  I expect this patch
would've saved even more before the cleanup.

> How realistic is this use case?  Is it worth it?

Note that this affects pg_upgrade, which is how this issue originally came up [0].
(But I believe pg_upgrade likes to call pg_dump from the old server version, so
pg_upgrade to v11 couldn't benefit unless this was included in PG10.5).

[pryzbyj@database postgresql]$ grep -c 'SET DEFAULT' /srv/cdrperfbackup/ts/2018-07-04/pg_dump-section=pre-data
183915

Justin

[0] https://www.postgresql.org/message-id/CAMkU%3D1x-e%2BmaqefhM1yMeSiJ8J9Z%2BSJHgW7c9bqo3E3JMG4iJA%40mail.gmail.com


Re: Make deparsing of column defaults faster

From: Peter Eisentraut
On 05.07.18 18:58, Justin Pryzby wrote:
> I checked on one customer running PG10.4, for which pg_dump takes 8 minutes to
> pg_dump -s.
> 
> I imported existing schema to PG12dev (which itself took 25min) and compared:
> patched: 2m33.616s
> unpatched: 7m19.578s

Which patch is that, Jeff's or mine?

> Note, that affects pg_upgrade, which is how this issue originally came up [0].
> (But I believe pg_upgrade likes to call pg_dump from the old server version, so
> pg_upgrade to v11 couldn't benefit unless this was included in PG10.5).

pg_upgrade calls the new pg_dump.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Make deparsing of column defaults faster

From: Jeff Janes
On Thu, Jul 5, 2018 at 10:45 AM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:
On 29.06.18 05:15, Jeff Janes wrote:
> Since pg_dump calls pg_get_expr once over and over again on the same
> table consecutively, perhaps we could cache the column alias assignments
> in a single-entry cache, so if it is called on the same table as last
> time it just re-uses the aliases from last time.  I am not planning on
> working on that, I don't know where such a cache could be stored such
> that is freed and invalidated at the appropriate times.

I looked into that.  deparse_context_for() is actually not that
expensive on its own, well below one second, but it gets somewhat
expensive when you call it 1600 times for one table.  So to address that
case, we can cache the deparse context between calls in the fn_extra
field of pg_get_expr.  The attached patch does that.  This makes the
pg_dump -s times pretty much constant even with 1600 columns with
defaults.  psql \d should benefit similarly.  I haven't seen any other
cases where we'd expect hundreds of related objects to deparse.  (Do
people have hundreds of policies per table?)

One case that your patch doesn't improve (neither does my posted one) is check constraints.  To fix that, pg_get_constraintdef_worker would also need to grow a cache.  I don't know how often people put check constraints on most of the columns of a table.  Some people like their NOT NULL constraints to be named, not implicit.

But from the bigger picture of making pg_upgrade faster, a major issue is that while pg_dump -s gets faster for the column default case, the restore of that dump is still slow (again, my posted patch also doesn't fix that).  In that case it is deparse_context_for, called from StoreAttrDefault, that is slow.

(I suppose you could create scenarios with very many such tables to make
the overhead visible again.)

With partitioning, I think anything which happens once is likely to happen many times. But I don't see any extra improvement from applying my patch over yours (the bottleneck is shifted off to pg_dump itself), and yours is definitely cleaner and slightly more general.  I think that yours would be worthwhile, even if not the last word on the subject.

Cheers,

Jeff

Re: Make deparsing of column defaults faster

From: Peter Eisentraut
On 07/07/2018 20:07, Jeff Janes wrote:
> One case that your patch doesn't improve (neither does my posted one) is
> check constraints.  To fix that, pg_get_constraintdef_worker would also
> need to grow a cache as well.  I don't know how often people put check
> constraints on most of the columns of a table.  Some people like their
> NOT NULL constraints to be named, not implicit.
> 
> But from the bigger picture of making pg_upgrade faster, a major issue
> is that while pg_dump -s gets faster for the column default case, the
> restore of that dump is still slow (again, my posted patch also doesn't
> fix that).  In that case it is deparse_context_for called from
> StoreAttrDefault which is slow.

Any thoughts on how to proceed here?  It seems there is more work to do
to cover all the issues with dumping and restoring tables with many
columns.  Since the original report was in the context of pg_upgrade, we
should surely address at least the pg_restore slowness.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Make deparsing of column defaults faster

From: Jeff Janes
On Mon, Jul 30, 2018 at 7:03 AM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:
On 07/07/2018 20:07, Jeff Janes wrote:
> One case that your patch doesn't improve (neither does my posted one) is
> check constraints.  To fix that, pg_get_constraintdef_worker would also
> need to grow a cache as well.  I don't know how often people put check
> constraints on most of the columns of a table.  Some people like their
> NOT NULL constraints to be named, not implicit.
>
> But from the bigger picture of making pg_upgrade faster, a major issue
> is that while pg_dump -s gets faster for the column default case, the
> restore of that dump is still slow (again, my posted patch also doesn't
> fix that).  In that case it is deparse_context_for called from
> StoreAttrDefault which is slow.

Any thoughts on how to proceed here?  It seems there is more work to do
to cover all the issues with dumping and restoring tables with many
columns.  Since the original report was in the context of pg_upgrade, we
should surely address at least the pg_restore slowness.


I'll work on solving the problem using a hash table at the lowest level (making column names unique), for a future commit fest.  That should drop it from N^3 to N^2, which since N can't go above 1600 should be good enough.

So we can set this to rejected, as that will be an entirely different approach.

Your caching patch might be worthwhile on its own, though.

Cheers,

Jeff

Re: Make deparsing of column defaults faster

From: Peter Eisentraut
On 30/07/2018 13:51, Jeff Janes wrote:
>     Any thoughts on how to proceed here?  It seems there is more work to do
>     to cover all the issues with dumping and restoring tables with many
>     columns.  Since the original report was in the context of pg_upgrade, we
>     should surely address at least the pg_restore slowness.
> 
> I'll working on solving the problem using a hash table at the lowest
> level (making column names unique), for a future commit fest.  That
> should drop it from N^3 to N^2, which since N can't go above 1600 should
> be good enough.
> 
> So we can set this to rejected, as that will be an entirely different
> approach.  
> 
> Your caching patch might be worthwhile on its own, though.

I'm going to set this thread as returned with feedback until we have a
more complete solution.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services