Discussion: Deadlock risk while inserting directly into partition?

Deadlock risk while inserting directly into partition?

From: Amit Kapila
Date:
I noticed that while inserting directly into a partition table we
compute the PartitionCheckExpr by traversing all the parent partitions
via ExecPartitionCheck()->RelationGetPartitionQual()->generate_partition_qual().
We take AccessShareLock on parent tables while generating qual.

Now, on the other hand, while dropping constraint on a partitioned
table, we take the lock from parent to all the child tables.

I think taking locks in opposite directions can lead to deadlock in
these operations.

I have tried with the below example on HEAD.

Setup
=======
create or replace function func_dummy(price integer) returns integer as
$$
    begin
        raise notice 'hello from func_dummy';
        return price;
    end;
$$ language plpgsql immutable parallel unsafe;


CREATE TABLE pt_test (a int, c char(1000)) PARTITION BY range (a);
CREATE TABLE pt_test1 PARTITION OF pt_test FOR VALUES FROM (0) TO (100000);
CREATE TABLE pt_test2 PARTITION OF pt_test FOR VALUES FROM (100000) TO (400000);

ALTER TABLE pt_test ADD CONSTRAINT check_cons CHECK(func_dummy(a) = a);

Actual test
=============
Session-1
--------------
Add breakpoint in generate_partition_qual(). Perform below statement.
insert into pt_test2 values(100001, 'aaaa');

Now, stop in the debugger just before taking AccessShareLock on the
parent table.

Session-2
=========
 ALTER TABLE pt_test DROP CONSTRAINT check_cons;

You will see that session-2 is waiting to get a lock on pt_test2.
Then, continue debugging in session-1 which will lead to a deadlock.
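
The same opposite lock ordering can also be simulated without a
debugger, using explicit LOCK statements as a rough stand-in for the
locks the insert takes (an approximation, not the exact code path):

Session-1
--------------
BEGIN;
LOCK TABLE pt_test2 IN ACCESS SHARE MODE;  -- stands in for the insert's lock on the partition

Session-2
=========
ALTER TABLE pt_test DROP CONSTRAINT check_cons;  -- locks pt_test, then blocks waiting on pt_test2

Session-1
--------------
LOCK TABLE pt_test IN ACCESS SHARE MODE;  -- now blocks behind session-2's lock on pt_test; deadlock detected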

Is this expected, if so why?

-- 
With Regards,
Amit Kapila.



RE: Deadlock risk while inserting directly into partition?

From: "houzj.fnst@fujitsu.com"
Date:
On Wednesday, June 23, 2021 5:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> I noticed that while inserting directly into a partition table we compute the
> PartitionCheckExpr by traversing all the parent partitions via
> ExecPartitionCheck()->RelationGetPartitionQual()->generate_partition_qual().
> We take AccessShareLock on parent tables while generating qual.
> 
> Now, on the other hand, while dropping constraint on a partitioned table, we
> take the lock from parent to all the child tables.
> 
> I think taking locks in opposite directions can lead to deadlock in these
> operations.
> 
> I have tried with the below example on HEAD.
> 
> Setup
> =======
> create or replace function func_dummy(price integer) returns integer as $$
>     begin
>         raise notice 'hello from func_dummy';
>         return price;
>     end;
> $$ language plpgsql immutable parallel unsafe;
> 
> 
> CREATE TABLE pt_test (a int, c char(1000)) PARTITION BY range (a); CREATE
> TABLE pt_test1 PARTITION OF pt_test FOR VALUES FROM (0) TO (100000);
> CREATE TABLE pt_test2 PARTITION OF pt_test FOR VALUES FROM (100000) TO
> (400000);
> 
> ALTER TABLE pt_test ADD CONSTRAINT check_cons CHECK(func_dummy(a)
> = a);
> 
> Actual test
> =============
> Session-1
> --------------
> Add breakpoint in generate_partition_qual(). Perform below statement.
> insert into pt_test2 values(100001, 'aaaa');
> 
> Now, stop in the debugger just before taking AccessShareLock on the parent
> table.
> 
> Session-2
> =========
>  ALTER TABLE pt_test DROP CONSTRAINT check_cons;
> 
> You will see that session-2 is waiting to get a lock on pt_test2.
> Then, continue debugging in session-1 which will lead to a deadlock.

I can reproduce this deadlock issue with the above steps,
and I can see the following error message.

postgres=# insert into pt_test2 values(100001, 'aaaa');
NOTICE:  hello from func_dummy
ERROR:  deadlock detected
DETAIL:  Process 3068763 waits for AccessShareLock on relation 16385 of database 13027; blocked by process 3068966.
Process 3068966 waits for AccessExclusiveLock on relation 16393 of database 13027; blocked by process 3068763.
HINT:  See server log for query details.

Best regards,
houzj

Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Wed, 23 Jun 2021 at 21:07, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I noticed that while inserting directly into a partition table we
> compute the PartitionCheckExpr by traversing all the parent partitions
> via ExecPartitionCheck()->RelationGetPartitionQual()->generate_partition_qual().
> We take AccessShareLock on parent tables while generating qual.
>
> Now, on the other hand, while dropping constraint on a partitioned
> table, we take the lock from parent to all the child tables.
>
> I think taking locks in opposite directions can lead to deadlock in
> these operations.

I wonder if it's possible to do any better here?  Surely when
traversing from child to parent we must lock the child before checking
what the parent relation is.

I think the reasons for doing operations directly on partitions are
being reduced with each release.  What operations do people really
need to do on partitions now? TRUNCATE is probably one, maybe there's
still a need to CREATE INDEX.  There's not much to gain performance
wise now inserting directly into a partition. There's a pending patch
around that aims to speed that up further by caching the last used
partition and trying that first.

I've recently been thinking it would be good if you were unable to
access partitions directly by name at all.  That would also get around
the problem of having to lock all non-pruned partitions during queries
to the partitioned table. Maybe it's too late for that though.

David



Re: Deadlock risk while inserting directly into partition?

From: Tom Lane
Date:
David Rowley <dgrowleyml@gmail.com> writes:
> I've recently been thinking it would be good if you were unable to
> access partitions directly by name at all.

I strongly disagree.  That's essentially betting the whole farm on
our always being able to optimize parent-level operations fully,
which I do not think we are anywhere close to.

> That would also get around
> the problem of having to lock all non-pruned partitions during queries
> to the partitioned table. Maybe it's too late for that though.

Yeah, I think we are locked into the current design now, for better
or worse.

            regards, tom lane



Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Thu, 24 Jun 2021 at 10:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> David Rowley <dgrowleyml@gmail.com> writes:
> > I've recently been thinking it would be good if you were unable to
> > access partitions directly by name at all.
>
> I strongly disagree.  That's essentially betting the whole farm on
> our always being able to optimize parent-level operations fully,
> which I do not think we are anywhere close to.

Did you have anything in particular in mind here?  I thought we got
all these in 8edd0e794.  I think the one that was missing was parallel
index scans. That commit adds code to add the missing paths in
add_paths_to_append_rel().

As of 14, UPDATE/DELETEs when a single partition remains after pruning
should be fairly comparable to a direct UPDATE/DELETE on the
partition.
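
For example, with the tables from the first message, these two should
now be in much the same ballpark:

UPDATE pt_test  SET c = 'bbbb' WHERE a = 100001;   -- pruned down to pt_test2 at plan time
UPDATE pt_test2 SET c = 'bbbb' WHERE a = 100001;   -- direct update on the partition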

Certainly, back when partitioning was added there were still lots of
use cases for querying partitions directly, but as far as I see it,
there's not many of those left. The patch in [1] aims to reduce the
overhead of one of these. I have a patch locally for another one. I'm
currently not aware of any other cases where querying a single
partition is slow.

But... maybe there are some cases where a user can be certain that all
interesting records are contained in a single partition but partition
pruning cannot prove it... So maybe what you say is right.
The workaround there would be to add a qual that allows pruning to
work.
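
Something like this, say, where orders, customer_id and order_date are
hypothetical names for a table range-partitioned on order_date:

-- pruning can't prove that all of customer 42's rows are in one partition...
SELECT * FROM orders WHERE customer_id = 42;
-- ...but the user might know it, and a redundant qual on the partition key lets pruning do its job:
SELECT * FROM orders
WHERE customer_id = 42
  AND order_date >= '2021-06-01' AND order_date < '2021-07-01';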

David

[1] https://www.postgresql.org/message-id/CA+HiwqGqh-aHXGO8-_ftU7e2GdGUr_T-xqr6Z_6uagyJpEpJfA@mail.gmail.com



Re: Deadlock risk while inserting directly into partition?

From: Tom Lane
Date:
David Rowley <dgrowleyml@gmail.com> writes:
> On Thu, 24 Jun 2021 at 10:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I strongly disagree.  That's essentially betting the whole farm on
>> our always being able to optimize parent-level operations fully,
>> which I do not think we are anywhere close to.

> Did you have anything in particular in mind here?

I don't think it's very hard to make up WHERE conditions that a person can
see select only one partition, but PG won't be able to figure that out.
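
For instance, with the range-partitioned table from upthread, wrapping
the partition key in an expression is enough (just a sketch):

SELECT * FROM pt_test WHERE a + 0 = 100001;    -- a person can see only pt_test2 qualifies
SELECT * FROM pt_test WHERE abs(a) = 100001;   -- likewise, given that a is never negative here
-- neither qual has the "partition-key op constant" shape pruning needs,
-- so every partition gets scanned.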

> But... maybe there are some cases where a user can be certain that all
> interesting records are contained in a single partition but
> partitioning pruning cannot prove it...So maybe what you say is right.
> The workaround there would be to add a qual that allows pruning to
> work.

[ shrug... ]  It's about as easy to just name the partition you want.
When planning overhead is considered, maybe it's a lot easier.

            regards, tom lane



Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Thu, 24 Jun 2021 at 12:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> David Rowley <dgrowleyml@gmail.com> writes:
> > But... maybe there are some cases where a user can be certain that all
> > interesting records are contained in a single partition but
> > partitioning pruning cannot prove it...So maybe what you say is right.
> > The workaround there would be to add a qual that allows pruning to
> > work.
>
> [ shrug... ]  It's about as easy to just name the partition you want.
> When planning overhead is considered, maybe it's a lot easier.

I'm not suggesting that we go and make it impossible for users to
directly reference partitions today.  What I mean is that as we add
more and more fixes to improve the performance of partitioning, there
comes a point where the ability to directly reference partitions is a
hindrance rather than something that's useful. Right now that
hindrance is the fact that we must lock every single partition in the
plan. We only need to do that in case some other backend is doing
something that bypasses taking a lock on the parent partitioned table.
The overhead of taking these locks is pretty significant for
partitioned tables with lots of partitions where only 1 of them
survives run-time partition pruning.  That's really terrible for
people that want to PREPARE queries and just look up a single row from
a single partition.  That seems like a pretty big use case that we're
just terrible at today.
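
The sort of workload I have in mind is something like this, where
lookup and fetch are hypothetical names (just a sketch):

CREATE TABLE lookup (id int, payload text) PARTITION BY HASH (id);
-- ... plus, say, 100 partitions created with
-- CREATE TABLE lookup_n PARTITION OF lookup FOR VALUES WITH (MODULUS 100, REMAINDER n);
PREPARE fetch (int) AS SELECT * FROM lookup WHERE id = $1;
SET plan_cache_mode = force_generic_plan;
EXECUTE fetch(42);  -- run-time pruning leaves a single partition, yet every
                    -- partition in the plan must be locked before executing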

David



Re: Deadlock risk while inserting directly into partition?

From: Tom Lane
Date:
David Rowley <dgrowleyml@gmail.com> writes:
> ... What I mean is that as we add
> more and more fixes to improve performance of partitioning, that there
> comes a point where the ability to directly reference partitions is a
> hindrance rather than something that's useful. Right now that
> hindrance is the fact that we must lock every single partition in the
> plan. We only need to do that in case some other backend is doing
> something that bypasses taking a lock on the parent partitioned table.

TBH, I buy no part of that line of reasoning.  I don't think that the
ability to access partitions directly is a material problem here;
I doubt that we need to lock every partition in the plan when run-time
routing is working (surely we only need to lock the partition mapping);
and most especially I don't see why an operation on a child table that
doesn't lock the parent would cause a problem for queries that do not
need to access that child.  Perhaps we've got some implementation issues
to fix, but I see no fundamental problem there.

It is true that this design can lead to deadlocks between operations that
start from the parent vs ones that start from the child and then discover
that they need to lock the parent.  But the latter should be darn rare.
In any case, your solution seems to amount to prohibiting not only the
latter class of operations altogether, but *also* prohibiting operations
on the child that don't need to lock the parent.  I fail to see how that
makes anybody's life better.

            regards, tom lane



Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Thu, 24 Jun 2021 at 12:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I don't think that the
> ability to access partitions directly is a material problem here;
> I doubt that we need to lock every partition in the plan when run-time
> routing is working (surely we only need to lock the partition mapping);
> and most especially I don't see why an operation on a child table that
> doesn't lock the parent would cause a problem for queries that do not
> need to access that child.  Perhaps we've got some implementation issues
> to fix, but I see no fundamental problem there.

Not quite sure I know what you mean by "lock the partition mapping".

We do unfortunately need to lock all partitions in the plan before
run-time pruning completes.  For example, if someone drops an index
from one of the partitions that's used in the plan, then we must take
the lock before execution so that we properly invalidate the plan and
get another one.  I'm not sure I see how that could be done during
execution; we might have already started returning rows to the client
by that time.
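
To sketch that hazard with the tables from upthread (pt_test2_a_idx and
q are hypothetical names):

CREATE INDEX pt_test2_a_idx ON pt_test2 (a);
PREPARE q (int) AS SELECT * FROM pt_test WHERE a = $1;
EXECUTE q(100001);   -- a cached generic plan may index-scan pt_test2_a_idx
-- meanwhile, another session runs: DROP INDEX pt_test2_a_idx;
-- the next EXECUTE must notice that invalidation and replan; taking the
-- lock on pt_test2 before execution starts is what guarantees that.
EXECUTE q(100001);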

> It is true that this design can lead to deadlocks between operations that
> start from the parent vs ones that start from the child and then discover
> that they need to lock the parent.  But the latter should be darn rare.
> In any case, your solution seems to amount to prohibiting not only the
> latter class of operations altogether, but *also* prohibiting operations
> on the child that don't need to lock the parent.

Again, I'm not saying we need to go and make partitioning work this
way. I'm saying that the problem wouldn't exist if it did work that
way and that there appears to be no solution to fix it without making
it work that way.

> I fail to see how that
> makes anybody's life better.

Well, if you ignore the perfectly valid use case that I mentioned
then, yeah.  Or do you not think that doing a single-row lookup on a
partitioned table with a prepared query is a case worth worrying
about?

I grabbed a profile from a generic plan being executed on a
partitioned table with 100 partitions. It's completely dominated by
lock management and looks like this:

  22.42%  postgres  postgres            [.] hash_search_with_hash_value
   9.06%  postgres  postgres            [.] hash_bytes
   4.14%  postgres  postgres            [.] LockAcquireExtended
   3.90%  postgres  postgres            [.] AllocSetAlloc
   3.84%  postgres  postgres            [.] hash_seq_search
   3.77%  postgres  postgres            [.] LockReleaseAll

I don't think 100 partitions is excessive.

David



Re: Deadlock risk while inserting directly into partition?

From: Simon Riggs
Date:
On Thu, Jun 24, 2021 at 1:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> David Rowley <dgrowleyml@gmail.com> writes:
> > ... What I mean is that as we add
> > more and more fixes to improve performance of partitioning, that there
> > comes a point where the ability to directly reference partitions is a
> > hindrance rather than something that's useful. Right now that
> > hindrance is the fact that we must lock every single partition in the
> > plan. We only need to do that in case some other backend is doing
> > something that bypasses taking a lock on the parent partitioned table.
>
> TBH, I buy no part of that line of reasoning.  I don't think that the
> ability to access partitions directly is a material problem here;
> I doubt that we need to lock every partition in the plan when run-time
> routing is working (surely we only need to lock the partition mapping);
> and most especially I don't see why an operation on a child table that
> doesn't lock the parent would cause a problem for queries that do not
> need to access that child.  Perhaps we've got some implementation issues
> to fix, but I see no fundamental problem there.
>
> It is true that this design can lead to deadlocks between operations that
> start from the parent vs ones that start from the child and then discover
> that they need to lock the parent.  But the latter should be darn rare.
> In any case, your solution seems to amount to prohibiting not only the
> latter class of operations altogether, but *also* prohibiting operations
> on the child that don't need to lock the parent.  I fail to see how that
> makes anybody's life better.

I agree with David's points above.

Maybe I've missed something but I don't see any benefit in being able
to reference individual partitions by name, as a feature. Maybe as a
temporary performance trick, but app devs just want partitioning to be
invisible to them at the application level. It's a modularity
violation to be able to access parts of a table, just like it would be
if we allowed people to reference individual smgr files.

If that requires that we add a new non-default option, no problem.
Most people will want to use that option, AFAICS.

-- 
Simon Riggs                http://www.EnterpriseDB.com/



Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Thu, 24 Jun 2021 at 12:32, David Rowley <dgrowleyml@gmail.com> wrote:
> The overhead of taking these locks is pretty significant for
> partitioned tables with lots of partitions where only 1 of them
> survives run-time partition pruning.  That's really terrible for
> people that want to PREPARE queries and just look up a single row from
> a single partition.  That seems like a pretty big use case that we're
> just terrible at today.

I wonder, since we can't delay taking locks until after run-time
pruning due to being unable to invalidate cached plans, maybe instead
we could tag on any PartitionPruneInfo onto the PlannedStmt itself and
do the init plan run-time prune run during AcquireExecutorLocks().

A lock would need to be taken on each partitioned table before we
prune for it. So if there was multi-level partitioning, we'd need to
lock the partitioned table, do pruning for that partitioned table,
then lock any sub-partitioned tables before doing pruning on those.

I don't immediately see why it couldn't be made to work, it's just
that it adds quite a lot of complexity to what's being done in
AcquireExecutorLocks(), which today is a very simple function.

David



Re: Deadlock risk while inserting directly into partition?

From: Amit Langote
Date:
On Fri, Jun 25, 2021 at 10:26 AM David Rowley <dgrowleyml@gmail.com> wrote:
> On Thu, 24 Jun 2021 at 12:32, David Rowley <dgrowleyml@gmail.com> wrote:
> > The overhead of taking these locks is pretty significant for
> > partitioned tables with lots of partitions where only 1 of them
> > survives run-time partition pruning.  That's really terrible for
> > people that want to PREPARE queries and just look up a single row from
> > a single partition.  That seems like a pretty big use case that we're
> > just terrible at today.
>
> I wonder, since we can't delay taking locks until after run-time
> pruning due to being unable to invalidate cached plans, maybe instead
> we could tag on any PartitionPruneInfo onto the PlannedStmt itself and
> do the init plan run-time prune run during AcquireExecutorLocks().

This is exactly what I was mulling doing when working on [1] sometime
last year, after an off-list discussion with Robert (he suggested the
idea IIRC), though I never quite finished writing a patch.  I have been
planning to revisit this topic ("locking overhead in generic plans") for
v15, now that we have *some* of the proposals mentioned in [1] committed
to v14, so I can look into this.

> A lock would need to be taken on each partitioned table before we
> prune for it. So if there was multi-level partitioning, we'd need to
> lock the partitioned table, do pruning for that partitioned table,
> then lock any sub-partitioned tables before doing pruning on those.
>
> I don't immediately see why it couldn't be made to work, it's just
> that it adds quite a lot of complexity to what's being done in
> AcquireExecutorLocks(), which today is a very simple function.

Yeah, AcquireExecutorLocks()'s current method of finding the set of
relations to lock is very simple -- just scan the range table
(PlannedStmt.rtable).  If we're to remove prunable leaf partitions
from that set, maybe we'd have to find a way to remove them from
PlannedStmt.rtable as part of running the "init" pruning, which we'd
have to do anyway, because perhaps the executor proper (mainly
InitPlan) should also see the shrunken version of the range table.
Not to mention the complexity of getting the "init" pruning itself to
run outside a full-blown executor context.

Anyway, do you agree with starting a thread to discuss possible
approaches to attack this?

--
Amit Langote
EDB: http://www.enterprisedb.com

[1] https://www.postgresql.org/message-id/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com



Re: Deadlock risk while inserting directly into partition?

From: Amit Langote
Date:
On Thu, Jun 24, 2021 at 7:27 AM David Rowley <dgrowleyml@gmail.com> wrote:
> On Wed, 23 Jun 2021 at 21:07, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > I noticed that while inserting directly into a partition table we
> > compute the PartitionCheckExpr by traversing all the parent partitions
> > via ExecPartitionCheck()->RelationGetPartitionQual()->generate_partition_qual().
> > We take AccessShareLock on parent tables while generating qual.
> >
> > Now, on the other hand, while dropping constraint on a partitioned
> > table, we take the lock from parent to all the child tables.
> >
> > I think taking locks in opposite directions can lead to deadlock in
> > these operations.
>
> I wonder if it's possible to do any better here?  Surely when
> traversing from child to parent we must lock the child before checking
> what the parent relation is.

I remember there was a discussion where I proposed to document the
deadlock hazard that exists when performing DML directly on
partitions.  The proposal didn't get enough attention, perhaps because
it was in the middle of a long reply about other concerns:

https://www.postgresql.org/message-id/16db1458-67cf-4add-736e-31b053115e8e%40lab.ntt.co.jp

Maybe a good idea to add a line or 2 in 5.11. Table Partitioning?

-- 
Amit Langote
EDB: http://www.enterprisedb.com



Re: Deadlock risk while inserting directly into partition?

From: Amit Langote
Date:
On Mon, Jun 28, 2021 at 12:58 PM Amit Langote <amitlangote09@gmail.com> wrote:
> On Fri, Jun 25, 2021 at 10:26 AM David Rowley <dgrowleyml@gmail.com> wrote:
> > On Thu, 24 Jun 2021 at 12:32, David Rowley <dgrowleyml@gmail.com> wrote:
> > > The overhead of taking these locks is pretty significant for
> > > partitioned tables with lots of partitions where only 1 of them
> > > survives run-time partition pruning.  That's really terrible for
> > > people that want to PREPARE queries and just look up a single row from
> > > a single partition.  That seems like a pretty big use case that we're
> > > just terrible at today.
> >
> > I wonder, since we can't delay taking locks until after run-time
> > pruning due to being unable to invalidate cached plans, maybe instead
> > we could tag on any PartitionPruneInfo onto the PlannedStmt itself and
> > do the init plan run-time prune run during AcquireExecutorLocks().
>
> This is exactly what I was mulling doing when working on [1] some last
> year, after an off-list discussion with Robert (he suggested the idea
> IIRC), though I never quite finished writing a patch.

Ah, I *had* mentioned this bit in the first email of [1]:

"Another solution suggested to me by Robert Haas in an off-list
discussion is to teach AcquireExecutorLocks() or the nearby code to
perform EXTERN parameter based pruning before passing the plan tree to
the executor and lock partitions that survive that pruning. It's
perhaps doable if we refactor the ExecFindInitialMatchingSubPlans() to
not require a full-blown execution context. Or maybe we could do
something more invasive by rewriting AcquireExecutorLocks() to walk
the plan tree instead of the flat range table, looking for scan nodes
and nodes that support runtime pruning to lock the appropriate
relations."

Alas, I hadn't written down any concrete proposals as to how that
could be done. :(

-- 
Amit Langote
EDB: http://www.enterprisedb.com



Re: Deadlock risk while inserting directly into partition?

From: Amit Kapila
Date:
On Mon, Jun 28, 2021 at 9:50 AM Amit Langote <amitlangote09@gmail.com> wrote:
>
> On Thu, Jun 24, 2021 at 7:27 AM David Rowley <dgrowleyml@gmail.com> wrote:
> > On Wed, 23 Jun 2021 at 21:07, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > I noticed that while inserting directly into a partition table we
> > > compute the PartitionCheckExpr by traversing all the parent partitions
> > > via ExecPartitionCheck()->RelationGetPartitionQual()->generate_partition_qual().
> > > We take AccessShareLock on parent tables while generating qual.
> > >
> > > Now, on the other hand, while dropping constraint on a partitioned
> > > table, we take the lock from parent to all the child tables.
> > >
> > > I think taking locks in opposite directions can lead to deadlock in
> > > these operations.
> >
> > I wonder if it's possible to do any better here?  Surely when
> > traversing from child to parent we must lock the child before checking
> > what the parent relation is.
>
> I remember there was a discussion where I proposed to document the
> deadlock hazard that exists when performing DML directly on
> partitions.
>

+1. I think it would also be good to cover this in the code comments or
a README.  How about adding something to the README/code about the
locking of partitions for different operations? Unless I am missing
it, I think some of this information is there in bits and pieces, but
it would be great to have it consolidated in one place.

>  The proposal didn't get enough attention, perhaps because
> it was in the middle of a long reply about other concerns:
>
> https://www.postgresql.org/message-id/16db1458-67cf-4add-736e-31b053115e8e%40lab.ntt.co.jp
>
> Maybe a good idea to add a line or 2 in 5.11. Table Partitioning?
>

Sounds reasonable, but I think it would be better if we can mention the
scenarios/cases where there is a possibility of deadlocks.

-- 
With Regards,
Amit Kapila.



Re: Deadlock risk while inserting directly into partition?

From: David Rowley
Date:
On Sat, 26 Jun 2021 at 16:41, Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> On Thu, Jun 24, 2021 at 10:27:06AM +1200, David Rowley wrote:
> > I think the reasons for doing operations directly on partitions are
> > being reduced with each release.  What operations do people really
> > need to do on partitions now? TRUNCATE is probably one, maybe there's
> > still a need to CREATE INDEX.
>
> We always SELECT out of parent tables, but need to be able to CREATE INDEX on
> partitions.

I imagined we'd have something along the lines of: ALTER TABLE
partitioned_table ALTER PARTITION part CREATE INDEX.  I admit I don't
know how that would look when faced with multi-level partitioning.

> And INSERT ON CONFLICT into partitions,

I didn't think of that one.  Looks like we're further away from
partitioning being transparent to queries and DML than I thought :-(

David



Re: Deadlock risk while inserting directly into partition?

От
David Rowley
Дата:
On Mon, 28 Jun 2021 at 15:59, Amit Langote <amitlangote09@gmail.com> wrote:
>
> On Fri, Jun 25, 2021 at 10:26 AM David Rowley <dgrowleyml@gmail.com> wrote:
> > I wonder, since we can't delay taking locks until after run-time
> > pruning due to being unable to invalidate cached plans, maybe instead
> > we could tag on any PartitionPruneInfo onto the PlannedStmt itself and
> > do the init plan run-time prune run during AcquireExecutorLocks().
>
> This is exactly what I was mulling doing when working on [1] some last
> year, after an off-list discussion with Robert (he suggested the idea
> IIRC), though I never quite finished writing a patch.  I have planned
> to revisit this topic ("locking overhead in generic plans") for v15,
> now that we have *some* proposals mentioned in [1] committed to v14,
> so can look into this.

I have thought about this only a little bit more since I wrote the
above.  I think it would require adding yet another stage at which we
do run-time pruning.  It should be possible to do pruning when there's
GeneratePruningStepsContext.has_exec_param == true.  However, I'm not
so sure that we could do GeneratePruningStepsContext.has_mutable_arg.
Evaluating the value for those requires some level of actual
execution.  That's a pity as we'd still need to take a bunch of extra
locks in a case like: SELECT * FROM time_parted WHERE ts >= NOW() -
INTERVAL '1 hour';

I see the param values are fairly easily accessible a couple of levels
up from AcquireExecutorLocks() in GetCachedPlan().

> Anyway, do you agree with starting a thread to discuss possible
> approaches to attack this?

Agreed about the separate thread. We can discuss it further there.

David

> [1] https://www.postgresql.org/message-id/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com



Re: Deadlock risk while inserting directly into partition?

From: Laurenz Albe
Date:
On Mon, 2021-06-28 at 23:46 +1200, David Rowley wrote:
> On Sat, 26 Jun 2021 at 16:41, Justin Pryzby <pryzby@telsasoft.com> wrote:
> > On Thu, Jun 24, 2021 at 10:27:06AM +1200, David Rowley wrote:
> > > I think the reasons for doing operations directly on partitions are
> > > being reduced with each release.  What operations do people really
> > > need to do on partitions now? TRUNCATE is probably one, maybe there's
> > > still a need to CREATE INDEX.
> > 
> > We always SELECT out of parent tables, but need to be able to CREATE INDEX on
> > partitions.
> 
> I imagined we'd have something along the lines of: ALTER TABLE
> partitioned_table ALTER PARTITION part CREATE INDEX.  I admit I don't
> know how that would look when faced with multi-level partitioning.

Some general comments on this initiative:

I like it that partitions are normal tables in PostgreSQL, and that I
can just use them in SQL statements.

If there is really no other way to avoid certain problems, we can change
that, but I would greatly prefer that it remain the way it is now.
Perhaps we can document such deadlock risks, or find ways to
avoid them.

I think reducing functionality should be the last route to consider.
If we introduce new syntax to access partitions, we will end up with a lot
of new syntax, and we might well have an endless stream of requests for
ways to do X with a partition.

Yours,
Laurenz Albe