Thread: Atomic operations within spinlocks

Atomic operations within spinlocks

From
Tom Lane
Date:
In connection with the nearby thread about spinlock coding rule
violations, I noticed that we have several places that are doing
things like this:

    SpinLockAcquire(&WalRcv->mutex);
    ...
    written_lsn = pg_atomic_read_u64(&WalRcv->writtenUpto);
    ...
    SpinLockRelease(&WalRcv->mutex);

That's from pg_stat_get_wal_receiver(); there is similar code in
freelist.c's ClockSweepTick() and StrategySyncStart().

This seems to me to be very bad code.  In the first place, on machines
without the appropriate type of atomic operation, atomics.c is going
to be using a spinlock to emulate atomicity, which means this code
tries to take a spinlock while holding another one.  That's not okay,
either from the standpoint of performance or error-safety.  In the
second place, this coding seems to me to indicate serious confusion
about which lock is protecting what.  In the above example, if
writtenUpto is only accessed through atomic operations then it seems
like we could just move the pg_atomic_read_u64 out of the spinlock
section; or if the spinlock is adequate protection then we could just
do a normal fetch.  If we actually need both locks then this needs
significant re-thinking, IMO.

Comments?

            regards, tom lane



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-03 14:19:45 -0400, Tom Lane wrote:
> In connection with the nearby thread about spinlock coding rule
> violations, I noticed that we have several places that are doing
> things like this:
>
>     SpinLockAcquire(&WalRcv->mutex);
>     ...
>     written_lsn = pg_atomic_read_u64(&WalRcv->writtenUpto);
>     ...
>     SpinLockRelease(&WalRcv->mutex);
>
> That's from pg_stat_get_wal_receiver(); there is similar code in
> freelist.c's ClockSweepTick() and StrategySyncStart().
>
> This seems to me to be very bad code.  In the first place, on machines
> without the appropriate type of atomic operation, atomics.c is going
> to be using a spinlock to emulate atomicity, which means this code
> tries to take a spinlock while holding another one.  That's not okay,
> either from the standpoint of performance or error-safety.

I'm honestly not particularly concerned about either of those in
general:

- WRT performance: Which platforms where we care about performance can't
  do a tear-free read of a 64bit integer, and thus needs a spinlock for
  this? And while the cases in freelist.c aren't just reads, they are
  really cold paths (clock wrapping around).
- WRT error safety: What could happen here? The atomics codepaths are
  no-fail code, and nothing should ever nest inside the atomic-emulation
  spinlocks. What am I missing?

I think straight out prohibiting use of atomics inside a spinlock will
lead to more complicated and slower code, rather than improving
anything. I'm to blame for the ClockSweepTick() case, and I don't really
see how we could avoid the atomic-while-spinlocked without relying on
64bit atomics, which are emulated on more platforms, and without having
a more complicated retry loop.


> In the second place, this coding seems to me to indicate serious
> confusion about which lock is protecting what.  In the above example,
> if writtenUpto is only accessed through atomic operations then it
> seems like we could just move the pg_atomic_read_u64 out of the
> spinlock section; or if the spinlock is adequate protection then we
> could just do a normal fetch.  If we actually need both locks then
> this needs significant re-thinking, IMO.

Yea, it doesn't seem necessary at all that writtenUpto is read with the
spinlock held. That's very recent:

commit 2c8dd05d6cbc86b7ad21cfd7010e041bb4c3950b
Author: Michael Paquier <michael@paquier.xyz>
Date:   2020-05-17 09:22:07 +0900

    Make pg_stat_wal_receiver consistent with the WAL receiver's shmem info

and I assume it was just caused by mechanical replacement, rather than by
intentionally doing the read while holding the spinlock. As far as I can tell
none of the other writtenUpto accesses are under the spinlock.

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Thomas Munro
Date:
On Thu, Jun 4, 2020 at 8:45 AM Andres Freund <andres@anarazel.de> wrote:
> On 2020-06-03 14:19:45 -0400, Tom Lane wrote:
> > In the second place, this coding seems to me to indicate serious
> > confusion about which lock is protecting what.  In the above example,
> > if writtenUpto is only accessed through atomic operations then it
> > seems like we could just move the pg_atomic_read_u64 out of the
> > spinlock section; or if the spinlock is adequate protection then we
> > could just do a normal fetch.  If we actually need both locks then
> > this needs significant re-thinking, IMO.
>
> Yea, it doesn't seem necessary at all that writtenUpto is read with the
> spinlock held. That's very recent:
>
> commit 2c8dd05d6cbc86b7ad21cfd7010e041bb4c3950b
> Author: Michael Paquier <michael@paquier.xyz>
> Date:   2020-05-17 09:22:07 +0900
>
>     Make pg_stat_wal_receiver consistent with the WAL receiver's shmem info
>
> and I assume it was just caused by mechanical replacement, rather than by
> intentionally doing the read while holding the spinlock. As far as I can tell
> none of the other writtenUpto accesses are under the spinlock.

Yeah.  It'd be fine to move that after the spinlock release.  Although
it's really just for informational purposes, not for any data
integrity purpose, reading it before the spinlock acquisition would
theoretically allow the reported value to appear to be behind
flushedUpto, which would be silly.



Re: Atomic operations within spinlocks

From
Michael Paquier
Date:
On Thu, Jun 04, 2020 at 09:40:31AM +1200, Thomas Munro wrote:
> Yeah.  It'd be fine to move that after the spinlock release.  Although
> it's really just for informational purposes, not for any data
> integrity purpose, reading it before the spinlock acquisition would
> theoretically allow the reported value to appear to be behind
> flushedUpto, which would be silly.

Indeed.  This could just be done after the spinlock section.  Sorry
about that.
--
Michael

Attachments

Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2020-06-03 14:19:45 -0400, Tom Lane wrote:
>> This seems to me to be very bad code.

> I think straight out prohibiting use of atomics inside a spinlock will
> lead to more complicated and slower code, rather than improving
> anything. I'm to blame for the ClockSweepTick() case, and I don't really
> see how we could avoid the atomic-while-spinlocked without relying on
> 64bit atomics, which are emulated on more platforms, and without having
> a more complicated retry loop.

Well, if you don't want to touch freelist.c then there is no point
worrying about what pg_stat_get_wal_receiver is doing.  But having
now studied that freelist.c code, I don't like it one bit.  It's
overly complicated and not very accurately commented, making it
*really* hard to convince oneself that it's not buggy.

I think a good case could be made for ripping out what's there now
and just redefining nextVictimBuffer as a pg_atomic_uint64 that we
never reset (ie, make its comment actually true).  Then ClockSweepTick
reduces to

pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers

and when we want to know how many cycles have been completed, we just
divide the counter by NBuffers.  Now admittedly, this is relying on both
pg_atomic_fetch_add_u64 being fast and int64 division being fast ... but
to throw your own argument back at you, do we really care anymore about
performance on machines where those things aren't true?  Furthermore,
since all this is happening in a code path that's going to lead to I/O,
I'm not exactly convinced that a few nanoseconds matter anyway.
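
For concreteness, a rough sketch of the simplified design (untested,
member names assumed from the current code):

    static inline uint32
    ClockSweepTick(void)
    {
        /* counter increments forever; no reset, hence no spinlock */
        uint64        ticks =
            pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1);

        return (uint32) (ticks % NBuffers);
    }

and, when someone wants the number of completed cycles:

    completePasses = pg_atomic_read_u64(&StrategyControl->nextVictimBuffer) /
        NBuffers;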

            regards, tom lane



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
I wrote:
> I think a good case could be made for ripping out what's there now
> and just redefining nextVictimBuffer as a pg_atomic_uint64 that we
> never reset (ie, make its comment actually true).  Then ClockSweepTick
> reduces to
> pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers
> and when we want to know how many cycles have been completed, we just
> divide the counter by NBuffers.

Actually ... we could probably use this design with a uint32 counter
as well, on machines where the 64-bit operations would be slow.
In that case, integer overflow of nextVictimBuffer would happen from
time to time, resulting in

1. The next actual victim buffer index would jump strangely.  This
doesn't seem like it'd matter at all, as long as it was infrequent.

2. The computed completePasses value would go backwards.  I bet
that wouldn't matter too much either, or at least we could teach
BgBufferSync to cope.  (I notice the comments therein suggest that
it is already designed to cope with completePasses wrapping around,
so maybe nothing needs to be done.)

If NBuffers were large enough to be a significant fraction of UINT_MAX,
then these corner cases would happen often enough to perhaps be
problematic.  But I seriously doubt that'd be possible on hardware
that wasn't capable of using the 64-bit code path.

            regards, tom lane



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-04 13:57:19 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2020-06-03 14:19:45 -0400, Tom Lane wrote:
> >> This seems to me to be very bad code.
>
> > I think straight out prohibiting use of atomics inside a spinlock will
> lead to more complicated and slower code, rather than improving
> > anything. I'm to blame for the ClockSweepTick() case, and I don't really
> > see how we could avoid the atomic-while-spinlocked without relying on
> > 64bit atomics, which are emulated on more platforms, and without having
> > a more complicated retry loop.
>
> Well, if you don't want to touch freelist.c then there is no point
> worrying about what pg_stat_get_wal_receiver is doing.  But having
> now studied that freelist.c code, I don't like it one bit.  It's
> overly complicated and not very accurately commented, making it
> *really* hard to convince oneself that it's not buggy.
>
> I think a good case could be made for ripping out what's there now
> and just redefining nextVictimBuffer as a pg_atomic_uint64 that we
> never reset (ie, make its comment actually true).  Then ClockSweepTick
> reduces to

Note that we can't do that in the older back branches; there wasn't any
64bit atomics fallback before

commit e8fdbd58fe564a29977f4331cd26f9697d76fc40
Author: Andres Freund <andres@anarazel.de>
Date:   2017-04-07 14:44:47 -0700

    Improve 64bit atomics support.


> pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers
>
> and when we want to know how many cycles have been completed, we just
> divide the counter by NBuffers.  Now admittedly, this is relying on both
> pg_atomic_fetch_add_u64 being fast and int64 division being fast ... but
> to throw your own argument back at you, do we really care anymore about
> performance on machines where those things aren't true?  Furthermore,
> since all this is happening in a code path that's going to lead to I/O,
> I'm not exactly convinced that a few nanoseconds matter anyway.

It's very easy to observe this code being a bottleneck. If we only
performed a single clock tick before IO, sure, then the cost would
obviously be swamped by the IO cost. But it's pretty common to end up
having to do that ~ NBuffers * 5 times for a single buffer.

I don't think it's realistic to rely on 64bit integer division being
fast in this path. The latency is pretty darn significant (64bit div is
35-88 cycles on skylake-x, 64bit idiv 42-95). And unless I
misunderstand, you'd have to do so (for % NBuffers) every single clock
tick, not just when we wrap around.

We could however avoid the spinlock if we were to use 64bit atomics, by
storing the number of passes in the upper 32bit, and the next victim
buffer in the lower. But that doesn't seem that simple either, and will
regress performance on 32bit platforms.
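
(To spell out the packing, with hypothetical names: StrategySyncStart()
could then return a coherent passes/buffer pair from a single 64bit read:)

    uint64        packed = pg_atomic_read_u64(&StrategyControl->packedSweep);

    *complete_passes = (uint32) (packed >> 32);    /* high half: passes */
    return (uint32) packed;                        /* low half: victim buffer */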


I don't like the whole strategy logic at all, so I guess there's some
argument for improving things from that end. It's probably possible to
avoid the lock with 32bit atomics as well.


I'd still like to know which problem we're actually trying to solve
here. I don't understand the "error" issues you mentioned upthread.

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-04 14:50:40 -0400, Tom Lane wrote:
> Actually ... we could probably use this design with a uint32 counter
> as well, on machines where the 64-bit operations would be slow.

On skylake-x even a 32bit [i]div is still 26 cycles. That's more than
the ~18 cycles of an atomic operation.


> 2. The computed completePasses value would go backwards.  I bet
> that wouldn't matter too much either, or at least we could teach
> BgBufferSync to cope.  (I notice the comments therein suggest that
> it is already designed to cope with completePasses wrapping around,
> so maybe nothing needs to be done.)

If we're not concerned about that, then we can remove the
atomic-inside-spinlock, I think. The only reason for that right now is
to avoid assuming a wrong pass number.

I don't think completePasses wrapping around is comparable in frequency
to wrapping around nextVictimBuffer. It's not really worth worrying
about bgwriter wrongly assuming it lapped the clock sweep once every
UINT32_MAX * NBuffers ticks, but a risk every NBuffers ticks seems
worth worrying about.

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> I'd still like to know which problem we're actually trying to solve
> here. I don't understand the "error" issues you mentioned upthread.

If you error out of getting the inner spinlock, the outer spinlock
is stuck, permanently, because there is no mechanism for spinlock
release during transaction abort.  Admittedly it's not very likely
for the inner acquisition to fail, but it's possible.  Aside from
timeout scenarios (e.g., process holding lock gets swapped out to
Timbuktu), it could be that both spinlocks are mapped onto a single
implementation lock by spin.c, which notes

 * We map all spinlocks onto a set of NUM_SPINLOCK_SEMAPHORES semaphores.
 * It's okay to map multiple spinlocks onto one semaphore because no process
 * should ever hold more than one at a time.

You've falsified that argument ... and no, I don't want to upgrade
the spinlock infrastructure enough to make this OK.  We shouldn't
ever be holding spinlocks long enough, or doing anything complicated
enough inside them, for such an upgrade to have merit.

            regards, tom lane



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2020-06-04 14:50:40 -0400, Tom Lane wrote:
>> 2. The computed completePasses value would go backwards.  I bet
>> that wouldn't matter too much either, or at least we could teach
>> BgBufferSync to cope.  (I notice the comments therein suggest that
>> it is already designed to cope with completePasses wrapping around,
>> so maybe nothing needs to be done.)

> If we're not concerned about that, then we can remove the
> atomic-inside-spinlock, I think. The only reason for that right now is
> to avoid assuming a wrong pass number.

Hmm.  That might be a less-invasive path to a solution.  I can take
a look, if you don't want to.

            regards, tom lane



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-04 15:07:34 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > I'd still like to know which problem we're actually trying to solve
> > here. I don't understand the "error" issues you mentioned upthread.
>
> If you error out of getting the inner spinlock, the outer spinlock
> is stuck, permanently, because there is no mechanism for spinlock
> release during transaction abort.  Admittedly it's not very likely
> for the inner acquisition to fail, but it's possible.

We PANIC on stuck spinlocks, so I don't think that's a problem.


>  * We map all spinlocks onto a set of NUM_SPINLOCK_SEMAPHORES semaphores.
>  * It's okay to map multiple spinlocks onto one semaphore because no process
>  * should ever hold more than one at a time.
>
> You've falsified that argument ...  and no, I don't want to upgrade
> the spinlock infrastructure enough to make this OK.

Well, theoretically we take care to avoid this problem. That's why we
have separate defines for spinlocks and atomics:

/*
 * When we don't have native spinlocks, we use semaphores to simulate them.
 * Decreasing this value reduces consumption of OS resources; increasing it
 * may improve performance, but supplying a real spinlock implementation is
 * probably far better.
 */
#define NUM_SPINLOCK_SEMAPHORES        128

/*
 * When we have neither spinlocks nor atomic operations support we're
 * implementing atomic operations on top of spinlock on top of semaphores. To
 * be safe against atomic operations while holding a spinlock separate
 * semaphores have to be used.
 */
#define NUM_ATOMICS_SEMAPHORES        64

and

#ifndef HAVE_SPINLOCKS

    /*
     * NB: If we're using semaphore based TAS emulation, be careful to use a
     * separate set of semaphores. Otherwise we'd get in trouble if an atomic
     * var would be manipulated while spinlock is held.
     */
    s_init_lock_sema((slock_t *) &ptr->sema, true);
#else
    SpinLockInit((slock_t *) &ptr->sema);
#endif

But it looks like that code is currently buggy (and looks like it always
has been), because we don't look at the nested argument when
initializing the semaphore.  So we currently allocate too many
semaphores, without benefiting from them :(.
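
I.e. presumably s_init_lock_sema() needs to do something like this
(rough sketch, untested; s_unlock_sema()'s range check would need to
accept the wider range too):

    void
    s_init_lock_sema(volatile slock_t *lock, bool nested)
    {
        static uint32 counter = 0;

        if (nested)
            /* hand out semaphores from the separate atomics range */
            *lock = NUM_SPINLOCK_SEMAPHORES +
                (counter++ % NUM_ATOMICS_SEMAPHORES) + 1;
        else
            *lock = (counter++ % NUM_SPINLOCK_SEMAPHORES) + 1;
    }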


> We shouldn't ever be holding spinlocks long enough, or doing anything
> complicated enough inside them, for such an upgrade to have merit.

Well, I don't think atomic instructions are that complicated.  And I
think prohibiting atomics-within-spinlock adds a problematic
restriction, without much in the way of benefits:

There's plenty things where it's somewhat easy to make the fast-path
lock-free, but the slow path still needs a lock (e.g. around a
freelist). And for those it's really useful to still be able to have a
coherent update to an atomic variable, to synchronize with the fast-path
that doesn't take the spinlock.
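
To illustrate the pattern (an invented example, not code from the tree;
next_of() is a hypothetical helper): the fast path checks an atomic
without taking the lock, while the slow path must update that same
atomic while holding the spinlock, so the two views stay coherent:

    typedef struct Freelist
    {
        slock_t          mutex;        /* protects free_head */
        int              free_head;    /* -1 when empty */
        pg_atomic_uint32 nfree;        /* read lock-free by the fast path */
    } Freelist;

    static bool
    freelist_maybe_pop(Freelist *fl, int *item)
    {
        /* fast path: skip lock acquisition when the list looks empty */
        if (pg_atomic_read_u32(&fl->nfree) == 0)
            return false;

        SpinLockAcquire(&fl->mutex);
        if (fl->free_head < 0)
        {
            SpinLockRelease(&fl->mutex);
            return false;
        }
        *item = fl->free_head;
        fl->free_head = next_of(*item);
        /* updated under the spinlock, to stay coherent with free_head */
        pg_atomic_fetch_sub_u32(&fl->nfree, 1);
        SpinLockRelease(&fl->mutex);
        return true;
    }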

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-04 15:13:29 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2020-06-04 14:50:40 -0400, Tom Lane wrote:
> >> 2. The computed completePasses value would go backwards.  I bet
> >> that wouldn't matter too much either, or at least we could teach
> >> BgBufferSync to cope.  (I notice the comments therein suggest that
> >> it is already designed to cope with completePasses wrapping around,
> >> so maybe nothing needs to be done.)
>
> > If we're not concerned about that, then we can remove the
> > atomic-inside-spinlock, I think. The only reason for that right now is
> > to avoid assuming a wrong pass number.
>
> Hmm.  That might be a less-invasive path to a solution.  I can take
> a look, if you don't want to.

First, I think it would be problematic:

    /*
     * Find out where the freelist clock sweep currently is, and how many
     * buffer allocations have happened since our last call.
     */
    strategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);
...

    /*
     * Compute strategy_delta = how many buffers have been scanned by the
     * clock sweep since last time.  If first time through, assume none. Then
     * see if we are still ahead of the clock sweep, and if so, how many
     * buffers we could scan before we'd catch up with it and "lap" it. Note:
     * weird-looking coding of xxx_passes comparisons are to avoid bogus
     * behavior when the passes counts wrap around.
     */
    if (saved_info_valid)
    {
        int32        passes_delta = strategy_passes - prev_strategy_passes;

        strategy_delta = strategy_buf_id - prev_strategy_buf_id;
        strategy_delta += (long) passes_delta * NBuffers;

        Assert(strategy_delta >= 0);

ISTM that if we can get an out-of-sync strategy_passes and
strategy_buf_id we'll end up with a pretty wrong strategy_delta. Which,
I think, can cause bgwriter's position to be reset:
        else
        {
            /*
             * We're behind, so skip forward to the strategy point and start
             * cleaning from there.
             */
#ifdef BGW_DEBUG
            elog(DEBUG2, "bgwriter behind: bgw %u-%u strategy %u-%u delta=%ld",
                 next_passes, next_to_clean,
                 strategy_passes, strategy_buf_id,
                 strategy_delta);
#endif
            next_to_clean = strategy_buf_id;
            next_passes = strategy_passes;
            bufs_to_lap = NBuffers;
        }


While I think that the whole logic in BgBufferSync doesn't make a whole
lot of sense, it does seem to me this has a fair potential to make it
worse. In a scenario with a decent cache hit ratio (leading to high
usagecounts) and a not that large NBuffers, we can end up doing quite
a few passes (as in many per second), so it might not be that hard to hit
this.


I am not immediately coming up with a cheap solution that doesn't do the
atomics-within-spinlock thing. The best I can come up with is using a
64bit atomic, with the upper 32bit containing the number of passes, and
the lower 32bit containing the current buffer. Where the lower 32bit /
the buffer is handled like it currently is, i.e. we "try" to keep it
below NBuffers. So % is only used for the "cold" path. That'd just add a
64->32 bit cast in the hot path, which shouldn't be measurable. But it'd
regress platforms without 64bit atomics substantially.
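
Roughly like this (untested sketch; assumes a hypothetical packedSweep
member and native 64bit atomics):

    static inline uint32
    ClockSweepTick(void)
    {
        uint64        old;
        uint32        victim;

        /* hot path: one atomic add plus a 64->32 bit cast */
        old = pg_atomic_fetch_add_u64(&StrategyControl->packedSweep, 1);
        victim = (uint32) old;

        if (unlikely(victim >= NBuffers))
        {
            /* cold path: wrap the low half, bump the pass count up high */
            uint64        expected = old + 1;
            uint64        wrapped = expected - NBuffers + (UINT64CONST(1) << 32);

            /* only one backend needs to succeed; everyone falls back to % */
            pg_atomic_compare_exchange_u64(&StrategyControl->packedSweep,
                                           &expected, wrapped);
            victim %= NBuffers;
        }

        return victim;
    }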

We could obviously also just rewrite the BgBufferSync() logic, so it
doesn't care about things like "lapping", but that's not an easy change.


So the best I can really suggest, unless we were to agree on atomics
being ok inside spinlocks, is probably to just replace the spinlock with
an lwlock. That'd perhaps cause a small slowdown for a few cases, but
it'd make workload that e.g. use the freelist a lot (e.g. when tables
are dropped regularly) scale better.

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-04 19:33:02 -0700, Andres Freund wrote:
> But it looks like that code is currently buggy (and looks like it always
> has been), because we don't look at the nested argument when
> initializing the semaphore.  So we currently allocate too many
> semaphores, without benefiting from them :(.

I wrote a patch for this, and when I got around to testing it, I
found that our tests currently don't pass when using both
--disable-spinlocks and --disable-atomics. Turns out to not be related
to the issue above, but the global barrier support added in 13.

That *reads* two 64 bit atomics in a signal handler. Which is normally
fine, but not at all cool when atomics (or just 64 bit atomics) are
backed by spinlocks. Because we can "self interrupt" while already
holding the spinlock.

It looks to me that that's a danger whenever 64bit atomics are backed by
spinlocks, not just when both --disable-spinlocks and --disable-atomics
are used. But I suspect that it's really hard to hit the tiny window of
danger when those options aren't used. While we have buildfarm animals
testing each of those separately, we don't have one that tests both
together...

I'm not really sure what to do about that issue. The easiest thing
would probably be to change the barrier generation to 32bit (which
doesn't have to use locks for reads in any situation).  I tested doing
that, and it fixes the hangs for me.


Randomly noticed while looking at the code:
    uint64        flagbit = UINT64CONST(1) << (uint64) type;

that shouldn't be 64bit, right?

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> I wrote a patch for this, and when I got around to testing it, I
> found that our tests currently don't pass when using both
> --disable-spinlocks and --disable-atomics. Turns out to not be related
> to the issue above, but the global barrier support added in 13.
> That *reads* two 64 bit atomics in a signal handler. Which is normally
> fine, but not at all cool when atomics (or just 64 bit atomics) are
> backed by spinlocks. Because we can "self interrupt" while already
> holding the spinlock.

This is the sort of weird platform-specific problem that I'd prefer to
avoid by minimizing our expectations of what spinlocks can be used for.

> I'm not really sure what to do about that issue. The easiest thing
> would probably be to change the barrier generation to 32bit (which
> doesn't have to use locks for reads in any situation).

Yeah, I think we need a hard rule that you can't use a spinlock in
an interrupt handler --- which means no atomics that don't have
non-spinlock implementations on every platform.

At some point I think we'll have to give up --disable-spinlocks;
it's really of pretty marginal use (how often does anyone port PG
to a new CPU type?) and the number of weird interactions it adds
in this area seems like more than it's worth.  But of course
requiring 64-bit atomics is still a step too far.

> Randomly noticed while looking at the code:
>     uint64        flagbit = UINT64CONST(1) << (uint64) type;

I'm surprised we didn't get any compiler warnings about that.

            regards, tom lane



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-05 21:01:56 -0400, Tom Lane wrote:
> > I'm not really sure what to do about that issue. The easiest thing
> > would probably be to change the barrier generation to 32bit (which
> > doesn't have to use locks for reads in any situation).
>
> Yeah, I think we need a hard rule that you can't use a spinlock in
> an interrupt handler --- which means no atomics that don't have
> non-spinlock implementations on every platform.

Yea, that might be the easiest thing to do.  The only other thing I can
think of would be to mask all signals for the duration of the
atomic-using-spinlock operation. That'd make the fallback noticeably more
expensive, but otoh, do we care enough?

I think a SIGNAL_HANDLER_BEGIN(); SIGNAL_HANDLER_END(); to back an
Assert(!InSignalHandler()); could be quite useful.  Could also save
errno etc.
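
Something like this might already be enough (sketch, hypothetical names;
both macros have to be used in the same scope for the errno handling):

    extern PGDLLIMPORT volatile sig_atomic_t signal_handler_depth;

    #define SIGNAL_HANDLER_BEGIN() \
        do { \
            int        save_errno = errno; \
            signal_handler_depth++

    #define SIGNAL_HANDLER_END() \
            signal_handler_depth--; \
            errno = save_errno; \
        } while (0)

    #define InSignalHandler() (signal_handler_depth > 0)

Then e.g. the spinlock-backed atomics paths could Assert(!InSignalHandler()).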


> At some point I think we'll have to give up --disable-spinlocks; it's
> really of pretty marginal use (how often does anyone port PG to a new
> CPU type?) and the number of weird interactions it adds in this area
> seems like more than it's worth.

Indeed. And any new architecture one would port PG to would have good
enough compiler intrinsics to make that trivial. I still think it'd make
sense to have a fallback implementation using compiler intrinsics...

And I think we should just require 32bit atomics at the same time. Would
probably kill gaur though.


I did just find a longstanding bug in the spinlock emulation code:

void
s_init_lock_sema(volatile slock_t *lock, bool nested)
{
    static int    counter = 0;

    *lock = ((++counter) % NUM_SPINLOCK_SEMAPHORES) + 1;
}

void
s_unlock_sema(volatile slock_t *lock)
{
    int            lockndx = *lock;

    if (lockndx <= 0 || lockndx > NUM_SPINLOCK_SEMAPHORES)
        elog(ERROR, "invalid spinlock number: %d", lockndx);
    PGSemaphoreUnlock(SpinlockSemaArray[lockndx - 1]);
}


I don't think it's ok that counter is a signed integer... While it may
once have been unlikely that we'd ever initialize that many spinlocks, I
don't think that's hard anymore, because we dynamically allocate them for a lot
of parallel query stuff.  A small regression test that initializes
enough spinlocks indeed errors out with
2020-06-05 18:08:29.110 PDT [734946][3/2:0] ERROR:  invalid spinlock number: -126
2020-06-05 18:08:29.110 PDT [734946][3/2:0] STATEMENT:  SELECT test_atomic_ops();
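
A minimal fix is to make the counter unsigned (in C99 the result of %
takes the sign of the dividend, so once counter wraps negative, so does
the computed lock index):

    void
    s_init_lock_sema(volatile slock_t *lock, bool nested)
    {
        /* unsigned: wraps to 0 instead of going negative */
        static uint32    counter = 0;

        *lock = ((++counter) % NUM_SPINLOCK_SEMAPHORES) + 1;
    }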



> > Randomly noticed while looking at the code:
> >     uint64        flagbit = UINT64CONST(1) << (uint64) type;
>
> I'm surprised we didn't get any compiler warnings about that.

Unfortunately I don't think one can currently compile postgres with
warnings for "implicit casts" enabled :(.


> But of course requiring 64-bit atomics is still a step too far.

If we had a 32bit compare-exchange it ought to be possible to write a
signal-safe emulation of 64bit atomics. I think. Something *roughly*
like:


typedef struct pg_atomic_uint64
{
    /*
     * Meaning of state bits:
     * 0-1: current valid
     * 2-4: current proposed
     * 5: in signal handler
     * 6-31: pid of proposer
     */
    pg_atomic_uint32 state;

    /*
     * One current value, two different proposed values.
     */
    uint64 value[3];
} pg_atomic_uint64;

The update protocol would be something roughly like:

bool
pg_atomic_compare_exchange_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 *expected, uint64 newval)
{
    while (true)
    {
        uint32      old_state = pg_atomic_read_u32(&ptr->state);
        uint32      updater_pid = PID_FROM_STATE(old_state);
        uint32      new_state;
        int         proposing;

        /*
         * Value changed, so fail. This is obviously racy, but we'll
         * notice concurrent updates later.
         */
        if (ptr->value[VALID_FIELD(old_state)] != *expected)
        {
            return false;
        }

        if (updater_pid == INVALID_PID)
        {

            new_state = old_state;

            /* signal that current process is updating */
            new_state |= MyProcPid << PID_STATE_SHIFT;
            if (InSignalHandler)
                new_state |= PROPOSER_IN_SIGNAL_HANDLER_BIT;

            /* set which index is being proposed */
            new_state = (new_state & ~PROPOSER_BITS) |
                        NEXT_PROPOSED_FIELD(old_state, &proposing);

            /*
             * If we successfully can update state to contain our new
             * value, we have a right to do so, and can only be
             * interrupted by ourselves, in a signal handler.
             */
            if (!pg_atomic_compare_exchange(&ptr->state, &old_state, new_state))
            {
                /* somebody else updated, restart */
                continue;
            }

            old_state = new_state;

            /*
             * It's ok to compare the values now. If we are interrupted
             * by a signal handler, we'll notice when updating
             * state. There's no danger of updating the same proposed value
             * in two processes, because they would always get different
             * offsets to propose into.
             */
             ptr->value[proposing] = newval;

            /* set the valid field to the one we just filled in */
            new_state = (new_state & ~VALID_FIELD_BITS) | proposing;
            /* remove ourselves as updater */
            new_state &= ~UPDATER_BITS;

            if (!pg_atomic_compare_exchange(&ptr->state, &old_state, new_state))
            {
                /*
                 * Should only happen when we were interrupted by this
                 * processes' handler.
                 */
                Assert(!InSignalHandler);

                /*
                 * Signal handler must have cleaned out pid as updater.
                 */
                Assert(PID_FROM_STATE(old_state) != MyProcPid);
                continue;
            }
            else
            {
                return true;
            }
        }
        else if (PID_FROM_STATE(old_state) == MyProcPid)
        {
            /*
             * This should only happen when in a signal handler. We don't
             * currently allow nesting of signal handlers.
             */
            Assert(!(old_state & PROPOSER_IN_SIGNAL_HANDLER_BIT));

            /* interrupt our own non-signal-handler update */
            new_state = old_state | PROPOSER_IN_SIGNAL_HANDLER_BIT;

            /* set which index is being proposed */
            new_state = (new_state & ~PROPOSER_BITS) |
                NEXT_PROPOSED_FIELD(old_state, &proposing);

            // FIXME: assert that previous value still was what we assumed
            pg_atomic_exchange_u32(&ptr->state, new_state);
        }
        else
        {
            do
            {
                pg_spin_delay();

                old_state = pg_atomic_read_u32(&ptr->state);
            } while (PID_FROM_STATE(old_state) != INVALID_PID);
        }
    }
}

While that's not trivial, it'd not be that expensive. The happy path
would be two 32bit atomic operations to simulate a 64bit one.


Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2020-06-05 21:01:56 -0400, Tom Lane wrote:
>> At some point I think we'll have to give up --disable-spinlocks; it's
>> really of pretty marginal use (how often does anyone port PG to a new
>> CPU type?) and the number of weird interactions it adds in this area
>> seems like more than it's worth.

> Indeed. And any new architecture one would port PG to would have good
> enough compiler intrinsics to make that trivial. I still think it'd make
> sense to have a fallback implementation using compiler intrinsics...

> And I think we should just require 32bit atomics at the same time. Would
> probably kill gaur though.

Not only gaur.  A quick buildfarm survey finds these active members
reporting not having 32-bit atomics:

 anole         | 2020-06-05 11:20:17 | pgac_cv_gcc_atomic_int32_cas=no
 chipmunk      | 2020-05-29 22:27:56 | pgac_cv_gcc_atomic_int32_cas=no
 curculio      | 2020-06-05 22:30:06 | pgac_cv_gcc_atomic_int32_cas=no
 frogfish      | 2020-05-31 13:00:25 | pgac_cv_gcc_atomic_int32_cas=no
 gaur          | 2020-05-19 13:33:25 | pgac_cv_gcc_atomic_int32_cas=no
 gharial       | 2020-06-05 12:41:14 | pgac_cv_gcc_atomic_int32_cas=no
 hornet        | 2020-06-05 09:11:26 | pgac_cv_gcc_atomic_int32_cas=no
 hoverfly      | 2020-06-05 22:06:14 | pgac_cv_gcc_atomic_int32_cas=no
 locust        | 2020-06-05 10:14:29 | pgac_cv_gcc_atomic_int32_cas=no
 mandrill      | 2020-06-05 09:20:03 | pgac_cv_gcc_atomic_int32_cas=no
 prairiedog    | 2020-06-05 09:55:49 | pgac_cv_gcc_atomic_int32_cas=no

It looks to me like this is mostly about compiler support not the
hardware; that doesn't make it not a problem, though.  (I also
remain skeptical about the quality of the compiler intrinsics
on non-mainstream hardware.)

            regards, tom lane



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-05 22:52:47 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2020-06-05 21:01:56 -0400, Tom Lane wrote:
> >> At some point I think we'll have to give up --disable-spinlocks; it's
> >> really of pretty marginal use (how often does anyone port PG to a new
> >> CPU type?) and the number of weird interactions it adds in this area
> >> seems like more than it's worth.
>
> > Indeed. And any new architecture one would port PG to would have good
> > enough compiler intrinsics to make that trivial. I still think it'd make
> > sense to have a fallback implementation using compiler intrinsics...
>
> > And I think we should just require 32bit atomics at the same time. Would
> > probably kill gaur though.
>
> Not only gaur.  A quick buildfarm survey finds these active members
> reporting not having 32-bit atomics:

Hm, I don't think that's the right test. We have bespoke code to support
most of these, I think:


>  anole         | 2020-06-05 11:20:17 | pgac_cv_gcc_atomic_int32_cas=no

Has support via acc specific intrinsics.


>  chipmunk      | 2020-05-29 22:27:56 | pgac_cv_gcc_atomic_int32_cas=no

Doesn't have support for __atomic, but does have support for 32bit
__sync.


>  gharial       | 2020-06-05 12:41:14 | pgac_cv_gcc_atomic_int32_cas=no

__sync support for both 32 and 64 bit.


>  curculio      | 2020-06-05 22:30:06 | pgac_cv_gcc_atomic_int32_cas=no
>  frogfish      | 2020-05-31 13:00:25 | pgac_cv_gcc_atomic_int32_cas=no

__sync support for both 32 and 64 bit.


>  mandrill      | 2020-06-05 09:20:03 | pgac_cv_gcc_atomic_int32_cas=no

__sync support for 32, as well as inline asm for 32bit atomics
(although we might be able to add 64 bit).


>  hornet        | 2020-06-05 09:11:26 | pgac_cv_gcc_atomic_int32_cas=no
>  hoverfly      | 2020-06-05 22:06:14 | pgac_cv_gcc_atomic_int32_cas=no

__sync support for both 32 and 64 bit, and we have open coded ppc asm.


>  locust        | 2020-06-05 10:14:29 | pgac_cv_gcc_atomic_int32_cas=no
>  prairiedog    | 2020-06-05 09:55:49 | pgac_cv_gcc_atomic_int32_cas=no

Whee, these don't have __sync? But I think they should be able to use the
ppc asm implementation for 32 bit atomics.



>  gaur          | 2020-05-19 13:33:25 | pgac_cv_gcc_atomic_int32_cas=no

As far as I understand pa-risc doesn't have any atomic instructions
except for TAS.


So I think gaur is really the only one that'd drop.



> It looks to me like this is mostly about compiler support not the
> hardware; that doesn't make it not a problem, though.  (I also
> remain skeptical about the quality of the compiler intrinsics
> on non-mainstream hardware.)

I think that's fair enough for really old platforms, but at least for
gcc / clang I don't think it's a huge concern for newer ones. Even if
not mainstream. For gcc/clang the intrinsics basically back the
C11/C++11 "language level" atomics support. And those are extremely
widely used these days.

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
On 2020-06-05 17:19:26 -0700, Andres Freund wrote:
> Hi,
>
> On 2020-06-04 19:33:02 -0700, Andres Freund wrote:
> > But it looks like that code is currently buggy (and looks like it always
> > has been), because we don't look at the nested argument when
> > initializing the semaphore.  So we currently allocate too many
> > semaphores, without benefiting from them :(.
>
> I wrote a patch for this, and when I got around to testing it, I
> found that our tests currently don't pass when using both
> --disable-spinlocks and --disable-atomics. Turns out to not be related
> to the issue above, but the global barrier support added in 13.
>
> That *reads* two 64 bit atomics in a signal handler. Which is normally
> fine, but not at all cool when atomics (or just 64 bit atomics) are
> backed by spinlocks. Because we can "self interrupt" while already
> holding the spinlock.
>
> It looks to me that that's a danger whenever 64bit atomics are backed by
> spinlocks, not just when both --disable-spinlocks and --disable-atomics
> are used. But I suspect that it's really hard to hit the tiny window of
> danger when those options aren't used. While we have buildfarm animals
> testing each of those separately, we don't have one that tests both
> together...
>
> I'm not really sure what to do about that issue. The easiest thing
> would probably be to change the barrier generation to 32bit (which
> doesn't have to use locks for reads in any situation).  I tested doing
> that, and it fixes the hangs for me.
>
>
> Randomly noticed while looking at the code:
>     uint64        flagbit = UINT64CONST(1) << (uint64) type;
>
> that shouldn't be 64bit, right?

Attached is a series of patches addressing these issues, of varying
quality:

1) This fixes the above mentioned issue in the global barrier code by
   using 32bit atomics. That might be fine, or it might not. I just
   included it here because otherwise the tests cannot be run fully.


2) Fixes spinlock emulation when more than INT_MAX spinlocks are
   initialized in the lifetime of a single backend

3) Add spinlock tests to normal regression tests.
   - Currently as part of test_atomic_ops. Probably not worth having a
     separate SQL function?
   - Currently contains a test for 1) that's run when the spinlock
     emulation is used. Probably too slow to actually include? Takes 15s
     on my computer... OTOH, it's just with --disable-spinlocks...
   - Could probably remove the current spinlock tests after this. The
     only thing they additionally test is a stuck spinlock. Since
     they're not run currently, they don't seem worth much?

4) Fix the potential for deadlocks when using atomics while holding a
   spinlock, add tests for that.

Any comments?

Greetings,

Andres Freund

Attachments

global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-08 23:08:47 -0700, Andres Freund wrote:
> On 2020-06-05 17:19:26 -0700, Andres Freund wrote:
> > I wrote a patch for this, and when I got around to testing it, I
> > found that our tests currently don't pass when using both
> > --disable-spinlocks and --disable-atomics. Turns out to not be related
> > to the issue above, but the global barrier support added in 13.
> >
> > That *reads* two 64 bit atomics in a signal handler. Which is normally
> > fine, but not at all cool when atomics (or just 64 bit atomics) are
> > backed by spinlocks. Because we can "self interrupt" while already
> > holding the spinlock.
> >
> > It looks to me that that's a danger whenever 64bit atomics are backed by
> > spinlocks, not just when both --disable-spinlocks and --disable-atomics
> > are used. But I suspect that it's really hard to hit the tiny window of
> > danger when those options aren't used. While we have buildfarm animals
> > testing each of those separately, we don't have one that tests both
> > together...
> >
> > I'm not really sure what to do about that issue. The easiest thing
> > would probably be to change the barrier generation to 32bit (which
> > doesn't have to use locks for reads in any situation).  I tested doing
> > that, and it fixes the hangs for me.
> >
> >
> > Randomly noticed while looking at the code:
> >     uint64        flagbit = UINT64CONST(1) << (uint64) type;
> >
> > that shouldn't be 64bit, right?
> 
> Attached is a series of patches addressing these issues, of varying
> quality:
> 
> 1) This fixes the above mentioned issue in the global barrier code by
>    using 32bit atomics. That might be fine, or it might not. I just
>    included it here because otherwise the tests cannot be run fully.

Hm. Looking at this again, perhaps the better fix would be to simply not
look at the concrete values of the barrier inside the signal handler?
E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers
ProcSignalBarrierPending to be set. And then have
ProcessProcSignalBarrier do the check that's currently in
CheckProcSignalBarrier()?
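
I.e. the handler side would shrink to something like (sketch,
hypothetical function name):

    static void
    HandleProcSignalBarrierInterrupt(void)
    {
        /*
         * No atomics, no locks: just note that a barrier was requested.
         * ProcessProcSignalBarrier() compares the generations later, in
         * regular backend context.
         */
        ProcSignalBarrierPending = true;
        InterruptPending = true;
    }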

Greetings,

Andres Freund



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:
> Hm. Looking at this again, perhaps the better fix would be to simply not
> look at the concrete values of the barrier inside the signal handler?
> E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers
> ProcSignalBarrierPending to be set. And then have
> ProcessProcSignalBarrier do the check that's currently in
> CheckProcSignalBarrier()?

That seems like a good idea.

Also, I wonder if someone would be willing to set up a BF animal for this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-09 17:04:42 -0400, Robert Haas wrote:
> On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:
> > Hm. Looking at this again, perhaps the better fix would be to simply not
> > look at the concrete values of the barrier inside the signal handler?
> > E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers
> > ProcSignalBarrierPending to be set. And then have
> > ProcessProcSignalBarrier do the check that's currently in
> > CheckProcSignalBarrier()?
> 
> That seems like a good idea.

Cool.


> Also, I wonder if someone would be willing to set up a BF animal for this.

You mean having both --disable-atomics and --disable-spinlocks? If so,
I'm planning to do that (I already have the animals that do those
separately, so it seems to make sense to add it to that collection).

What do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?
That'd make it much easier to write assertions forbidding palloc, 64bit
atomics, ...

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Robert Haas
Date:
On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:
> Randomly noticed while looking at the code:
>         uint64          flagbit = UINT64CONST(1) << (uint64) type;
>
> that shouldn't be 64bit, right?

I'm going to admit ignorance here. What's the proper coding rule?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Atomic operations within spinlocks

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:
>> Randomly noticed while looking at the code:
>>     uint64          flagbit = UINT64CONST(1) << (uint64) type;
>> 
>> that shouldn't be 64bit, right?

> I'm going to admit ignorance here. What's the proper coding rule?

The shift distance can't exceed 64, so there's no need for it to be
wider than int.  "type" is an enum, so explicitly casting it to an
integral type seems like good practice, but int is sufficient.

ISTR older compilers insisting that the shift distance not be
wider than int.  But C99 doesn't seem to require that -- it only
restricts the value of the right operand.

            regards, tom lane



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Tue, Jun 9, 2020 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:
> What do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?
> That'd make it much easier to write assertions forbidding palloc, 64bit
> atomics, ...

I must have missed the previous place where you suggested this, but I
think it's a good idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Atomic operations within spinlocks

From
Andres Freund
Date:
Hi,

On 2020-06-10 07:26:32 -0400, Robert Haas wrote:
> On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:
> > Randomly noticed while looking at the code:
> >         uint64          flagbit = UINT64CONST(1) << (uint64) type;
> >
> > that shouldn't be 64bit, right?
> 
> I'm going to admit ignorance here. What's the proper coding rule?

Well, the pss_barrierCheckMask member is just 32bit, so it seems odd to
declare the local variable 64bit?

    uint64        flagbit = UINT64CONST(1) << (uint64) type;
...
        pg_atomic_fetch_or_u32(&slot->pss_barrierCheckMask, flagbit);
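
Presumably it should just be the 32bit equivalent, i.e. something like:

    uint32        flagbit = ((uint32) 1) << (int) type;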


Greetings,

Andres Freund



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-10 13:37:59 -0400, Robert Haas wrote:
> On Tue, Jun 9, 2020 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:
> > What do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?
> > That'd make it much easier to write assertions forbidding palloc, 64bit
> > atomics, ...
> 
> I must have missed the previous place where you suggested this, but I
> think it's a good idea.

https://www.postgresql.org/message-id/20200606023103.avzrctgv7476xj7i%40alap3.anarazel.de

It'd be neat if we could do that entirely within pqsignal(). But that'd
require some additional state (I think an array of handlers, indexed by
signum).
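
Roughly (sketch; PG_NSIG is a hypothetical per-platform constant, and
this ignores SIG_IGN/SIG_DFL and the non-sigaction ports):

    static pqsigfunc    user_handlers[PG_NSIG];

    static void
    wrapper_handler(SIGNAL_ARGS)
    {
        SIGNAL_HANDLER_BEGIN();
        user_handlers[postgres_signal_arg](postgres_signal_arg);
        SIGNAL_HANDLER_END();
    }

    /* pqsignal() would then record func in user_handlers[signo] and
     * install wrapper_handler via sigaction() instead of func itself */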

Greetings,

Andres Freund



Re: Atomic operations within spinlocks

From
Robert Haas
Date:
On Thu, Jun 11, 2020 at 1:26 PM Andres Freund <andres@anarazel.de> wrote:
> Well, the pss_barrierCheckMask member is just 32bit, so it seems odd to
> declare the local variable 64bit?
>
>         uint64          flagbit = UINT64CONST(1) << (uint64) type;
> ...
>                 pg_atomic_fetch_or_u32(&slot->pss_barrierCheckMask, flagbit);

Oooooops.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-09 17:04:42 -0400, Robert Haas wrote:
> On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:
> > Hm. Looking at this again, perhaps the better fix would be to simply not
> > look at the concrete values of the barrier inside the signal handler?
> > E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers
> > ProcSignalBarrierPending to be set. And then have
> > ProcessProcSignalBarrier do the check that's currently in
> > CheckProcSignalBarrier()?
> 
> That seems like a good idea.

What do you think about 0002?


With regard to the cost of the expensive test in 0003, I'm somewhat
inclined to add that to the buildfarm for a few days and see how it
actually affects the few bf animals without atomics. We can rip it out
after we got some additional coverage (or leave it in if it turns out to
be cheap enough in comparison).


> Also, I wonder if someone would be willing to set up a BF animal for this.

FWIW, I've requested a buildfarm animal id for this a few days ago, but
haven't received a response yet...

Greetings,

Andres Freund

Attachments

Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Mon, Jun 15, 2020 at 9:37 PM Andres Freund <andres@anarazel.de> wrote:
> What do you think about 0002?
>
> With regard to the cost of the expensive test in 0003, I'm somewhat
> inclined to add that to the buildfarm for a few days and see how it
> actually affects the few bf animals without atomics. We can rip it out
> after we got some additional coverage (or leave it in if it turns out to
> be cheap enough in comparison).

I looked over these patches briefly today. I don't have any objection
to 0001 or 0002. I think 0003 looks a little strange: it seems to be
testing things that might be implementation details of other things,
and I'm not sure that's really correct. In particular:

+ /* and that "contended" acquisition works */
+ s_lock(&struct_w_lock.lock, "testfile", 17, "testfunc");
+ S_UNLOCK(&struct_w_lock.lock);

I didn't think we had formally promised that s_lock() is actually
defined or working on all platforms.

More generally, I don't think it's entirely clear what all of these
tests are testing. Like, I can see that data_before and data_after are
intended to test that the lock actually fits in the space allowed for
it, but at the same time, I think empty implementations of all of
these functions would pass regression, as would many horribly or
subtly buggy implementations. For example, consider this:

+ /* test basic operations via the SpinLock* API */
+ SpinLockInit(&struct_w_lock.lock);
+ SpinLockAcquire(&struct_w_lock.lock);
+ SpinLockRelease(&struct_w_lock.lock);

What does it look like for this test to fail? I guess one of those
operations has to fail an assert or hang forever, because it's not
like we're checking the return value. So I feel like the intent of
these tests isn't entirely clear, and should probably be explained
better, at a minimum -- and perhaps we should think harder about what
a good testing framework would look like. I would rather have tests
that either pass or fail and report a result explicitly, rather than
tests that rely on hangs or crashes.

Parenthetically, "cyle" != "cycle".

I don't have any real complaints about the functionality of 0004 on a
quick read-through, but I'm again a bit skeptical of the tests. Not as
much as with 0003, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Alvaro Herrera
Date:
On 2020-Jun-15, Andres Freund wrote:

> > Also, I wonder if someone would be willing to set up a BF animal for this.
> 
> FWIW, I've requested a buildfarm animal id for this a few days ago, but
> haven't received a response yet...

I did send it out, with name rorqual -- didn't you get that?  Will send
the secret separately.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-16 14:59:19 -0400, Robert Haas wrote:
> On Mon, Jun 15, 2020 at 9:37 PM Andres Freund <andres@anarazel.de> wrote:
> > What do you think about 0002?
> >
> > With regard to the cost of the expensive test in 0003, I'm somewhat
> > inclined to add that to the buildfarm for a few days and see how it
> > actually affects the few bf animals without atomics. We can rip it out
> > after we got some additional coverage (or leave it in if it turns out to
> > be cheap enough in comparison).
>
> I looked over these patches briefly today. I don't have any objection
> to 0001 or 0002.

Cool. I was mainly interested in those for now.


> I think 0003 looks a little strange: it seems to be
> testing things that might be implementation details of other things,
> and I'm not sure that's really correct. In particular:

My main motivation was to have something that runs more often than
the embedded tests in s_lock.c that nobody ever runs (they wouldn't even
pass with disabled spinlocks, as S_LOCK_FREE isn't implemented).


> + /* and that "contended" acquisition works */
> + s_lock(&struct_w_lock.lock, "testfile", 17, "testfunc");
> + S_UNLOCK(&struct_w_lock.lock);
>
> I didn't think we had formally promised that s_lock() is actually
> defined or working on all platforms.

Hm? Isn't s_lock, as its comment says, the "platform-independent portion
of waiting for a spinlock"?  I also don't think we need to purely
follow external APIs in internal tests.


> More generally, I don't think it's entirely clear what all of these
> tests are testing. Like, I can see that data_before and data_after are
> intended to test that the lock actually fits in the space allowed for
> it, but at the same time, I think empty implementations of all of
> these functions would pass regression, as would many horribly or
> subtly buggy implementations.

Sure, there's a lot that'd pass. But it's more than we had before. It
did catch a bug much quicker than I'd have otherwise found it, FWIW.

I don't think an empty implementation would pass btw, as long as TAS is
defined.

> So I feel like the intent of these tests isn't entirely clear, and
> should probably be explained better, at a minimum -- and perhaps we
> should think harder about what a good testing framework would look
> like.

Yea, we could use something better. But I don't see that happening
quickly, and having something seems better than nothing.


> I would rather have tests that either pass or fail and report a result
> explicitly, rather than tests that rely on hangs or crashes.

That seems quite hard to achieve. I really just wanted something I can
use for some very basic tests, to catch issues quicker.


The atomics tests found numerous issues btw, despite also not testing
concurrency.


I think we generally have way too few of such trivial tests. They can
find plenty "real world" issues, but more importantly make it much
quicker to iterate when working on some piece of code.


> I don't have any real complaints about the functionality of 0004 on a
> quick read-through, but I'm again a bit skeptical of the tests. Not as
> much as with 0003, though.

Without the tests I couldn't even reproduce a deadlock due to the
nesting. So they imo are pretty essential?

Greetings,

Andres Freund



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Tue, Jun 16, 2020 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:
> > I think 0003 looks a little strange: it seems to be
> > testing things that might be implementation details of other things,
> > and I'm not sure that's really correct. In particular:
>
> My main motivation was to have something that runs more often than
> the embedded tests in s_lock.c that nobody ever runs (they wouldn't even
> pass with disabled spinlocks, as S_LOCK_FREE isn't implemented).

Sure, that makes sense.

> > + /* and that "contended" acquisition works */
> > + s_lock(&struct_w_lock.lock, "testfile", 17, "testfunc");
> > + S_UNLOCK(&struct_w_lock.lock);
> >
> > I didn't think we had formally promised that s_lock() is actually
> > defined or working on all platforms.
>
> Hm? Isn't s_lock, as its comment says, the "platform-independent portion
> of waiting for a spinlock"?  I also don't think we need to purely
> follow external APIs in internal tests.

I feel like we at least didn't use to use that on all platforms, but I
might be misremembering. It seems odd and confusing that we have both
S_LOCK() and s_lock(), anyway. Differentiating functions based on case
is not great practice.

> Sure, there's a lot that'd pass. But it's more than we had before. It
> did catch a bug much quicker than I'd have otherwise found it, FWIW.
>
> I don't think an empty implementation would pass btw, as long as TAS is
> defined.

Fair enough.

> Yea, we could use something better. But I don't see that happening
> quickly, and having something seems better than nothing.
>
> That seems quite hard to achieve. I really just wanted to have something
> I can use for some very basic tests, to catch issues quicker.
>
> The atomics tests found numerous issues btw, despite also not testing
> concurrency.
>
> I think we generally have way too few such trivial tests. They can
> find plenty of "real world" issues, but more importantly they make it
> much quicker to iterate when working on some piece of code.
>
> Without the tests I couldn't even reproduce a deadlock due to the
> nesting. So they imo are pretty essential?

I'm not telling you not to commit these; I'm just more skeptical of
whether they are the right approach than you seem to be. But that's
OK: people can like different things, and I don't know exactly what
would be better anyway.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-17 10:34:31 -0400, Robert Haas wrote:
> On Tue, Jun 16, 2020 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:
> > > I think 0003 looks a little strange: it seems to be
> > > testing things that might be implementation details of other things,
> > > and I'm not sure that's really correct. In particular:
> > Hm? Isn't s_lock, as its comment says, the "platform-independent portion
> > of waiting for a spinlock"?  I also don't think we need to purely
> > follow external APIs in internal tests.
> 
> I feel like we at least didn't use to use that on all platforms, but I
> might be misremembering.

There's only one definition of S_LOCK, and s_lock is the only
spinlock-related user of perform_spin_delay(). So I don't think so?


> It seems odd and confusing that we have both
> S_LOCK() and s_lock(), anyway. Differentiating functions based on case
> is not great practice.

It's a terrible idea, yes.  Since we don't actually have any non-default
implementations of S_LOCK, perhaps we should just rip it out? It'd
probably be clearer if SpinLockAcquire() were what uses TAS() and falls
back to s_lock() (best renamed to s_lock_slowpath or such).

It'd perhaps also be good to make SpinLockAcquire() a static inline
instead of a #define, so it can be properly attributed in debuggers and
profilers.
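
Untested sketch of what I mean, with s_lock_slowpath being the
hypothetical new name for s_lock():

/* spin.h */
static inline void
SpinLockAcquire_impl(volatile slock_t *lock,
					 const char *file, int line, const char *func)
{
	/* TAS() returns nonzero if it could not get the lock */
	if (TAS(lock))
		s_lock_slowpath(lock, file, line, func);
}

/* macro wrapper kept only to capture the call site for error reports */
#define SpinLockAcquire(lock) \
	SpinLockAcquire_impl((lock), __FILE__, __LINE__, PG_FUNCNAME_MACRO)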

Greetings,

Andres Freund



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Wed, Jun 17, 2020 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:
> > It seems odd and confusing that we have both
> > S_LOCK() and s_lock(), anyway. Differentiating functions based on case
> > is not great practice.
>
> It's a terrible idea, yes.  Since we don't actually have any non-default
> implementations of S_LOCK, perhaps we should just rip it out?

I think we should rip out the conditional nature of the definition and
fix the comments. I don't think I prefer getting rid of it completely.

But then again, what's the point of this crap anyway:

#define SpinLockInit(lock)      S_INIT_LOCK(lock)
#define SpinLockAcquire(lock) S_LOCK(lock)
#define SpinLockRelease(lock) S_UNLOCK(lock)
#define SpinLockFree(lock)      S_LOCK_FREE(lock)

This seems like it's straight out of the department of pointless
abstraction layers. Maybe we should remove all of the S_WHATEVER()
stuff and just define SpinLockAcquire() where we currently define
S_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.

And, as you say, make them static inline functions while we're at it.
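
Roughly like this, reusing today's default bodies (just a sketch; the
platform-specific variants would keep their own stanzas):

static inline void
SpinLockRelease(volatile slock_t *lock)
{
	*lock = 0;					/* the current default S_UNLOCK */
}

static inline void
SpinLockInit(volatile slock_t *lock)
{
	SpinLockRelease(lock);		/* matches the S_INIT_LOCK default */
}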

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> This seems like it's straight out of the department of pointless
> abstraction layers. Maybe we should remove all of the S_WHATEVER()
> stuff and just define SpinLockAcquire() where we currently define
> S_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.
> And, as you say, make them static inline functions while we're at it.

The macros are kind of necessary unless you want to make s_lock.h
a bunch messier, because we use #ifdef tests on them.

We could get rid of the double layer of macros, sure, but TBH that
sounds like change for the sake of change rather than a useful
improvement.

            regards, tom lane



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Wed, Jun 17, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > This seems like it's straight out of the department of pointless
> > abstraction layers. Maybe we should remove all of the S_WHATEVER()
> > stuff and just define SpinLockAcquire() where we currently define
> > S_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.
> > And, as you say, make them static inline functions while we're at it.
>
> The macros are kind of necessary unless you want to make s_lock.h
> a bunch messier, because we use #ifdef tests on them.

Where?

> We could get rid of the double layer of macros, sure, but TBH that
> sounds like change for the sake of change rather than a useful
> improvement.

Really? Multiple layers of macros seem like they pretty clearly make
the source code harder to understand. There are plenty of places where
such devices are necessary for one reason or another, but it doesn't
seem like something we ought to keep around for no reason.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Jun 17, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> The macros are kind of necessary unless you want to make s_lock.h
>> a bunch messier, because we use #ifdef tests on them.

> Where?

See the "Default Definitions", down near the end.
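
That is, stanzas of roughly this shape (paraphrasing; the default only
kicks in when no platform-specific section above has defined the macro):

#if !defined(S_UNLOCK)
#define S_UNLOCK(lock)	(*((volatile slock_t *) (lock)) = 0)
#endif

#if !defined(S_LOCK)
#define S_LOCK(lock) \
	(TAS(lock) ? s_lock((lock), __FILE__, __LINE__, PG_FUNCNAME_MACRO) : 0)
#endif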

>> We could get rid of the double layer of macros, sure, but TBH that
>> sounds like change for the sake of change rather than a useful
>> improvement.

> Really? Multiple layers of macros seem like they pretty clearly make
> the source code harder to understand. There are plenty of places where
> such devices are necessary for one reason or another, but it doesn't
> seem like something we ought to keep around for no reason.

I wouldn't object to making the outer-layer macros in spin.h into static
inlines; as mentioned, that might have some debugging benefits.  But I
think messing with s_lock.h for marginal cosmetic reasons is a foolish
idea.  For one thing, there's no way whoever does it can verify all the
architecture-specific stanzas.  (I don't think we even have all of them
covered in the buildfarm.)

            regards, tom lane



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Wed, Jun 17, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> See the "Default Definitions", down near the end.

Oh, yeah. I didn't realize you meant just inside this file itself.
That is slightly awkward. I initially thought there was no problem
because there seem to be no remaining non-default definitions of
S_LOCK, but I now see that the other macros still do have some
non-default definitions. Hmm.

> > Really? Multiple layers of macros seem like they pretty clearly make
> > the source code harder to understand. There are plenty of places where
> > such devices are necessary for one reason or another, but it doesn't
> > seem like something we ought to keep around for no reason.
>
> I wouldn't object to making the outer-layer macros in spin.h into static
> inlines; as mentioned, that might have some debugging benefits.  But I
> think messing with s_lock.h for marginal cosmetic reasons is a foolish
> idea.  For one thing, there's no way whoever does it can verify all the
> architecture-specific stanzas.  (I don't think we even have all of them
> covered in the buildfarm.)

It would be a pretty mechanical change to use a separate preprocessor
symbol for the conditional and just define the static inline functions
on the spot. There might be one or two goofs, but if those platforms
are not in the buildfarm, they're either dead and they don't matter,
or someone will tell us what we did wrong. I don't know. I don't have
a huge desire to spend time cleaning up s_lock.h and I do think it's
better not to churn stuff around just for the heck of it, but I'm also
sympathetic to Andres's point that using macros everywhere is
debugger-unfriendly.
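
For example (PLATFORM_DEFINES_S_UNLOCK is a made-up marker symbol):

/* in a platform-specific stanza: */
#define PLATFORM_DEFINES_S_UNLOCK
static inline void
SpinLockRelease(volatile slock_t *lock)
{
	/* whatever release sequence this architecture needs */
	*lock = 0;
}

/* in the default section near the end of the header: */
#ifndef PLATFORM_DEFINES_S_UNLOCK
static inline void
SpinLockRelease(volatile slock_t *lock)
{
	*lock = 0;
}
#endif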

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Jun 17, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I wouldn't object to making the outer-layer macros in spin.h into static
>> inlines; as mentioned, that might have some debugging benefits.  But I
>> think messing with s_lock.h for marginal cosmetic reasons is a foolish
>> idea.  For one thing, there's no way whoever does it can verify all the
>> architecture-specific stanzas.  (I don't think we even have all of them
>> covered in the buildfarm.)

> It would be a pretty mechanical change to use a separate preprocessor
> symbol for the conditional and just define the static inline functions
> on the spot. There might be one or two goofs, but if those platforms
> are not in the buildfarm, they're either dead and they don't matter,
> or someone will tell us what we did wrong. I don't know. I don't have
> a huge desire to spend time cleaning up s_lock.h and I do think it's
> better not to churn stuff around just for the heck of it, but I'm also
> sympathetic to Andres's point that using macros everywhere is
> debugger-unfriendly.

Sure, but wouldn't making the SpinLockAcquire layer into static inlines be
sufficient to address that point, with no need to touch s_lock.h at all?

            regards, tom lane



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Robert Haas
Date:
On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be
> sufficient to address that point, with no need to touch s_lock.h at all?

I mean, wouldn't you then end up with a bunch of 1-line functions
where you can step into the function but not through whatever
individual things it does?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be
>> sufficient to address that point, with no need to touch s_lock.h at all?

> I mean, wouldn't you then end up with a bunch of 1-line functions
> where you can step into the function but not through whatever
> individual things it does?

Not following your point.  The s_lock.h implementations tend to be either
simple C statements ("*lock = 0") or asm blocks; if you feel a need to
step through them you're going to be resorting to "si" anyway.

I think the main usefulness of doing anything here would be (a) separating
the spinlock infrastructure from callers and (b) ensuring that we have a
declared argument type, and single-evaluation semantics, for the spinlock
function parameters.  Both of those are adequately addressed by fixing
spin.h, IMO anyway.
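
To spell out (b): a plain macro alias evaluates its argument once per
mention in the expansion, so something like SpinLockAcquire(&locks[i++])
could bump i more than once. Even a one-line static inline in spin.h
fixes that:

static inline void
SpinLockAcquire(volatile slock_t *lock)
{
	S_LOCK(lock);		/* however often the expansion mentions "lock",
						 * it's now just a typed parameter */
}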

            regards, tom lane



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-18 12:29:40 -0400, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be
> >> sufficient to address that point, with no need to touch s_lock.h at all?
> 
> > I mean, wouldn't you then end up with a bunch of 1-line functions
> > where you can step into the function but not through whatever
> > individual things it does?
> 
> Not following your point.  The s_lock.h implementations tend to be either
> simple C statements ("*lock = 0") or asm blocks; if you feel a need to
> step through them you're going to be resorting to "si" anyway.

I agree on that.


I do think it'd be better to not have the S_LOCK macro though (but have
TAS/TAS_SPIN as we do now). And instead have SpinLockAcquire() call
s_lock() (best renamed to something a bit more meaningful). Makes the
code a bit easier to understand (no S_LOCK vs s_lock) and yields simpler
macros.

There are currently no meaningful ifdefs for S_LOCK (in contrast to
e.g. S_UNLOCK), so I don't see it making s_lock.h more complicated.

I think part of the issue here is that the naming of the macros exposed
by s_lock.h is confusing. We have S_INIT_LOCK, TAS, SPIN_DELAY,
TAS_SPIN, S_UNLOCK and S_LOCK_FREE, which are essentially hardware
dependent. But then there's S_LOCK, which basically isn't.

It may have made some sense when the file was originally written, if one
imagines that S_ is the only external API, and the rest is
implementation. But given that s_lock() uses TAS() directly (and says
"platform-independent portion of waiting for a spinlock"), and that we
only have one definition of S_LOCK, that doesn't seem quite right.


> I think the main usefulness of doing anything here would be (a) separating
> the spinlock infrastructure from callers and (b) ensuring that we have a
> declared argument type, and single-evaluation semantics, for the spinlock
> function parameters.  Both of those are adequately addressed by fixing
> spin.h, IMO anyway.

The abstraction point made me grep for includes of s_lock.h. Seems we
have some unnecessary ones.

lwlock.h doesn't need to include spinlock-related things anymore, and
hasn't for years, since the spinlocks there were replaced with atomics.
That seems pretty obvious. I'm gonna fix that in master, unless somebody
thinks we should do that more widely?

But we also have s_lock.h includes in condition_variable.h and
main.c. Seems the former should instead include spin.h and the latter
include should just be removed?


All of this, however, makes me wonder whether it's worth polishing
spinlocks instead of just ripping them out. Obviously we'd still need a
fallback for the atomics, but it should not be hard to just replace the
spinlock uses with either "smaller" atomics or semaphores.
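
E.g. the common counter-behind-a-spinlock pattern converts trivially
(sketch; the struct and function names are made up):

/* before */
typedef struct
{
	slock_t		mutex;
	uint32		counter;
} SpinlockedCounter;

/* after: no spinlock, and platforms without native 32bit atomics
 * would go through the atomics fallback instead */
typedef struct
{
	pg_atomic_uint32 counter;
} AtomicCounter;

static inline uint32
counter_advance(AtomicCounter *c)
{
	/* pg_atomic_fetch_add_u32 returns the pre-increment value */
	return pg_atomic_fetch_add_u32(&c->counter, 1) + 1;
}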

Greetings,

Andres Freund



Re: global barrier & atomics in signal handlers (Re: Atomic operations within spinlocks)

From
Andres Freund
Date:
Hi,

On 2020-06-16 15:20:11 -0400, Alvaro Herrera wrote:
> On 2020-Jun-15, Andres Freund wrote:
> 
> > > Also, I wonder if someone would be willing to set up a BF animal for this.
> > 
> > FWIW, I've requested a buildfarm animal id for this a few days ago, but
> > haven't received a response yet...
> 
> I did send it out, with name rorqual -- didn't you get that?  Will send
> the secret separately.

That animal is now live. Will take a bit for all branches to report in
though.

Need to get faster storage for my buildfarm animal host...

Greetings,

Andres Freund