Thread: WIP: Make timestamptz_out less slow.

WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
Hi,

I recently once more noticed that timestamptz_out is really, really
slow. To quantify that, I created a bunch of similar sized tables:

CREATE TABLE tbl_timestamp AS SELECT NOW() FROM generate_series(1, 100000) a, generate_series(1, 100) b;
CREATE TABLE tbl_int8 AS SELECT 1::bigint FROM generate_series(1,  100000) a, generate_series(1, 100) b;
CREATE TABLE tbl_bytea AS SELECT '   '::bytea FROM generate_series(1, 100000) a, generate_series(1, 100) b;

These all end up being 346MB large.

COPY tbl_bytea TO '/dev/null';
Time: 1173.484 ms
COPY tbl_int8 TO '/dev/null';
Time: 1030.756 ms
COPY tbl_timestamp TO '/dev/null';
Time: 6598.030 ms

(all best of three)

Yes, timestamp outputs more data as a whole, but being 5 times as slow
is still pretty darn insane. To make sure that's not the cause, here's
another table:

CREATE TABLE tbl_timestamptext AS SELECT NOW()::text FROM generate_series(1, 100000) a, generate_series(1, 100) b;
COPY tbl_timestamptext TO '/dev/null';
Time: 1449.554 ms

So it's really just the timestamp code.


Profiling it shows:
  Overhead  Command         Shared Object     Symbol
  -   38.33%  postgres_stock  libc-2.19.so      [.] vfprintf
     - 97.92% vfprintf
          _IO_vsprintf
        - sprintf
           + 70.25% EncodeDateTime
           + 29.75% AppendSeconds.constprop.10
     + 1.11% _IO_default_xsputn
  -    8.22%  postgres_stock  libc-2.19.so      [.] _IO_default_xsputn
     - 99.43% _IO_default_xsputn
        - 90.09% vfprintf
             _IO_vsprintf
           - sprintf
              + 74.15% EncodeDateTime
              + 25.85% AppendSeconds.constprop.10
        + 9.72% _IO_padn
     + 0.57% vfprintf
  +   7.76%  postgres_stock  postgres_stock    [.] CopyOneRowTo

So nearly all the time is spent somewhere inside the sprintf calls. Not
nice.

The only thing I could come up with to make the sprintfs cheaper was to
combine them into one and remove some of the width specifiers that
aren't needed. That doesn't buy us very much.

I then proceeded to replace the sprintf call with hand-rolled
conversions. And wow, the benefit is far bigger than I'd assumed:
postgres[7236][1]=# COPY tbl_timestamp TO '/dev/null';
Time: 2430.521 ms

So, by hand-rolling the ISO conversion in EncodeDateTime() we get roughly a
2.7x speedup (6598 ms down to 2430 ms). I'd say that's worthwhile.
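To make the idea concrete, here is a minimal sketch of what "hand-rolling" means here (function names are hypothetical; this is not the attached patch): instead of letting sprintf() parse a format string for every row, the fixed-width fields are emitted digit by digit.

/* write a 0..99 value as exactly two zero-padded digits, returning the new end */
static char *
append_2digits(char *cp, int value)
{
	cp[0] = '0' + value / 10;
	cp[1] = '0' + value % 10;
	return cp + 2;
}

/* e.g. the "HH:MM:SS" portion of the timestamp */
static char *
append_hms(char *cp, int hour, int min, int sec)
{
	cp = append_2digits(cp, hour);
	*cp++ = ':';
	cp = append_2digits(cp, min);
	*cp++ = ':';
	cp = append_2digits(cp, sec);
	return cp;
}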

The attached patch shows what I did. While there's some polishing
possible, as a whole, it's pretty ugly. But I think timestamp data is so
common that it's worth the effort.

Does anybody have a fundamentally nicer idea than the attached to
improve this?

Greetings,

Andres Freund

Attachments

Re: WIP: Make timestamptz_out less slow.

From: Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> So nearly all the time is spent somewhere inside the sprintf calls. Not
> nice.

What happens if you force use of port/snprintf.c instead of glibc's
version?
        regards, tom lane



Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-07-27 17:31:41 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > So nearly all the time is spent somewhere inside the sprintf calls. Not
> > nice.
> 
> What happens if you force use of port/snprintf.c instead of glibc's
> version?

Good question.

Even worse. 15900.014 ms.

Andres



Re: WIP: Make timestamptz_out less slow.

From: Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2015-07-27 17:31:41 -0400, Tom Lane wrote:
>> What happens if you force use of port/snprintf.c instead of glibc's
>> version?

> Even worse. 15900.014 ms.

Interesting.  So as a separate optimization problem, we might consider
"try to put snprintf.c at least on a par with glibc".  I'm kind of
surprised by this result really, since snprintf.c lacks a lot of the
bells and whistles that are in glibc.
        regards, tom lane



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:

On 28 July 2015 at 09:17, Andres Freund <andres@anarazel.de> wrote:
Hi,

I recently once more noticed that timestamptz_out is really, really
slow. To quantify that, I created a bunch of similar sized tables:

CREATE TABLE tbl_timestamp AS SELECT NOW() FROM generate_series(1, 100000) a, generate_series(1, 100) b;
CREATE TABLE tbl_int8 AS SELECT 1::bigint FROM generate_series(1,  100000) a, generate_series(1, 100) b;
CREATE TABLE tbl_bytea AS SELECT '   '::bytea FROM generate_series(1, 100000) a, generate_series(1, 100) b;

These all end up being 346MB large.

COPY tbl_bytea TO '/dev/null';
Time: 1173.484 ms
COPY tbl_int8 TO '/dev/null';
Time: 1030.756 ms
COPY tbl_timestamp TO '/dev/null';
Time: 6598.030 ms

(all best of three)

Yes, timestamp outputs more data as a whole, but being 5 times as slow
is still pretty darn insane. To make sure that's not the cause, here's
another table:

CREATE TABLE tbl_timestamptext AS SELECT NOW()::text FROM generate_series(1, 100000) a, generate_series(1, 100) b;
COPY tbl_timestamptext TO '/dev/null';
Time: 1449.554 ms

So it's really just the timestamp code.


Profiling it shows:
  Overhead  Command         Shared Object     Symbol
  -   38.33%  postgres_stock  libc-2.19.so      [.] vfprintf
     - 97.92% vfprintf
          _IO_vsprintf
        - sprintf
           + 70.25% EncodeDateTime
           + 29.75% AppendSeconds.constprop.10
     + 1.11% _IO_default_xsputn
  -    8.22%  postgres_stock  libc-2.19.so      [.] _IO_default_xsputn
     - 99.43% _IO_default_xsputn
        - 90.09% vfprintf
             _IO_vsprintf
           - sprintf
              + 74.15% EncodeDateTime
              + 25.85% AppendSeconds.constprop.10
        + 9.72% _IO_padn
     + 0.57% vfprintf
  +   7.76%  postgres_stock  postgres_stock    [.] CopyOneRowTo

So nearly all the time is spent somewhere inside the sprintf calls. Not
nice.

The only thing I could come up with to make the sprintfs cheaper was to
combine them into one and remove some of the width specifiers that
aren't needed. That doesn't buy us very much.

I then proceeded to replace the sprintf call with hand-rolled
conversions. And wow, the benefit is far bigger than I'd assumed:
postgres[7236][1]=# COPY tbl_timestamp TO '/dev/null';
Time: 2430.521 ms

So, by hand-rolling the ISO conversion in EncodeDateTime() we get roughly a
2.7x speedup (6598 ms down to 2430 ms). I'd say that's worthwhile.

The attached patch shows what I did. While there's some polishing
possible, as a whole, it's pretty ugly. But I think timestamp data is so
common that it's worth the effort.

Does anybody have a fundamentally nicer idea than the attached to
improve this?

It won't be quite as fast as what you've written, but I think it will be much neater and more likely to be used in other places if we invent a function like pg_ltoa() which returns a pointer to the new end of string.

Also, if we're specifying padding with zeros then we can skip the reverse part that's in pg_ltoa() (normally needed since the numeric string is built in reverse).

The code could then be written as:

str = pg_int2str_pad(str, year, 4);
*str++ = '-';
str = pg_int2str_pad(str, tm->tm_mon, 2);
*str++ = '-';
str = pg_int2str_pad(str, tm->tm_mday, 2);

etc

I've used this method before and found it to be about 10 times faster than snprintf(), but I was reversing the string, so quite likely it'd be more than 10 times here.

I'm interested to see how much you're really gaining by manually unrolling the part that builds the fractional part of the second.

We could just build that part with: (untested)

if (fsec != 0)
{
	int fseca = abs(fsec);

	while (fseca % 10 == 0 && fseca > 0)
		fseca /= 10;

	*str++ = '.';
	str = pg_int2str(str, fseca);
}

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-07-28 10:59:15 +1200, David Rowley wrote:
> It won't be quite as fast as what you've written, but I think it will be
> much neater and more likely to be used in other places if we invent a
> function like pg_ltoa() which returns a pointer to the new end of string.
> 
> Also if we're specifying padding with zeros then we can skip the reverse
> part that's in pg_ltoa(), (normally needed since the numeric string is
> build in reverse)
> 
> The code could then be written as:
> 
> str = pg_int2str_pad(str, year, 4);
> *str++ = '-';
> str = pg_int2str_pad(str, tm->tm_mon, 2);
> *str++ = '-';
> str = pg_int2str_pad(str, tm->tm_mday, 2);
> 
> etc
> 
> I've used this method before and found it to be about 10 times faster than
> snprintf(), but I was reversing the string, so quite likely it be more than
> 10 times.

Yes, that might be worthwhile to try. Certainly would look less
ugly. Willing to give it a try?

> I'm interested to see how much you're really gaining by manually unrolling
> the part that builds the fractional part of the second.

It seems to help a fair amount, but this really was more a quick POC
than something serious. My theory why it helps is that each loop
iteration is independent of the previous one and thus can take full
advantage of the pipeline.
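As an illustration of the kind of unrolling meant here (a sketch under the assumption of integer timestamps, where fsec is 0..999999 microseconds; this is not the attached patch), every digit is computed from fsec directly, so no statement depends on the result of the previous one:

/* fsec assumed to be 0..999999 (microseconds); returns the new end of string */
static char *
append_fsec_unrolled(char *cp, int fsec)
{
	cp[0] = '.';
	cp[1] = '0' + (fsec / 100000) % 10;
	cp[2] = '0' + (fsec / 10000) % 10;
	cp[3] = '0' + (fsec / 1000) % 10;
	cp[4] = '0' + (fsec / 100) % 10;
	cp[5] = '0' + (fsec / 10) % 10;
	cp[6] = '0' + fsec % 10;
	return cp + 7;
}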

Regards,

Andres



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:

On 28 July 2015 at 19:10, Andres Freund <andres@anarazel.de> wrote:
On 2015-07-28 10:59:15 +1200, David Rowley wrote:
> It won't be quite as fast as what you've written, but I think it will be
> much neater and more likely to be used in other places if we invent a
> function like pg_ltoa() which returns a pointer to the new end of string.
>
> Also if we're specifying padding with zeros then we can skip the reverse
> part that's in pg_ltoa(), (normally needed since the numeric string is
> build in reverse)
>
> The code could then be written as:
>
> str = pg_int2str_pad(str, year, 4);
> *str++ = '-';
> str = pg_int2str_pad(str, tm->tm_mon, 2);
> *str++ = '-';
> str = pg_int2str_pad(str, tm->tm_mday, 2);
>
> etc
>
> I've used this method before and found it to be about 10 times faster than
> snprintf(), but I was reversing the string, so quite likely it be more than
> 10 times.

Yes, that might be worthwhile to try. Certainly would look less
ugly. Willing to give it a try?


I had a quick try at this and ended up just writing a small test program to see what's faster.

Please excuse the mess of the file, I just hacked it together as quickly as I could, with the sole intention of getting an idea of which is faster and by how much.

For me the output is as follows:

timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000

timestamp_out_old() is master's version, timestamp_out_af() is yours, and timestamp_out() is mine. Times are in seconds to perform 100 million calls.

So it appears your version is a bit faster than mine, but we're both about 20 times faster than the current one.
Also mine needs fixing up, as the fractional part is not padded the same as yours, but I doubt that'll affect the performance by much.

My view: It's probably not worth going quite as far as you've gone for a handful of nanoseconds per call, but perhaps something along the lines of mine can be fixed up.

Have you thought about what to do when HAVE_INT64_TIMESTAMP is not defined? 

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 
Attachments

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
> timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
> timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000
>
> timestamp_out_old is master's version, the timestamp_out_af() is yours, and
> timestamp_out() is my one. times are in seconds to perform 100 million
> calls.

That looks good.

> So it appears your version is a bit faster than mine, but we're both about
> 20 times faster than the current one.
> Also mine needs fixed up as the fractional part is not padded the same as
> yours, but I doubt that'll affect the performance by much.

Worthwhile to finish that bit and try ;)

> My view: It's probably not worth going quite as far as you've gone for a
> handful of nanoseconds per call, but perhaps something along the lines of
> mine can be fixed up.

Yes, I agree that yours is probably going to be fast enough.

> Have you thought about what to do when HAVE_INT64_TIMESTAMP is not defined?

I don't think it's actually important. The only difference vs float
timestamps is that in the latter case we set fsecs to zero BC.

Unless we want to slow down the common case it seems not unlikely that
we're going to end up with a separate slow path anyway. E.g. neither
your version nor mine handles 5 digit years (which is why I fell back to
the slow path in that case in my patch).

Regards,

Andres



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 29 July 2015 at 03:25, Andres Freund <andres@anarazel.de> wrote:
On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> Have you thought about what to do when HAVE_INT64_TIMESTAMP is not defined?

I don't think it's actually important. The only difference vs float
timestamps is that in the latter case we set fsecs to zero BC.


 I was also thinking that the % 10 won't work when fsec_t is double.

typedef double fsec_t

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-07-29 09:37:26 +1200, David Rowley wrote:
> On 29 July 2015 at 03:25, Andres Freund <andres@anarazel.de> wrote:
> 
> > On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> > > Have you thought about what to do when HAVE_INT64_TIMESTAMP is not
> > defined?
> >
> > I don't think it's actually important. The only difference vs float
> > timestamps is that in the latter case we set fsecs to zero BC.
> >
> >
>  I was also thinking that the % 10 won't work when fsec_t is double.
> 
> typedef double fsec_t

It seems quite possible to move that bit to timestamp2tm and do it
without a dependency on HAVE_INT64_TIMESTAMP. As long as it doesn't slow
down the int timestamp case I'm happy to simplify code in this area.

Greetings,

Andres Freund



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 29 July 2015 at 03:25, Andres Freund <andres@anarazel.de> wrote:
On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
> timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
> timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000
>
> timestamp_out_old is master's version, the timestamp_out_af() is yours, and
> timestamp_out() is my one. times are in seconds to perform 100 million
> calls.

That looks good.

> So it appears your version is a bit faster than mine, but we're both about
> 20 times faster than the current one.
> Also mine needs fixed up as the fractional part is not padded the same as
> yours, but I doubt that'll affect the performance by much.

Worthwhile to finish that bit and try ;)

> My view: It's probably not worth going quite as far as you've gone for a
> handful of nanoseconds per call, but perhaps something along the lines of
> mine can be fixed up.

Yes, I agree that yours is probably going to be fast enough.

> Have you thought about what to do when HAVE_INT64_TIMESTAMP is not defined?

I don't think it's actually important. The only difference vs float
timestamps is that in the latter case we set fsecs to zero BC.

Unless we want to slow down the common case it seems not unlikely that
we're going to end up with a separate slow path anyway. E.g. neither
your version nor mine handles 5 digit years (which is why I fell back to
the slow path in that case in my patch).

It occurred to me that handling the 5 digit year is quite a simple change to my code:

static char *
pg_uint2str_padding(char *str, unsigned int value, unsigned int padding)
{
	char	   *start = str;
	char	   *end = &str[padding];
	unsigned int num = value;

	//Assert(padding > 0);

	*end = '\0';
	while (padding--)
	{
		str[padding] = num % 10 + '0';
		num /= 10;
	}

	/*
	 * if value was too big for the specified padding then rebuild the whole
	 * number again without padding. Conveniently pg_uint2str() does exactly
	 * this.
	 */
	if (num > 0)
		return pg_uint2str(str, value);

	return end;
}

We simply need to check if there was any 'num' left after consuming the given space. If there's any left then just use pg_uint2str().
This keeps things very fast for the likely most common case where the year is 4 digits long.
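For reference, a sketch of the pg_uint2str() fallback referred to above (a sketch only, not the attached test program): digits are generated in reverse and then swapped into place, and the return value is the new end of string.

static char *
pg_uint2str(char *str, unsigned int value)
{
	char	   *start = str;
	char	   *end;

	/* build the digits in reverse order */
	do
	{
		*str++ = '0' + value % 10;
		value /= 10;
	} while (value > 0);

	/* remember the end, NUL terminate, then reverse the digits into place */
	end = str;
	*str-- = '\0';
	while (start < str)
	{
		char		swap = *start;

		*start++ = *str;
		*str-- = swap;
	}
	return end;
}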

I've not thought about negative years. The whole function should perhaps take signed ints instead of unsigned.

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 5 August 2015 at 12:51, David Rowley <david.rowley@2ndquadrant.com> wrote:
On 29 July 2015 at 03:25, Andres Freund <andres@anarazel.de> wrote:
On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
> timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
> timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000
>
> timestamp_out_old is master's version, the timestamp_out_af() is yours, and
> timestamp_out() is my one. times are in seconds to perform 100 million
> calls.

That looks good.

> So it appears your version is a bit faster than mine, but we're both about
> 20 times faster than the current one.
> Also mine needs fixed up as the fractional part is not padded the same as
> yours, but I doubt that'll affect the performance by much.

Worthwhile to finish that bit and try ;)

> My view: It's probably not worth going quite as far as you've gone for a
> handful of nanoseconds per call, but perhaps something along the lines of
> mine can be fixed up.

Yes, I agree that yours is probably going to be fast enough.

> Have you thought about what to do when HAVE_INT64_TIMESTAMP is not defined?

I don't think it's actually important. The only difference vs float
timestamps is that in the latter case we set fsecs to zero BC.

Unless we want to slow down the common case it seems not unlikely that
we're going to end up with a separate slow path anyway. E.g. neither
your version nor mine handles 5 digit years (which is why I fell back to
the slow path in that case in my patch).

It occurred to me that handling the 5 digit year is quite a simple change to my code:

We simply need to check if there was any 'num' left after consuming the given space. If there's any left then just use pg_uint2str().
This keeps things very fast for the likely most common case where the year is 4 digits long.

I've not thought about negative years. The whole function should perhaps take signed ints instead of unsigned.


I've made a few changes to this to get the fractional seconds part working as it should.

It also now works fine with 5 digit years.

It's still in the form of the test program, but it should be simple enough to pull out what's required from that and put into Postgres.

I've also changed my version of AppendSeconds() so that it returns a pointer to the new end of string. This should be useful as I see some strlen() calls to get the new end of string. It'll be easy to remove those now, which will further increase performance.

timestamp_out() is the proposed new version
timestamp_out_old() is master's version
timestamp_out_af() is your version

Performance is as follows:

With Clang
david@ubuntu:~/C$ clang timestamp_out.c -o timestamp_out -O2
david@ubuntu:~/C$ ./timestamp_out 
timestamp_out() = 2015-07-29 02:24:33.034 in 0.313686
timestamp_out_old() = 2015-07-29 02:24:33.034 in 5.048472
timestamp_out_af() = 2015-07-29 02:24:33.034 in 0.198147

With gcc
david@ubuntu:~/C$ gcc timestamp_out.c -o timestamp_out -O2
david@ubuntu:~/C$ ./timestamp_out 
timestamp_out() = 2015-07-29 02:24:33.034 in 0.405795
timestamp_out_old() = 2015-07-29 02:24:33.034 in 4.678918
timestamp_out_af() = 2015-07-29 02:24:33.034 in 0.270557

Regards

David Rowley
--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 29 July 2015 at 03:25, Andres Freund <andres@anarazel.de> wrote:
On 2015-07-29 03:10:41 +1200, David Rowley wrote:
> timestamp_out() = 2015-07-29 02:24:33.34 in 3.506000
> timestamp_out_old() = 2015-07-29 02:24:33.034 in 64.518000
> timestamp_out_af() = 2015-07-29 02:24:33.034 in 2.981000
>
> timestamp_out_old is master's version, the timestamp_out_af() is yours, and
> timestamp_out() is my one. times are in seconds to perform 100 million
> calls.

That looks good.

> So it appears your version is a bit faster than mine, but we're both about
> 20 times faster than the current one.
> Also mine needs fixed up as the fractional part is not padded the same as
> yours, but I doubt that'll affect the performance by much.

Worthwhile to finish that bit and try ;)

> My view: It's probably not worth going quite as far as you've gone for a
> handful of nanoseconds per call, but perhaps something along the lines of
> mine can be fixed up.

Yes, I agree that yours is probably going to be fast enough.


I took a bit of weekend time to finish this one off. Patch attached.

A quick test shows a pretty good performance increase:

create table ts (ts timestamp not null);
insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01 00:00:00', '1 sec'::interval);
vacuum freeze ts;

Master:
david=# copy ts to 'l:/ts.sql';
COPY 31536001
Time: 20444.468 ms

Patched:
david=# copy ts to 'l:/ts.sql';
COPY 31536001
Time: 10947.097 ms

There's probably a bit more to squeeze out of this.
1. EncodeDateTime() could return the length of the string to allow callers to use pnstrdup() instead of pstrdup(), which would save the strlen() call.
2. Have pg_int2str_zeropad() and pg_int2str() skip appending the NUL char and leave this up to the calling function.
3. Make something to replace the sprintf(str, " %.*s", MAXTZLEN, tzn); call.

Perhaps 1 and 3 are worth looking into, but I think 2 is likely too error prone for the small gain we'd get from it.

Also I was not too sure whether pg_int2str() was too similar to pg_ltoa(). Perhaps pg_ltoa() should just be modified to return the end of string?

Thoughts?

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 
Attachments

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-08-09 12:47:53 +1200, David Rowley wrote:
> I took a bit of weekend time to finish this one off. Patch attached.
> 
> A quick test shows a pretty good performance increase:
> 
> create table ts (ts timestamp not null);
> insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01
> 00:00:00', '1 sec'::interval);
> vacuum freeze ts;
> 
> Master:
> david=# copy ts to 'l:/ts.sql';
> COPY 31536001
> Time: 20444.468 ms
> 
> Patched:
> david=# copy ts to 'l:/ts.sql';
> COPY 31536001
> Time: 10947.097 ms

Yes, that's pretty cool.

> There's probably a bit more to squeeze out of this.
> 1. EncodeDateTime() could return the length of the string to allow callers
> to use pnstrdup() instead of pstrdup(), which would save the strlen() call.
> 2. Have pg_int2str_zeropad() and pg_int2str() skip appending the NUL char
> and leave this up to the calling function.
> 3. Make something to replace the sprintf(str, " %.*s", MAXTZLEN, tzn); call.
> 
> Perhaps 1 and 3 are worth looking into, but I think 2 is likely too error
> prone for the small gain we'd get from it.

I'm inclined to first get the majority of the optimization - as in
something similar to the current patch. If we then feel a need to
optimize further we can do that. Otherwise we might end up not getting
the 95% performance improvement in 9.6 because we're playing with the
remaining 5% ;)


> Also I was not too sure on if pg_int2str() was too similar to pg_ltoa().
> Perhaps pg_ltoa() should just be modified to return the end of string?

I don't think the benefits are worth breaking the pg_ltoa() interface.

>  /*
> - * Append sections and fractional seconds (if any) at *cp.
> + * Append seconds and fractional seconds (if any) at *cp.
>   * precision is the max number of fraction digits, fillzeros says to
>   * pad to two integral-seconds digits.
>   * Note that any sign is stripped from the input seconds values.
> + * Note 'precision' must not be a negative number.
>   */
> -static void
> +static char *
>  AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros)
>  {
> +#ifdef HAVE_INT64_TIMESTAMP

Wouldn't it be better to just normalize fsec to an integer in that case?
Afaics that's the only remaining reason for the alternative path?

> +/*
> + * pg_int2str
> + *        Converts 'value' into a decimal string representation of the number.
> + *
> + * Caller must ensure that 'str' points to enough memory to hold the result
> + * (at least 12 bytes, counting a leading sign and trailing NUL).
> + * Return value is a pointer to the new NUL terminated end of string.
> + */
> +char *
> +pg_int2str(char *str, int32 value)
> +{
> +    char *start;
> +    char *end;
> +
> +    /*
> +     * Handle negative numbers in a special way. We can't just append a '-'
> +     * prefix and reverse the sign as on two's complement machines negative
> +     * numbers can be 1 further from 0 than positive numbers, we do it this way
> +     * so we properly handle the smallest possible value.
> +     */
> +    if (value < 0)
> +    {

We could alternatively handle this by special-casing INT_MIN, probably
resulting in a bit less duplicated code.
>  
>  /*
>   *        Per-opclass comparison functions for new btrees.  These are
> diff --git a/src/tools/msvc/build.pl b/src/tools/msvc/build.pl
> index e107d41..1e2dd62 100644
> --- a/src/tools/msvc/build.pl
> +++ b/src/tools/msvc/build.pl
> @@ -61,7 +61,7 @@ elsif ($buildwhat)
>  }
>  else
>  {
> -    system("msbuild pgsql.sln /verbosity:detailed /p:Configuration=$bconf");
> +    system("msbuild pgsql.sln /verbosity:quiet /p:Configuration=$bconf");
>  }

Uh? I assume that's an accidental change?

Greetings,

Andres Freund



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:

On 3 September 2015 at 05:10, Andres Freund <andres@anarazel.de> wrote:
On 2015-08-09 12:47:53 +1200, David Rowley wrote:
> I took a bit of weekend time to finish this one off. Patch attached.
>
> A quick test shows a pretty good performance increase:
>
> create table ts (ts timestamp not null);
> insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01
> 00:00:00', '1 sec'::interval);
> vacuum freeze ts;
>
> Master:
> david=# copy ts to 'l:/ts.sql';
> COPY 31536001
> Time: 20444.468 ms
>
> Patched:
> david=# copy ts to 'l:/ts.sql';
> COPY 31536001
> Time: 10947.097 ms

Yes, that's pretty cool.

> There's probably a bit more to squeeze out of this.
> 1. EncodeDateTime() could return the length of the string to allow callers
> to use pnstrdup() instead of pstrdup(), which would save the strlen() call.
> 2. Have pg_int2str_zeropad() and pg_int2str() skip appending the NUL char
> and leave this up to the calling function.
> 3. Make something to replace the sprintf(str, " %.*s", MAXTZLEN, tzn); call.
>
> Perhaps 1 and 3 are worth looking into, but I think 2 is likely too error
> prone for the small gain we'd get from it.

I see the logic there, but a few weekends ago I modified the patch to implement number 2. This is perhaps one optimization that would be hard to change later since, if we suddenly change pg_int2str() to not append the NUL, it might break any 3rd party code that's started using it. The more I think about it, the more I think allowing these functions to skip appending the NUL is OK. They're very useful functions for incrementally building strings.

Attached is a patch to implement this, which performs as follows 
postgres=# copy ts to 'l:/ts.sql';
COPY 31536001
Time: 10665.297 ms

However, this version of the patch does change many existing functions in datetime.c so that they no longer append the NUL char... The good news is that these are all static functions, so nothing else should be using them.

I'm happy to leave 1 and 3 for another rainy weekend one day.


> Also I was not too sure on if pg_int2str() was too similar to pg_ltoa().
> Perhaps pg_ltoa() should just be modified to return the end of string?

I don't think the benefits are worth breaking pg_ltoa interface.


OK, let's leave pg_ltoa() alone.
 
>  /*
> - * Append sections and fractional seconds (if any) at *cp.
> + * Append seconds and fractional seconds (if any) at *cp.
>   * precision is the max number of fraction digits, fillzeros says to
>   * pad to two integral-seconds digits.
>   * Note that any sign is stripped from the input seconds values.
> + * Note 'precision' must not be a negative number.
>   */
> -static void
> +static char *
>  AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros)
>  {
> +#ifdef HAVE_INT64_TIMESTAMP

Wouldn't it be better to just normalize fsec to an integer in that case?
Afaics that's the only remaining reason for the alternative path?

A special case would need to exist somewhere, unless we just modified timestamp2tm() to multiply fsec by 1 million when HAVE_INT64_TIMESTAMP is not defined, but that changes the function signature. 
 
> +/*
> + * pg_int2str
> + *           Converts 'value' into a decimal string representation of the number.
> + *
> + * Caller must ensure that 'str' points to enough memory to hold the result
> + * (at least 12 bytes, counting a leading sign and trailing NUL).
> + * Return value is a pointer to the new NUL terminated end of string.
> + */
> +char *
> +pg_int2str(char *str, int32 value)
> +{
> +     char *start;
> +     char *end;
> +
> +     /*
> +      * Handle negative numbers in a special way. We can't just append a '-'
> +      * prefix and reverse the sign as on two's complement machines negative
> +      * numbers can be 1 further from 0 than positive numbers, we do it this way
> +      * so we properly handle the smallest possible value.
> +      */
> +     if (value < 0)
> +     {

We could alternatively handle this by special-casing INT_MIN, probably
resulting in a bit less duplicated code.

Those special cases seem to do some weird stuff to the performance characteristics of those functions. I find my method of handling negative numbers produces much faster code.

I experimented with finding the fastest way to convert an int to a string over the weekend. I happened to be testing with int64s, but int32 will likely behave the same.

This is the time taken to convert 30 into "30" 2 billion times.

The closest to the implementation I'm using in the patch is the pg_int64tostr() one. The pg_int64tostr_memcpy() one only performs better with bigger numbers / longer strings.

david@ubuntu:~/C$ gcc5.2 tostring.c -o tostring -O2
david@ubuntu:~/C$ ./tostring
pg_int64tostr_memcpy in 13.653252 seconds
pg_int64tostr in 2.042616 seconds
pg_int64tostr_new in 2.056688 seconds
pg_lltoa in 13.604653 seconds

david@ubuntu:~/C$ clang tostring.c -o tostring_clang -O2
david@ubuntu:~/C$ ./tostring_clang 
pg_int64tostr_memcpy in 0.000004 seconds
pg_int64tostr in 2.063335 seconds
pg_int64tostr_new in 2.102068 seconds
pg_lltoa in 3.424630 seconds

clang obviously saw right through my pg_int64tostr_memcpy() and optimized something I'd not intended it to.
Notice how badly pg_lltoa() performs in comparison, especially with gcc.

I'd actually be inclined to consider changing pg_lltoa() to remove the special case and implement it the same as I've proposed we do in pg_int2str(). That's not for this patch though, and would likely need testing with some other compilers.

>
>  /*
>   *           Per-opclass comparison functions for new btrees.  These are
> diff --git a/src/tools/msvc/build.pl b/src/tools/msvc/build.pl
> index e107d41..1e2dd62 100644
> --- a/src/tools/msvc/build.pl
> +++ b/src/tools/msvc/build.pl
> @@ -61,7 +61,7 @@ elsif ($buildwhat)
>  }
>  else
>  {
> -     system("msbuild pgsql.sln /verbosity:detailed /p:Configuration=$bconf");
> +     system("msbuild pgsql.sln /verbosity:quiet /p:Configuration=$bconf");
>  }

Uh? I assume that's an accidental change?


Oops, yes.


Attachments

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-09-03 16:28:40 +1200, David Rowley wrote:
> > Wouldn't it be better to just normalize fsec to an integer in that case?
> > Afaics that's the only remaining reason for the alternative path?
> >
> > A special case would need to exist somewhere, unless we just modified
> timestamp2tm() to multiply fsec by 1 million when HAVE_INT64_TIMESTAMP is
> not defined, but that changes the function signature.

Sure, but that's a one line #ifdef?
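To sketch what that could look like (hypothetical, not from any posted patch): the float-timestamp fractional second could be scaled to whole microseconds in one place, so the output code can treat fsec as an integer in both builds.

#include <math.h>

/* convert a float-timestamp fraction (0.0 <= fsec < 1.0) to whole microseconds */
static int
float_fsec_to_usecs(double fsec)
{
	return (int) rint(fsec * 1000000.0);
}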

> > We could alternatively handle this by special-casing INT_MIN, probably
> > resulting in a bit less duplicated code.
> >
> 
> Those special cases seem to do some weird stuff to the performance
> characteristics of those functions. I find my method to handle negative
> numbers to produce much faster code.

That's a bit odd.

> I experimented with finding the fastest way to convert an int to a string
> at the weekend, I did happen to be testing with int64's but likely int32
> will be the same.
> 
> This is the time taken to convert 30 into "30" 2 billion times.
> 
> The closest implementation I'm using in the patch is pg_int64tostr() one.
> The pg_int64tostr_memcpy() only performs better with bigger numbers /
> longer strings.
> 
> david@ubuntu:~/C$ gcc5.2 tostring.c -o tostring -O2
> david@ubuntu:~/C$ ./tostring
> pg_int64tostr_memcpy in 13.653252 seconds
> pg_int64tostr in 2.042616 seconds
> pg_int64tostr_new in 2.056688 seconds
> pg_lltoa in 13.604653 seconds
> 
> david@ubuntu:~/C$ clang tostring.c -o tostring_clang -O2
> david@ubuntu:~/C$ ./tostring_clang
> pg_int64tostr_memcpy in 0.000004 seconds
> pg_int64tostr in 2.063335 seconds
> pg_int64tostr_new in 2.102068 seconds
> pg_lltoa in 3.424630 seconds

Are you sure this isn't an optimization artifact where the actual work
is optimized away? Doing 2 billion conversions in 2s kinda implies that
the string conversion is done in ~4 cycles. That seems unrealistically
fast, even for a pipelined cpu.


Greetings,

Andres Freund



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 3 September 2015 at 22:17, Andres Freund <andres@anarazel.de> wrote:
On 2015-09-03 16:28:40 +1200, David Rowley wrote:
> I experimented with finding the fastest way to convert an int to a string
> at the weekend, I did happen to be testing with int64's but likely int32
> will be the same.
>
> This is the time taken to convert 30 into "30" 2 billion times.
>
> The closest implementation I'm using in the patch is pg_int64tostr() one.
> The pg_int64tostr_memcpy() only performs better with bigger numbers /
> longer strings.
>
> david@ubuntu:~/C$ gcc5.2 tostring.c -o tostring -O2
> david@ubuntu:~/C$ ./tostring
> pg_int64tostr_memcpy in 13.653252 seconds
> pg_int64tostr in 2.042616 seconds
> pg_int64tostr_new in 2.056688 seconds
> pg_lltoa in 13.604653 seconds
>
> david@ubuntu:~/C$ clang tostring.c -o tostring_clang -O2
> david@ubuntu:~/C$ ./tostring_clang
> pg_int64tostr_memcpy in 0.000004 seconds
> pg_int64tostr in 2.063335 seconds
> pg_int64tostr_new in 2.102068 seconds
> pg_lltoa in 3.424630 seconds

Are you sure this isn't an optimization artifact where the actual work
is optimized away? Doing 2 billion conversions in 2s kinda implies that
the string conversion is done in ~4 cycles. That seems unrealistically
fast, even for a pipelined cpu.

I think you're right. 
If I change the NUM_TO_STRING in my tostring.c to
 #define NUM_TO_PRINT i%100

I just wanted a way to keep the strings 2 digits long, so that it matches what will happen when building timestamps.

It looks a bit different, but it's still a bit faster than pg_lltoa()

david@ubuntu:~/C$ ./tostring
pg_int64tostr_memcpy in 16.557578 seconds
pg_int64tostr in 8.070706 seconds
pg_int64tostr_new in 7.597320 seconds
pg_lltoa in 12.315339 seconds

david@ubuntu:~/C$ ./tostring_clang 
pg_int64tostr_memcpy in 19.698807 seconds
pg_int64tostr in 12.800270 seconds
pg_int64tostr_new in 14.174052 seconds
pg_lltoa in 14.427803 seconds

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 3 September 2015 at 22:17, Andres Freund <andres@anarazel.de> wrote:
On 2015-09-03 16:28:40 +1200, David Rowley wrote:
> > Wouldn't it be better to just normalize fsec to an integer in that case?
> > Afaics that's the only remaining reason for the alternative path?
> >
> > A special case would need to exist somewhere, unless we just modified
> timestamp2tm() to multiply fsec by 1 million when HAVE_INT64_TIMESTAMP is
> not defined, but that changes the function signature.

Sure, but that's a one line #ifdef?

I had a look at this but I found that the precision is 10 with double timestamps per:

#define MAX_TIME_PRECISION 10
#define TIME_PREC_INV 10000000000.0

So if I do something like:

int fseconds = fsec * TIME_PREC_INV;

In AppendSeconds(), it overflows fseconds.

I could of course make fseconds an int64, but then I'll need to include a 64bit version of pg_int2str(). That does not seem like an improvement.

I'm also starting to dislike the name pg_int2str(): the "2" could be misread as int2 (2 bytes) rather than as "to".

Regards

David Rowley
 
--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 3 September 2015 at 05:10, Andres Freund <andres@anarazel.de> wrote:
> +/*
> + * pg_int2str
> + *           Converts 'value' into a decimal string representation of the number.
> + *
> + * Caller must ensure that 'str' points to enough memory to hold the result
> + * (at least 12 bytes, counting a leading sign and trailing NUL).
> + * Return value is a pointer to the new NUL terminated end of string.
> + */
> +char *
> +pg_int2str(char *str, int32 value)
> +{
> +     char *start;
> +     char *end;
> +
> +     /*
> +      * Handle negative numbers in a special way. We can't just append a '-'
> +      * prefix and reverse the sign as on two's complement machines negative
> +      * numbers can be 1 further from 0 than positive numbers, we do it this way
> +      * so we properly handle the smallest possible value.
> +      */
> +     if (value < 0)
> +     {

We could alternatively handle this by special-casing INT_MIN, probably
resulting in a bit less duplicated code.

Hi Andres,

I've made a few updates to the patch to clean up a few badly written comments, and also to fix a missing Abs() call.
I've also renamed the pg_int2str to become pg_ltostr() to bring it more inline with pg_ltoa.

Attached is also timestamp_delta.patch which changes pg_ltostr() to use the INT_MIN special case that pg_ltoa() also uses. I didn't make a similar change to pg_ltostr_zeropad() as I'd need to add even more special code to prepend the correct number of zeros before the 2147483648. This zero padding reason was why I originally came up with the alternative way to handle the negative numbers. I had just thought that it was best to keep both functions as similar as possible.
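For reference, the INT_MIN special case mentioned above looks roughly like this (a sketch with a hypothetical helper name; pg_ltoa() in numutils.c handles INT_MIN with a similar hard-coded string):

#include <limits.h>
#include <string.h>

/* write 'value' as decimal text at 'str', returning the new end of string */
static char *
int32_to_str_sketch(char *str, int value)
{
	char		digits[12];
	int			n = 0;

	/*
	 * -INT_MIN is not representable as a positive int32, so emit that single
	 * value up front and negate everything else.
	 */
	if (value == INT_MIN)
	{
		memcpy(str, "-2147483648", 11);
		return str + 11;
	}
	if (value < 0)
	{
		*str++ = '-';
		value = -value;
	}
	do
	{
		digits[n++] = '0' + value % 10;
		value /= 10;
	} while (value > 0);
	while (n > 0)
		*str++ = digits[--n];
	return str;
}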

I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from AppendSeconds(). The only way I can think to handle this is to just make fsec_t unconditionally an int64 (since with float datetimes the precision is 10 decimal digits after the dec point, this does not fit in int32). I'd go off and do this, but this means I need to write an int64 version of pg_ltostr(). Should I do it this way?

I also did some tests on server grade hardware, and the performance increase is looking a bit better.

create table ts (ts timestamp not null);
insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01 00:00:00', '1 sec'::interval);
vacuum freeze ts;

Master

tpch=# copy ts to '/dev/null';
COPY 31536001
Time: 17071.255 ms

Patched
tpch=# copy ts to '/dev/null';
COPY 31536001
Time: 6402.835 ms

The times don't seem to vary depending on whether the attached timestamp_delta.patch is applied.

Regards

David Rowley
Attachments

Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-09-12 04:00:26 +1200, David Rowley wrote:
> I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> AppendSeconds(). The only way I can think to handle this is to just
> make fsec_t unconditionally an int64 (since with float datetimes the
> precision is 10 decimal digits after the dec point, this does not fit in
> int32). I'd go off and do this, but this means I need to write an int64
> version of pg_ltostr(). Should I do it this way?

I don't have time to look into this in detail right now. But isn't that
the wrong view?

The precision with float timestamps is still microseconds which you can
easily express in an integer. That the value can be higher is just
because fsec_t for floating points also includes seconds, right? It
shouldn't be too hard to separate those?

Greetings,

Andres Freund



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 12 September 2015 at 04:24, Andres Freund <andres@anarazel.de> wrote:
On 2015-09-12 04:00:26 +1200, David Rowley wrote:
> I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> AppendSeconds(). The only way I can think to handle this is to just
> make fsec_t unconditionally an int64 (since with float datetimes the
> precision is 10 decimal digits after the dec point, this does not fit in
> int32). I'd go off and do this, but this means I need to write an int64
> version of pg_ltostr(). Should I do it this way?

I don't have time to look into this in detail right now. But isn't that
the wrong view?


I understand you have a lot on. I've attempted to explain my reasoning below. 
 
The precision with float timestamps is still microseconds which you can
easily express in an integer. That the value can be higher is just
because fsec_t for floating points also includes seconds, right? It
shouldn't be too hard to separate those?


I don't think it's possible to claim that "The precision with float timestamps is still microseconds". I'd say it's limited more by the precision of double itself.

As a demo (compiled with float datetime)

postgres=# select '11:59:59.0000000002 PM'::time;
        time
---------------------
 23:59:59.0000000002
(1 row)

Which is 23:59:59 and 0.0002 microseconds, or 0.2 nanoseconds, or 200 picoseconds.

This is formatted this way by AppendSeconds(), as MAX_TIME_PRECISION is defined as 10 when compiled with HAVE_INT64_TIMESTAMP undefined.

On the other end of the spectrum:

postgres=# select '999999-01-01 11:59:59.0005'::timestamp;
       timestamp
-----------------------
 999999-01-01 11:59:59
(1 row)

There was no precision left for the fractional seconds here. So I'd say claims of microsecond precision are incorrect in many regards.

I could change AppendSeconds() to only format to 6 decimal places, but that would be a behavioral change that I'm certainly not going to argue for. Quite possibly this extra precision with the time type is about the only reason to bother keeping the float time code at all, else what's it good for? 

There are just not enough bits in an int32 to do 0.99 * 10000000000.0 (that's about 9.9 billion, while an int32 maxes out at 2147483647), which is why I asked about int64. It all just seems like more hassle than it's worth to get rid of a small special case in AppendSeconds().

The other problem with changing timestamp2tm() to output microseconds is that this would require changes in all functions which call timestamp2tm(), and likely many functions which call those. Many of these would break 3rd party code. I found myself in contrib code when looking at changing it. (Specifically a DecodeDateTime() call in adminpack.c)

I do agree that it's quite ugly that I've only kept TrimTrailingZeros() in float timestamp mode, but at least it is static.

What's the best way to proceed with this:
1. Change AppendSeconds() to always format to 6 decimal places? (Breaks applications which rely on more precision with TIME)
2. Change AppendSeconds() to multiply f_sec by 10000000000.0 (this requires an int64 version of pg_ltostr, but still requires a special case in AppendSeconds())
3. Just keep things as they are with my proposed patch.
4. ?

Are you worried about this because I've not focused on optimising float timestamps as much as int64 timestamps?  Are there many people compiling with float timestamps in the real world?

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 

Re: WIP: Make timestamptz_out less slow.

From: Jim Nasby
Date:
On 9/13/15 2:43 AM, David Rowley wrote:
> Are you worried about this because I've not focused on optimising float
> timestamps as much as int64 timestamps?  Are there many people compiling
> with float timestamps in the real world?

My $0.02: the default was changed some 5 years ago so FP time is 
probably pretty rare now. I don't think it's worth a bunch of extra work 
to speed them up.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: WIP: Make timestamptz_out less slow.

From: Andres Freund
Date:
On 2015-09-14 12:03:31 -0500, Jim Nasby wrote:
> On 9/13/15 2:43 AM, David Rowley wrote:
> >Are you worried about this because I've not focused on optimising float
> >timestamps as much as int64 timestamps?  Are there many people compiling
> >with float timestamps in the real world?
> 
> My $0.02: the default was changed some 5 years ago so FP time is probably
> pretty rare now. I don't think it's worth a bunch of extra work to speed
> them up.

Agreed. That's not why I'm arguing for it. It's about having kind-of duplicate
code that's not exercised at all by any common setup.

Greetings,

Andres Freund



Re: WIP: Make timestamptz_out less slow.

From: Alvaro Herrera
Date:
Jim Nasby wrote:
> On 9/13/15 2:43 AM, David Rowley wrote:
> >Are you worried about this because I've not focused on optimising float
> >timestamps as much as int64 timestamps?  Are there many people compiling
> >with float timestamps in the real world?
> 
> My $0.02: the default was changed some 5 years ago so FP time is probably
> pretty rare now.

The default was FP for 8.3 and was changed before 8.4.  Anybody who was
running with the default back then and who has pg_upgraded all the way
to current releases is still using floating-point date/times.

> I don't think it's worth a bunch of extra work to speed them up.

Not sure about that.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 15 September 2015 at 05:52, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Jim Nasby wrote:
> On 9/13/15 2:43 AM, David Rowley wrote:
> >Are you worried about this because I've not focused on optimising float
> >timestamps as much as int64 timestamps?  Are there many people compiling
> >with float timestamps in the real world?
>
> My $0.02: the default was changed some 5 years ago so FP time is probably
> pretty rare now.

The default was FP for 8.3 and was changed before 8.4.  Anybody who was
running with the default back then and who has pg_upgraded all the way
to current releases is still using floating-point date/times.

> I don't think it's worth a bunch of extra work to speed them up.

Not sure about that.


It's not like nothing is improved for float timestamps with this patch; it's only the appending of non-zero fractional seconds that I've left alone. Every other portion of the timestamp has been improved.

I made a quick test as a demo. This is compiled with float timestamps.

create table ts (ts timestamp not null);
insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01 00:00:00', '1 sec'::interval);
vacuum freeze ts;
create table ts2 (ts timestamp not null);
insert into ts2 select generate_series('2010-01-01 00:00:00', '2010-01-01 00:00:31.536001', '1 microsecond'::interval);
vacuum freeze ts2;

Master:
copy ts to '/dev/null'; -- Test 1
Time: 18252.128 ms
Time: 18230.650 ms
Time: 18247.622 ms

copy ts2 to '/dev/null'; -- Test 2
Time: 30928.999 ms
Time: 30973.576 ms
Time: 30935.489 ms

Patched

copy ts to '/dev/null'; -- Test 1 (247%)
Time: 7378.243 ms
Time: 7383.486 ms
Time: 7385.675 ms

copy ts2 to '/dev/null'; -- Test 2 (142%)
Time: 21761.047 ms
Time: 21757.026 ms
Time: 21759.621 ms

The patched difference between ts and ts2 can be accounted for by the fact that I didn't find a better way to do:

  if (fillzeros)
      sprintf(cp, "%0*.*f", precision + 3, precision, fabs(sec + fsec));
  else
      sprintf(cp, "%.*f", precision, fabs(sec + fsec));

A fast path exists when the fractional seconds are zero, which is why there's such a variation between the two tests.

I did, however, spend some time a few weekends ago writing a function to improve this, which is aimed more at improving the poor performance of float4out and float8out. It's quite likely that, if I can get it finished one day, it can help improve this case too.

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: WIP: Make timestamptz_out less slow.

From: Alvaro Herrera
Date:
David Rowley wrote:

> It's not like nothing is improved in float timestamps with this patch, it's
> only appending the non-zero fractional seconds that I've left alone. Every
> other portion of the timestamp has been improved.

OK, sounds good enough to me.

> I did, however spend some time a few weekends ago writing a function to
> improve this, which is more aimed at improving poor performance of
> float4out and float8out. It's quite likely if I can get finished one day,
> it can help improve this case too.

Even better, prolly not a blocker for this patch.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: WIP: Make timestamptz_out less slow.

From: Peter Geoghegan
Date:
On Fri, Sep 11, 2015 at 9:00 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
> Attached is also timestamp_delta.patch which changes pg_ltostr() to use the
> INT_MIN special case that pg_ltoa also uses. I didn't make a similar change
> to pg_ltostr_zeropad() as I'd need to add even more special code to prepend
> the correct number of zero before the 2147483648. This zero padding reason
> was why I originally came up with the alternative way to handle the negative
> numbers. I had just thought that it was best to keep both functions as
> similar as possible.

I've started looking at this -- "timestamp_out_speedup_2015-09-12.patch".

> I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> AppendSeconds(). The only way I can think to handle this is to just make
> fsec_t unconditionally an int64 (since with float datetimes the precision is
> 10 decimal digits after the dec point, this does not fit in int32). I'd go
> off and do this, but this means I need to write an int64 version of
> pg_ltostr(). Should I do it this way?

Have you thought about *just* having an int64 pg_ltostr()? Are you
aware of an appreciable overhead from having int32 callers just rely
on a promotion to int64?

In general, years are 1900-based in Postgres:

struct pg_tm
{
    ...
    int         tm_mday;
    int         tm_mon;         /* origin 0, not 1 */
    int         tm_year;        /* relative to 1900 */
    ...
};

While it's not your patch's fault, it is not noted by EncodeDateTime()
and EncodeDateOnly() and others that their pg_tm structs are not
1900-based. This is in contrast to multiple functions within
formatting.c, nabstime.c, and timestamp.c (some of which are clients
or clients of clients for this new code). I think that the rule has
been undermined so much that it doesn't make sense to indicate
exceptions directly, though. I think pg_tm should have comments saying
that there are many cases where tm_year is not relative to 1900.

I have a few minor nitpicks about the patch.

No need for "negative number" comment here -- just use
"Assert(precision >= 0)" code:

+ * Note 'precision' must not be a negative number.
+ * Note callers should assume cp will not be NUL terminated.
  */
-static void
+static char *
 AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros)
 {
+#ifdef HAVE_INT64_TIMESTAMP
+

I would just use a period/full stop here:

+ * Note str is not NUL terminated, callers are responsible for NUL terminating
+ * str themselves.

Don't think you need so many "Note:" specifiers here:

+ * Note: Callers should ensure that 'padding' is above zero.
+ * Note: This function is optimized for the case where the number is not too
+ *      big to fit inside of the specified padding.
+ * Note: Caller must ensure that 'str' points to enough memory to hold the
+        result (at least 12 bytes, counting a leading sign and trailing NUL,
+        or padding + 1 bytes, whichever is larger).
+ */
+char *
+pg_ltostr_zeropad(char *str, int32 value, int32 padding)

Not so sure about this, within the new pg_ltostr_zeropad() function:

+   /*
+    * Handle negative numbers in a special way. We can't just append a '-'
+    * prefix and reverse the sign as on two's complement machines negative
+    * numbers can be 1 further from 0 than positive numbers, we do it this way
+    * so we properly handle the smallest possible value.
+    */
+   if (num < 0)
+   {
      ...
+   }
+   else
+   {
      ...
+   }

Maybe it's better to just special-case INT_MIN and then do an Abs()?
Actually, would any likely caller of pg_ltostr_zeropad() actually care
if INT_MIN was a case that you cannot handle, and assert against? You
could perhaps reasonably make it the caller's problem. Haven't made up
my mind if your timestamp_delta.patch is better than that; cannot
immediately see a problem with putting it on the caller.

More generally, maybe routines like EncodeDateTime() ought now to use
the Abs() macro.

Those are all of my nitpicks. :-)

> I also did some tests on server grade hardware, and the performance increase
> is looking a bit better.
>
> create table ts (ts timestamp not null);
> insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01
> 00:00:00', '1 sec'::interval);
> vacuum freeze ts;

On my laptop, I saw a 2.4X - 2.6X speedup for this case. That seems
pretty nice to me.

Will make another pass over this soon.

-- 
Peter Geoghegan



Re: WIP: Make timestamptz_out less slow.

From: David Rowley
Date:
On 5 November 2015 at 13:20, Peter Geoghegan <pg@heroku.com> wrote:
On Fri, Sep 11, 2015 at 9:00 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
> Attached is also timestamp_delta.patch which changes pg_ltostr() to use the
> INT_MIN special case that pg_ltoa also uses. I didn't make a similar change
> to pg_ltostr_zeropad() as I'd need to add even more special code to prepend
> the correct number of zero before the 2147483648. This zero padding reason
> was why I originally came up with the alternative way to handle the negative
> numbers. I had just thought that it was best to keep both functions as
> similar as possible.

I've started looking at this -- "timestamp_out_speedup_2015-09-12.patch".

> I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> AppendSeconds(). The only way I can think to handle this is to just make
> fsec_t unconditionally an int64 (since with float datetimes the precision is
> 10 decimal digits after the dec point, this does not fit in int32). I'd go
> off and do this, but this means I need to write an int64 version of
> pg_ltostr(). Should I do it this way?

Have you thought about *just* having an int64 pg_ltostr()? Are you
aware of an appreciable overhead from having int32 callers just rely
on a promotion to int64?


I'd not thought of that. It would certainly add unnecessary overhead on 32bit machines. 
To be honest, I don't really like the idea of making fsec_t an int64; there are just too many places around the code base that would need to be updated. Plus with float datetimes, we'd need to multiply the fractional seconds by 1000000000, which is based on the MAX_TIME_PRECISION setting as defined when HAVE_INT64_TIMESTAMP is undefined.

 
I have a few minor nitpicks about the patch.

No need for "negative number" comment here -- just use
"Assert(precision >= 0)" code:

+ * Note 'precision' must not be a negative number.
+ * Note callers should assume cp will not be NUL terminated.
  */
-static void
+static char *
 AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros)
 {
+#ifdef HAVE_INT64_TIMESTAMP
+

I would just use a period/full stop here:

+ * Note str is not NUL terminated, callers are responsible for NUL terminating
+ * str themselves.


Do you mean change the comment to "Note str is not NUL terminated. Callers are responsible for NUL terminating str themselves." ?

 
Don't think you need so many "Note:" specifiers here:

+ * Note: Callers should ensure that 'padding' is above zero.
+ * Note: This function is optimized for the case where the number is not too
+ *      big to fit inside of the specified padding.
+ * Note: Caller must ensure that 'str' points to enough memory to hold the
+        result (at least 12 bytes, counting a leading sign and trailing NUL,
+        or padding + 1 bytes, whichever is larger).
+ */
+char *
+pg_ltostr_zeropad(char *str, int32 value, int32 padding)


Yes, that's a bit ugly. How about just Assert(padding > 0) just like above? That gets rid of one Note:
The optimized part is probably not that important anyway. I just mentioned it to try and reduce the chances of someone using it when the padding is always too small.
 
Not so sure about this, within the new pg_ltostr_zeropad() function:

+   /*
+    * Handle negative numbers in a special way. We can't just append a '-'
+    * prefix and reverse the sign as on two's complement machines negative
+    * numbers can be 1 further from 0 than positive numbers, we do it this way
+    * so we properly handle the smallest possible value.
+    */
+   if (num < 0)
+   {
      ...
+   }
+   else
+   {
      ...
+   }

Maybe it's better to just special case INT_MIN and then do an Abs()?
Actually, would any likely caller of pg_ltostr_zeropad() actually care
if INT_MIN was a case that you cannot handle, and assert against? You
could perhaps reasonably make it the caller's problem. Haven't made up
my mind if your timestamp_delta.patch is better than that; cannot
immediately see a problem with putting it on the caller.


I can make that case work the same as pg_ltoa() for pg_ltostr(), that's not a problem, I did that in the delta patch. With pg_ltostr_zeropad() I'd need to add some corner case code to prepend the correct number of zeros onto -2147483648. This is likely not going to look very nice, and also it's probably going to end up about the same number of lines of code as what's in the patch already. If you think it's better for performance, then I've already done tests to show that it's not better. I really don't think it'll look very nice either. I quite like my negative number handling, since it's actually code that would be exercised 50% of the time rather than 1/ 2^32 of the time, assuming all numbers have an equal chance of being passed.
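
For illustration, this is roughly the shape of the negative-domain
handling I mean -- a simplified sketch rather than the patch's actual
code (it assumes C99-style truncating division, so value % 10 lies in
0..-9 for a negative value):

static char *
ltostr_zeropad_neg_sketch(char *str, int value, int padding)
{
    char        tmp[12];
    int         len = 0;
    int         i;

    if (value < 0)
    {
        *str++ = '-';
        padding--;
    }
    else
        value = -value;     /* work in the negative domain throughout */

    /*
     * Because -INT_MAX - 1 is representable but -INT_MIN is not, doing
     * the conversion on the negative side means INT_MIN needs no
     * special case.
     */
    do
    {
        tmp[len++] = (char) ('0' - value % 10);
        value /= 10;
    } while (value < 0);

    for (i = len; i < padding; i++)
        *str++ = '0';
    while (len > 0)
        *str++ = tmp[--len];

    return str;     /* not NUL terminated */
}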
 
More generally, maybe routines like EncodeDateTime() ought now to use
the Abs() macro.


You might be right about that. Then we can just have pg_ltostr() do unsigned only. The reason I didn't do that is because I was trying to minimise what I was changing in the patch, and I thought it was less likely to make it if I started adding calls to Abs() around the code base.

Perhaps it's good to have pg_ltostr() there so that we have a good fast way to build generic strings in Postgres incrementally. I'd say we'd have a much less reusable function if we only supported unsigned int, as we don't have a database type which is based on unsigned int.
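
As a quick usage sketch (assuming the behaviour in the patch: both
functions return a pointer just past what they wrote and leave NUL
termination to the caller; MAXDATELEN comes from datetime.h, and the
variables here are made up for illustration):

    char    buf[MAXDATELEN + 1];
    char   *cp = buf;
    int     year = 2015,
            month = 11,
            day = 5;

    cp = pg_ltostr_zeropad(cp, year, 4);
    *cp++ = '-';
    cp = pg_ltostr_zeropad(cp, month, 2);
    *cp++ = '-';
    cp = pg_ltostr_zeropad(cp, day, 2);
    *cp = '\0';     /* the caller's responsibility */
    /* buf now holds "2015-11-05" */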
 
Those are all of my nitpicks. :-)


Many thanks for looking at this. Nitpicks are fine :)
 
> I also did some tests on server grade hardware, and the performance increase
> is looking a bit better.
>
> create table ts (ts timestamp not null);
> insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01
> 00:00:00', '1 sec'::interval);
> vacuum freeze ts;

On my laptop, I saw a 2.4X - 2.6X speedup for this case. That seems
pretty nice to me.

Will make another pass over this soon.

Thanks

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
Attachments

Re: WIP: Make timestamptz_out less slow.

From
Peter Geoghegan
Date:
On Wed, Nov 4, 2015 at 6:30 PM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
>> Have you thought about *just* having an int64 pg_ltostr()? Are you
>> aware of an appreciable overhead from having int32 callers just rely
>> on a promotion to int64?
>
> I'd not thought of that. It would certainly add unnecessary overhead on
> 32bit machines.
> To be honest, I don't really like the idea of making fsec_t an int64, there
> are just too many places around the code base that need to be updated. Plus
> with float datetimes, we'd need to multiply the fractional seconds by
> 1000000000, which is based on the MAX_TIME_PRECISION setting as defined when
> HAVE_INT64_TIMESTAMP is undefined.

OK.

>> I would just use a period/full stop here:
>>
>> + * Note str is not NUL terminated, callers are responsible for NUL
>> terminating
>> + * str themselves.
>>
>
> Do you mean change the comment to "Note str is not NUL terminated. Callers
> are responsible for NUL terminating str themselves." ?

Yes.

> Yes, that's a bit ugly. How about just Assert(padding > 0) just like above?
> That gets rid of one Note:
> The optimized part is probably not that important anyway. I just mentioned
> it to try and reduce the chances of someone using it when the padding is
> always too small.

Cool.

>> Maybe it's better to just special case INT_MIN and then do an Abs()?
>> Actually, would any likely caller of pg_ltostr_zeropad() actually care
>> if INT_MIN was a case that you cannot handle, and assert against? You
>> could perhaps reasonably make it the caller's problem. Haven't made up
>> my mind if your timestamp_delta.patch is better than that; cannot
>> immediately see a problem with putting it on the caller.
>>
>
> I can make that case work the same as pg_ltoa() for pg_ltostr(), that's not
> a problem, I did that in the delta patch. With pg_ltostr_zeropad() I'd need
> to add some corner case code to prepend the correct number of zeros onto
> -2147483648. This is likely not going to look very nice, and also it's
> probably going to end up about the same number of lines of code as what's in
> the patch already. If you think it's better for performance, then I've
> already done tests to show that it's not better. I really don't think it'll
> look very nice either. I quite like my negative number handling, since it's
> actually code that would be exercised 50% of the time rather than 1/ 2^32 of
> the time, assuming all numbers have an equal chance of being passed.

Fair enough.

>> More generally, maybe routines like EncodeDateTime() ought now to use
>> the Abs() macro.
>
> You might be right about that. Then we can just have pg_ltostr() do unsigned
> only. The reason I didn't do that is because I was trying to minimise what I
> was changing in the patch, and I thought it was less likely to make it if I
> started adding calls to Abs() around the code base.

I am very familiar with being conflicted about changing things beyond
what is actually strictly necessary. It's still something I deal with
a lot. One recommendation I do think I have right is the need to
centrally document that there are many exceptions to the 1900-based
behavior of struct pg_tm. As I said, we should not continue to note
those exceptions inconsistently in local comments, even if your
patch doesn't make it any worse.

-- 
Peter Geoghegan



Re: WIP: Make timestamptz_out less slow.

From
David Rowley
Date:
On 12 November 2015 at 13:44, Peter Geoghegan <pg@heroku.com> wrote:
On Wed, Nov 4, 2015 at 6:30 PM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
>> Have you thought about *just* having an int64 pg_ltostr()? Are you
>> aware of an appreciable overhead from having int32 callers just rely
>> on a promotion to int64?
>
> I'd not thought of that. It would certainly add unnecessary overhead on
> 32bit machines.
> To be honest, I don't really like the idea of making fsec_t an int64, there
> are just too many places around the code base that need to be updated. Plus
> with float datetimes, we'd need to multiply the fractional seconds by
> 1000000000, which is based on the MAX_TIME_PRECISION setting as defined when
> HAVE_INT64_TIMESTAMP is undefined.

OK.

>> I would just use a period/full stop here:
>>
>> + * Note str is not NUL terminated, callers are responsible for NUL
>> terminating
>> + * str themselves.
>>
>
> Do you mean change the comment to "Note str is not NUL terminated. Callers
> are responsible for NUL terminating str themselves." ?

Yes.

> Yes, that's a bit ugly. How about just Assert(padding > 0) just like above?
> That gets rid of one Note:
> The optimized part is probably not that important anyway. I just mentioned
> it to try and reduce the chances of someone using it when the padding is
> always too small.

Cool.

>> Maybe it's better to just special case INT_MIN and then do an Abs()?
>> Actually, would any likely caller of pg_ltostr_zeropad() actually care
>> if INT_MIN was a case that you cannot handle, and assert against? You
>> could perhaps reasonably make it the caller's problem. Haven't made up
>> my mind if your timestamp_delta.patch is better than that; cannot
>> immediately see a problem with putting it on the caller.
>>
>
> I can make that case work the same as pg_ltoa() for pg_ltostr(), that's not
> a problem, I did that in the delta patch. With pg_ltostr_zeropad() I'd need
> to add some corner case code to prepend the correct number of zeros onto
> -2147483648. This is likely not going to look very nice, and also it's
> probably going to end up about the same number of lines of code as what's in
> the patch already. If you think it's better for performance, then I've
> already done tests to show that it's not better. I really don't think it'll
> look very nice either. I quite like my negative number handling, since it's
> actually code that would be exercised 50% of the time rather than 1/ 2^32 of
> the time, assuming all numbers have an equal chance of being passed.

Fair enough.


Many thanks for spending time on this, Peter.
 
>> More generally, maybe routines like EncodeDateTime() ought now to use
>> the Abs() macro.
>
> You might be right about that. Then we can just have pg_ltostr() do unsigned
> only. The reason I didn't do that is because I was trying to minimise what I
> was changing in the patch, and I thought it was less likely to make it if I
> started adding calls to Abs() around the code base.

I am very familiar with being conflicted about changing things beyond
what is actually strictly necessary. It's still something I deal with
a lot. One recommendation I do think I have right is the need to
centrally document that there are many exceptions to the 1900-based
behavior of struct pg_tm. As I said, we should not continue to note
those exceptions inconsistently in local comments, even if your
patch doesn't make it any worse.


Just to confirm, you mean this comment?

int tm_year; /* relative to 1900 */

Please let me know if you disagree, but I'm not sure it's the business of this patch to fix that. If it's wrong now, then it was wrong before my patch, so it should be a separate patch which fixes it.

At this stage I don't quite know what the fix should be, whether it's doing tm->tm_year -= 1900; in timestamp2tm() after the j2date() call, or if it's just removing the misleading comment.

I also don't quite understand why we bother having it relative to 1900 and not just base it on 0.
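
For what it's worth, the libc behaviour really is 1900-based, which is
probably where the comment came from; as far as I can tell our
timestamp2tm()/j2date() path stores the full year instead, which is what
makes the comment misleading. A small standalone illustration of the
libc side (the PostgreSQL side is my reading of the code and worth
double-checking against the tree):

#include <stdio.h>
#include <time.h>

int
main(void)
{
    time_t      now = time(NULL);
    struct tm  *t = gmtime(&now);

    /* libc struct tm: tm_year is years since 1900, e.g. 115 for 2015 */
    printf("libc tm_year = %d, actual year = %d\n",
           t->tm_year, t->tm_year + 1900);

    /*
     * PostgreSQL's pg_tm, as filled in by the timestamp code paths, ends
     * up with the full year (e.g. 2015) in tm_year despite the
     * "relative to 1900" comment.
     */
    return 0;
}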

Is there anything else you see that's pending before it can be marked as ready for committer?

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: WIP: Make timestamptz_out less slow.

From
Peter Geoghegan
Date:
On Sun, Nov 22, 2015 at 2:20 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
> Just to confirm, you mean this comment?
>
> int tm_year; /* relative to 1900 */
>
> Please let me know if you disagree, but I'm not sure it's the business of
> this patch to fix that. If it's wrong now, then it was wrong before my
> patch, so it should be a separate patch which fixes it.
>
> At this stage I don't quite know what the fix should be, whether it's doing
> tm->tm_year -= 1900; in timestamp2tm() after the j2date() call, or if it's
> just removing the misleading comment.
>
> I also don't quite understand why we bother having it relative to 1900 and
> not just base it on 0.

That's fair. I defer to the judgement of the committer here.

> Is there anything else you see that's pending before it can be marked as
> ready for committer?

Can't think of any reason not to. It's been marked "ready for committer".

Thanks

-- 
Peter Geoghegan



Re: WIP: Make timestamptz_out less slow.

From
David Rowley
Date:

On 29 November 2015 at 14:00, Peter Geoghegan <pg@heroku.com> wrote:
On Sun, Nov 22, 2015 at 2:20 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
> Just to confirm, you mean this comment?
>
> int tm_year; /* relative to 1900 */
>
> Please let me know if you disagree, but I'm not sure it's the business of
> this patch to fix that. If it's wrong now, then it was wrong before my
> patch, so it should be a separate patch which fixes it.
>
> At this stage I don't quite know what the fix should be, whether it's doing
> tm->tm_year -= 1900; in timestamp2tm() after the j2date() call, or if it's
> just removing the misleading comment.
>
> I also don't quite understand why we bother having it relative to 1900 and
> not just base it on 0.

That's fair. I defer to the judgement of the committer here.

> Is there anything else you see that's pending before it can be marked as
> ready for committer?

Can't think of any reason not to. It's been marked "ready for committer".


Many thanks for reviewing this, Peter.
 
--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: WIP: Make timestamptz_out less slow.

From
Tom Lane
Date:
David Rowley <david.rowley@2ndquadrant.com> writes:
[ timestamp_out_speedup_2015-11-05.patch ]

Pushed with a bunch of cosmetic tweaks.
        regards, tom lane



Re: WIP: Make timestamptz_out less slow.

From
David Rowley
Date:
On 7/02/2016 4:14 am, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
>
> David Rowley <david.rowley@2ndquadrant.com> writes:
> [ timestamp_out_speedup_2015-11-05.patch ]
>
> Pushed with a bunch of cosmetic tweaks.

Great. Thanks for pushing this.

David