Re: Integer overflow in timestamp_part()

From: Tom Lane
Subject: Re: Integer overflow in timestamp_part()
Date:
Msg-id: 26127.1454459963@sss.pgh.pa.us
In response to: (Vitaly Burovoy <vitaly.burovoy@gmail.com>)
Responses: Re: Integer overflow in timestamp_part() (Jim Nasby <Jim.Nasby@BlueTreble.com>)
Re: Integer overflow in timestamp_part() (Vitaly Burovoy <vitaly.burovoy@gmail.com>)
List: pgsql-hackers
[ Please use a useful Subject: line in your posts. ]

Vitaly Burovoy <vitaly.burovoy@gmail.com> writes:
> I've just found a little bug: extracting "epoch" from the last 30
> years before Postgres' "+Infinity" leads to an integer overflow:

Hmm.  I do not like the proposed patch much: it looks like it's
throwing away precision too soon, although given that the result of
SetEpochTimestamp can be cast to float exactly, maybe it doesn't matter.
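To make the reported failure mode concrete: a rough sketch (not PostgreSQL source) of the arithmetic involved, assuming epoch extraction effectively computes `timestamp - SetEpochTimestamp()` in int64 microseconds, where `SetEpochTimestamp()` is the (negative) offset of the Unix epoch from the Postgres 2000-01-01 epoch:

```python
# Illustrative sketch of the reported overflow, using Python's
# unbounded integers to show what int64 arithmetic cannot hold.
INT64_MAX = 2**63 - 1
USECS_PER_SEC = 1_000_000

# 1970-01-01 is 946684800 seconds before 2000-01-01
# (10957 days: 30*365 plus 7 leap days, 1972..1996).
EPOCH_OFFSET_USECS = -946_684_800 * USECS_PER_SEC  # ~ SetEpochTimestamp()

def extract_epoch_usecs(ts):
    """Exact-precision version of ts - SetEpochTimestamp()."""
    return ts - EPOCH_OFFSET_USECS

# Any timestamp within ~30 years of +infinity overflows int64:
ts = INT64_MAX - 1                 # largest finite timestamp value
result = extract_epoch_usecs(ts)
print(result > INT64_MAX)          # True: this would wrap in int64 math
```

That ~30-year window of overflowing inputs is exactly the "last 30 years before +Infinity" in the original report.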

More importantly, I seriously doubt that this is the only issue
for timestamps very close to the INT64_MAX boundary.  An example is
that we're not rejecting values that would correspond to DT_NOBEGIN
or DT_NOEND:

regression=# set timezone = 'PST8PDT';
SET
regression=# select '294277-01-08 20:00:54.775806-08'::timestamptz;
           timestamptz           
---------------------------------
 294277-01-08 20:00:54.775806-08
(1 row)

regression=# select '294277-01-08 20:00:54.775807-08'::timestamptz;
 timestamptz 
-------------
 infinity
(1 row)

regression=# select '294277-01-08 20:00:54.775808-08'::timestamptz;
 timestamptz 
-------------
 -infinity
(1 row)

regression=# select '294277-01-08 20:00:54.775809-08'::timestamptz;
ERROR:  timestamp out of range

Worse yet, that last error is coming from timestamptz_out, not
timestamptz_in; we're accepting a value we cannot store properly.
The converted value has actually overflowed to be equal to
INT64_MIN+1, and then timestamptz_out barfs because it's before
Julian day 0.  Other operations would incorrectly interpret it
as a date in the very far past.  timestamptz_in doesn't throw an
error until several hours later than this; it looks like the
problem is that tm2timestamp() worries about overflow in initially
calculating the converted value, but not about overflow in the
dt2local() rotation, and in any case it doesn't worry about not
producing DT_NOEND.
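The boundary values above can be checked by hand. A Python transcription of PostgreSQL's date2j() Julian-day formula (assuming the standard fourth-edition Fliegel/Van Flandern form the backend uses) shows that the .775806 input lands one microsecond below DT_NOEND, so each subsequent microsecond steps onto INT64_MAX, then wraps:

```python
# Python transcription of PostgreSQL's date2j() Julian-day formula,
# used only to verify the boundary arithmetic by hand.
def date2j(y, m, d):
    if m > 2:
        m += 1
        y += 4800
    else:
        m += 13
        y += 4799
    century = y // 100
    jd = y * 365 - 32167
    jd += y // 4 - century + century // 4
    jd += 7834 * m // 256 + d
    return jd

POSTGRES_EPOCH_JDATE = date2j(2000, 1, 1)   # 2451545
USECS_PER_DAY = 86_400 * 1_000_000

# '294277-01-08 20:00:54.775806-08' is 294277-01-09 04:00:54.775806 UTC.
days = date2j(294277, 1, 9) - POSTGRES_EPOCH_JDATE
usecs = days * USECS_PER_DAY + (4 * 3600 + 54) * 1_000_000 + 775806

INT64_MAX = 2**63 - 1
print(usecs == INT64_MAX - 1)   # True: largest storable finite value
# .775807 is exactly INT64_MAX (DT_NOEND), .775808 wraps to
# INT64_MIN (DT_NOBEGIN), and .775809 wraps to INT64_MIN + 1.
```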

I'm inclined to think that a good solution would be to create an
artificial restriction to not accept years beyond, say, 100000 AD.
That would leave us with a lot of daylight to not have to worry
about corner-case overflows in timestamp arithmetic.  I'm not sure
though where we'd need to enforce such a restriction; certainly in
timestamp[tz]_in, but where else?
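Back-of-envelope arithmetic (illustrative only, not a proposed implementation) shows how much slack such a cap would buy:

```python
# Headroom estimate if input years were capped at 100000 AD.
INT64_MAX = 2**63 - 1
USECS_PER_YEAR = 365.2425 * 86_400 * 1_000_000   # mean Gregorian year

max_year = 2000 + INT64_MAX / USECS_PER_YEAR
print(round(max_year))        # ~294277: where int64 microseconds run out

headroom_years = max_year - 100_000
print(round(headroom_years))  # ~194277 years of slack for arithmetic
```

Nearly 200000 years of margin would make corner-case overflow in interval arithmetic, timezone rotation, and the like essentially unreachable.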
        regards, tom lane


