Thread: More inaccurate results from numeric pow()

More inaccurate results from numeric pow()

From: Dean Rasheed
Date:
Doing some more testing of the numeric code patched in [1], I noticed
another case where the result is inaccurate -- computing 0.12 ^
-2345.6 gives a very large number containing 2162 digits, of which
only the first 2006 are correct; the last 156 digits are wrong.

The reason is this code in power_var():

        /* limit to something that won't cause integer overflow */
        val = Max(val, -NUMERIC_MAX_RESULT_SCALE);
        val = Min(val, NUMERIC_MAX_RESULT_SCALE);

where "val" is the approximate decimal result weight. Here
NUMERIC_MAX_RESULT_SCALE is 2000, so it's clamping the estimated
result weight to 2000, and therefore reducing the rscale in the
subsequent calculations, causing the loss of precision at around 2000
digits.

In fact it's possible to predict exactly how large we need to allow
"val" to become, since the final result is computed using exp_var(),
which accepts inputs up to 6000, so the result weight "val" can be up
to around log10(exp(6000)) ~= 2606 before the final result causes an
overflow.

The obvious fix would be to modify the clamping limits. I think a
better answer though is to replace the clamping code with an overflow
test, immediately throwing an error if "val" is outside the allowed
range, per the attached patch.
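
To make the shape of that change concrete, here is a minimal sketch of
the idea (the attached patch itself is not reproduced here, and the
hardwired constant is just the bound derived above):

        /*
         * exp_var() accepts inputs up to 6000, so the approximate
         * decimal result weight "val" can be at most
         * log10(exp(6000)) ~= 2606 before the final result is certain
         * to overflow.  Fail early instead of clamping and silently
         * losing precision at around 2000 digits.
         */
        if (Abs(val) > 2606)
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("value overflows numeric format")));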

This has the advantage that it avoids some expensive computations in
the case where the result will end up overflowing, but more
importantly it means that power_var() isn't so critically dependent on
the limits of exp_var() -- if someone in the future increased the
limits of exp_var() without touching power_var(), and power_var()
clamped to the old range, the problem would resurface. With an
overflow test in power_var() instead of clamping "val", it will either
compute an accurate result or throw an overflow error early on; there
should be no possibility of it returning an inaccurate result.

Regards,
Dean

[1] http://www.postgresql.org/message-id/CAEZATCV7w+8iB=07dJ8Q0zihXQT1semcQuTeK+4_rogC_zq5Hw@mail.gmail.com


Re: More inaccurate results from numeric pow()

From: Tom Lane
Date:
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
> In fact it's possible to predict exactly how large we need to allow
> "val" to become, since the final result is computed using exp_var(),
> which accepts inputs up to 6000, so the result weight "val" can be up
> to around log10(exp(6000)) ~= 2606 before the final result causes an
> overflow.

> The obvious fix would be to modify the clamping limits. I think a
> better answer though is to replace the clamping code with an overflow
> test, immediately throwing an error if "val" is outside the allowed
> range, per the attached patch.

I don't much care for the hardwired magic number here, especially since
exp_var() does not have its limit expressed as "6000" but as
"NUMERIC_MAX_RESULT_SCALE * 3".  I think you should rephrase the limit
to use that expression, and also add something like this in exp_var():
     val = numericvar_to_double_no_overflow(&x);

     /* Guard against overflow */
+    /* If you change this limit, see also power_var()'s limit */
     if (Abs(val) >= NUMERIC_MAX_RESULT_SCALE * 3)
         ereport(ERROR,
                 (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                  errmsg("value overflows numeric format")));

Seems like a reasonable idea otherwise.

        regards, tom lane



Re: More inaccurate results from numeric pow()

From: Robert Haas
Date:
On Mon, May 2, 2016 at 1:02 PM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
> Doing some more testing of the numeric code patched in [1] I noticed
> another case where the result is inaccurate -- computing 0.12 ^
> -2345.6 gives a very large number containing 2162 digits, but only the
> first 2006 correct, while the last 156 digits are wrong.

Just out of curiosity, how can you tell?  Where do you get alternate
output to compare against?

Also, I wonder what we think the contract with the user is in cases
like this.  Surely, if we were dealing with floating point numbers,
nobody would expect a calculation like this to be accurate beyond the
first n digits, where n is surely much less than 2006.  I like the
fact that numeric has a lot more precision than any built-in floating
point type, but does it have to get every digit in front of the
decimal point exactly right no matter how many there are?

rhaas=# select tan(pi()::numeric/2), tan(pi()/2);
       tan       |         tan
-----------------+----------------------
 618986325617924 | 1.63312393531954e+16
(1 row)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: More inaccurate results from numeric pow()

From: Dean Rasheed
Date:
On 2 May 2016 at 19:40, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, May 2, 2016 at 1:02 PM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
>> Doing some more testing of the numeric code patched in [1] I noticed
>> another case where the result is inaccurate -- computing 0.12 ^
>> -2345.6 gives a very large number containing 2162 digits, but only the
>> first 2006 correct, while the last 156 digits are wrong.
>
> Just out of curiosity, how can you tell?  Where do you get alternate
> output to compare against?
>

The easiest way is to use bc, although it's pretty slow for this kind of thing.
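
For this particular case that means computing e(l(0.12) * -2345.6)
with "bc -l" at a suitably large scale, since x^y = e(y * l(x)). As a
faster cross-check, a small GNU MPFR program also works; the sketch
below is only illustrative (the precision and the number of printed
digits are just chosen to comfortably cover the 2162 digits in
question):

    #include <stdio.h>
    #include <mpfr.h>

    int
    main(void)
    {
        mpfr_t  base, exponent, result;

        /* ~2200 decimal digits needs roughly 7400 bits of precision */
        mpfr_inits2(7400, base, exponent, result, (mpfr_ptr) 0);

        mpfr_set_str(base, "0.12", 10, MPFR_RNDN);
        mpfr_set_str(exponent, "-2345.6", 10, MPFR_RNDN);

        /* 0.12 ^ -2345.6, correctly rounded to the working precision */
        mpfr_pow(result, base, exponent, MPFR_RNDN);

        /* print plenty of digits to diff against numeric pow()'s output */
        mpfr_printf("%.2170Re\n", result);

        mpfr_clears(base, exponent, result, (mpfr_ptr) 0);
        return 0;
    }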


> Also, I wonder what we think the contract with the user is in cases
> like this.

My expectation is that the numeric computations should generally
produce results that are correct in all, or nearly all, digits
returned. This is commonly expressed in terms of ULPs --
https://en.wikipedia.org/wiki/Unit_in_the_last_place. An error of a
few ULPs may be OK, but an error of hundreds of ULPs is not.
Actually I think this particular issue is mostly of academic interest,
but we went to some effort to get accurate results in the previous
patch, and this is just closing a loophole to hopefully complete that
work.

> Surely, if we were dealing with floating point numbers,
> nobody would expect a calculation like this to be accurate beyond the
> first n digits, where n is surely much less than 2006.  I like the
> fact that numeric has a lot more precision than any built-in floating
> point type, but does it have to get every digit in front of the
> decimal point exactly right no matter how many there are?
>

I would say it should come close. Otherwise we're just returning a lot
of noise. Note that there is a limit to how many digits we will ever
return, so this is manageable.


> rhaas=# select tan(pi()::numeric/2), tan(pi()/2);
>        tan       |         tan
> -----------------+----------------------
>  618986325617924 | 1.63312393531954e+16
> (1 row)
>

That doesn't prove anything, since we haven't implemented tan(numeric)
(and I don't plan to), so the above is actually

select tan((pi()::numeric/2)::float8), tan(pi()/2);

and it's not surprising that the two results are wildly different (and
both infinitely far from the correct answer, since tan(pi/2) is
undefined). It's using tan(float8) in both cases, just with slightly
different float8 inputs, and there's no reason to expect particularly
accurate results from float8 functions near such a singularity.

Regards,
Dean



Re: More inaccurate results from numeric pow()

From: Dean Rasheed
Date:
On 2 May 2016 at 18:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I don't much care for the hardwired magic number here, especially since
> exp_var() does not have its limit expressed as "6000" but as
> "NUMERIC_MAX_RESULT_SCALE * 3".  I think you should rephrase the limit
> to use that expression, and also add something like this in exp_var():

OK, that makes sense. Done that way.

Regards,
Dean