Discussion: patch for getXXX methods
this patch addresses the issue of using getXXX recommended and secondary
methods

Please review and comment.

Dave
--
Dave Cramer
519 939 0336
ICQ # 14675561
Attachments
Dave Cramer wrote:
> this patch addresses the issue of using getXXX recommended and secondary
> methods
>
> Please review and comment.

There are lots of whitespace-only changes, which make it harder to see the
real changes.

-O
Attached, with -cb this time

On Fri, 2004-07-09 at 18:34, Oliver Jowett wrote:
> Dave Cramer wrote:
> > this patch addresses the issue of using getXXX recommended and secondary
> > methods
> >
> > Please review and comment.
>
> There are lots of whitespace-only changes which makes it harder to see
> the real changes.
>
> -O

--
Dave Cramer
519 939 0336
ICQ # 14675561
Attachments
Dave Cramer wrote:
> Attached, with -cb this time

Thanks. Comments:

How you handle bytes and shorts is inconsistent with how you handle longs;
we should consistently do it one way or the other. Since you lose precision
going via a double, that probably means the BigInteger approach.

The shared conversion/rangecheck logic should be done once in a helper
function rather than duplicated -- call the helper with appropriate range
info and cast the result.

I still don't like silently discarding any fractional portion of the value.

-O
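A minimal sketch of the shared helper Oliver describes -- names and error handling here are hypothetical, not the driver's actual code, and the rounding step is exactly the "silently discarding the fraction" behaviour he objects to (throwing instead would address that):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RangeCheckSketch {
    // Hypothetical shared helper: parse the string once, range-check against
    // caller-supplied bounds, and let each getXXX cast the result.
    static long rangeCheckedLong(String s, long min, long max) {
        BigDecimal n = new BigDecimal(s).setScale(0, RoundingMode.HALF_UP);
        if (n.compareTo(BigDecimal.valueOf(min)) < 0
                || n.compareTo(BigDecimal.valueOf(max)) > 0) {
            throw new NumberFormatException("value out of range: " + s);
        }
        return n.longValueExact();
    }

    public static void main(String[] args) {
        // getByte would do: (byte) rangeCheckedLong(s, Byte.MIN_VALUE, Byte.MAX_VALUE)
        System.out.println((byte) rangeCheckedLong("42", Byte.MIN_VALUE, Byte.MAX_VALUE));
        // getLong gets the full long range with no precision loss:
        System.out.println(rangeCheckedLong("9223372036854775806",
                Long.MIN_VALUE, Long.MAX_VALUE));
    }
}
```

Because every type goes through the same BigDecimal path, bytes, shorts, ints and longs all behave consistently, and values near Long.MAX_VALUE never pass through a double.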
Oliver,

I don't believe you will lose precision if the number is below MAX_LONG.
When I tested it on my system, I was able to retrieve a double that was
equal to MAX_LONG without losing precision.

I understand your concern about silently discarding the fractional portion,
but I also believe that if the user is using this, then they know what they
are doing. Time will tell.

Dave

On Sun, 2004-07-11 at 11:04, Oliver Jowett wrote:
> Dave Cramer wrote:
> > Attached, with -cb this time
>
> Thanks. Comments:
>
> How you handle bytes and shorts is inconsistent with how you handle
> longs; we should consistently do it one way or the other. Since you lose
> precision going via a double, that probably means the BigInteger approach.
>
> The shared conversion/rangecheck logic should be done once in a helper
> function rather than duplicated -- call the helper with appropriate
> range info and cast the result.
>
> I still don't like silently discarding any fractional portion of the value.
>
> -O

--
Dave Cramer
519 939 0336
ICQ # 14675561
Dave Cramer wrote:
> Oliver,
>
> I don't believe you will lose precision if the number is below MAX_LONG.
> When I tested it on my system, I was able to retrieve a double that
> was equal to MAX_LONG without losing precision.
The attached testcase says otherwise. It produces this output:
> Mismatch: 9223372036854775806 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775805 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775804 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775803 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775802 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775801 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775800 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775799 => 9.223372036854776E18 => 9223372036854775807
[...]
> Mismatch: 9223372036854775296 => 9.223372036854776E18 => 9223372036854775807
> Mismatch: 9223372036854775295 => 9.2233720368547748E18 => 9223372036854774784
> Mismatch: 9223372036854775294 => 9.2233720368547748E18 => 9223372036854774784
and so on.
The problem is that near MAX_LONG you need almost 64 bits of mantissa to
exactly represent the value -- but a double is only a 64-bit value
including space for the exponent. IEEE 754 doubles use 1 sign bit, 11
exponent bits and 52 mantissa bits (53 significant bits counting the
implicit leading 1) -- so adjacent doubles just below MAX_LONG are 1024
apart, and you only get a precision of about +/- 512 when you're dealing
with numbers of a magnitude around MAX_LONG.
-O
public class TestDoublePrecision {
    public static void main(String[] args) {
        for (long l = Long.MAX_VALUE; l != Long.MIN_VALUE; --l) {
            double d = (double) l;
            long check = (long) d;

            if (check != l)
                System.out.println("Mismatch: " + l + " => " + d + " => " + check);
        }
    }
}
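The exact bit split Oliver is recalling can be checked directly with Double.doubleToLongBits; a short sketch (the 1/11/52 layout is the standard IEEE 754 double format):

```java
public class DoubleBits {
    public static void main(String[] args) {
        // IEEE 754 double: 1 sign bit, 11 exponent bits, 52 mantissa bits
        // (53 significant bits counting the implicit leading 1).
        double d = (double) Long.MAX_VALUE;  // rounds up to exactly 2^63
        long bits = Double.doubleToLongBits(d);
        System.out.println("sign     = " + (bits >>> 63));
        System.out.println("exponent = " + (((bits >>> 52) & 0x7ffL) - 1023)); // unbiased
        System.out.println("mantissa = " + (bits & 0xfffffffffffffL));
        // Spacing between adjacent doubles at this magnitude:
        System.out.println("ulp      = " + Math.ulp(d));  // 2048.0 at 2^63
    }
}
```

The ulp of 2048 at 2^63 (1024 for values just under 2^63) is what produces the "+/- 512" precision seen in the testcase output.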
Oliver,
Yes, and this is why I needed to do it for getLong, but I don't think
it's necessary for getInt or getByte, since there it is really just a
test to see whether the value is greater than the maximum allowed value.
Dave
On Sun, 2004-07-11 at 19:37, Oliver Jowett wrote:
> Dave Cramer wrote:
> > Oliver,
> >
> > I don't believe you will lose precision if the number is below MAX_LONG
> > ? When I tested it on my system, I was able to retrieve a double that
> > was equal to MAX_LONG without losing precision.
>
> The attached testcase says otherwise. It produces this output:
>
> > Mismatch: 9223372036854775806 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775805 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775804 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775803 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775802 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775801 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775800 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775799 => 9.223372036854776E18 => 9223372036854775807
> [...]
> > Mismatch: 9223372036854775296 => 9.223372036854776E18 => 9223372036854775807
> > Mismatch: 9223372036854775295 => 9.2233720368547748E18 => 9223372036854774784
> > Mismatch: 9223372036854775294 => 9.2233720368547748E18 => 9223372036854774784
>
> and so on.
>
> The problem is that near MAX_LONG you need almost 64 bits of mantissa to
> exactly represent the value -- but a double is only a 64-bit value
> including space for the exponent. IEEE 754 doubles use 1 sign bit, 11
> exponent bits and 52 mantissa bits (53 significant bits counting the
> implicit leading 1) -- so adjacent doubles just below MAX_LONG are 1024
> apart, and you only get a precision of about +/- 512 when you're dealing
> with numbers of a magnitude around MAX_LONG.
>
> -O
>
>
>
> public class TestDoublePrecision {
>     public static void main(String[] args) {
>         for (long l = Long.MAX_VALUE; l != Long.MIN_VALUE; --l) {
>             double d = (double) l;
>             long check = (long) d;
>
>             if (check != l)
>                 System.out.println("Mismatch: " + l + " => " + d + " => " + check);
>         }
>     }
> }
>
>
--
Dave Cramer
519 939 0336
ICQ # 14675561
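Dave's distinction can be checked quickly: every int survives a long -> double -> long round trip (a double's 53 significant bits cover all 32-bit values), while longs near MAX_LONG do not. A small sketch:

```java
public class RoundTripCheck {
    public static void main(String[] args) {
        // ints always round-trip through double: 53 significand bits > 32.
        long[] ints = { Integer.MAX_VALUE, Integer.MIN_VALUE, 0, -1 };
        for (long v : ints) {
            System.out.println(v + " round-trips: " + ((long) (double) v == v));
        }
        // longs near MAX_LONG do not: the double rounds up to exactly 2^63.
        long big = Long.MAX_VALUE - 1;
        System.out.println(big + " round-trips: " + ((long) (double) big == big));
    }
}
```

So a double-based range check is safe for getByte/getShort/getInt but not for getLong, which is Dave's point; Oliver's objection is only about using two different mechanisms.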
Dave Cramer wrote:
> Oliver,
>
> Yes, and this is why I needed to do it for getLong, but I don't think
> it's necessary for getInt, getByte, as it is really just to test to see
> if it is greater than the max allowed value.

Sure, but my original comment was that I would like to see a consistent
approach for all conversions, not one approach for longs and another for
the other types.

-O
The reason I use the Double.... is that I am assuming it is faster; if
this is not true, then there is no reason to use it over your suggestion.

Dave

On Mon, 2004-07-12 at 10:36, Oliver Jowett wrote:
> Dave Cramer wrote:
> > Oliver,
> >
> > Yes, and this is why I needed to do it for getLong, but I don't think
> > it's necessary for getInt, getByte, as it is really just to test to see
> > if it is greater than the max allowed value.
>
> Sure, but my original comment was that I would like to see a consistent
> approach for all conversions, not one approach for longs and another for
> the other types.
>
> -O

--
Dave Cramer
519 939 0336
ICQ # 14675561
Dave Cramer wrote:
> The reason I use the Double.... is because I am assuming it is faster,
> if this is not true, then there is no reason to use your suggestion.

I'd take code clarity over performance benefit -- it's an uncommon case
and the cost of parsing a BigDecimal is likely to be trivial compared to
the other work the driver does.

-O
Ok, I'll buy that argument.

--dc--

On Mon, 2004-07-12 at 11:06, Oliver Jowett wrote:
> Dave Cramer wrote:
> > The reason I use the Double.... is because I am assuming it is faster,
> > if this is not true, then there is no reason to use your suggestion.
>
> I'd take code clarity over performance benefit -- it's an uncommon case
> and the cost of parsing a BigDecimal is likely to be trivial compared to
> the other work the driver does.
>
> -O

--
Dave Cramer
519 939 0336
ICQ # 14675561