Discussion: Timestamp precision in Windows and Linux


Timestamp precision in Windows and Linux

From:
Shruthi A
Date:
Hi,

The page http://www.postgresql.org/docs/7.2/static/datatype-datetime.html mentions that the resolution of all time and timestamp data types is 1 microsecond. I have an application that runs on both a Windows (XP with SP2) machine and a Linux (SUSE 10.2) machine. I saw that with Postgres (EnterpriseDB 8.3) installed on both machines, the default timestamp precision is up to a millisecond on the former and 1 microsecond on the latter.

My curiosity is: is this a universal phenomenon, i.e. a basic limitation of Windows? Or could it come down to hardware or architectural differences, or something else?
And my problem is: is there any way to enforce higher precision on Windows? My application badly needs it.

Please help / guide.

Thanks a million,
Shruthi

Re: Timestamp precision in Windows and Linux

From:
Tom Lane
Date:
Shruthi A <shruthi.iisc@gmail.com> writes:
> The page http://www.postgresql.org/docs/7.2/static/datatype-datetime.html
> mentions
> that the resolution of all time and timestamp data types is 1
> microsecond.   I have an application that runs on both a Windows (XP with
> SP2) machine and a Linux (SUSE 10.2) machine.   I saw that on postgres
> enterprisedb 8.3 installed on both these machines, the default timestamp
> precision on the former is up to a millisecond and on the latter it is 1
> microsecond.

I suppose what you're really asking about is not the precision of the
datatype but the precision of now() readings.  You're out of luck ---
Windows just doesn't expose a call to get the wall clock time to better
than 1 msec.
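[Editor's aside: one way to see this effect for yourself, outside PostgreSQL, is to probe the operating system's wall clock directly and measure the smallest step it ever advances by. The sketch below (not from the thread; a hypothetical helper) spins until the clock ticks and records the smallest nonzero step observed. On an XP-era Windows box this typically comes out in the millisecond range, while Linux usually shows microsecond-level steps.]

```python
import time

def observed_clock_step(probes: int = 20) -> float:
    """Probe the OS wall clock: spin until time.time() changes,
    and return the smallest nonzero step (in seconds) seen over
    several probes. This approximates the clock's effective
    resolution as visible to an application."""
    smallest = float("inf")
    for _ in range(probes):
        start = time.time()
        cur = start
        while cur == start:      # busy-wait until the clock ticks
            cur = time.time()
        smallest = min(smallest, cur - start)
    return smallest

if __name__ == "__main__":
    print(f"observed clock step: {observed_clock_step():.9f} s")
```

Inside PostgreSQL itself, comparing successive `clock_timestamp()` readings shows the same platform-dependent granularity; note that `now()` is fixed for the duration of a transaction, so it is not suitable for this kind of probe.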

Keep in mind that whatever the Linux machine is returning might be
largely fantasy in the low-order bits, too.

            regards, tom lane