Thread: Date not being parsed as expected

Date not being parsed as expected

From:
Jean-Christian Imbeault
Date:
The following insert:

insert into t values('01-10-29')

gives the following result in the DB:

select * from t
      d
------------
  2029-01-10

Why is the first part (01) being parsed as the month? The DateStyle is
set to ISO (the default), so shouldn't the parser see xx-yy-zz as being
year-month-day?
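
As a sanity check, the active setting can be confirmed from any psql
session with:

    show DateStyle;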

Thanks,

Jc


Re: Date not being parsed as expected

From:
Martijn van Oosterhout
Date:
On Fri, Sep 06, 2002 at 04:24:23PM +0900, Jean-Christian Imbeault wrote:
> The following insert:
>
> insert into t values('01-10-29')
>
> gives the following result in the DB:
>
> select * from t
>       d
> ------------
>   2029-01-10
>
> Why is the first part (01) being parsed as the month? The DateStyle is
> set to ISO (the default), so shouldn't the parser see xx-yy-zz as being
> year-month-day?

Wow. Talk about an ambiguous date! That could be:

1 October 2029       01/10/2029
January 10 2029      10/01/2029
2001-10-29           29/10/2001

I don't think ISO dates are allowed to abbreviate any portion, especially
the year, since that is what makes the date style clear and unambiguous.
Other than the ISO datestyle, is it set to European or US?

HTH,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.

Re: Date not being parsed as expected

From:
Jean-Christian Imbeault
Date:
Martijn van Oosterhout wrote:
>
> Wow. Talk about an ambiguous date!

I know. But I thought that I could force the DB to interpret the date as
Y-M-D, making it unambiguous.

> I don't think ISO dates are allowed to abbreviate any portion, especially
> the year, since that is what makes the date style clear and unambiguous.
> Other than the ISO datestyle, is it set to European or US?

Uh? I thought DateStyle could only be ISO, US, or European? Are you
asking me to set it to ISO AND European?

Jc


Re: Date not being parsed as expected

From:
rolf.ostvik@axxessit.no
Date:
On 2002-09-06 10:19, Martijn van Oosterhout <kleptog@svana.org> wrote:
>On Fri, Sep 06, 2002 at 04:24:23PM +0900, Jean-Christian Imbeault wrote:
>> The following insert:
>> insert into t values('01-10-29')
>> gives the following result in the DB:
>> select * from t
>>       d
>> ------------
>>   2029-01-10
>>
>
>Wow. Talk about an ambiguous date! That could be:
>
>1 October 2029       01/10/2029
>January 10 2029      10/01/2029
>2001-10-29           29/10/2001
>
>I don't think ISO dates are allowed to abbreviate any portion, especially
>the year, since that is what makes the date style clear and unambiguous.

I do not have the specification to look at, but from some links which
discuss it I get the understanding that '01-10-29' (or '011029') should
be interpreted as 29 October 1029 (according to ISO 8601).

Some links:
http://www.merlyn.demon.co.uk/datefmts.htm#8601


Re: Date not being parsed as expected

From:
Martijn van Oosterhout
Date:
On Fri, Sep 06, 2002 at 05:33:58PM +0900, Jean-Christian Imbeault wrote:
> Martijn van Oosterhout wrote:
> >
> > Wow. Talk about an ambiguous date!
>
> I know. But I thought that I could force the DB to interpret the date as
> Y-M-D, making it unambiguous.
>
> > I don't think ISO dates are allowed to abbreviate any portion, especially
> > the year, since that is what makes the date style clear and unambiguous.
> > Other than the ISO datestyle, is it set to European or US?
>
> Uh? I thought DateStyle could only be ISO, US, or European? Are you
> asking me to set it to ISO AND European?

Well, it used to accept it in older versions of PostgreSQL. I think it's
'SET DATE_STYLE ISO,European' or some such. At one stage there was a
"US-ISO" format which was yyyy-dd-mm. Ugh! That's fixed now IIRC.

Try it, who knows.

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.

Re: Date not being parsed as expected

From:
Oliver Elphick
Date:
On Fri, 2002-09-06 at 09:33, Jean-Christian Imbeault wrote:
> Uh? I thought DateStyle could only be ISO, US, or European? Are you
> asking me to set it to ISO AND European?

DateStyle ISO/European means output in ISO and accept input in European
format.

    bray=# set DateStyle= ISO,European;
    SET VARIABLE
    bray=# select '9/10/02'::date;
        date
    ------------
     2002-10-09
    (1 row)

    bray=# set DateStyle= ISO,US;
    SET VARIABLE
    bray=# select '9/10/02'::date;
        date
    ------------
     2002-09-10
    (1 row)

--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
                 ========================================
     "Behold, I stand at the door, and knock; if any man
      hear my voice, and open the door, I will come in to
      him, and will sup with him, and he with me."
                                   Revelation 3:20


Re: Date not being parsed as expected

From:
Jean-Christian Imbeault
Date:
Oliver Elphick wrote:
>
> DateStyle ISO/European means output in ISO and accept input in European
> format.

Ok. I'm a bit confused. Is there a combination of ISO, European or US
that will allow me to force the DB to accept 01-10-29 as meaning
2001-10-29 (YMD)?

Thanks :)

Jc


Re: Date not being parsed as expected

From:
Oliver Elphick
Date:
On Fri, 2002-09-06 at 10:52, Jean-Christian Imbeault wrote:
> Oliver Elphick wrote:
> >
> > DateStyle ISO/European means output in ISO and accept input in European
> > format.
>
> Ok. I'm a bit confused. Is there a combination of ISO, European or US
> that will allow me to force the DB to accept 01-10-29 as meaning
> 2001-10-29 (YMD)?

I don't think so; ISO order is year first, but ISO format is 4-digit
years.  So 01-10-29 must be a non-ISO format, which means the year is
last.
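
For what it's worth, input with a 4-digit leading year is parsed as ISO
year-month-day regardless of DateStyle, so spelling the year out
sidesteps the ambiguity entirely (a sketch in the style of the session
above):

    select '2001-10-29'::date;
        date
    ------------
     2001-10-29
    (1 row)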

--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
                 ========================================
     "Behold, I stand at the door, and knock; if any man
      hear my voice, and open the door, I will come in to
      him, and will sup with him, and he with me."
                                   Revelation 3:20


The Last Optimization

From:
"Areski Belaid"
Date:
I have a huge table with 14 fields and a few million rows...
My PHP/Pg application is starting to become impossible to use.

    Redhat 7.3
    Dual PIII 900Mhz System
    2GB RAM

I have already done some optimization:

    max_connections = 64
    shared_buffers = 32000
    sort_mem = 64336
    fsync = false
                        ---
    echo 128000000 > /proc/sys/kernel/shmmax

    also VACUUM, ANALYZE and indexes

                        ---

    This optimization was enough at the beginning, but NOT now with some
millions of rows.

    So WHAT CAN I DO??? USE ORACLE???

    I think maybe to split my Mean table into different tables Mean_a, Mean_b
... Mean_z???
    If that's the way, can someone tell me where I can find docs or help
about how to split a table???

    I'm lost!!! ;)



Areski

Re: The Last Optimization

From:
"Mario Weilguni"
Date:
>     So WHAT CAN I DO??? USE ORACLE???

Deliver details on the tables and queries used.

>
>     I think maybe to split my Mean table into different tables Mean_a, Mean_b
> ... Mean_z???
>     If that's the way, can someone tell me where I can find docs or help
> about how to split a table???




Re: The Last Optimization

From:
"Gregory Wood"
Date:
> I have already done some optimization:
>
>     max_connections = 64
>     shared_buffers = 32000
>     sort_mem = 64336
>     fsync = false

Ouch... 64MB of sort_mem? It'll only take 32 sort operations to exhaust your
memory... actually much less since your 2GB isn't likely to be used
exclusively for sorting.

I would start by pushing sort_mem back to 8192 (you might be able to get
away with 16384, but I wouldn't go any higher). Anything else, we'll need to
know what you're doing, schema, queries, explain analyze, etc.
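
In postgresql.conf terms that suggestion would look something like this
(8192 is the conservative value above; sort_mem is in KB per sort
operation):

    sort_mem = 8192          # ~8MB per sort, instead of ~64MB
    shared_buffers = 32000   # unchanged from the original posting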

Greg


Re: The Last Optimization

From:
"scott.marlowe"
Date:
On Fri, 6 Sep 2002, Gregory Wood wrote:

> > I have already done some optimization:
> >
> >     max_connections = 64
> >     shared_buffers = 32000
> >     sort_mem = 64336
> >     fsync = false
>
> Ouch... 64MB of sort_mem? It'll only take 32 sort operations to exhaust your
> memory... actually much less since your 2GB isn't likely to be used
> exclusively for sorting.
>
> I would start by pushing sort_mem back to 8192 (you might be able to get
> away with 16384, but I wouldn't go any higher). Anything else, we'll need to
> know what you're doing, schema, queries, explain analyze, etc.

FYI, in testing on my Dual PIII 750 / 512Meg mem box, and long before
memory was exhausted, performance on selects with sorts started going DOWN
after I passed 2048k of sort memory.

I've said it before, and I'll keep saying it, until I get around to
writing a new bit for the performance tuning section of the docs, that if
you're performance tuning, do so incrementally.

Think increasing shared buffers is a good idea?  Then increase it some
small amount (e.g. 4000 to 8000) and re-run your queries to see how much
faster they are.  Keep increasing it until you hit the "knee" and then
go back to the last setting before the knee.  That should be the procedure
for any performance tuning.  Plus only change one thing at a time.
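
For a per-session knob like sort_mem, that procedure might look like the
following (the query is just a hypothetical stand-in for your own):

    SET sort_mem = 4096;
    EXPLAIN ANALYZE SELECT * FROM "Data" ORDER BY "IDOrigin";  -- note runtime
    SET sort_mem = 8192;
    EXPLAIN ANALYZE SELECT * FROM "Data" ORDER BY "IDOrigin";  -- compare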

More than likely the problem here is no index on the tables.  Run an

explain analyze YOURQUERYHERE

and let us know what it says.


The Last Optimization

From:
"Areski Belaid"
Date:
First, thanks for all of your advice. It's really nice to get so much help...

I followed some advice, and after trying EXPLAIN ANALYZE on every query,
I realise that a "SELECT with LIMIT" (e.g. 50, 100) is really fast,
almost immediate...
BUT what creates the problem in my application is the SELECT COUNT.

OK, I did some "select count" on a few hundred thousand rows (sometimes
millions)...
The "select count" has to check all of them, which is not the case with
"LIMIT"! Right?


EXPLAIN ANALYZE SELECT count(*) FROM "Data" WHERE ("IDOrigin" IN ('16',
'20', '21', '18', '13', '17', '15', '19'));


NOTICE:  QUERY PLAN:

Aggregate  (cost=188017.51..188017.51 rows=1 width=0) (actual
time=72071.90..72071.90 rows=1 loops=1)
  ->  Seq Scan on Email  (cost=0.00..185740.10 rows=910965 width=0) (actual
time=15988.85..71825.27 rows=183065 loops=1)
Total runtime: 72072.12 msec


72 seconds for a PHP/pg application is useless.


So which is the way? I need the "select count" to know the global number;
I can't avoid needing this information...
A cache solution would be impossible, my search engine is really complex...
So maybe split the table into different tables, but it's going to take
one week of work if I have to change all the queries...


So, I'm a bit less lost but still without a solution; any help would be nice...

Best regards, Areski

Re: The Last Optimization

From:
"scott.marlowe"
Date:
Have you run an analyze on your database?  It makes a big difference.

If that doesn't work, do a quick test: turn sequential scans off like so:

set enable_seqscan=off;

and rerun your query.  If that fixes the problem (i.e. the query is fast
with seqscan off), then you might need to edit your $PGDATA/postgresql.conf
file and change a couple of things.  I have been setting random_page_cost to
1.5 or so lately, as my machine has pretty fast drives for seek times (4
disk RAID array) but slow for massive reads (10 megs a second on each
drive).  Also, lowering cpu_index_tuple_cost will drive the planner towards
using index scans.
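
For example, to try those cost settings in a session before putting them
in postgresql.conf (the cpu_index_tuple_cost value is only illustrative):

    SET random_page_cost = 1.5;         -- default is 4.0
    SET cpu_index_tuple_cost = 0.0005;  -- default is 0.001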

The problem in general is that the planner accidentally picking an
index scan can make a slow query a little slower, but accidentally picking
a sequential scan can make a sub-second query into a multi-minute
nightmare wait.

On Fri, 6 Sep 2002, Areski Belaid wrote:

> First, thanks for all of your advice. It's really nice to get so much
> help...
>
> I followed some advice, and after trying EXPLAIN ANALYZE on every query,
> I realise that a "SELECT with LIMIT" (e.g. 50, 100) is really fast,
> almost immediate...
> BUT what creates the problem in my application is the SELECT COUNT.
>
> OK, I did some "select count" on a few hundred thousand rows (sometimes
> millions)...
> The "select count" has to check all of them, which is not the case with
> "LIMIT"! Right?
>
>
> EXPLAIN ANALYZE SELECT count(*) FROM "Data" WHERE ("IDOrigin" IN ('16',
> '20', '21', '18', '13', '17', '15', '19'));
>
>
> NOTICE:  QUERY PLAN:
>
> Aggregate  (cost=188017.51..188017.51 rows=1 width=0) (actual
> time=72071.90..72071.90 rows=1 loops=1)
>   ->  Seq Scan on Email  (cost=0.00..185740.10 rows=910965 width=0) (actual
> time=15988.85..71825.27 rows=183065 loops=1)
> Total runtime: 72072.12 msec
>
>
> 72 seconds for a PHP/pg application is useless.
>
>
> So which is the way? I need the "select count" to know the global number;
> I can't avoid needing this information...
> A cache solution would be impossible, my search engine is really complex...
> So maybe split the table into different tables, but it's going to take
> one week of work if I have to change all the queries...
>
>
> So, I'm a bit less lost but still without a solution; any help would be
> nice...
>
> Best regards, Areski
>


Re: The Last Optimization

From:
Bill Gribble
Date:
On Fri, 2002-09-06 at 12:09, Areski Belaid wrote:
> The "select count" has to check all of them, which is not the case with
> "LIMIT"! Right?

count() is slow for large tables, period.

If you know that you have some very large tables that need to be counted
frequently, you can make a small table called
"my_counts(tablename TEXT, rows INTEGER)" and update it with INSERT and
DELETE triggers on the tables.  Then, you can define a function which
will read that table rather than executing a query on a large table.
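
A minimal sketch of that approach, assuming the "Data" table from earlier
in the thread (the function and trigger names are illustrative; plpgsql
must be installed in the database):

    CREATE TABLE my_counts (tablename TEXT PRIMARY KEY, rows INTEGER);
    -- seed the counter once, while the table is quiet
    INSERT INTO my_counts SELECT 'Data', count(*) FROM "Data";

    CREATE FUNCTION my_counts_update() RETURNS opaque AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE my_counts SET rows = rows + 1 WHERE tablename = TG_RELNAME;
        ELSE  -- DELETE
            UPDATE my_counts SET rows = rows - 1 WHERE tablename = TG_RELNAME;
        END IF;
        RETURN NULL;
    END;' LANGUAGE 'plpgsql';

    CREATE TRIGGER my_counts_trig AFTER INSERT OR DELETE ON "Data"
        FOR EACH ROW EXECUTE PROCEDURE my_counts_update();

After that, SELECT rows FROM my_counts WHERE tablename = 'Data' should
return instantly instead of scanning the big table.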

b.g.


Re: Date not being parsed as expected

From:
Mario Weilguni
Date:
On Friday, 6 September 2002 09:24, Jean-Christian Imbeault wrote:
> The following insert:
>
> insert into t values('01-10-29')
>
> gives the following result in the DB:
>
> select * from t
>       d
> ------------
>   2029-01-10
>
> Why is the first part (01) being parsed as the month? The DateStyle is
> set to ISO (the default), so shouldn't the parser see xx-yy-zz as being
> year-month-day?
>

You can easily avoid this by using to_date(), e.g.:
insert into t values (to_date('01-10-29', 'dd-mm-yy'))

(or whatever this date should be). It's a bit more work to write, but you'll
be on the safe side.
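
For the Y-M-D interpretation asked about at the start of the thread
(2001-10-29), the format string would presumably be 'YY-MM-DD':

    insert into t values (to_date('01-10-29', 'YY-MM-DD'));
    -- to_date maps two-digit years below 70 onto 20xx, so this should
    -- store 2001-10-29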

Regards,
    Mario Weilguni