Michal Mosiewicz:
> As I observed postgres scanned this database at rate of
> 100kBps(sequentially). Much less than the actual I/O throughput on this
> machine. Even when I prepared a condition to return no records it also
> scanned it sequentially, while it would cost only 20msec.
Well, now it looks like there is a bug or two:
- 100 kB/s (sequential) is way too slow. If you have time, try profiling
  (with gprof) this scan. We should be able to do much better than this.
If you can't do it, we might want to put "Improve sequential scan rate"
on the todo list.
- a "select count(*) from x where <some_index_col> <some_qual>"
should use the index.
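
A quick way to check which plan the optimizer picks is EXPLAIN. A sketch,
assuming the table and column from the quoted example below (the index
name log_dt_idx is made up for illustration):

```sql
-- Build an index on the column used in the qualification.
CREATE INDEX log_dt_idx ON log (dt);

-- EXPLAIN prints the chosen plan without running the query.
EXPLAIN SELECT count(*) FROM log
WHERE dt > 19980208000000 AND dt < 19980209000000;
```

If the plan shows an index scan on log_dt_idx, the optimizer is doing the
right thing; a sequential scan here would reproduce the problem described
above.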
> Anyhow... I have to admit that a similar question asked of mysql takes...
> mysql> select count(*) from log where dt < 19980209000000 and
> dt>19980208000000;
> +----------+
> | count(*) |
> +----------+
> |    26707 |
> +----------+
> 1 row in set (7.61 sec)
>
> Of course, if I ask it without the index it takes ~3 minutes. That's why I
> expected that postgres would make some use of the index. (The table is the
> same in both cases).
Just out of curiosity, how long do these queries take in MySQL vs. PostgreSQL?
Thanks
-dg
David Gould dg@illustra.com 510.628.3783 or 510.305.9468
Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612
- Linux. Not because it is free. Because it is better.