RE: Berkeley DB...
From        | Mikheev, Vadim
Subject     | RE: Berkeley DB...
Date        |
Msg-id      | 8F4C99C66D04D4118F580090272A7A23018C01@SECTORBASE1
In reply to | Berkeley DB... ("Mikheev, Vadim" <vmikheev@SECTORBASE.COM>)
Responses   | Re: Berkeley DB...
List        | pgsql-hackers
> Times for the insert loop:
> 14  MySQL-MyISAM
> 23  PostgreSQL (no fsync)
> 53  MySQL-BDB (with fsync -- don't know how to turn it off yet)

PostgreSQL 6.5.3; -B 16384; SUN Ultra10 with some IDE disk.

1. with fsync, all inserts in a single transaction:      73 sec
2. with fsync, using COPY:                                3 sec
3. without fsync, using COPY:                             3 sec
4. without fsync, all inserts in a single transaction:   71 sec
5. without fsync, each insert in its own transaction:    150 sec

Do you see the difference between INSERT and COPY in 1. and 2.?
Shouldn't we try to speed up our PARSER/PLANNER, keeping in mind
that WAL will speed up our storage sub-system?!

Also, 4. and 5. show that transaction begin/commit takes too long.
Could you run your test with all inserts in a single transaction?
(If we want to test the storage sub-system, let's keep our test
unaffected by the other ones...)

> The select:
> 0.75  MySQL-MyISAM
> 0.77  MySQL-BDB
> 2.43  PostgreSQL

"select a from foo order by a" didn't use the index in my case,
so I also ran "select a from foo where a >= 0".

1. ORDER:                  0.74
2. A >= 0, with index:     0.73
3. A >= 0, without index:  0.56

Note that I used -B 16384 (a very big pool) and ran the queries *twice*
to get all the data into the pool. What pool size did you use?
64 (default) * 8192 = 512Kb, but the size of foo is 1.5Mb...

2. and 3. show that the index slows down data retrieval... as it should -:)

Also, does MySQL read the table itself if it can get all the required
columns from the index?! I mean - did your query really read *both*
the index and the *table*? PostgreSQL has to read the table anyway...

Vadim
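For reference, a minimal sketch of the insert modes compared above, written
as PostgreSQL SQL. The table definition foo(a int4) is inferred from the
SELECT tests, and the data file path is a hypothetical example; this is an
illustration of the modes, not the script that produced the numbers.

    -- assumed table; the original posts only mention column "a"
    CREATE TABLE foo (a int4);

    -- 1./4. all inserts in one transaction (one commit at the end)
    BEGIN;
    INSERT INTO foo VALUES (1);
    INSERT INTO foo VALUES (2);
    -- ... remaining rows ...
    COMMIT;

    -- 2./3. bulk load with COPY; '/tmp/foo.data' is a hypothetical path
    COPY foo FROM '/tmp/foo.data';

    -- 5. each insert in its own transaction (implicit commit per statement)
    INSERT INTO foo VALUES (1);
    INSERT INTO foo VALUES (2);

Similarly, whether the planner uses the index for the two SELECTs can be
checked with EXPLAIN; the index name foo_a_key is a hypothetical example:

    CREATE INDEX foo_a_key ON foo (a);
    EXPLAIN SELECT a FROM foo ORDER BY a;    -- did not use the index in the test above
    EXPLAIN SELECT a FROM foo WHERE a >= 0;  -- expected to show an index scan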