Re: more anti-postgresql FUD

From Thomas Kellerer
Subject Re: more anti-postgresql FUD
Date
Msg-id egovdh$tcb$1@sea.gmane.org
In response to Re: more anti-postgresql FUD  (alexei.vladishev@gmail.com)
Responses Re: more anti-postgresql FUD  ("Dann Corbit" <DCorbit@connx.com>)
List pgsql-general
alexei.vladishev@gmail.com wrote on 11.10.2006 16:54:
> Do a simple test to see my point:
>
> 1. create table test (id int4, aaa int4, primary key (id));
> 2. insert into test values (0,1);
> 3. Execute "update test set aaa=1 where id=0;" in an endless loop

As others have pointed out, committing the data is a vital step when testing
the performance of a relational/transactional database.

What's the point of updating an infinite number of records and never committing
them? Or were you running in autocommit mode?
Of course MySQL will be faster if you don't have transactions. Just as a plain
text file will be faster than MySQL.
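
For illustration, this is roughly what each iteration of that loop costs under
autocommit, assuming default fsync/synchronous_commit settings (just a sketch of
the semantics, not the original poster's exact setup): every statement becomes
its own implicit transaction, so every single UPDATE pays a full commit.

  update test set aaa=1 where id=0;  -- implicit BEGIN ... COMMIT, waits for WAL flush
  update test set aaa=1 where id=0;  -- implicit BEGIN ... COMMIT, waits for WAL flush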

You are claiming that this test does simulate the load that your application
puts on the database server. Does this mean that you never commit data when
running on MySQL?

This test also proves (in my opinion) that any multi-db application, when using
the lowest common denominator, simply won't perform equally well on all
platforms. I'm pretty sure the same test would also show very bad performance
on an Oracle server.
It simply ignores the basic optimizations that one should apply in a
transactional system (like batching updates, committing transactions, etc.).
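
To make the comparison fairer, one could batch the updates into explicit
transactions, e.g. something like this (a minimal sketch, reusing the test
table from the quoted example):

  begin;
  update test set aaa=1 where id=0;
  update test set aaa=2 where id=0;
  update test set aaa=3 where id=0;
  commit;  -- one commit (one WAL flush) for the whole batch instead of one per statement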

Just my 0.02€
Thomas
