Discussion: Creating a correct and realistic benchmark
Hi, I'm developing a search engine using PostgreSQL as the database. I've already done some tuning to try to increase performance. Now I'd like to run a realistic performance test with some number X of queries, to see how the server behaves under load. What is the correct way to do this? Thanks.
> I'm developing a search engine using PostgreSQL as the database. I've
> already done some tuning to try to increase performance.
>
> Now I'd like to run a realistic performance test with some number X of
> queries, to see how the server behaves under load.
>
> What is the correct way to do this?
I guess the only way to know how it will perform with your own
application is to benchmark it with queries coming from your own
application. You can create a test suite with a number of typical queries
and use your favourite scripting language to spawn a number of threads and
hammer the database. I find it interesting to measure the responsiveness
of the server while torturing it, simply by measuring the time it takes to
respond to a simple query and graphing it. Also, you should not have N
threads issue the exact same queries, because then you will hit too
small a dataset. Introduce some randomness into the testing instead, for
instance in the query parameters. Benchmarking from another machine makes
sure the test client's CPU usage is not part of the problem.
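A minimal sketch of such a harness in Python, along the lines described above. The database call is stubbed out here as a `run_query` placeholder that just simulates latency; in a real test you would replace its body with your driver's execute call against your actual typical queries. The search terms, thread counts, and percentile choices are illustrative assumptions, not part of the original advice:

```python
import random
import threading
import time

SEARCH_TERMS = ["apple", "banana", "cherry", "durian", "elderberry"]

def run_query(term):
    """Placeholder for the real database call -- swap in your driver's
    execute() here. We only simulate some response latency."""
    time.sleep(random.uniform(0.001, 0.005))

def worker(n_queries, latencies):
    """Each thread hammers the server with randomized queries and
    records the elapsed time of every call."""
    for _ in range(n_queries):
        term = random.choice(SEARCH_TERMS)  # randomness avoids hitting a tiny hot set
        start = time.perf_counter()
        run_query(term)
        latencies.append(time.perf_counter() - start)

def benchmark(n_threads=4, n_queries=50):
    """Spawn n_threads workers, collect all latencies, and summarize them."""
    latencies = []
    threads = [threading.Thread(target=worker, args=(n_queries, latencies))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    return {
        "total": len(latencies),
        "median_ms": latencies[len(latencies) // 2] * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95)] * 1000,
    }

if __name__ == "__main__":
    print(benchmark())
```

Graphing the median and 95th-percentile latency over the course of the run, as suggested, shows you when the server starts to degrade rather than just an overall average.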
PFC wrote:
> I guess the only way to know how it will perform with your own
> application is to benchmark it with queries coming from your own
> application. [...]

The other advice on top of this: don't just import a small amount of data. If your application is going to end up with 200,000 rows, then test with 200,000 rows or more, so you know exactly how it will behave under "production" conditions.
Thanks for the advice :-D. Marcos