Re: concurrent connections is worse than serialization?

From: Richard Huxton
Subject: Re: concurrent connections is worse than serialization?
Msg-id: 200208141822.14573.dev@archonet.com
In response to: Re: concurrent connections is worse than serialization?  (Wei Weng <wweng@kencast.com>)
List: pgsql-sql
On Wednesday 14 Aug 2002 4:29 pm, Wei Weng wrote:
> > [30 connections is much slower than 1 connection 30 times]
>
> Yeah, but the problem is, say I have 20 users using select on the
> database at the same time, and each select takes 10 seconds to finish. I
> really can't queue them up (or the last user will really have to wait for
> a long time), can I?

No, and you shouldn't have to - PostgreSQL can handle hundreds of concurrent
connections if need be.
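
If you do expect a lot of simultaneous clients, the connection ceiling itself is just a
postgresql.conf setting. A minimal sketch - the number is purely illustrative, and each
extra connection costs some memory:

  max_connections = 128    # illustrative value; the shipped default is much lower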

> > If it's not CPU, is the system going into swap or are you seeing a lot of
> > disk activity?
>
> I did hear a lot of disk noise when I ran the test. How do I tell if the
> "system is going into swap"?

You'd expect disk activity to a degree - it's a database, after all. To see how
much and what is happening, try one of vmstat, iostat, or top. You should have
at least one of these on whatever system you are using.
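
For example, on a Linux box (column names vary a little between systems) something like:

  $ vmstat 5

prints a summary line every five seconds. If the si/so (swap-in/swap-out) columns stay
non-zero while your test runs, the machine is swapping; large bi/bo values with zero
si/so just mean ordinary disk I/O.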

> Is there any system settings I can/should change to make this a little
> faster?

There are a number of settings in postgresql.conf - see the online docs (I
think there's a tuning document on techdocs.postgresql.org). Bear in mind
that the default settings for PostgreSQL are very small, so you might want to
look at the sort memory and shared memory settings. Increase the values a bit at
a time and see where you get a good balance. See the list archives for lots of
discussion of this.
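
As a purely illustrative sketch (parameter names as in the 7.2-era postgresql.conf -
check the docs for your version, and treat the numbers as starting points, not
recommendations):

  shared_buffers = 2048    # shared disk buffers, in 8KB pages (about 16MB here)
  sort_mem = 4096          # per-sort working memory, in KB

A change to shared_buffers needs a postmaster restart; re-run your test after each
change and compare.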

> |ID    VARCHAR(40) | --> primary key
> |Name  VARCHAR(100)|
> And I did an INSERT INTO foo VALUES ('some-unique-guid-here', 'Test Name');

Try the script multicli_ins.sh below - it creates 10 Perl clients, each
of which inserts 100 rows in separate transactions. This is about as
inefficient as you can get (a batched variant is sketched after the scripts).

Have a play with different numbers of connections and see if there's a
pattern.

=> select max(ts) - min(ts) from foo;
     ?column?
-----------------
 00:00:06.448863

So - I can insert 1000 rows using 10 clients in under 6.5 seconds. That's on an
untuned system on a development server (400MHz, 256MB, IDE disk).

=== table definition ===

CREATE TABLE "foo" (
    "id" character varying(40),
    "name" character varying(100),
    "ts" timestamp with time zone DEFAULT now()
);
CREATE UNIQUE INDEX foo_key ON foo (id);

=== Perl script: pg_ins_test.pl ===

#!/usr/bin/perl -w
# Pipe 100 INSERTs into a psql session; the client ID is passed on the command line.
my $id = shift;
open CMD, "| psql -q" or die "cannot start psql: $!";
my $key = "${id}AAA";
for (my $i = 0; $i < 100; $i++) {
    print CMD "INSERT INTO foo (id,name) VALUES ('$key','Test name');\n";
    $key++;    # Perl string increment: AAA -> AAB -> ...
}
close CMD;
print "Done $id\n";

=== Shellscript: multicli_ins.sh ===

#!/bin/sh
# Empty the table, then launch 10 client scripts (A..J) in the background.
psql -c 'DELETE FROM foo'
for i in A B C D E F G H I J
do
    ./pg_ins_test.pl $i 2>/dev/null &
done

=== End shellscript ===
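
For contrast, here is a rough sketch of a batched variant (the pg_ins_batch.pl name is
made up, and it wasn't part of the test above): wrapping each client's 100 INSERTs in a
single transaction avoids a separate commit per row, which is usually noticeably faster.

=== Perl script (sketch): pg_ins_batch.pl ===

#!/usr/bin/perl -w
# Same as pg_ins_test.pl, but all 100 rows go through in one transaction.
my $id = shift;
open CMD, "| psql -q" or die "cannot start psql: $!";
my $key = "${id}AAA";
print CMD "BEGIN;\n";
for (my $i = 0; $i < 100; $i++) {
    print CMD "INSERT INTO foo (id,name) VALUES ('$key','Test name');\n";
    $key++;
}
print CMD "COMMIT;\n";
close CMD;
print "Done $id\n";

=== End sketch ===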

- Richard Huxton

