Re: How to increase shared mem for PostgreSQL on FreeBSD

From: Joe Koenig
Subject: Re: How to increase shared mem for PostgreSQL on FreeBSD
Date:
Msg-id: 3C1A5B1E.4F9E5DEB@jwebmedia.com
In response to: Re: How to increase shared mem for PostgreSQL on FreeBSD  (Francisco Reyes <lists@natserv.com>)
Responses: Re: How to increase shared mem for PostgreSQL on FreeBSD  ("Mr. Shannon Aldinger" <god@yinyang.hjsoft.com>)
List: pgsql-general
The situation I'm dealing with is not ideal by any means. All of the
data is on a different server and needs to be moved over to mine
nightly. I suppose I could have a script connect to the
other DB, export all of the data in tab-delimited or CSV format, FTP the file
over to my server, and then use COPY to import all of the data. For some
reason I'm more comfortable with the way it is now than with trying to
transfer such a large file. I believe I would need to transfer at least
100MB of files and then run the COPY. Since the FTP transfer would be
involved, I can't see that saving me any time. I'll look into it, though.
Most of the data being transferred already exists on the current server,
but the DB on the other side provides no way for me to fetch only the new
information. No dates for when a record was created or updated, or anything.
Like I said, the situation is not at all ideal. By the way, the machine
has 3 18GB Ultra3 SCSI drives. A couple of people have mentioned dropping
the indexes during the inserts, which I had already been doing. The
largest table (222,000 rows) averages around 400 bytes
per row. So if my math is right, I'm looking at close to 80K/sec. Maybe
I'm really doing better than I think. Once again, thanks to everyone who
has replied with comments and advice. I really appreciate it.
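[Editor's note: the export-and-COPY idea above could be wired up as a nightly script along the lines below. Every host, database, table, and file name is a made-up placeholder, not something from the thread, and the script only prints its plan by default (DRY_RUN=1) rather than touching any server.]

```shell
#!/bin/sh
# Sketch of the nightly export -> transfer -> COPY pipeline discussed above.
# All names (remote.example.com, sourcedb, localdb, items) are hypothetical.
# DRY_RUN=1 (the default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Export the remote table as tab-separated text (COPY ... TO STDOUT).
run psql -h remote.example.com -d sourcedb -c "COPY items TO STDOUT" -o /tmp/items.tsv

# 2. Move the dump to the local machine (FTP in the original plan; scp shown).
run scp remote.example.com:/tmp/items.tsv /tmp/items.tsv

# 3. Bulk-load locally with COPY, which avoids per-row INSERT overhead.
run psql -d localdb -c "\\copy items FROM '/tmp/items.tsv'"
```

Flipping DRY_RUN to 0 would execute the three steps for real once the placeholder names are replaced.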

Joe

Francisco Reyes wrote:
>
> On Fri, 14 Dec 2001, Joe Koenig wrote:
>
> > concerned with. The whole script takes about 27 minutes to run and
> > inserts somewhere around 700,000 rows.
> ......
> >  When the script is solely doing inserts, it is able
> > to insert around 200 rows per second. The inserts are wrapped in
> > transactions in groups of 5000. It seems that I should be able to insert
> > faster than that with my system (1GHz, 1GB RAM, RAID 5 w/10K 18GB
> > drives). Are there other things I need to be concerned with that will
> > help the speed? my shared_buffers is 15200 and sort_mem is at 8096
> > currently. Also, the script doing the inserts is written in PHP 4.1.0 -
> > could that be slowing me, as well? Thanks,
>
> I think if you are going to be doing such large updates, you would be
> better off using the COPY command. As a reference, I insert about 2.5
> million records in the neighborhood of 40 minutes, or about 1050 inserts
> per second. The hardware is a 500MHz Pentium III with 512MB RAM and 8000
> buffers in postgresql.conf. The drives are 2 IDE 7,200 RPM drives in a
> RAID 1 configuration.
>
> How many drives do you have?
> If you have 4 drives I would recommend you use RAID 1+0, or if your
> controller can't do it, then use two separate RAID 1 configurations.
>
> Also, you could put the logs directory, pg_xlog I believe, on one
> RAID set and the rest of the data on the other one. That may help your
> inserts.
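[Editor's note: the pg_xlog relocation suggested above is usually done by moving the directory and leaving a symlink behind, with the postmaster stopped. The paths below are assumptions, and the function only prints the plan:]

```shell
#!/bin/sh
# Sketch only: relocate pg_xlog onto a second RAID set via a symlink.
# PGDATA and XLOG_DEST are assumed paths; the server must be stopped first.
PGDATA=/usr/local/pgsql/data
XLOG_DEST=/raid1/pg_xlog

plan() {
    printf '%s\n' \
        "pg_ctl -D $PGDATA stop" \
        "mv $PGDATA/pg_xlog $XLOG_DEST" \
        "ln -s $XLOG_DEST $PGDATA/pg_xlog" \
        "pg_ctl -D $PGDATA start"
}

plan    # print the steps rather than executing them
```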
>
> I don't know anything about your data structures, but I think doing part
> of your problem may be that regular inserts update the indexes and I have
> been told copy doesn't. That may be a big part of your overhead. I just do
> a vacuum analyze when I am done copying all the data.
>
> Two quick suggestions:
> -Try dropping your indexes before you start the inserts
> -Try increasing the size of your transactions to groups of 10,000.
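[Editor's note: as a rough illustration of the two suggestions above, a load script can drop the index first, group the INSERTs into transactions of 10,000, and rebuild the index afterward. The table and index names below are invented, and the generator just emits skeleton SQL:]

```shell
#!/bin/sh
# Emit a skeleton load script: drop the index, batch INSERTs into
# transactions of BATCH rows each, then rebuild the index and analyze.
# items/items_idx are hypothetical names used only for illustration.
BATCH=10000
TOTAL=20000   # pretend total row count for the demo

gen_load_sql() {
    echo "DROP INDEX items_idx;"
    i=0
    while [ "$i" -lt "$TOTAL" ]; do
        echo "BEGIN;"
        echo "-- ... $BATCH INSERT statements go here ..."
        echo "COMMIT;"
        i=$((i + BATCH))
    done
    echo "CREATE INDEX items_idx ON items (id);"
    echo "VACUUM ANALYZE items;"
}

gen_load_sql
```

With TOTAL=20000 and BATCH=10000 this prints two BEGIN/COMMIT pairs; piping the output into psql would run the load in two transactions.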
>
> What is the length of the rows being inserted? The rows from the example
> above are about 60 bytes long. Doing a little math: 60 bytes * 1050
> inserts per second = 61K/sec. The problem in my case is definitely
> not bandwidth, but random access. I just got a 15K RPM drive today and will be
> putting it into a 1GHz machine with 1GB of RAM.
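[Editor's note: checking the arithmetic from both messages, bytes per row times rows per second, reported in KB of 1024 bytes:]

```shell
#!/bin/sh
# Throughput check: bytes_per_row * rows_per_sec, shown in KB/s (1 KB = 1024 B).
kb_per_sec() {
    awk -v b="$1" -v r="$2" 'BEGIN { printf "%.1f\n", b * r / 1024 }'
}

kb_per_sec 60 1050   # the 2.5M-row example above: ~61.5 KB/s
kb_per_sec 400 200   # Joe's 400-byte rows at 200/sec: ~78.1 KB/s
```

Both loads move well under 100 KB/s, which supports the point that random access and index maintenance, not raw bandwidth, are the bottleneck.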
