Thread: More psql problems... >.<

More psql problems... >.<

From: Madison Kelly
Date:
Hi all,

   I have got to say that my first foray into PostgreSQL is becoming a
very maddening experience... I am sure it is my own fault for not knowing
very much, but it seems that everything I have tried so far to improve
performance has in fact made it a lot worse. Now my program dies after
roughly 300 seconds of processing directories, and updates take literally
10 times longer than inserts (which are themselves very slow).

   I am sorry for whining... I've been trying to figure this out non-stop
for nearly two weeks...

   Anyway, I moved my backup program to another dedicated machine (an
AMD Athlon 1.2GHz (1700+) with 512MB RAM and a Seagate Barracuda 7200.7,
2MB buffer ATA/100 IDE drive). As it stands now I have increased shmmax
to 128MB and in the 'postgresql.conf' I dropped max_connections to 10
and upped shared_buffers to 4096.
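   For reference, the changes were roughly these (this box runs Linux,
so shmmax is set through sysctl; the values are the ones above):

  # /etc/sysctl.conf -- raise the kernel shared memory limit to 128MB
  kernel.shmmax = 134217728

  # postgresql.conf
  max_connections = 10
  shared_buffers = 4096      # 4096 x 8kB buffers = 32MB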

   What is happening now is that the program does an 'ls' (system call)
to get a list of the files and directories starting at the root of a
mounted partition. These are read into an array which perl then
processes one at a time. The 'ls' value is searched for in the database
and if it doesn't exist, the values are inserted. If they do exist, they
are updated (at 1/10th the speed). If the file is in fact a directory,
perl jumps into it and again reads its contents into another array
and processes them one at a time. It will do this until all files and
directories on the partition have been processed.
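   Stripped down, the scan logic looks roughly like this (the connect
string, column names and paths here are made up for the example; the
real code prepares the SELECT/UPDATE/INSERT handles the same way):

  use strict;
  use warnings;
  use DBI;

  # NOTE: connect string and column names are invented for this sketch.
  my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "secret",
                         { AutoCommit => 1, RaiseError => 1 });

  my $sel = $dbh->prepare(
      "SELECT file_size FROM file_dir WHERE file_parent = ? AND file_name = ?");
  my $ins = $dbh->prepare(
      "INSERT INTO file_dir (file_parent, file_name, file_size) VALUES (?, ?, ?)");
  my $upd = $dbh->prepare(
      "UPDATE file_dir SET file_size = ? WHERE file_parent = ? AND file_name = ?");

  scan_dir('/mnt/backup');    # root of the mounted partition

  sub scan_dir {
      my ($dir) = @_;
      opendir(my $dh, $dir) or return;
      my @entries = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
      closedir($dh);

      for my $name (@entries) {
          my $path = "$dir/$name";
          my $size = (-s $path) || 0;

          $sel->execute($dir, $name);
          if (my @row = $sel->fetchrow_array) {   # already in the table -> UPDATE
              $upd->execute($size, $dir, $name);
          } else {                                # new entry -> INSERT
              $ins->execute($dir, $name, $size);
          }
          $sel->finish;

          scan_dir($path) if -d $path && !-l $path;   # recurse, skip symlinks
      }
  }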

   My previous question was performance-based; now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.
If I disable auto-commit then it dies the first time it runs an insert
(this is all done on the same table, 'file_dir'). If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.
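   To be concrete, the two variants look more or less like this
(simplified, building on the handles in the sketch above; the connect
string is made up):

  # Variant 1: connect with auto-commit disabled -- dies on the very
  # first INSERT
  my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "secret",
                         { AutoCommit => 0 });

  # Variant 2: auto-commit still off, but commit before every SELECT --
  # a few dozen selects go through, then it dies anyway
  $dbh->commit;
  $sel->execute($dir, $name);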

   Does this sound at all like a common problem? Thanks for reading my
gripe.

Madison

PS - PostgreSQL 7.4 on Fedora Core 2; indexes on the three columns I
search, and my SELECT, UPDATE and INSERT calls are prepared.

Re: More psql problems... >.<

From: Alvaro Herrera
Date:
On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:

>   What is happening now is that the program does an 'ls' (system call)
> to get a list of the files and directories starting at the root of a
> mounted partition. These are read into an array which perl then
> processes one at a time. The 'ls' value is searched for in the database
> and if it doesn't exist, the values are inserted. If they do exist, they
> are updated (at 1/10th the speed). If the file is in fact a directory,
> perl jumps into it and again reads its contents into another array
> and processes them one at a time. It will do this until all files and
> directories on the partition have been processed.

So you read the entire filesystem again and again?  Sounds like a
horrible idea to me.  Have you tried using the mtimes, etc?
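Something like this, for instance, inside your scan loop: keep the mtime
in the table and skip anything that hasn't changed since the last run
('file_mtime' here is an invented column name):

  # Sketch only: compare the file's mtime against the one stored in the
  # database ($sel would then SELECT file_mtime FROM file_dir ...).
  my $mtime = (stat($path))[9];             # element 9 of stat() = mtime

  $sel->execute($dir, $name);
  my ($old_mtime) = $sel->fetchrow_array;

  next if defined $old_mtime && $old_mtime == $mtime;   # unchanged -> skip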


>   My previous question was performance-based; now I just need to get
> the darn thing working again. Like I said, after ~300 seconds perl dies.

Out of memory?  If you save your whole filesystem in a Perl array you
are going to consume a lot of memory.  This is, of course, not Postgres
related, so I'm not sure why you are asking here.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
Hahaha! I was only being serious!


Re: More psql problems... >.<

From: Madison Kelly
Date:
Alvaro Herrera wrote:
> On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:
>
>
>>  What is happening now is that the program does an 'ls' (system call)
>>to get a list of the files and directories starting at the root of a
>>mounted partition. These are read into an array which perl then
>>processes one at a time. The 'ls' value is searched for in the database
>>and if it doesn't exist, the values are inserted. If they do exist, they
>>are updated (at 1/10th the speed). If the file is in fact a directory,
>>perl jumps into it and again reads its contents into another array
>>and processes them one at a time. It will do this until all files and
>>directories on the partition have been processed.
>
>
> So you read the entire filesystem again and again?  Sounds like a
> horrible idea to me.  Have you tried using the mtimes, etc?

   I haven't heard of 'mtimes' before; I'll google for it now.

>>  My previous question was performance-based; now I just need to get
>>the darn thing working again. Like I said, after ~300 seconds perl dies.
>
>
> Out of memory?  If you save your whole filesystem in a Perl array you
> are going to consume a lot of memory.  This is, of course, not Postgres
> related, so I'm not sure why you are asking here.

   Running just the perl portion which reads and parses the file system
works fine and fast. It isn't until I make the DB calls that everything
breaks. I know that the DB will slow things down, but the amount of
performance loss I am seeing and the flat-out breaking of the program
can't be reasonable.

   Besides, PostgreSQL should be able to handle 250,000 SELECTs
followed by an UPDATE or INSERT for each on an AMD Athlon 1700+ with
512MB RAM, shouldn't it? Also, the program is dying after 5 minutes
when the calls are being committed automatically, so the work being done
shouldn't be filling any memory, should it?

Madison

Re: More psql problems... >.<

From: Martijn van Oosterhout
Date:
On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:
> Hi all,
>
>   I have got to say that my first foray into PostgreSQL is becoming a
> very maddening experience... I am sure it is my own fault for not knowing
> very much, but it seems that everything I have tried so far to improve
> performance has in fact made it a lot worse. Now my program dies after
> roughly 300 seconds of processing directories, and updates take literally
> 10 times longer than inserts (which are themselves very slow).

<snip>
>
>   My previous question was performance-based; now I just need to get
> the darn thing working again. Like I said, after ~300 seconds perl dies.
> If I disable auto-commit then it dies the first time it runs an insert
> (this is all done on the same table, 'file_dir'). If I add a 'commit'
> before each select then a bunch of selects will work (a few dozen) and
> then it dies anyway.

What is "dying"? Do you get an error message? From your message there
is nowhere near enough information to give you any good answers. Have
you run VACUUM [FULL|ANALYZE] recently? Are your indexes being used? If
you really want help, post your Per code, queries, database schema,
exmaple output, anything that might indicate what your actual problem.
We are, unfortunatly, not psychic.
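For example, something along these lines (guessing at your table and
column names) would show both the real error text and whether your
index is actually being used:

  use DBI;

  # PrintError on / RaiseError off so the script reports errors instead
  # of silently dying; table and column names below are guesses.
  my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "secret",
                         { AutoCommit => 1, RaiseError => 0, PrintError => 1 })
      or die DBI->errstr;

  # Refresh the planner statistics.
  $dbh->do("VACUUM ANALYZE file_dir") or warn "VACUUM failed: " . $dbh->errstr;

  # See whether the planner picks an index scan or a sequential scan.
  if (my $plan = $dbh->selectall_arrayref(
          "EXPLAIN ANALYZE SELECT * FROM file_dir " .
          "WHERE file_parent = '/some/dir' AND file_name = 'somefile'")) {
      print "$_->[0]\n" for @$plan;
  }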

We run PostgreSQL with several simultaneous users on several million
rows of data doing a few hundred thousand queries a day and the
database is not the bottleneck.

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.

Attachments

Re: More psql problems... >.<

From: "Scott Marlowe"
Date:
On Tue, 2004-06-22 at 13:52, Madison Kelly wrote:
> Hi all,
>
>    I have got to say that my first foray into PostgreSQL is becoming a
> very maddening experience... I am sure it is my own fault for not knowing
> very much, but it seems that everything I have tried so far to improve
> performance has in fact made it a lot worse. Now my program dies after
> roughly 300 seconds of processing directories, and updates take literally
> 10 times longer than inserts (which are themselves very slow).

/SNIP

>    My previous question was performance-based; now I just need to get
> the darn thing working again. Like I said, after ~300 seconds perl dies.
> If I disable auto-commit then it dies the first time it runs an insert
> (this is all done on the same table, 'file_dir'). If I add a 'commit'
> before each select then a bunch of selects will work (a few dozen) and
> then it dies anyway.

Is it exactly 300 seconds?  Sounds like a timeout somewhere to me.  Does
perl have one of those?  Or maybe your OS / shell combo does, or something
like it?  Just guessing here.


Re: More psql problems... >.<

From: Madison Kelly
Date:
Scott Marlowe wrote:
>>   My previous question was performance-based; now I just need to get
>>the darn thing working again. Like I said, after ~300 seconds perl dies.
>>If I disable auto-commit then it dies the first time it runs an insert
>>(this is all done on the same table, 'file_dir'). If I add a 'commit'
>>before each select then a bunch of selects will work (a few dozen) and
>>then it dies anyway.
>
>
> Is it exactly 300 seconds?  Sounds like a timeout somewhere to me.  Does
> perl have one of those?  Or maybe your OS / shell combo does, or something
> like it?  Just guessing here.
>

Hi Scott,

   Thanks for replying. In my frustration I missed the obvious: the
web browser was timing out (the program has a web front end)... Darn,
that makes me feel foolish! Anyway, I went for a walk to relax and calm
down, and now I can get back to working on performance.

   Madison