Low level socket locking...
From | M Simms |
---|---|
Subject | Low level socket locking... |
Date | |
Msg-id | 199808031735.SAA20880@argh.demon.co.uk |
Responses | Re: [SQL] Low level socket locking... |
List | pgsql-sql |
Hi,

I am writing a C program to work as an interface between Postgres and a web page. This page will get a LOT of activity. The system is set up like this:

Web page, CGI generated
|
Connection to handler process via Unix socket
|
Handler process, which has a Postgres connection open

It is set up this way because I need to do many similar jobs with the database, but obviously every time the web page is accessed a new copy of the CGI script is created, and if I were to create a new connection to the database each time, this would (a) mean spawning LOTS of copies of the Postgres backend, which is very slow, and (b) lose the advantages of any caching that is done. So instead I send a simple message from the CGI to the handler process, which then interacts with the database and returns the result.

Now, my problem is this. The page is likely, in time, to get enough hits that, since the jobs are handled one at a time by the handler process (by the nature of the Postgres C library functions), a backlog could quite feasibly form. I was planning on simply fork()ing the handler process: the database connection would be open in each child, and I could happily process the requests in parallel. However, it occurred to me that I do not know how the socket traffic between the Postgres library and the Postgres backend is handled. Is there a socket locking mechanism in place to prevent two children writing data down the same socket at the same time, thus causing garbled data to be received at the other end? If I fork() and send data from one of, say, three child processes, am I guaranteed to get the data back at the same child? (This all assumes that I open the database connection and then fork() afterwards.)

Thanks in advance for any info