Re: [ADMIN] Limits on Tables?
From: James Olin Oden
Subject: Re: [ADMIN] Limits on Tables?
Date:
Msg-id: 81Jul27.174353edt.35713@gateway.lee.k12.nc.us
In reply to: Re: [ADMIN] Limits on Tables? (Gerald Brandt <gbr@hvdc.ca>)
List: pgsql-admin
Gerald Brandt wrote:
> Hi James,
>
> Actually, it all COULD be placed into the base tables, and in fact it was
> before I separated them. I just thought that I would get a speed increase
> by more cleanly separating the data. Then I sat down and thought about it
> some more, and now I'm not sure. Hence the question...
>
> Gerald

I think I understand now. As far as the original performance question you asked, I am not in a position to answer it, but as a software tester (one of the many hats I wear) I would suggest creating a script or program that loops and generates the SQL to create 500 sets of these tables. Then I would write another program to fill the tables with pseudo-random data. Then I would write a third program to run various inserts and selects against this data. Being the devious tester that I am, I would create a shell script, or perhaps a Perl script, to kick off about a hundred of these at once, maybe a thousand if the system can handle it (actually, if it can't, that is useful too, because you then know a limit of your system; be careful, though, if it's a live system...). During all of this I would generate logs to keep track of the time it takes to do all these transactions. At that point I might write yet another script to pull all this data together and see if I could make something out of it; I might even put the data in a PostgreSQL database.

Anyway, that's a lot of work, but it is a sure way to find out just what will happen before you spend a lot of time developing a polished user app that bases its internals on this design. It would probably even be fun, but it would certainly eat up at least an afternoon...

james
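For concreteness, here is a minimal sketch of the first two steps (generating the 500 table sets and filling them with pseudo-random data). It uses Python with the psycopg2 driver rather than the shell or Perl scripts suggested above, and the schema (a dataset_N table with an id and a text payload), the row count, and the connection string are all assumptions for illustration; the thread never shows the actual table definitions.

# Sketch: generate 500 table sets and fill them with pseudo-random data.
# The dataset_N schema, row count, and DSN are hypothetical.
import random
import string

import psycopg2  # assumes the psycopg2 driver is installed

NUM_SETS = 500
ROWS_PER_TABLE = 1000

conn = psycopg2.connect("dbname=testdb")  # hypothetical DSN
cur = conn.cursor()

# Generate the SQL for 500 sets of tables in a loop.
for i in range(NUM_SETS):
    cur.execute(
        f"CREATE TABLE dataset_{i} (id serial PRIMARY KEY, payload text)"
    )

# Fill each table with pseudo-random data.
for i in range(NUM_SETS):
    for _ in range(ROWS_PER_TABLE):
        payload = "".join(random.choices(string.ascii_lowercase, k=32))
        cur.execute(
            f"INSERT INTO dataset_{i} (payload) VALUES (%s)", (payload,)
        )

conn.commit()
cur.close()
conn.close()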
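And a sketch of the concurrent phase, under the same assumptions: each worker opens its own connection (psycopg2 connections should not be shared across threads), runs a random mix of inserts and selects against randomly chosen tables, and appends per-statement timings to a log file. The worker count of 100 mirrors the "about a hundred" above; the operation count and log format are again assumptions.

# Sketch: kick off ~100 concurrent workers doing inserts and selects,
# logging how long each transaction takes.
import random
import string
import threading
import time

import psycopg2

NUM_SETS = 500
WORKERS = 100          # "about a hundred of these"
OPS_PER_WORKER = 50
LOG_LOCK = threading.Lock()

def worker(worker_id, logfile):
    conn = psycopg2.connect("dbname=testdb")  # hypothetical DSN
    cur = conn.cursor()
    for _ in range(OPS_PER_WORKER):
        table = f"dataset_{random.randrange(NUM_SETS)}"
        start = time.monotonic()
        if random.random() < 0.5:
            payload = "".join(random.choices(string.ascii_lowercase, k=32))
            cur.execute(
                f"INSERT INTO {table} (payload) VALUES (%s)", (payload,)
            )
            op = "insert"
        else:
            cur.execute(f"SELECT count(*) FROM {table}")
            cur.fetchone()
            op = "select"
        conn.commit()
        elapsed = time.monotonic() - start
        with LOG_LOCK:
            logfile.write(f"{worker_id}\t{op}\t{table}\t{elapsed:.6f}\n")
    cur.close()
    conn.close()

with open("load_test.log", "w") as logfile:
    threads = [
        threading.Thread(target=worker, args=(i, logfile))
        for i in range(WORKERS)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The log is tab-separated on purpose: it can be pulled back into PostgreSQL with COPY for analysis, in the spirit of the "put the data in a PostgreSQL database" suggestion above.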