Re: huge price database question..
| From | David Kerr |
|---|---|
| Subject | Re: huge price database question.. |
| Date | |
| Msg-id | 4F6949E6.8030703@mr-paradox.net |
| In reply to | Re: huge price database question.. (Jim Green <student.northwestern@gmail.com>) |
| List | pgsql-general |
On 03/20/2012 07:26 PM, Jim Green wrote:
> On 20 March 2012 22:21, David Kerr<dmk@mr-paradox.net> wrote:
>> I'm imagining that you're loading the raw file into a temporary table
>> that you're going to use to process / slice new data into your 7000+
>> actual tables per stock.
>
> Thanks! would "slice new data into your 7000+ actual tables per
> stock." be a relatively quick operation?

Well, it solves the problem of having to split up the raw file by stock
symbol. From there you can run multiple jobs in parallel to load the
individual stocks into their individual tables, which is probably faster
than what you've got going now. It would probably be faster to load the
individual stocks directly from the file, but then, as you said, you'd
have to split it up first, so that may take time.
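For illustration, a minimal sketch of the staging-table approach described above; the table and column names (raw_quotes, quotes_ibm, symbol, etc.) are hypothetical and would need to match the actual raw file layout:

```sql
-- Hypothetical staging table matching the raw file's columns.
CREATE TEMP TABLE raw_quotes (
    symbol      text,
    trade_date  date,
    open        numeric,
    high        numeric,
    low         numeric,
    close       numeric,
    volume      bigint
);

-- Bulk-load the whole raw file in one pass.
-- (\copy is a psql meta-command; server-side COPY works too if the
-- file is readable by the postgres server.)
\copy raw_quotes FROM 'quotes.csv' WITH (FORMAT csv)

-- Then slice into the per-stock tables. Each of these INSERTs can be
-- issued from a separate connection, so many symbols load in parallel.
INSERT INTO quotes_ibm (trade_date, open, high, low, close, volume)
SELECT trade_date, open, high, low, close, volume
FROM raw_quotes
WHERE symbol = 'IBM';
```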