Re: An idea for parallelizing COPY within one backend
| From | Brian Hurt |
|---|---|
| Subject | Re: An idea for parallelizing COPY within one backend |
| Date | |
| Msg-id | 47C59570.2000209@janestcapital.com |
| In reply to | Re: An idea for parallelizing COPY within one backend (Andrew Dunstan <andrew@dunslane.net>) |
| List | pgsql-hackers |
Andrew Dunstan wrote:
> Florian G. Pflug wrote:
>>> Would it be possible to determine, when the copy is starting, that
>>> this case holds, and not use the parallel parsing idea in those cases?
>>
>> In theory, yes. In practice, I don't want to be the one who has to
>> answer to an angry user who just suffered a major drop in COPY
>> performance after adding an ENUM column to his table.
>
> I have yet to be convinced that this is even theoretically a good path
> to follow. Any sufficiently large table could probably be partitioned,
> and then we could use the parallelism that is being discussed for
> pg_restore without any modification to the backend at all. Similar
> tricks could be played by an external bulk loader for third-party data
> sources.

I was just floating this as an idea. I don't know enough about the backend to know whether it was a good one, and it sounds like "not".

Brian
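The client-side approach Andrew describes, splitting the input and loading the pieces over separate connections rather than parallelizing parsing inside one backend, can be sketched roughly as below. This is a hypothetical illustration, not code from the thread: the chunking helper and table name are invented, and the actual COPY step (one connection per chunk, e.g. via `psql \copy` or a driver's COPY support) is indicated only in comments.

```python
# Sketch of client-side parallel loading: partition the rows up front,
# then feed each partition to its own backend connection with COPY.
# Only the partitioning logic is implemented here; the loading step
# (which needs a live PostgreSQL connection) is shown as comments.

def split_rows(rows, n_workers):
    """Partition a list of input rows into n_workers contiguous chunks,
    sized as evenly as possible."""
    rows = list(rows)
    size, extra = divmod(len(rows), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        end = start + size + (1 if i < extra else 0)
        chunks.append(rows[start:end])
        start = end
    return chunks

if __name__ == "__main__":
    # Tab-separated rows in COPY's default text format (example data).
    rows = [f"{i}\tvalue{i}" for i in range(10)]
    chunks = split_rows(rows, 3)
    # Each chunk would then go to its own connection, e.g.:
    #   COPY mytable FROM STDIN;    -- one session per chunk
    print([len(c) for c in chunks])
```

The point of the design choice is that the parallelism lives entirely in the client, so the server-side COPY path (including datatype input functions such as ENUM lookups) needs no changes at all.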