Re: Using multi-row technique with COPY
From | Simon Riggs
---|---
Subject | Re: Using multi-row technique with COPY
Date |
Msg-id | 1133170776.2906.360.camel@localhost.localdomain
In reply to | Re: Using multi-row technique with COPY (Martijn van Oosterhout <kleptog@svana.org>)
Responses | Re: Using multi-row technique with COPY
List | pgsql-hackers
On Mon, 2005-11-28 at 09:40 +0100, Martijn van Oosterhout wrote:
> On Sun, Nov 27, 2005 at 05:45:31PM -0500, Tom Lane wrote:
> > Simon Riggs <simon@2ndquadrant.com> writes:
> > > COPY FROM can read in sufficient rows until it has a whole block worth
> > > of data, then get a new block and write it all with one pair of
> > > BufferLock calls.
> > > Comments?
>
> Whatever happened to that idea to build as entire datafile with COPY or
> some external tool and simply copy it into place and update the
> catalog?

What's wrong with tuning the server to do this? Zapping the catalog as a
normal operation is the wrong approach if you want a robust system. All
actions on the catalog must be under tight control.

Most other RDBMS support a "fast path" loader, but all of them include
strong hooks into the main server to maintain the catalog correctly. That
is one approach, but it requires creation of an external API - which
seems more work, plus a security risk. Copying data in a block at a time
is the basic technique all use.

I never discuss implementing features that other RDBMS have for any
reason other than that a similar use case exists for both. There are many
features where PostgreSQL is already ahead.

Best Regards, Simon Riggs