Re: very very slow inserts into very large table
| From | Mark Thornton |
|---|---|
| Subject | Re: very very slow inserts into very large table |
| Date | |
| Msg-id | 5004687B.8080908@optrak.com |
| In reply to | Re: very very slow inserts into very large table (Claudio Freire <klaussfreire@gmail.com>) |
| Responses | Re: very very slow inserts into very large table |
| List | pgsql-performance |
On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <mthornton@optrak.com> wrote:
>> 4. The most efficient way for the database itself to do the updates would be
>> to first insert all the data in the table, and then update each index in
>> turn, having first sorted the inserted keys in the appropriate order for
>> that index.
> Actually, it should create a temporary index btree and merge[0] them.
> Only worth it if there are really a lot of rows.
>
> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf

I think 93 million would qualify as a lot of rows. However, does any available database (commercial or open source) use this optimisation?

Mark
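For context, a minimal sketch of the closest user-level equivalent in stock PostgreSQL: drop the indexes, bulk-load, then rebuild them. CREATE INDEX sorts all the keys and builds each btree in a single pass, which approximates the "insert first, then update each index with sorted keys" strategy described above. The table, index, and file names here are hypothetical, purely for illustration.

```sql
BEGIN;

-- Drop the index so the bulk load doesn't pay per-row btree maintenance.
DROP INDEX IF EXISTS big_table_key_idx;

-- Bulk-load the new rows into the heap only.
COPY big_table FROM '/tmp/new_rows.csv' WITH (FORMAT csv);

-- Rebuilding sorts the keys once and constructs the btree bottom-up,
-- instead of descending the tree once per inserted row.
CREATE INDEX big_table_key_idx ON big_table (key);

COMMIT;
```

With many indexes this is repeated per index after the load; it trades index availability during the rebuild for far less random I/O.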