Re: [WIP] Effective storage of duplicates in B-tree index.
From        | Aleksander Alekseev
Subject     | Re: [WIP] Effective storage of duplicates in B-tree index.
Date        |
Msg-id      | 20160129184733.2ca9026a@fujitsu
In reply to | Re: [WIP] Effective storage of duplicates in B-tree index.  (Anastasia Lubennikova <a.lubennikova@postgrespro.ru>)
Responses   | Re: [WIP] Effective storage of duplicates in B-tree index.
List        | pgsql-hackers

I tested this patch on x64 and ARM servers for a few hours today. The only
problem I could find is that INSERT works considerably slower after applying
the patch. Besides that, everything looks fine - no crashes, tests pass,
memory doesn't seem to leak, etc.

> Okay, now for some badness. I've restored a database containing 2
> tables, one 318MB, another 24kB. The 318MB table contains 5 million
> rows with a sequential id column. I get a problem if I try to delete
> many rows from it:
>
> # delete from contacts where id % 3 != 0 ;
> WARNING:  out of shared memory
> WARNING:  out of shared memory
> WARNING:  out of shared memory

I didn't manage to reproduce this. Thom, could you describe the exact steps
to reproduce this issue, please?
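
For reference, here is a minimal sketch of the scenario as I understand it.
The table name "contacts" and the delete statement come from the quoted
query, and the row count and sequential id column come from the quoted
description; the rest of the schema (serial primary key, a single text
column) is my guess, so the actual setup may well differ:

-- Guessed schema: a table with a sequential id column backed by a
-- b-tree index (here via the primary key); the text column just adds
-- bulk so the table size is in the right ballpark.
CREATE TABLE contacts (
    id   serial PRIMARY KEY,
    name text
);

-- Populate with 5 million rows, as in the reported scenario.
INSERT INTO contacts (name)
SELECT 'contact_' || i
FROM generate_series(1, 5000000) AS i;

-- The statement that reportedly produces the warnings.
DELETE FROM contacts WHERE id % 3 != 0;

With this setup the delete completed cleanly for me, which is why the exact
steps (schema, indexes, non-default settings) would help.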