Re: [WIP] Effective storage of duplicates in B-tree index.

From Anastasia Lubennikova
Subject Re: [WIP] Effective storage of duplicates in B-tree index.
Date
Msg-id 56B09751.2020607@postgrespro.ru
In reply to Re: [WIP] Effective storage of duplicates in B-tree index.  (Thom Brown <thom@linux.com>)
Responses Re: [WIP] Effective storage of duplicates in B-tree index.  (Thom Brown <thom@linux.com>)
List pgsql-hackers

29.01.2016 20:43, Thom Brown:
> On 29 January 2016 at 16:50, Anastasia Lubennikova
> <a.lubennikova@postgrespro.ru>  wrote:
>> 29.01.2016 19:01, Thom Brown:
>>> On 29 January 2016 at 15:47, Aleksander Alekseev
>>> <a.alekseev@postgrespro.ru>  wrote:
>>>> I tested this patch on x64 and ARM servers for a few hours today. The
>>>> only problem I could find is that INSERT works considerably slower after
>>>> applying the patch. Besides that, everything looks fine - no crashes, tests
>>>> pass, memory doesn't seem to leak, etc.
>> Thank you for testing. I rechecked that, and insertions are really very very
>> very slow. It seems like a bug.
>>
>>>>> Okay, now for some badness.  I've restored a database containing 2
>>>>> tables, one 318MB, another 24kB.  The 318MB table contains 5 million
>>>>> rows with a sequential id column.  I get a problem if I try to delete
>>>>> many rows from it:
>>>>> # delete from contacts where id % 3 != 0 ;
>>>>> WARNING:  out of shared memory
>>>>> WARNING:  out of shared memory
>>>>> WARNING:  out of shared memory
>>>> I didn't manage to reproduce this. Thom, could you describe the exact steps
>>>> to reproduce this issue, please?
>>> Sure, I used my pg_rep_test tool to create a primary (pg_rep_test
>>> -r0), which creates an instance with a custom config, which is as
>>> follows:
>>>
>>> shared_buffers = 8MB
>>> max_connections = 7
>>> wal_level = 'hot_standby'
>>> cluster_name = 'primary'
>>> max_wal_senders = 3
>>> wal_keep_segments = 6
>>>
>>> Then create a pgbench data set (I didn't originally use pgbench, but
>>> you can get the same results with it):
>>>
>>> createdb -p 5530 pgbench
>>> pgbench -p 5530 -i -s 100 pgbench
>>>
>>> And delete some stuff:
>>>
>>> thom@swift:~/Development/test$ psql -p 5530 pgbench
>>> Timing is on.
>>> psql (9.6devel)
>>> Type "help" for help.
>>>
>>>
>>>    ➤ psql://thom@[local]:5530/pgbench
>>>
>>> # DELETE FROM pgbench_accounts WHERE aid % 3 != 0;
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> ...
>>> WARNING:  out of shared memory
>>> WARNING:  out of shared memory
>>> DELETE 6666667
>>> Time: 22218.804 ms
>>>
>>> There were 358 lines of that warning message.  I don't get these
>>> messages without the patch.
>>>
>>> Thom
>> Thank you for this report.
>> I tried to reproduce it, but I couldn't. Debugging will be much easier now.
>>
>> I hope I'll fix these issues within the next few days.
>>
>> BTW, I found a silly mistake: the previous patch contains some unrelated
>> changes. I fixed it in the new version (attached).
> Thanks.  Well, I've tested this latest patch, and the warnings are no
> longer generated.  However, the index sizes show that the patch
> doesn't seem to be doing its job, so I'm wondering if you removed too
> much from it.
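
For comparison, here is a quick way to look at the index sizes directly (the
port and database name are taken from your steps above; this is only a
sketch):

psql -p 5530 pgbench -c '\di+ pgbench*'

That lists every pgbench index together with its on-disk size, so a patched
build can be compared directly against an unpatched one.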

Huh, this patch seems to be enchanted :) It works fine for me. Did you 
run "make distclean"?
Anyway, I'll send a new version soon.
I'm just writing to say that I haven't disappeared and that I do remember 
about the issue.
I've even almost fixed the insert speed problem, but I'm very, very busy 
this week.
I'll send an updated patch next week as soon as possible.
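
For anyone who wants to reproduce the INSERT slowdown, a duplicate-heavy
insert into an indexed column is the kind of workload to time (the table and
index names below are made up purely for illustration):

psql -p 5530 pgbench <<'EOF'
\timing on
-- many duplicate key values, i.e. the case this patch targets
CREATE TABLE dup_test (val integer);
CREATE INDEX dup_test_val_idx ON dup_test (val);
INSERT INTO dup_test SELECT i % 1000 FROM generate_series(1, 1000000) AS i;
EOF

Comparing the INSERT timing with and without the patch applied should make
any regression visible.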

Thank you for your attention to this work.

-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



