Re: [HACKERS] Write Ahead Logging for Hash Indexes
| From | Amit Kapila |
|---|---|
| Subject | Re: [HACKERS] Write Ahead Logging for Hash Indexes |
| Date | |
| Msg-id | CAA4eK1LMkoT+5qDdWGV4QVdVpEp=oBc39D7STsJzpyitCLUDtA@mail.gmail.com |
| In reply to | Re: [HACKERS] Write Ahead Logging for Hash Indexes (Stephen Frost <sfrost@snowman.net>) |
| Responses | Re: [HACKERS] Write Ahead Logging for Hash Indexes |
| List | pgsql-hackers |
On Wed, Mar 15, 2017 at 12:53 AM, Stephen Frost <sfrost@snowman.net> wrote:
> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> Stephen Frost <sfrost@snowman.net> writes:
>> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> >> It's true that as soon as we need another overflow page, that's going to
>> >> get dropped beyond the 2^{N+1}-1 point, and the *apparent* size of the
>> >> index will grow quite a lot. But any modern filesystem should handle
>> >> that without much difficulty by treating the index as a sparse file.
>>
>> > Uh, last I heard we didn't allow or want sparse files in the backend
>> > because then we have to handle a possible out-of-disk-space failure on
>> > every write.
>>
>> For a hash index, this would happen during a bucket split, which would
>> need to be resilient against out-of-disk-space anyway.
>
> We wouldn't attempt to use the area of the file which is not yet
> allocated except when doing a bucket split?
>
That's right.
> If that's the case then
> this does seem to at least be less of an issue, though I hope we put in
> appropriate comments about it.
>
I think we have sufficient comments in the code, especially on top of
the function _hash_alloc_buckets().
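
For anyone following along, here is a small standalone sketch (not
backend code; the file name and block count are invented for the demo)
of the allocation trick those comments describe: the new splitpoint
range is claimed by writing only its last page, so on filesystems that
support holes the skipped blocks consume no disk space until a later
bucket split actually writes into them.

```c
/*
 * Standalone illustration of sparse pre-allocation, analogous to how a
 * hash index claims a new splitpoint by writing only its last page.
 * The file name "hash_index_demo" and the block count are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define BLCKSZ 8192

int
main(void)
{
	char		zerobuf[BLCKSZ];
	struct stat st;
	int			fd;
	long		nblocks = 1024;	/* pretend the splitpoint adds 1024 pages */

	memset(zerobuf, 0, sizeof(zerobuf));

	fd = open("hash_index_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0)
	{
		perror("open");
		return 1;
	}

	/* Write only the last block of the new range; earlier blocks stay holes. */
	if (pwrite(fd, zerobuf, BLCKSZ, (off_t) (nblocks - 1) * BLCKSZ) != BLCKSZ)
	{
		perror("pwrite");
		return 1;
	}

	if (fstat(fd, &st) != 0)
	{
		perror("fstat");
		return 1;
	}

	/* Apparent size covers all 1024 blocks; allocated size is ~one block. */
	printf("apparent size: %lld bytes, allocated: %lld bytes\n",
		   (long long) st.st_size, (long long) st.st_blocks * 512);

	close(fd);
	unlink("hash_index_demo");
	return 0;
}
```

On a filesystem with hole support (ext4, xfs, etc.) this typically
reports an apparent size of 8 MB but only about one block actually
allocated, which is the point Tom was making about the index's
apparent size growing without much real cost.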
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com