Discussion: [GSoC08] Some detail plan of improving hash index
Hi, hackers.

I'm reading the source code of the hash access method and reviewing Neil's old patch for improving hash indexes.
Here is a detailed plan. I'm trying to adapt Neil's patch to the current version of PostgreSQL first. I'm not quite familiar with the code yet, so please comment.

* Phase 1. Store only the hash value instead of the hash keys

First, define a macro to make it optional.

Second, add a new function _hash_form_item to construct an IndexTuple containing the hash code, replacing index_form_tuple used in the hash access method. This seems easy, since we don't need to deal with TOAST.

Third, modify _hash_checkqual. We can compare the hash values first; only if they are equal do we compare the real key values.
Also, add an element hashso_sk_hash to HashScanOpaqueData to hold the scan key's hash value, to support the scan functions.

Finally, it seems the system catalog pg_amop also needs to be modified.
In Neil's patch, he set amopreqcheck to true.
According to the documentation, this means an index hit must be rechecked. But I'm not quite clear on this. Does it just mean we need to recheck the value when using _hash_checkqual?

* Phase 2. Keep hash values sorted within a bucket on insert, and use binary search when scanning
Add a function _hash_binsearch to support binary search within a bucket;
it is involved in all functions where we need to search, insert, and delete.

* Phase 3. Store the real value when necessary.
When we insert a new index tuple, say tp, into a bucket, we can check whether the same hash code is already present.
If there is already an index tuple with that hash code, we store both the hash code and the real key of tp.
Alternatively, we add a flag indicating whether a tuple stores the real value or just the hash code. That seems a little complex.

Phase 1 seems extremely easy. I'm trying to do it first.
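[Editor's note: the hash-first comparison in Phase 1 can be sketched in isolation as follows. This is a simplified standalone model, not the actual PostgreSQL structures or APIs; HashItem, check_qual, and the string keys are illustrative stand-ins.]

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for an index entry that stores only the 32-bit
 * hash code plus a heap pointer; the real key lives in the heap tuple. */
typedef struct HashItem
{
    uint32_t hash;      /* hash of the indexed key */
    long     heap_ptr;  /* stand-in for the heap TID */
} HashItem;

/* Phase 1 idea: compare the cheap 32-bit hash first; only when the
 * hashes match do we fetch and compare the (possibly large) real key. */
static int
check_qual(const HashItem *item, uint32_t scan_hash,
           const char *scan_key, const char *heap_key)
{
    if (item->hash != scan_hash)
        return 0;                           /* cheap rejection */
    return strcmp(scan_key, heap_key) == 0; /* recheck the real key */
}
```

The third assertion below illustrates the collision case that makes the recheck mandatory: equal hashes do not imply equal keys.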
Additionally, I need a benchmark to test the performance. It seems there are some tools listed at http://wiki.postgresql.org/wiki/Performances_QA_testing . Any advice?

-- 
Have a good day ;-)
Best Regards,
Xiao Meng

Data and Knowledge Engineering Research Center
Harbin Institute of Technology, China
Gtalk: mx.cogito@gmail.com
MSN: cnEnder@live.com
http://xiaomeng.yo2.cn
Hi Xiao Meng,

I am glad that you are making some progress. I have added a couple of comments below. Your phased approach is a good way to get it into a position for testing. I had a very basic test for creation time, query time for a simple lookup, and index size that I would like to re-run when you have a prototype working.

Regards,
Ken

On Fri, May 16, 2008 at 10:42:05AM +0800, Xiao Meng wrote:
> Hi, hackers.
>
> I'm reading the source codes of hash and reviewing Neil's old patch of
> improving hash index.
> Here is some detail plan. I'm trying to adjust Neil's patch to the current
> version of PostgreSQL first. I'm not quite familiar with the code yet, so
> please make some comment.
>
> * Phase 1. Just store hash value instead of hash keys
>
> First, define a macro to make it optional.

Good.

> Second, add a new function _hash_form_item to construct IndexTuple with hash
> code to replace index_form_tuple used in hash access method. It seems easy
> since we didn't need to deal with TOAST.
>
> Third, modify _hash_checkqual. We can first compare the hash value; if it's
> the same, we compare the real key value.

I think the changes to the system catalog cause this to happen automatically for an access method with the recheck flag set. You just need to return all of the tuples that satisfy the hash1 == hash2 criterion, and the system will check them against the heap. This will need to be done to support a unique index, but that should wait until we have demonstrated the performance of the new approach.

> Also, HashScanOpaqueData adds an element hashso_sk_hash to hold the scan
> key's hash value to support the scan function.
>
> Finally, it seems the system catalog pg_amop also needs to be modified.
> In Neil's patch, he set the amopreqcheck to be true.
> In the documentation, it means an index hit must be rechecked. But
> I'm not so clear. Does it just mean we need to recheck the value when
> using _hash_checkqual?
This means that the system will perform the recheck for you, so you do not have to access the heap to check for yourself.

> * Phase 2. Sort the hash value when inserting into the bucket and use
> binary search when scanning
> Add a function _hash_binsearch to support binary search in a bucket;
> it is involved in all functions where we need to search, insert, and delete.

I would wait on this piece, or at least make it a separate option, so we can test whether or not the overhead is a worthwhile trade-off performance-wise. If we can make a smaller bucket size work, then for a bucket size of a cache line or two, just reading the entire bucket and rewriting it should be faster. It may be of value for buckets with many items with the same hash value.

> * Phase 3. When it's necessary, store the real value.
> When we insert a new index tuple, say tp, into a bucket, we can check
> whether there's the same hash code.
> If there's already an index tuple with such a hash code, we store both the
> hash code and real key of tp.

I would always store the hash code and not the value. One of the big wins is the reduction in index size, which improves the ability to index very large items and tables efficiently. The btree index already handles the case of storing the actual value in the index. Since a hash code is a non-unique mapping, you will always need to check the value in the heap. So let the system do that, and then the index does not need to carry that overhead.

> Alternatively, we add a flag to represent whether the tuple stores a real
> value or just the hash code. It seems a little complex.

See above.

> Phase 1 seems extremely easy. I'm trying to do it first.
> Additionally, I need a benchmark to test the performance. It seems there
> are some tools listed at http://wiki.postgresql.org/wiki/Performances_QA_testing .
> Any advice?
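[Editor's note: the binary-search primitive Phase 2 describes, and whose overhead Ken questions above, can be sketched standalone like this. hash_binsearch below is a simplified model over a plain array of sorted hash codes, not the actual _hash_binsearch signature, which would operate on a bucket page.]

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of a bucket holding hash codes in ascending order.
 * Returns the offset of the first entry >= key_hash: the place to start
 * scanning for matches, and also the place to insert a new entry while
 * keeping the bucket sorted. */
static size_t
hash_binsearch(const uint32_t *hashes, size_t nitems, uint32_t key_hash)
{
    size_t lo = 0;
    size_t hi = nitems;

    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;

        if (hashes[mid] < key_hash)
            lo = mid + 1;   /* match, if any, is strictly to the right */
        else
            hi = mid;       /* mid could be the first match */
    }
    return lo;
}
```

Because duplicate hash codes are expected (collisions), the search must return the first of a run of equal values, as the second assertion below checks.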
Xiao,

> Phase 1 seems extremely easy. I'm trying to do it first.
> Additionally, I need a benchmark to test the performance. It seems
> there's some tools list in
> http://wiki.postgresql.org/wiki/Performances_QA_testing . Any advice?

For a simple test, pgbench is actually going to be pretty good for hash index since it's mostly primary key access.

You also might want to write your own unit tests using pgunittest, because you want to test the following:

- bulk load, both COPY and INSERT
- single-row updates, inserts and deletes
- batch update by key
- batch update by other index
- batch delete by key
- batch delete by other index
- concurrent index updates (64 connections insert/deleting concurrently)

You can compare all of the above against b-tree and unindexed columns.

For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
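[Editor's note: a minimal sketch of the simple pgbench comparison suggested above, assuming a local cluster with a scratch database; the index name is illustrative, and results should be compared against an identical run with the default btree index.]

```shell
# Initialize a scratch database at scale factor 10
createdb hashtest
pgbench -i -s 10 hashtest

# Add a hash index on the column the built-in select-only
# workload looks up by (pgbench_accounts.aid)
psql hashtest -c "CREATE INDEX accounts_aid_hash ON pgbench_accounts USING hash (aid);"

# Select-only run: 8 clients, 60 seconds; note the reported tps,
# then repeat without the hash index for the btree baseline
pgbench -S -c 8 -T 60 hashtest
```

Index build time (\timing around the CREATE INDEX) and on-disk size (pg_relation_size) are worth recording alongside the tps numbers, since those were part of Ken's earlier basic test.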
On Fri, 16 May 2008, Josh Berkus wrote:

> For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)

This reminds me... Jignesh had some interesting EAStress results at the East conference that I was curious to try to replicate more publicly one day. Now that some initial benchmarking servers are starting to become available, it strikes me that this would make a good test case to run on some of those periodically. I don't have a spare $2K for a commercial license right now, but there's a cheap ($250) non-profit license for EAStress around. That might be a useful purchase for one of the PG non-profits to make one day, though.

-- 
* Greg Smith  gsmith@gregsmith.com  http://www.gregsmith.com  Baltimore, MD
Greg Smith wrote:

> On Fri, 16 May 2008, Josh Berkus wrote:
>
>> For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)
>
> This reminds me...Jignesh had some interesting EAStress results at the
> East conference I was curious to try and replicate more publicly one day.
> Now that there are some initial benchmarking servers starting to become
> available, it strikes me that this would make a good test case to run on
> some of those periodically. I don't have a spare $2K for a commercial
> license right now, but there's a cheap ($250) non-profit license for
> EAStress around. That might be a useful purchase for one of the PG
> non-profits to make one day though.

I (an individual, not me ex cathedra) could pony up the geld for such a license if it is useful; let me know if so, and where to do so.

Greg Williamson
Senior DBA
DigitalGlobe
I have the add-on kit ready for anybody to try once the spec kit is available. The add-on kit basically has the schema/tunings for PostgreSQL, plus instructions for deploying it quickly with the Glassfish app server. The app server comes pre-tuned, so we don't spend much time tuning it and can focus on the db server.

-Jignesh

Greg Smith wrote:

> On Fri, 16 May 2008, Josh Berkus wrote:
>
>> For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)
>
> This reminds me...Jignesh had some interesting EAStress results at the
> East conference I was curious to try and replicate more publicly one
> day. Now that there are some initial benchmarking servers starting to
> become available, it strikes me that this would make a good test case
> to run on some of those periodically. I don't have a spare $2K for a
> commercial license right now, but there's a cheap ($250) non-profit
> license for EAStress around. That might be a useful purchase for one
> of the PG non-profits to make one day though.
>
> -- 
> * Greg Smith  gsmith@gregsmith.com  http://www.gregsmith.com  Baltimore, MD