On 12/21/14, 12:42 AM, Amit Kapila wrote:
> On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost <sfrost@snowman.net> wrote:
> a. Instead of passing value array, just pass tuple id, but retain the
> buffer pin till master backend reads the tuple based on tupleid.
> This has side effect that we have to retain buffer pin for longer
> period of time, but again that might not have any problem in
> real world usage of parallel query.
>
> b. Instead of passing value array, pass directly the tuple which could
> be directly propagated by master backend to upper layer or otherwise
> in master backend change some code such that it could propagate the
> tuple array received via shared memory queue directly to frontend.
> Basically save the one extra cycle of form/deform tuple.
>
> Both of these need a new message type and corresponding handling in the
> Executor code.
>
> Having said above, I think we can try to optimize this in multiple
> ways, however we need additional mechanism and changes in Executor
> code which is error prone and doesn't seem to be important at this
> stage where we want the basic feature to work.
Would (b) require some means of ensuring we don't try to pass raw tuples to frontends? Other than that potential
wrinkle, it seems like less work than (a).
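To make the "save the form/deform cycle" point concrete, here's a toy sketch (not actual PostgreSQL code; the queue type and function names are invented for illustration): the worker ships the already-formed tuple bytes through a shared queue, and the master forwards those bytes verbatim rather than deforming them into a values array and re-forming a tuple.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define QUEUE_CAP 256

/* Stand-in for a shared-memory message queue slot. */
typedef struct
{
    unsigned char buf[QUEUE_CAP];
    size_t        len;
} ToyQueue;

/* Worker side: copy the formed tuple bytes into the queue as-is. */
static void
worker_send_tuple(ToyQueue *q, const void *tuple, size_t len)
{
    assert(len <= QUEUE_CAP);
    memcpy(q->buf, tuple, len);
    q->len = len;
}

/*
 * Master side, option (b): forward the raw tuple bytes unchanged.
 * No deform into a values array, no re-form on the way out.
 * Returns the number of bytes forwarded.
 */
static size_t
master_forward_tuple(const ToyQueue *q, void *out, size_t cap)
{
    assert(q->len <= cap);
    memcpy(out, q->buf, q->len);
    return q->len;
}
```

In the real thing the forwarded bytes would have to stay opaque to the master until they reach a layer that knows the tuple descriptor, which is exactly why raw tuples must never leak all the way to the frontend protocol.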
...
> I think there are mainly two things which can lead to benefit
> by employing parallel workers
> a. Better use of available I/O bandwidth
> b. Better use of available CPU's by doing expression evaluation
> by multiple workers.
...
> In the above tests, it seems to me that the maximum benefit due to
> 'a' is realized up to 4~8 workers
I'd think a good first estimate here would be to just use effective_io_concurrency.
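The heuristic I have in mind is just a clamp; a minimal sketch (function name invented for illustration, not a proposal for the actual GUC plumbing):

```c
#include <assert.h>

/*
 * Toy heuristic: cap the number of I/O-bound parallel workers at the
 * effective_io_concurrency setting, on the theory that extra workers
 * beyond that can't buy additional I/O bandwidth.
 */
static int
choose_io_workers(int requested, int effective_io_concurrency)
{
    return (requested < effective_io_concurrency)
        ? requested
        : effective_io_concurrency;
}
```

CPU-bound work (case 'b') would of course want a different cap, presumably tied to the number of CPUs rather than to I/O concurrency.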
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com