Hello hackers,
I think I'm in the right place to ask this question.
Based on your experience, and the fact that you have written the Postgres code,
can you give a rough breakdown - in your opinion - of how much time the
database spends just "fetching and writing" stuff to memory versus doing
actual computation? The reason I ask is that of late there has been a push to
put reconfigurable hardware on processor cores. This means that database
developers could identify the compute-intensive portions of the code, write
hardware accelerators and/or custom instructions, and offload computation to
accelerators they have programmed onto the FPGA.
There is not much utility in doing this if the database doesn't contain
considerable compute-intensive operations (which I would be surprised to find
true). I would suspect joins, complex queries, etc. may be very
compute-intensive. Please correct me if I'm wrong. Moreover, if you were told
you had reconfigurable hardware that could perform fairly complex computations
10x faster than the base processor, would you consider synthesizing it
directly on an FPGA and using it?
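For concreteness, here is a toy sketch (in Python, not Postgres internals) of
the kind of kernel I have in mind: a hash join, where the build and probe
loops are tight, repetitive, and - I'd guess - candidates for offload. All the
relation and column names here are made up for illustration:

```python
# Toy hash join: build a hash table on the smaller relation, then
# probe it with the larger one. The probe loop is the sort of hot,
# regular kernel one might imagine mapping onto an FPGA.
# This is an illustration only, not how Postgres implements joins.

def hash_join(build_rows, probe_rows, build_key, probe_key):
    # Build phase: hash the smaller relation on its join key.
    table = {}
    for row in build_rows:
        table.setdefault(row[build_key], []).append(row)

    # Probe phase: look up each row of the larger relation.
    out = []
    for row in probe_rows:
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})
    return out

# Hypothetical example relations.
dept = [{"dept_id": 1, "dept": "eng"}, {"dept_id": 2, "dept": "ops"}]
emp = [{"emp": "a", "dept_id": 1}, {"emp": "b", "dept_id": 2},
       {"emp": "c", "dept_id": 1}]

result = hash_join(dept, emp, "dept_id", "dept_id")
```

Of course, even in this sketch it's unclear how much of the time goes to
hashing (compute) versus chasing rows through memory - which is exactly the
breakdown I'm asking about.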
I'd be more than glad to hear your guesstimates.
Thanks a lot!
Hamza