Re: Should we optimize the `ORDER BY random() LIMIT x` case?
From | Vik Fearing |
---|---|
Subject | Re: Should we optimize the `ORDER BY random() LIMIT x` case? |
Date | |
Msg-id | c724e28b-3888-4e8a-8187-a5802d226f2d@postgresfriends.org |
In reply to | Re: Should we optimize the `ORDER BY random() LIMIT x` case? (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Should we optimize the `ORDER BY random() LIMIT x` case? |
| Re: Should we optimize the `ORDER BY random() LIMIT x` case? |
List | pgsql-hackers |
On 16/05/2025 15:01, Tom Lane wrote:
> Aleksander Alekseev <aleksander@timescale.com> writes:
>> If I'm right about the limitations of aggregate functions and SRFs
>> this leaves us the following options:
>> 1. Changing the constraints of aggregate functions or SRFs. However I
>> don't think we want to do it for such a single niche scenario.
>> 2. Custom syntax and a custom node.
>> 3. To give up
> Seems to me the obvious answer is to extend TABLESAMPLE (or at least,
> some of the tablesample methods) to allow it to work on a subquery.

Isn't this a job for <fetch first clause>?  Example:

    SELECT ...
    FROM ...
    JOIN ...
    FETCH SAMPLE FIRST 10 ROWS ONLY

Then the nodeLimit could do some sort of reservoir sampling.

There are several enhancements to <fetch first clause> coming down the
pipe, this could be one of them.
--
Vik Fearing
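[Editor's note: the reservoir sampling the message alludes to is classically done with Vitter's Algorithm R. A minimal Python sketch of the idea, independent of PostgreSQL internals (the function name and interface here are illustrative, not part of any proposal): one pass over a stream of unknown length, keeping a uniform random sample of at most k rows in O(k) memory, which is exactly what a Limit-style node could do for `FETCH SAMPLE FIRST k ROWS ONLY`.]

```python
import random

def reservoir_sample(rows, k):
    """Algorithm R: uniform random sample of up to k items from an
    iterable of unknown length, in a single pass with O(k) memory."""
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            # Fill the reservoir with the first k rows.
            sample.append(row)
        else:
            # Replace a reservoir slot with probability k/(i+1),
            # which keeps every row seen so far equally likely.
            j = random.randint(0, i)  # uniform in [0, i], inclusive
            if j < k:
                sample[j] = row
    return sample
```

[If the stream yields fewer than k rows, the whole input is returned, matching the behavior one would expect from a sampling FETCH FIRST clause.]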