Re: postgres_fdw has insufficient support for large object

From: Robert Haas
Subject: Re: postgres_fdw has insufficient support for large object
Date:
Msg-id: CA+TgmoZR2Uem1nodu0Y9xTtYfreSrYL-owPvp45RTBGxwun4vw@mail.gmail.com
In reply to: Re: postgres_fdw has insufficient support for large object  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Mon, May 23, 2022 at 2:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The big picture here is that Postgres is a hodgepodge of features
> that were developed at different times and with different quality
> standards, over a period that's now approaching forty years.
> Some of these features interoperate better than others.  Large
> objects, in particular, are largely a mess with a lot of issues
> such as not having a well-defined garbage collection mechanism.

Well, in one sense, the garbage collection mechanism is pretty well-defined:
objects get removed when you explicitly remove them. Given that
PostgreSQL has no idea that the value you store in your OID column has
any relationship with the large object that is identified by that OID,
I don't see how it could work any other way. The problem isn't really
that the behavior is unreasonable or even badly-designed. The real
issue is that it's not what people want.

I used to think that what people wanted was something like TOAST.
After all, large objects can be a lot bigger than toasted values, and
that size limitation might be a problem for some people. But then I
realized that there's a pretty important behavioral difference: when
you fetch a row that contains an OID that happens to identify a large
object, you can look at the rest of the row and then decide whether or
not you want to fetch the large object. If you just use a regular
column, with a data type of text or bytea, and store really big values
in there, you don't have that option: the server sends you all the
data whether you want it or not. Similarly, on the storage side, you
can't send the value to the server a chunk at a time, which means you
have to buffer the whole value in memory on the client side first,
which might be inconvenient.
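
To make that difference concrete, here's a rough client-side sketch using
libpq's large object API. The docs(id int, title text, body_oid oid) table
and the chunk size are just things I made up for illustration; the point is
that you can look at the cheap columns first and only then decide whether to
stream the big value, a chunk at a time:

    /* sketch only -- compile with: cc demo.c -lpq */
    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"     /* INV_READ */

    static void
    show_doc(PGconn *conn, const char *doc_id)
    {
        const char *params[1] = { doc_id };
        PGresult   *res;
        Oid         lobj;
        int         fd;
        char        buf[8192];
        int         nbytes;

        /* Large object descriptors are only valid inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));

        res = PQexecParams(conn,
                           "SELECT title, body_oid FROM docs WHERE id = $1",
                           1, NULL, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            fprintf(stderr, "lookup failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQclear(PQexec(conn, "ROLLBACK"));
            return;
        }

        /* Cheap part of the row arrives first ... */
        printf("title: %s\n", PQgetvalue(res, 0, 0));

        /* ... and only now do we decide whether the big value is worth it. */
        lobj = (Oid) strtoul(PQgetvalue(res, 0, 1), NULL, 10);
        PQclear(res);

        fd = lo_open(conn, lobj, INV_READ);
        if (fd >= 0)
        {
            /* Stream the contents a chunk at a time, never the whole thing. */
            while ((nbytes = lo_read(conn, fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, nbytes, stdout);
            lo_close(conn, fd);
        }

        PQclear(PQexec(conn, "COMMIT"));
    }

With a plain text or bytea column, the SELECT above would have dragged the
entire value across the wire before you got to look at the title at all.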

I don't think that allowing larger toasted values would actually be
that hard. We couldn't do it with varlena, but we could introduce a
new negative typlen that corresponds to some new representation that
permits larger values. That would require sorting out various places
where we randomly limit things to 1GB, but I think that's pretty
doable. However, I'm not sure that would really solve any problem,
because who wants to malloc(1TB) in your application, and then
probably again in libpq, to schlep that value to the server -- and
then do the same thing in reverse when you get the value back? Without
some notion of certain values that are accessed via streaming rather
than monolithically, I can't really imagine getting to a satisfying
place.
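
For contrast, this is roughly what streaming already looks like on the write
side with the existing large object API (again just a sketch, with a made-up
store_file helper). Neither the application nor libpq ever needs to hold more
than one chunk in memory, which is exactly the property a bigger-than-varlena
representation would also need:

    /* sketch only -- streams a file into a new large object */
    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE */

    static Oid
    store_file(PGconn *conn, const char *path)
    {
        FILE   *fp = fopen(path, "rb");
        Oid     lobj = InvalidOid;
        int     fd;
        char    buf[8192];
        size_t  nbytes;

        if (fp == NULL)
            return InvalidOid;

        PQclear(PQexec(conn, "BEGIN"));

        lobj = lo_creat(conn, INV_READ | INV_WRITE);
        fd = lo_open(conn, lobj, INV_WRITE);

        /* Push the data one buffer at a time; no giant malloc anywhere. */
        while ((nbytes = fread(buf, 1, sizeof(buf), fp)) > 0)
        {
            if (lo_write(conn, fd, buf, nbytes) < 0)
            {
                fprintf(stderr, "lo_write failed: %s", PQerrorMessage(conn));
                break;
            }
        }

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
        fclose(fp);
        return lobj;
    }

Sending the same data as a single bytea parameter to PQexecParams would force
the whole value to be assembled in client memory first, which is the problem
described above.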

I realize I've drifted away from the original topic a bit. I just
think it's interesting to think about what a better mechanism might
look like.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


