Re: Millions of tables

From: Stuart Bishop
Subject: Re: Millions of tables
Date:
Msg-id: CADmi=6NOA+YbzBTuFmmuQ7Kw61YkAPpYS14w9McjpPivwz-p7g@mail.gmail.com
In reply to: Re: Millions of tables  (Greg Spiegelberg <gspiegelberg@gmail.com>)
List: pgsql-performance


On 26 September 2016 at 20:51, Greg Spiegelberg <gspiegelberg@gmail.com> wrote:

>> An alternative, if you exhaust or don't trust the other options: use a foreign data wrapper to access your own custom storage. It is a single table at the PG level, but you can shard the data yourself into 8 bazillion separate stores, in whatever structure suits your read and write operations (maybe reusing an embedded db engine, an ordered flat file+log+index, whatever).


> However, even 8 bazillion FDWs may cause an "overflow" of relations, at the loss of an efficient storage engine, which would end up acting more like a traffic cop. In such a case, I would opt to put that logic in the app and access the true storage directly rather than going through FDWs.

I mean one FDW table, which shards internally to 8 bazillion stores on disk. It has the sharding key, can calculate exactly which store(s) need to be hit, and returns the rows, so to PostgreSQL it looks like one big table with 1.3 trillion rows. And if it doesn't do that in 30ms, you get to blame yourself :)
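
For what it's worth, a rough sketch of that shape using the Multicorn FDW
framework (Python). The one-SQLite-file-per-shard layout, the option names
(store_dir, num_stores, shard_key) and the "rows" table inside each store are
made up for illustration; the real storage underneath could be anything.

import os
import sqlite3
import zlib

from multicorn import ForeignDataWrapper


class ShardedStoreFDW(ForeignDataWrapper):
    """One foreign table that routes reads to many small stores on disk."""

    def __init__(self, options, columns):
        super(ShardedStoreFDW, self).__init__(options, columns)
        self.columns = columns
        self.store_dir = options.get("store_dir", "/data/stores")
        self.num_stores = int(options.get("num_stores", "1024"))
        self.shard_key = options.get("shard_key", "device_id")

    def _store_path(self, shard_value):
        # A stable hash of the sharding key picks one of the underlying stores.
        bucket = zlib.crc32(str(shard_value).encode()) % self.num_stores
        return os.path.join(self.store_dir, "store_%06d.db" % bucket)

    def execute(self, quals, columns):
        # If the query constrains the sharding key, only that store is opened;
        # an unconstrained scan would have to visit every store (omitted here).
        shard_values = [q.value for q in quals
                        if q.field_name == self.shard_key and q.operator == "="]
        for value in shard_values:
            conn = sqlite3.connect(self._store_path(value))
            try:
                cols = ", ".join(self.columns)
                sql = "SELECT %s FROM rows WHERE %s = ?" % (cols, self.shard_key)
                for row in conn.execute(sql, (value,)):
                    yield dict(zip(self.columns, row))
            finally:
                conn.close()

You would wire it up with CREATE EXTENSION multicorn, a CREATE SERVER pointing
at the wrapper class, and a single CREATE FOREIGN TABLE. PostgreSQL plans
against that one table; the wrapper decides which store(s) actually get touched.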


--
