Discussion: Storing large documents - one table or partition by doc?
From: Dev Nop Sent: Friday, September 23, 2016 3:12 AM
I’m storing thousands of independent documents, each containing around 20k rows. The larger the document, the more likely it is to be active with inserts and updates (1000s/day). The most common read query is to get all the rows for a single document (100s/day). It will be supporting real-time collaboration, but with strong consistency for a simple schema, so it is not well-suited to dedicated "document databases" that assume schema-less data & eventual consistency. I won’t have great hardware/budget, so I need to squeeze the most out of the least.
My question is whether to put all documents into a single huge table or partition by document?
The documents are independent, so it’s purely a performance question. It’s too many tables for PostgreSQL’s partitioning support, but I don’t get any benefit from a master table and constraints. Handling partitioning in application logic is effectively zero cost.
I know that 1000s of tables is regarded as an anti-pattern, but I can only see performance and maintenance benefits to one table per independent document, e.g. fast per-table vacuum, incremental schema updates, and easy future sharding. A monster table will require additional key columns and indexes that don’t have any value beyond allowing the documents to sit in the same table.
The only downside seems to be the system-level per-table overhead, but I only see that as a problem if I have a very long tail of tiny documents. I’d rather solve that problem if it occurs than manage an all-eggs-in-one-basket monster table.
Is there anything significant I am missing in my reasoning? Is it mostly a “relational purist” perspective that argues against multiple tables? Should I be looking at alternative tech for this problem?
The one factor I haven't fully resolved is how much a caching layer in front of the database changes things.
Thanks for your help.
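[For concreteness, a rough sketch of the per-document layout the post describes, where the application derives the table name itself. All names here are invented for illustration and are not taken from the post.]

-- One table per document, e.g. document 42; the application builds the table
-- name, so no document-id column or composite index is needed.
CREATE TABLE doc_42_rows (
    row_number int  PRIMARY KEY,
    content    text NOT NULL
);

-- The common read is then a scan of one small table:
-- SELECT row_number, content FROM doc_42_rows ORDER BY row_number;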
---------------------------------
This is, to me, a very standard, almost classic, relational pattern, and one that a relational engine handles extremely well, especially the consistency and locking needed to support lots of updates. Inserts are irrelevant unless the parent record must be locked to do so…that would be a bad design.
Imagine a normal parent-child table pair, 1:M, with the 20k rows per parent document in the child table. Unless there’s something very bizarre about the access patterns against that child table, those 20k rows per document would not normally all be in play for every user on every access throughout that access (it’s too much data to show on a web page, for instance). Even so, at “100s” of large queries per day, it’s a trivial load unless each child row contains a large json blob…which doesn’t jibe with your table description.
So with proper indexing, I can’t see where there will be a performance issue. Worst case, you create a few partitions based on some category, but the row counts you’re describing don’t yet warrant it. I’m running a few hundred million rows in a new “child” table on a dev server (4 cores/16GB RAM) with large json documents in each row and it’s still web-page performant on normal queries, using a paging model (say 20 full rows per web page request). The critical pieces, hardware-wise, are memory (buy as much as you can afford) and using SSDs (required, IMO). It’s much harder to create measurable loads on the CPUs. Amazon has memory-optimized EC2 instances that support that pattern (with SSD storage).
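[A minimal sketch of the parent-child layout described above; names and columns are illustrative only. The composite primary key is what keeps the "all rows for one document" query cheap.]

CREATE TABLE document (
    document_id bigserial PRIMARY KEY,
    title       text NOT NULL
);

CREATE TABLE document_row (
    document_id bigint NOT NULL REFERENCES document,
    row_number  int    NOT NULL,
    content     text   NOT NULL,
    PRIMARY KEY (document_id, row_number)  -- this index also serves the whole-document fetch
);

-- The most common read is a single index range scan:
-- SELECT row_number, content FROM document_row WHERE document_id = $1 ORDER BY row_number;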
Are there other issues/requirements that are creating other performance concerns that aren’t obvious in your initial post?
Mike Sofen (Synthetic Genomics)
---------------------------------
On 9/23/16 7:14 AM, Mike Sofen wrote:
> So with proper indexing, I can’t see where there will be a performance issue.

Table bloat could become problematic. If there is a pattern where you can predict which documents are likely to be active (say, documents that have been modified in the last 10 days), then you can keep all of those in a set of tables that is fairly small, and keep the remaining documents in a set of "archive" tables. That will help reduce bloat in the large archive tables.

Before putting in that extra work though, I'd just try the simple solution and see how well it works.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461
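[A rough sketch of that active/archive split, building on the hypothetical document/document_row tables sketched earlier; the last_modified column and the 10-day cut-off are also assumptions, not from the thread.]

-- Same row layout, different bloat profiles: the active table stays small and
-- vacuums quickly, while the archive table sees few updates.
CREATE TABLE document_row_active  (LIKE document_row INCLUDING ALL);
CREATE TABLE document_row_archive (LIKE document_row INCLUDING ALL);

-- Periodically move the rows of documents untouched for ~10 days:
WITH moved AS (
    DELETE FROM document_row_active a
    WHERE a.document_id IN (
        SELECT document_id FROM document
        WHERE last_modified < now() - interval '10 days')
    RETURNING a.*
)
INSERT INTO document_row_archive SELECT * FROM moved;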
---------------------------------
On 9/24/16 6:33 AM, Dev Nop wrote:
> This means that the applications are sensitive to the size of ids. A previous incarnation used GUIDs which was a brutal overhead for large documents.

If GUIDs *stored in a binary format* were too large, then you won't be terribly happy with the 24 byte per-row overhead in Postgres.

What I would look into at this point is using int ranges and arrays to greatly reduce your overhead:

CREATE TABLE ...(
document_version_id int NOT NULL REFERENCES document_version
, document_line_range int4range NOT NULL
, document_lines text[] NOT NULL
, EXCLUDE USING gist( document_version_id WITH =, document_line_range WITH && )
);

That allows you to store the lines of a document as an array of values, ie:

INSERT INTO ... VALUES( 1 , '[11,15]' , '[11:15]={line11,line12,line13,line14,line15}' );

Note that I'm using explicit array bounds syntax to make the array bounds match the line numbers. I'm not sure that's a great idea, but it is possible.

> My nightmares are of a future filled with hours of down-time caused by struggling to restore a gargantuan table from a backup due to a problem with just one tiny document, or schema changes that require disconnecting all clients for hours, when instead I could ignore best practice, create 10k tables, process them iteratively and live in a utopia where I never have 100% downtime, only per-document unavailability.

At some size you'd certainly want partitioning. The good news is that you can mostly hide partitioning from the application and other database logic, so there's not a lot of incentive to set it up immediately. You can always do that after the fact.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
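[Two notes on that sketch: the = operator on an integer column inside a gist exclusion constraint needs the btree_gist extension (CREATE EXTENSION btree_gist;), and a read path could look like the hypothetical query below, which reassembles one version's lines in order. The table name document_content and the version id are invented for illustration and do not appear in the original message.]

-- Because the stored arrays use explicit bounds that match the line numbers,
-- the array subscripts themselves are the line numbers.
SELECT s AS line_number,
       c.document_lines[s] AS line
FROM document_content c,
     generate_subscripts(c.document_lines, 1) AS s
WHERE c.document_version_id = 1
ORDER BY s;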
---------------------------------
> If GUIDs *stored in a binary format* were too large, then you won't be terribly happy with the 24 byte per-row overhead in Postgres.

Heh. In this case the ids have a life outside the database in various text formats.

> What I would look into at this point is using int ranges and arrays to greatly reduce your overhead:
> CREATE TABLE ...(
> document_version_id int NOT NULL REFERENCES document_version
> , document_line_range int4range NOT NULL
> , document_lines text[] NOT NULL
> , EXCLUDE USING gist( document_version_id WITH =, document_line_range WITH && )
> );
Thanks! Some new things for me to learn about there. Had to read "Range Types: Your Life Will Never Be The Same" - lol. https://wiki.postgresql.org/images/7/73/Range-types-pgopen-2012.pdf
To check I understand what you are proposing: the current version and history is stored in the same table. Each line is referred to by a sequential line number and then lines are stored in sequential chunks with range + array. The gist index is preventing any insert with the same version & line range. This sounds very compact for a static doc but doesn't it mean lines must be renumbered on inserts/moves?
---------------------------------
Please CC the mailing list so others can chime in or learn...

On 9/26/16 3:26 AM, Dev Nop wrote:
> What I would look into at this point is using int ranges and arrays to greatly reduce your overhead:
>
> CREATE TABLE ...(
> document_version_id int NOT NULL REFERENCES document_version
> , document_line_range int4range NOT NULL
> , document_lines text[] NOT NULL
> , EXCLUDE USING gist( document_version_id WITH =, document_line_range WITH && )
> );
>
> Thanks! Some new things for me to learn about there. Had to read "Range Types: Your Life Will Never Be The Same" - lol.
> https://wiki.postgresql.org/images/7/73/Range-types-pgopen-2012.pdf
>
> To check I understand what you are proposing: the current version and history is stored in the same table. Each line is referred to by a sequential line number and then lines are stored in sequential chunks with range + array. The gist index is preventing any insert with the same version & line range. This sounds very compact for a static doc but

You've got it correct.

> doesn't it mean lines must be renumbered on inserts/moves?

Yes, but based on your prior descriptions I was assuming that was what you wanted... weren't you basically suggesting storing one line per row?

There are certainly other options if you want full tracking of every change... for example, you could store every change as some form of a diff, and only store the full document every X number of changes.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
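[A very rough sketch of that diff-plus-periodic-snapshot idea, purely illustrative: the table, the column names and the jsonb diff format are invented, and the actual patching would live in the application.]

-- Every change is one row; a full snapshot is stored every N changes, so
-- rebuilding a version means: latest snapshot at or before it, plus the diffs after it.
CREATE TABLE document_change (
    document_id bigint NOT NULL,
    change_seq  bigint NOT NULL,              -- increases per document
    diff        jsonb,                        -- application-defined patch; NULL on snapshot rows
    snapshot    text[],                       -- full document every N changes, else NULL
    created_at  timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (document_id, change_seq),
    CHECK (diff IS NOT NULL OR snapshot IS NOT NULL)
);

-- Rebuild version $2 of document $1: fetch the newest snapshot at or before it,
-- then apply the following diffs in order (application side):
-- SELECT * FROM document_change
-- WHERE document_id = $1 AND change_seq <= $2
--   AND change_seq >= (SELECT max(change_seq) FROM document_change
--                      WHERE document_id = $1 AND change_seq <= $2
--                        AND snapshot IS NOT NULL)
-- ORDER BY change_seq;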