Re: How to get good performance for very large lists/sets?
| From | Jim Nasby |
|---|---|
| Subject | Re: How to get good performance for very large lists/sets? |
| Date | |
| Msg-id | 543317E0.8030403@BlueTreble.com |
| In reply to | How to get good performance for very large lists/sets? (Richard Frith-Macdonald <richard.frith-macdonald@brainstorm.co.uk>) |
| List | pgsql-general |
On 10/6/14, 3:02 AM, Richard Frith-Macdonald wrote:
> I'm wondering if anyone can help with advice on how to manage large lists/sets of items in a postgresql database.
>
> I have a database which uses multiple lists of items roughly like this:
>
> CREATE TABLE List (
> ID SERIAL,
> Name VARCHAR ....
> );
>
> and a table containing individual entries in the lists:
>
> CREATE TABLE ListEntry (
> ListID INT, /* Reference the List table */
> ItemID INT /* References an Item table */
> ) ;
> CREATE UNIQUE INDEX ListEntryIDX ON ListEntry(ListID, ItemID);
BTW, performance-wise, your best bet might be to forget about using a listentry table (as an aside, I recommend not using CamelCase for database object naming) and instead put an array in the list table:
CREATE TABLE list(
    list_id serial PRIMARY KEY
    , list_name varchar NOT NULL UNIQUE
    , list_items int[] NOT NULL
);
I think there's an extension/add-on that would let you enforce referential integrity between list_items and the items table, but I can't find it now.
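Failing that, one workaround is a trigger that checks every element of the array against the item table whenever the array changes. A rough sketch, again assuming an item table with an item_id primary key; note it only guards inserts/updates on list, not deletes from item:

CREATE OR REPLACE FUNCTION check_list_items() RETURNS trigger AS $$
BEGIN
    -- Reject the row if any array element has no matching item
    IF EXISTS (
        SELECT 1
        FROM unnest(NEW.list_items) AS e(item_id)
        LEFT JOIN item i ON i.item_id = e.item_id
        WHERE i.item_id IS NULL
    ) THEN
        RAISE EXCEPTION 'list_items contains an item_id not present in item';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER list_items_fk
    BEFORE INSERT OR UPDATE OF list_items ON list
    FOR EACH ROW EXECUTE PROCEDURE check_list_items();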
--
Jim Nasby, Data Architect, Blue Treble
Data in Trouble? Get it in Treble! http://BlueTreble.com