Re: How Big is Too Big for Tables?

From: Terry Fielder
Subject: Re: How Big is Too Big for Tables?
Date:
Msg-id: 4C506A5D.6040003@ashtonwoodshomes.com
Whole thread | Raw message
In reply to: How Big is Too Big for Tables?  (Bill Thoen <bthoen@gisnet.com>)
List: pgsql-general
If all the table files have the same structure, it's really not hard: just a UNION clause.

Indeed, one can even create a VIEW that wraps that UNION, to simplify the code that needs to read from the multiple tables.
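For example (table and column names here are made up for illustration), per-state tables with identical columns could be unified like this:

```sql
-- Hypothetical per-state tables parcels_co, parcels_ne, parcels_ks,
-- all sharing the same column layout.
CREATE VIEW parcels_all AS
    SELECT 'CO' AS state, * FROM parcels_co
    UNION ALL
    SELECT 'NE', * FROM parcels_ne
    UNION ALL
    SELECT 'KS', * FROM parcels_ks;

-- Callers then query the view as if it were one national table:
SELECT count(*) FROM parcels_all WHERE state IN ('CO', 'NE');
```

Note UNION ALL rather than UNION: since the per-state tables are disjoint, UNION ALL skips the expensive duplicate-elimination step a plain UNION would do.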

As far as indexes go, a single table COULD be OK if you throw enough hardware at it.  But if the data changes a lot and vacuuming/index rebuilding can't keep up, you can get degraded performance even on high-end hardware.

Let's look at your indexes: will they have 3-4 columns or fewer?  Then you will likely be OK.  If they have several or more columns, your indexes will be massive, and performance drops off from increased paging even for index-only access.

NOTE:
If you compile the data into a SINGLE table, you can always break it up later into smaller tables using SELECT INTO statements that pull rows by state.  Queries that assume a single table for all states then need to be tweaked to use a union, or (even better) tweaked to use a VIEW that already implements that union.
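A sketch of that split, assuming a combined table named parcels with a state column (names assumed, not from your schema):

```sql
-- Pull each state's rows out into its own table.
SELECT * INTO parcels_co FROM parcels WHERE state = 'CO';
SELECT * INTO parcels_ne FROM parcels WHERE state = 'NE';
-- ...repeat per state...

-- Re-create the indexes on each new table, e.g. a spatial index:
CREATE INDEX parcels_co_geom_idx ON parcels_co USING gist (geom);

-- Once everything is verified, drop the combined table.
DROP TABLE parcels;
```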

If a lot of your querying would go through the UNION'd view, you probably want to avoid splitting.  If it's not very often, or "OK to wait a little bit longer", the union will let you break up the data with probably only minor impact when you need multiple states reported together.

You likely probably might almost sort of maybe be best to do a test case on your hardware first.  Even with dummy, meaningless data populated by a script, it will give you a measurement of your expected performance that is much more meaningful than my ramble above.  :)
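A quick way to fake the volume for such a test is generate_series; everything below (table and column names, row shape) is invented for illustration:

```sql
-- ~2 million dummy rows, roughly one state's worth.
CREATE TABLE parcels_test AS
    SELECT g AS id,
           'CO'::char(2) AS state,
           md5(g::text) AS attrib
    FROM generate_series(1, 2000000) AS g;

CREATE INDEX parcels_test_idx ON parcels_test (state, id);
ANALYZE parcels_test;

-- Then time the kinds of queries you actually expect to run:
EXPLAIN ANALYZE
SELECT count(*) FROM parcels_test WHERE id BETWEEN 1000 AND 2000;
```

EXPLAIN ANALYZE runs the query and reports actual timings and whether the planner used your index, which is exactly the measurement you want before committing to one layout.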

Terry


Terry Fielder
terry@greatgulfhomes.com
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


Bill Thoen wrote:
I'm building a national database of agricultural information, and one of the layers is a bit more than a gigabyte per state. That's 1-2 million records per state, with a multipolygon geometry, and I've got about 40 states' worth of data. I'm trying to store everything in a single PG table. What I'm concerned about is: if I combine every state into one big table, will performance be terrible, even with indexes? On the other hand, if I store the data in several smaller files, then if a user zooms in on a multi-state region, I've got to build or find a much more complicated way to query multiple files.

So I'm wondering: should I be concerned about building a single national-size table (possibly 80-100 GB) for all these records, or should I keep the files smaller and hope there's something like ogrtindex out there for PG tables? What do you all recommend in this case? I just moved over to Postgres to handle big files, but I don't know its limits. With a background working with MS Access and bitter memories of what happens when you get near Access's two-gigabyte database size limit, I'm a little nervous about these much bigger files. So I'd appreciate anyone's advice here.

TIA,
- Bill Thoen
