Simon Stiefel wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi people,
>
> I want to migrate some old MySQL databases to PostgreSQL.
> As part of this step I want to optimize some of the database structures.
>
> I have a (MySQL) database with all zip codes and cities in Germany.
> As there are a lot of them, I decided at the time to split them across several tables.
> So now there are 10 zip-code tables (codes starting with '0' in one table, those starting with '1' in another, and so on).
>
> I also have all the streets of Germany with their corresponding zip codes.
> Like the zip codes, the streets are also split across 10 tables (same scheme as above).
> My question now is whether to keep that structure or to merge everything into two big tables.
> Accessing the data would be easier with two tables, but I'm unsure about the performance
> (the street table would have about one million tuples).
If you keep the split scheme, you are going to have nightmares writing your queries.
How many rows will you have in these two tables?
With Postgres, tables of millions of rows are queried in a few
milliseconds, given the right index.
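
As a rough sketch (table and column names are just an illustration, not your
actual schema), the merged layout could look like this, with an index covering
the usual lookup:

```sql
-- One table instead of 10: the former "first digit" split becomes
-- just the leading character of the zip code.
CREATE TABLE zip_codes (
    zip   char(5)  NOT NULL,
    city  text     NOT NULL,
    PRIMARY KEY (zip, city)
);

CREATE TABLE streets (
    street text    NOT NULL,
    zip    char(5) NOT NULL
);

-- Index on the lookup column; this is what makes a million-row
-- table answer point queries in milliseconds.
CREATE INDEX streets_zip_idx ON streets (zip);

-- A typical query no longer needs to pick one of 10 tables first:
SELECT street FROM streets WHERE zip = '10115';
```

Run EXPLAIN ANALYZE on such a query after loading your data; you should see an
index scan rather than a sequential scan over the whole table.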
Regards
Gaetano Mendola