Discussion: Version Control?
Hi-

Any general tips on using version control (CVS, SVN) while doing
database design? My thought was to do a text-mode dump (including
populated code tables) from PGAdmin.

How do people do this?

--
Peter Fein   pfein@pobox.com   773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
Up until the database goes into production, keep these files in cvs/svn:
schema.sql (table creation), views.sql, functions.sql, triggers.sql,
trigfunctions.sql.

Afterwards, any changes to the schema go into change01.sql,
change02.sql, ... The change scripts hold the ALTER TABLE statements
for schema changes. They must be cumulative. Ideally you'd have a
corresponding undochange01.sql, but that is icing.

Never let anyone change the database without creating the appropriate
change script.

--elein

On Thu, Jun 09, 2005 at 03:25:14PM -0500, Peter Fein wrote:
> Hi-
>
> Any general tips on using version control (CVS, SVN) while doing
> database design? My thought was to do a text-mode dump (including
> populated code tables) from PGAdmin.
>
> How do people do this?
>
> --
> Peter Fein   pfein@pobox.com   773-575-0694
>
> Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
>
> ---------------------------(end of broadcast)---------------------------
> TIP 5: Have you checked our extensive FAQ?
>
>                http://www.postgresql.org/docs/faq
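To make the scheme above concrete, a change script might look like the following sketch (the table and column names are made up for illustration; wrapping the script in a transaction takes advantage of PostgreSQL's transactional DDL, so a failure leaves nothing half-applied):

```sql
-- change01.sql: cumulative schema change, committed to cvs/svn
-- alongside schema.sql, views.sql, etc.
BEGIN;

ALTER TABLE customers ADD COLUMN email text;
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';

COMMIT;
```

The optional undo script simply reverses the operations in the opposite order:

```sql
-- undochange01.sql: the corresponding (optional) undo script
BEGIN;

ALTER TABLE orders ALTER COLUMN status DROP DEFAULT;
ALTER TABLE customers DROP COLUMN email;

COMMIT;
```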
Peter Fein wrote:
> Hi-
>
> Any general tips on using version control (CVS, SVN) while doing
> database design? My thought was to do a text-mode dump (including
> populated code tables) from PGAdmin.
>
> How do people do this?

Currently we just store a dump of the data structure. However, what I
think is really needed is a specialist diff tool which works out the
commands needed to move from one schema to another. That would be
*extremely* useful, but would also probably require a separate
implementation for each database backend.

There's also the problem of configuration data: you might want some
rows to be version-controlled too, because they contain lookup
validation data which is vital to the operation of the system, rather
than being user data.

It's an interesting problem which I've not yet seen a usable solution
to. I've been looking for an OSS project to either start or contribute
to. Maybe that's one itch that I might consider scratching...

--
Russ.
How would you handle the migration of the data with these user scripts?
Dump it to a temp table?

On 6/9/05, elein <elein@varlena.com> wrote:
> Up until the database goes into production,
> keep files: schema.sql (table creation),
> views.sql, functions.sql triggers.sql trigfunctions.sql
> in cvs/svn.
>
> Afterwards any changes to the schema are in
> change01.sql, change02.sql,...
>
> The change scripts hold the alter table statements
> for schema changes. They must be cumulative.
> Ideally you'd have corresponding undochange01.sql
> but that is icing.
>
> Never let anyone change the database without creating
> the appropriate change script.
>
> --elein
Russ Brown wrote on 09.06.2005 23:12:
> Currently we just store a dump of the data structure. However, what I
> think is really needed is a specialist diff tool which works out the
> commands needed to move from one schema to another. That would be
> *extremely* useful, but would also probably require a separate
> implementation for each database backend.

I have just implemented such a feature in my SQL Tool. It will output
the difference between two schemas as an XML file, which in turn can be
transformed into the correct SQL scripts.

The tool can be downloaded from http://www.sql-workbench.net
(You are looking for the WbDiff command)

A (very basic) XSLT to transform the XML output into a PG SQL script is
also available there (look in the XSLT section). You might want to use
the latest development build, because I have done some tweaks and
enhancements to the output.

This is a first implementation (and mainly tested with Oracle). If you
have feedback or suggestions on how to improve it, feel free to contact
me at: support (at) sql (dash) workbench (dot) net

Best regards
Thomas
Thomas Kellerer wrote:
> Russ Brown wrote on 09.06.2005 23:12:
>
>> Currently we just store a dump of the data structure. However, what I
>> think is really needed is a specialist diff tool which works out the
>> commands needed to move from one schema to another. That would be
>> *extremely* useful, but would also probably require a separate
>> implementation for each database backend.
>
> I have just implemented such a feature in my SQL Tool. It will output
> the difference between two schemas as an XML file which in turn can be
> transformed in the correct SQL scripts.
>
> The tool can be downloaded from http://www.sql-workbench.net
> (You are looking for the WbDiff command)

Very interesting. I'll have a closer look at this tomorrow morning at
work.

Thanks!

--
Russ.
PG Lightning Admin also has function version control.

http://www.amsoftwaredesign.com

Russ Brown wrote:
> Thomas Kellerer wrote:
>
>> I have just implemented such a feature in my SQL Tool. It will output
>> the difference between two schemas as an XML file which in turn can
>> be transformed in the correct SQL scripts.
>>
>> The tool can be downloaded from http://www.sql-workbench.net
>> (You are looking for the WbDiff command)
>
> Very interesting. I'll have a closer look at this tomorrow morning at
> work.
>
> Thanks!
On Thu, Jun 09, 2005 at 10:12:25PM +0100, Russ Brown wrote:
> Peter Fein wrote:
>> Any general tips on using version control (CVS, SVN) while doing
>> database design? My thought was to do a text-mode dump (including
>> populated code tables) from PGAdmin.
>>
>> How do people do this?
>
> Currently we just store a dump of the data structure. However, what I
> think is really needed is a specialist diff tool which works out the
> commands needed to move from one schema to another. That would be
> *extremely* useful, but would also probably require a separate
> implementation for each database backend.

I've spent quite a lot of time thinking about this and it's certainly
not trivial, and probably _hard_. If anyone has even half-functional
code to do it I'd be interested in looking at it.

> It's an interesting problem which I've not yet seen a usable solution
> to. I've been looking for an OSS project to either start or contribute
> to. Maybe that's one itch that I might consider scratching...

Yeah...

Cheers,
  Steve
On Thu, Jun 09, 2005 at 04:16:46PM -0500, John Browne wrote:
> How would you handle the migration of the data with these user
> scripts? Dump it to a temp table?

If your scripts are correct, you should be able to load your base
scripts and apply each change script in order, and have the result be
the exact same database schema.

If they are not, checkpoint with a schema dump and start again with the
change scripts. Of course, getting the scripts wrong defeats the point
of the whole exercise, but keeping them correct is not easy and
requires vigilance.

--elein

=============================================================
elein@varlena.com        Varlena, LLC        www.varlena.com
         (510)655-2584(o)     (510)543-6079(c)
          PostgreSQL Consulting, Support & Training
PostgreSQL General Bits   http://www.varlena.com/GeneralBits/
==============================================================
I have always depended on the [QA] of strangers.
On 6/9/05, elein <elein@varlena.com> wrote:
> If your scripts are correct, you should be able to load
> your base scripts and apply each change script in order
> and have the result be the exact same database schema.
>
> If they are not, checkpoint with a schema dump and start
> again with the change scripts. Of course getting the
> scripts wrong is against the point of the whole exercise,
> but it is not easy and requires vigilance.

The big complexity for me is that the database schema's state should be
stored along with the code that uses it: i.e. in CVS or Subversion or
whatever with the code. That way you have a consistent snapshot of your
complete system database at any given point in time (minus the data
itself). Developers will need to re-dump the schema whenever they make
a change to the database and commit it along with everything else, but
that's easily scriptable.

Writing individual 'patch' scripts is fine for linear development, but
breaks down when dealing with a development environment that involves
branching. If two branches make changes to the database, each branch's
patch file would be written against the original version, which may no
longer match once the other branch's patch has been applied. What is
needed is a tool which will compare any two revisions of the schema and
generate a patch file that performs the migration.

This would obviously have to be pretty damn clever. Among the
difficulties would be ensuring that the patch applies changes in the
correct order (e.g. add a column before adding a foreign key that
references it). It's hard, but I don't believe it's impossible.

--
Russ
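The ordering point can be illustrated with a short sketch (table, column, and constraint names here are hypothetical): a generated patch must emit the ADD COLUMN before the foreign key that depends on it, or the second statement fails.

```sql
-- The referenced column must exist before the foreign key that uses it;
-- reversing these two statements would make the patch fail.
ALTER TABLE orders ADD COLUMN customer_id integer;
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```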
Russ Brown wrote:
> The big complexity for me is that the database schema's state should
> be stored along with the code that uses it: i.e. in CVS or Subversion
> or whatever with the code. That way you have a consistent snapshot of
> your complete system database at any given point in time (minus the
> data itself). Developers will need to re-dump the schema whenever they
> make a change to the database and commit it along with everything
> else, but that's easily scriptable.
>
> Writing individual 'patch' scripts is fine for linear development, but
> breaks down when dealing with a development environment that involves
> branching. What is needed is a tool which will compare any two
> revisions of the schema and generate a patch file that performs the
> migration.

This is interesting... You'd want to be able to generate either a bunch
of CREATEs to create a schema from scratch, or a 'patch' of ALTER
commands to move between arbitrary revisions or to a working copy (i.e.,
a live DB). This implies you need to store an intermediate (non-SQL)
representation in your repository (like the output of WbDiff mentioned
previously).

What's unusual is that your SQL-generating tool would need to check out
*both* versions of the representations to generate the patch. I can't
think of any other problems that have this requirement - usually, you're
relying on your version control software to generate such diffs for you.

> This would obviously have to be pretty damn clever. Among the
> difficulties would be ensuring that the patch applies changes in the
> correct order (e.g. add column before adding foreign key). It's hard,
> but I don't believe it's impossible.

As an uninformed, off-the-wall idea, could one compare snapshots of the
system tables to generate these diffs? I know next-to-nothing about
these, but it seems like they'd contain the info you'd need.

This still doesn't solve the problem of identifying which tables should
have their *data* included. For that, I suppose one could create a table
for storing such metadata, or abuse COMMENT. It'd also be nice to be
able to exclude specified objects entirely - I'm thinking of 3rd-party
modules (tsearch2, say) that one would want to install by hand (to get
the latest version or whatever).

I recently got a script to do something similar for MySQL by parsing the
output of 'show create table'. It's pretty simple & not full-featured. I
need to check with the author before posting it, though.

If anyone is inclined to work on this, I might be able to route a few
dollars their way... please contact me off list.

--
Peter Fein   pfein@pobox.com   773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
On Jun 10, 2005, at 11:38 AM, Peter Fein wrote:
>> This would obviously have to be pretty damn clever. Among the
>> difficulties would be ensuring that the patch applies changes in the
>> correct order (e.g. add column before adding foreign key). It's hard,
>> but I don't believe it's impossible.
>
> As an uninformed, off-the-wall idea, could one compare snapshots of
> the system tables to generate these diffs? I know next-to-nothing
> about these, but it seems like they'd contain the info you'd need.

I think it would be a requirement; I'm not convinced it could be done
just from comparing table definitions. For example, using the table
definitions only, how could you distinguish between renaming the last
column in the table versus dropping that column and adding a new one of
the same type? The target table would be the same, but the semantics of
getting there are not.

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
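The ambiguity can be made concrete with a hypothetical example: both of the following sequences leave the table with an identical definition, so no tool that only compares before-and-after definitions can tell which one happened, yet only the first preserves the column's data.

```sql
-- Path 1: a rename -- the existing row data is preserved.
ALTER TABLE accounts RENAME COLUMN last_name TO surname;

-- Path 2: drop and re-add -- same resulting definition, but the
-- data in the old column is lost.
ALTER TABLE accounts DROP COLUMN last_name;
ALTER TABLE accounts ADD COLUMN surname text;
```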
John DeSoi wrote:
> I think it would be a requirement; I'm not convinced it could be done
> just from comparing table definitions. For example, using the table
> definitions only, how could you distinguish between renaming the last
> column in the table versus dropping the column and adding a new one of
> the same type. The target table would be the same, but the semantics
> of getting there are not.

That's an excellent point that I certainly hadn't thought of. Indeed,
if you look at the output from the general text-based diff programs,
they only output additions or removals. If you change a line it gets
marked as two separate changes: a removal and an addition.

My ideal is still to be able to create a 'diff' between any two
arbitrary revisions of a schema. Therefore, the schema must only
contain the state of the database, and not transitional information
(since the transition is undefined before the 'diff' takes place).
There would need to be some other way to identify the two columns as
being the same historically. This could be something in the comment
(though I don't like the idea of enforcing metadata in comments) or
some technique such as an OID (though I can't remember if things like
columns have OIDs, and it's not particularly portable either).

Perhaps the problem is best served by two files: one containing the
schema snapshot itself, and the other containing a 'manifest' which
lists each entity in the schema along with a unique identifier for each
entity. Then you have the problem of how you assign each identifier,
since many people may be working on the same schema on different
branches. Just picking the next number wouldn't work, as you'd get
clashes. It would have to be some sort of GUID.

An added benefit of this 'manifest' file is that it would be a good
place to store additional metadata, such as whether a given table's
data should be dumped or not.

(thinks for a while...)

Now that I think about it, why stick with the schema dump at all? We
could (for example) define a generic XML schema on which the manifest
is based which describes everything: the structure, the metadata, the
actual required data itself, and a list of tables that should be
ignored completely. You'd provide a script to update the file from a
database, and when it encounters something like a column rename it
could ask you whether you actually dropped and re-added the column or
renamed it. Depending on the answer, the column either gets a new GUID
or keeps the old one.

So taking that example: you do your commit, and somebody updates their
working copy and sees that the database file has changed. They run the
diff command to see what they need to do to update their version, and
it outputs the DROP COLUMN/ADD COLUMN or RENAME statement that they
need, along with a couple of configuration inserts that you did too.

This certainly is a far more complex problem than I originally thought
it was. I'd like any solution to be able to work in any version control
system, and to be applicable to any database engine (my employer is a
MySQL house, while I personally prefer Postgres for my personal stuff,
so I'm in favour of it being completely portable).

Another important design criterion for me is ease of use. This is a
version-control stage on top of the existing version control work that
a developer needs to do, and if it's a real hassle they aren't going to
bother to do it properly. It therefore needs to do things as
automatically as possible, for example by connecting to a database and
generating the manifest file directly.

I also think a requirement would have to be that a database would need
a dedicated table (or view) to store its current version number. The
manifest file could specify where to find this. It would be required to
know which revision you need to 'diff' against to upgrade the database
to the latest version. For extra brownie points, the diff command could
output the UPDATE statement to set the version to the new version
automatically, so you don't forget to do it.

As I say, it's a hard problem, but I certainly don't think it's
impossible.

--
Russ.
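The dedicated version table described above could be as simple as the following sketch (the table and column names are invented for illustration):

```sql
-- A one-row table recording which schema revision is currently applied.
CREATE TABLE schema_version (
    version    integer   NOT NULL,
    applied_at timestamp NOT NULL DEFAULT now()
);
INSERT INTO schema_version (version) VALUES (1);

-- Each generated change script would end by bumping the version, so the
-- diff tool always knows which revision to compare against:
UPDATE schema_version SET version = 2;
```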
On Fri, Jun 10, 2005 at 10:38:52AM -0500, Peter Fein wrote:
> This is interesting... You'd want to be able to generate either a
> bunch of CREATEs to create a schema from scratch or a 'patch' of ALTER
> commands to move b/w arbitrary revisions or to a working copy (ie, a
> live DB). This implies you need to store an intermediate (non-SQL)
> representation in your repository (like the output of WbDiff mentioned
> previously).

I would think you would keep the CREATE scripts version controlled just
as you do with your source code. It's text, after all, that evolves
linearly. You would tag them appropriately, say -1.0, -1.1, whatever.
From those you can always generate two databases of version X and
version X+Y, where X is the version in deployment and X+Y the schema
version you want to migrate deployment to.

From *there* one would want to run a tool

    create_migration_script.sh --from=db1 --to=db2

which produces a patch script with appropriate ALTER statements.

I *don't* think the patch scripts need to be kept in the versioning
system.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
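A crude starting point for such a migration-script tool, assuming the two versions are loaded side by side as schemas named v1 and v2 in one database (a sketch only: it finds added columns and ignores type changes, constraints, and drop/rename ambiguity):

```sql
-- Columns present in v2 but missing from v1: candidates for
-- "ALTER TABLE ... ADD COLUMN ..." in the generated patch script.
SELECT new.table_name, new.column_name, new.data_type
  FROM information_schema.columns AS new
 WHERE new.table_schema = 'v2'
   AND NOT EXISTS (
       SELECT 1
         FROM information_schema.columns AS old
        WHERE old.table_schema = 'v1'
          AND old.table_name   = new.table_name
          AND old.column_name  = new.column_name);
```

A real tool would need similar queries for tables, indexes, constraints, functions, and so on, plus the dependency ordering discussed earlier in the thread.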
Russ,

On Jun 10, 2005, at 2:34 PM, Russ Brown wrote:
> This certainly is a far more complex problem than I originally thought
> it was. I'd like any solution to be able to work in any version
> control system, and to be applicable to any database engine (my
> employer is a MySQL house, while I personally prefer Postgres for my
> personal stuff, so I'm in favour of it being completely portable).
>
> Another important design criteria for me would be ease of use. This is
> a version-control stage on top of the existing version control work
> that a developer needs to do, and if it's a real hassle they aren't
> going to bother to do it properly. It therefore needs to be able to do
> things as automatically as possible, for example by connecting to a
> database and generating the manifest file directly.

I think what this amounts to is basically doing the task as it has
traditionally been accomplished (and as already suggested in the
thread). There is a baseline schema, and any group of developers needs
to share a version-controlled file to transform the schema into a new
version. This could involve schema and data modifications.

As you say, unless there is a huge win for developers, no one will take
the time to learn some other specification method (i.e. some type of
XML manifest). I think the best approach is a tool that takes the
baseline schema, the new schema, and the current transform script, and
then generates the SQL needed for the next revision. The developer
would simply need to review the generated code, adjust it (if
necessary), and then commit it as the next revision.

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
Karsten Hilbert wrote:
> I would think you would keep the CREATE scripts version
> controlled just as you do with your source code. It's text,
> after all that evolves linearly. You would tag them
> appropriately, say -1.0, -1.1, whatever. From those you can
> always generate two databases of version x and version x+y
> where X is the version in deployment and X+Y the schema
> version you want to migrate deployment to.
>
> From *there* one would want to run a tool
>
>     create_migration_script.sh --from=db1 --to=db2
>
> which produces a patch script with appropriate ALTER
> statements.
>
> I *don't* think the patch scripts need to be kept in the
> versioning system.

I wasn't suggesting that they should be - quite the opposite, in fact.
It seems cleaner to generate the patch from an intermediate
representation (which is what would be stored), rather than having to
instantiate a DB to create the patch, which is what you seem to be
suggesting. You'd also be able to transform the intermediate
representation into SQL to create a DB from scratch. See also the posts
about distinguishing ALTER from DROP/CREATE for columns for problems
with storing plain CREATE scripts.

Come on, we can't be the first people with this problem? Version
control's been around for what, 20, 30 years? Somebody must have tried
to do this before, even if it's with something other than Postgres...

--
Peter Fein   pfein@pobox.com   773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
John DeSoi wrote:
> I think what this amounts to is basically doing the task as it has
> traditionally been accomplished (and already suggested in the thread).
> There is a baseline schema and any group of developers needs to share
> a version controlled file to transform the schema into a new version.
> This could involve schema and data modifications.
>
> As you say, unless there is a huge win for developers no one will take
> the time to learn some other specification method (i.e. some type of
> XML manifest). I think the best approach is a tool that takes the
> baseline schema, new schema, and current transform script and then
> generates the SQL needed for the next revision. The developer would
> simply need to review the generated code, adjust it (if necessary),
> and then commit it as the next revision.

Ah, I don't think I explained clearly enough how I meant for the XML
manifest to be used. In my mind, the developer should not need to
manually edit or create the file at all, except perhaps for resolving
conflicts when doing a merge.

The idea I have is that the manifest is first created by feeding a raw
SQL schema into some program, which creates the manifest. The file is
then updated by either feeding another schema into the program along
with the generated manifest, or by having the program connect to the
database directly and getting at the information that way. Either way,
the manifest file gets updated by the program, not the developer.

The program is also used to compare two different manifest files (e.g.
two revisions of the same file), and produces the SQL required to make
the migration. This can only work if the file containing the database
state is more descriptive than a raw SQL schema dump, as we've already
identified that there is no way of differentiating between a
DROP/CREATE and an ALTER.

If developer interaction can be kept to a minimum, the system should
make it easier for DBAs to keep track of the changes that developers
are making in development, and make it less likely that changes will be
left out of rollouts.

--
Russ.
Peter Fein wrote:
> Come on, we can't be the first people with this problem? Version
> control's been around for what, 20, 30 years? Somebody must have tried
> to do this before, even if it's with something other than Postgres...

That's what really surprises me too! I've looked around quite a bit and
have only really found systems which are built into DB administration
programs that have their own version storage system, and sometimes only
work if you make the changes in that application.

From my point of view, any database versioning system would need to be
able to work with the same version control system that I use for my
code, so I can extract a fully-working system from any point in the
past.

I'd be extremely happy if somebody finds such a system that is already
written!

--
Russ.
> From my point of view, any database versioning system would need to
> be able to work with the same version control system that I use for
> my code, so I can extract a fully-working system from any point in
> the past.
>
> I'd be extremely happy if somebody finds such a system that is already
> written!

As far as I know, such a system doesn't exist. What we do is keep
revision control of schema files and individual functions within
Subversion. The schema and functions are kept within the main
development tree.

Sincerely,

Joshua D. Drake
Peter Fein wrote:
> As an uninformed, off-the-wall idea, could one compare snapshots of
> the system tables to generate these diffs? I know next-to-nothing
> about these, but it seems like they'd contain the info you'd need.

Here's another nutty idea: could one create a (carefully designed)
audit table on the system tables themselves? You'd need to exclude
irrelevant stuff (stats or whatever), and I'd have no idea about the
performance impact.

Dumping & transforming the audit would basically give you a script that
runs through all the actions done to a schema. I'm not 100% sure how to
use this for an update script, though. I'm not sure a straight diff of
actions will work - it seems like the results may be order-dependent in
some cases, and you might need to infer undos. Anyway, it's
interesting...

--
Peter Fein   pfein@pobox.com   773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
On Sat, Jun 11, 2005 at 02:47:39PM +0100, Russ Brown wrote:
> I'd be extremely happy if somebody finds such a system that is already
> written!

I've never seen such a system, and one is sorely needed. If someone
were to write one that worked well against multiple databases and
didn't cost a fortune, they could probably make a good amount of $$
with it.

BTW, I believe some of the really high-end database tools will do what
you're looking for, but those typically start at a couple grand or
more.

--
Jim C. Nasby, Database Consultant               decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
On Sat, Jun 11, 2005 at 11:31:02AM -0500, Peter Fein wrote:
> Here's another nutty idea: Could one create a (carefully designed)
> audit table on the system tables themselves? You'd need to exclude
> irrelevant stuff (stats or whatever) & I'd have no idea about
> performance impact.
>
> Dumping & transforming the audit would basically give you a script
> that runs through all the actions done to a schema. I'm not sure a
> straight diff of actions will work - it seems like the results may be
> order dependent in some cases and you might need to infer undos.
> Anyway, it's interesting...

If triggers were supported on system tables, then yes, you could easily
create an audit record. And yes, such a record should make it easy to
generate a patch file.

Personally, I've always worked in such a way that development databases
tend to be re-created frequently from a set of definition files. When
it's time to create patch code, I'll diff the different versions of the
files and generate a patch file based on that.

Of course, it'd be really handy if I could load up two databases (one
with the old schema, one with the new) and have some tool compare the
two and generate a diff and a patch file. It might not catch everything
(such as dropping a column and then re-adding it with the same
definition), but it would work fine in 99.99% of the cases.

--
Jim C. Nasby, Database Consultant               decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
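One low-tech aid along these lines: run the same normalized snapshot query against both databases, save each result to a text file, and compare them with an ordinary diff. A sketch (it captures columns only, not indexes, constraints, or functions):

```sql
-- Emit one sorted line per column, suitable for saving to a file per
-- database and comparing the two files with plain text diff.
SELECT table_name || '.' || column_name || ' ' || data_type ||
       CASE WHEN is_nullable = 'NO' THEN ' NOT NULL' ELSE '' END
  FROM information_schema.columns
 WHERE table_schema = 'public'
 ORDER BY table_name, column_name;
```

As noted above, this still can't distinguish a rename from a drop-and-re-add, but it flags where the schemas diverge so the DBA can write the correct ALTER statements by hand.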