Re: Large File Revision Control??

From: Greg A. Woods
Subject: Re: Large File Revision Control??
Date: Thu, 23 Aug 2001 19:14:50 -0400 (EDT)

[ On Thursday, August 23, 2001 at 14:31:11 (-0700), BCC wrote: ]
> Subject: Re: Large File Revision Control??
> Actually I know a fair amount about db replication.  The paradigm I
> described was simplified for the question.  The reason we cannot use
> replication successfully in this case is our master is also our
> development machine.

That's not a reason.  That's a VERY poor excuse that reveals some
glaring problems in your development process!

>  The data is often imported in the form of text
> files, tables are manually deleted on occasion (this is not my doing I
> can tell you!).

Yeah, those kinds of things happen on development machines.  So what?

> Replication is wonderful for mirroring data on a moment to moment basis,
> but not for a freeze of the db at a particular (developer determined)
> moment.

Ah!  Now you're possibly getting to the real issue!

You do not want to even contemplate trying to replicate your DB in order
to propagate schema changes, not by any means (i.e. not by DB migration
and not by copying the raw DB files).

>  Our developers want to hack and slash at the db, then when they
> feel it is ready, migrate it out to the 'slaves'.

Sure, that's fine (provided you've got some tracking, etc. I guess), but
the mechanism of the "migration" is something you've got to think much
harder about!

>  MySQL replication
> just could not handle duplicating all the hacks and slashes, and
> replication stopped.

I'm not surprised.

However it seems they (or at least you) need to learn more about your
own software process and how to manage change in your applications.

> PostgreSQL and Oracle are not options ( not my decision there ).

Your loss, not mine!  ;-)

> If you still think replication is best for this kind of environment, I
> would really love to hear how to do it...

Nope, not now that I know a bit more about the issue....

I suspect the right thing to do is to learn to use a proper manageable
upgrade process to migrate schema and constant data changes out to the
production machines.
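To make that concrete, here's a minimal sketch of one such managed
upgrade process: an ordered list of schema/constant-data changes plus a
version table recording which ones have already been applied.  It's
illustrated with Python's stdlib sqlite3 rather than MySQL, and every
table name and statement below is a made-up example, not anyone's real
schema:

```python
import sqlite3

# Hypothetical, ordered list of schema changes.  In a real process each
# change would live in its own version-controlled file.
MIGRATIONS = [
    (1, "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any migrations not yet recorded in schema_version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # applies both migrations
migrate(conn)   # idempotent: nothing left to apply
```

The point of the version table is that each change is applied exactly
once, in order, so any production instance can be brought forward from
whatever version it happens to be at.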

This is really the wrong forum, but I'll give a quick dump of my
thoughts anyway.....

Depending on the nature of the production DBs and applications it may be
simply a matter of dumping the schema and constant data, dumping the
user data on the production machines, then using a reload procedure to
put the user data back together with the new schema and constant data.
On the other hand you might have to write live migration tools to tweak
a running database.
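A hedged sketch of that dump-and-reload procedure, again with Python's
sqlite3 standing in for MySQL.  The table names ("orders" as user data,
"status_codes" as constant data) and the new schema are invented purely
for illustration:

```python
import sqlite3

USER_TABLES = ["orders"]   # hypothetical: the tables holding user data

def dump_user_data(conn):
    """Capture user-table rows before the schema is replaced."""
    return {t: conn.execute(f"SELECT * FROM {t}").fetchall() for t in USER_TABLES}

def rebuild(conn, user_data):
    """Drop everything, load new schema and constant data, reload user rows."""
    conn.executescript("""
        DROP TABLE IF EXISTS orders;
        DROP TABLE IF EXISTS status_codes;
        CREATE TABLE status_codes (code TEXT PRIMARY KEY);  -- constant data
        INSERT INTO status_codes VALUES ('open'), ('closed');
        CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    """)
    for table, rows in user_data.items():
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open')")
saved = dump_user_data(conn)   # dump the user data...
rebuild(conn, saved)           # ...then reload it under the new schema
```

With a real DB you'd use the native dump tools for each step, but the
shape is the same: user data out, new schema and constant data in, user
data back.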

If the DB contents really are absolutely 100% completely static then
what I'd recommend is building a staging server, migrating the
development system's DB to it as desired, and then using replication to
push the changes out to the production machines.

If you have firm enough control over all the production instances and
you can take them down temporarily on a scheduled basis; and if the DB
contents really are absolutely 100% completely static; then you might
try going from the staging server to the production machines using rsync
(or some other file transfer scheme), making sure everything's shut down
and quiescent for the interval of the transfer of course.
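If you go the file-transfer route, the one thing the transfer script
must enforce is that quiescence check.  A toy sketch of the idea in
Python (shutil instead of rsync, and a hypothetical "db.stopped" flag
file standing in for "the server really is shut down"):

```python
import pathlib
import shutil
import tempfile

def copy_db_files(src: pathlib.Path, dst: pathlib.Path):
    """Copy raw DB files, but only if the source is marked quiescent.

    The "db.stopped" flag file is an invented convention standing in for
    whatever check proves the database server is down for the window.
    """
    if not (src / "db.stopped").exists():
        raise RuntimeError("refusing to copy: database is not shut down")
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Demonstration with throwaway directories.
src = pathlib.Path(tempfile.mkdtemp())
dst = pathlib.Path(tempfile.mkdtemp())
(src / "data.frm").write_text("raw table data")
(src / "db.stopped").touch()    # mark the source quiescent
copy_db_files(src, dst)
```

The real script would call rsync, but the shape is the same: verify the
server is down, then (and only then) move the files.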

No matter what I suspect you really need a staging server, if for
nothing else than to test on!

                                                        Greg A. Woods

+1 416 218-0098      VE3TCP      <address@hidden>     <address@hidden>
Planix, Inc. <address@hidden>;   Secrets of the Weird <address@hidden>
