
Re: handling config files


From: Peter Smulders
Subject: Re: handling config files
Date: Wed, 13 Mar 2002 19:07:12 +0000

Hi list,

(I apologise upfront for responding to something ancient. Also, this
message relates more to configuration management than to CVS, but I
think it will still suit the majority of the audience. Think of this
message as enhancing the list archives.)

> Raptor <address@hidden> wrote:
> I want to ask you how you handle configuration files. What I have in
> mind: say we have a development server and a production server, but
> the difference is that, for example, you store uploaded images on the
> dev server at the directory
> /sites/upload/images
> but on the production server at:
> /disk2/upload/images
>
> [etc]

I think I have dealt with this very situation. The problem, seen in a
larger perspective, is that you need a built product in various
environments, but it is developed from one place.

Beyond simply "making things work with the right paths", there is this
issue: if you maintain different files for different environments,
especially if one is "Test" and one is "Live", you might fix something
in one file, find that it works happily in Test, and then find it breaks
in Live. Testing purists would say that this method invalidates your
test environment, because what you test there is not what you will
deploy later (upon passing the test).


My way of solving this problem was twofold:

1. Synchronise environments
2. Use the build process


--1--

(This applies much more to UNIX than anywhere else, but it is the
principle that counts.)

What we did was lift the entire built platform onto a higher level: the
DocumentRoot (this was a webserver environment) was a symlink, pointing
to another symlink. The first one allowed us to deploy and build
completely, take down a webserver, delete and recreate a symlink, and
bring the server back up. This minimised server downtime.

The second symlink was more important; it was placed at the root of the
file system.

This setup can be replicated (because every UNIX machine has a root
directory) on pretty much every other machine in use. That took care of
almost all path dependencies.
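
A hypothetical sketch of that double-symlink arrangement (all paths
here are invented, not the ones we actually used):

    # The root-level symlink: the same path on every machine, pointing
    # at wherever that particular machine really keeps the tree.
    $ ln -s /disk2/webapps /site          # on Live
    $ ln -s /home/build/webapps /site     # on Dev

    # The DocumentRoot configured in the web server is itself a symlink
    # that resolves through /site to one particular build.
    $ ln -s /site/builds/build-42 /www/docroot

    # Deploying build 43: build it alongside the old one, stop the
    # server, swap one symlink, bring the server back up.
    $ apachectl stop
    $ rm /www/docroot && ln -s /site/builds/build-43 /www/docroot
    $ apachectl start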

Note that your sysadmins may balk at this. If so, the same effect can be
achieved by creating a specific directory path that can be replicated on
all the other machines you use (dev, test, UAT, etc.).

For the utilities used in the build process, we made sure that they were
all available at the same paths. Specifically, this required a handful
of symlinks in /bin and /usr/local/bin.

Another common problem is different servers and thus different host
names. I have seen and used two very effective ways of getting around
that (both are sketched after the list):

1. Use host names in your code and use user-specific hosts files to
switch between servers. So a user (say, a member of a test team) would
have one hosts file with www.yoursite.com pointing to the live IP
address and another with that host name pointing to the IP address of
the test server.

A different take on this is to use your own DNS for it. Advantage: you
can switch all users from a central point without having to touch any
user machines. Disadvantage: for (Windows) users, switching between Live
and Test required a reboot (which for us was just too inconvenient).

2. Use port numbers. We used this with Dynamo installations, but the
same effect can be achieved with other app and web servers. A "Live" URL
would then be http://www.somesite.co.uk/. The dev, test, staging, UAT
and other URLs would all be something like
http://www.somesite.co.uk:2883/, with 2883 being the port number used.
This requires access to one central IP address that all the differently
port-numbered servers listen on, but that can often be arranged.
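
To illustrate both approaches (all addresses, paths and names below are
invented): the two hosts files a tester would switch between might
differ in a single line,

    # hosts.live
    203.0.113.10    www.yoursite.com

    # hosts.test
    192.0.2.99      www.yoursite.com

and the port scheme could, for example, be a single Apache-style
configuration with two listeners on the same address:

    Listen 80
    Listen 2883

    <VirtualHost *:80>
        ServerName www.somesite.co.uk
        DocumentRoot /site/live/docroot
    </VirtualHost>

    <VirtualHost *:2883>
        ServerName www.somesite.co.uk
        DocumentRoot /site/test/docroot
    </VirtualHost>

(In practice the Live and Test instances might well be separate server
processes; the point is only that they share the host name and differ in
port.)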


Overall, the principle is to minimise the differences between
environments, in the hope that this reduces the complexity and the
number of problems that will impact your source code control.


--2--

As Greg already mentioned: use the build system. I very much agree that
that is the place to solve this kind of problem. In your build system,
the use of Makefiles, or of something simpler, homegrown and/or less
threatening, is highly recommended.

The main principle to strive for is to apply differentiation only where
it is needed. I should admit that I have never done more than very
carefully amend an existing Makefile, but as I understand it, you can
use "make" to transform one big file, with generic and specific
sections, into a target-specific file (or maybe even more than one
file). In some cases the same effect can be achieved with something
XML-ish, with any scripting language and a lot of case statements, or
even with a well-aimed sed one-liner.
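
As a minimal sketch of that make-plus-sed approach (the file names and
the @TOKEN@ placeholder are my own convention, nothing standard), a
Makefile could stamp the right path into a generic template per target;
note that the indented command lines must start with a tab:

    live: app.conf.in
            sed 's|@IMAGE_DIR@|/disk2/upload/images|' app.conf.in > app.conf

    dev: app.conf.in
            sed 's|@IMAGE_DIR@|/sites/upload/images|' app.conf.in > app.conf

Running "make live" or "make dev" then produces an app.conf containing
the image directory from the original question.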

Some complexity might be avoided by isolating target-specific
information; maybe write some code that provides a generic interface on
top of a site-specific configuration, etc. Symlinks come to mind as
well: you could have your build system put symlinks in place and have
everything else point at those.

Another thing I've done (but it needs a lot of supporting error checking
to make it safe) is to use a "Customise" shell script that would find
out which target it was being run on and run a series of in-place
editing commands. (1)
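
Stripped of all the error checking, the idea looks something like this
(the host names, path and placeholder are invented for the example):

    #!/bin/sh
    # Decide the target from the machine we are running on.
    case `hostname` in
        dev*)  IMAGES=/sites/upload/images ;;
        live*) IMAGES=/disk2/upload/images ;;
        *)     echo "Customise: unknown host, giving up" >&2; exit 1 ;;
    esac
    # Rewrite the placeholder in place, using the ex trick from (1).
    (echo "%s,@IMAGE_DIR@,$IMAGES,g"; echo "w") | ex - app.conf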

--2 1/2--

I have previously sinned by going down the multiple-config-file route
myself, but it was nevertheless very effective (and I admit it has
bitten me once or twice as well).

In my case, the problem was that there were different "classes" of
machines, with config options differing across different groupings of
machines. This could have been solved with one of the solutions
described above, but we opted for quick and easy maintenance, traded
against a higher risk of (human) error.

For every config file involved, there would be multiple "flavours",
distinguished by their extensions. The build system would look across
the entire tree and compare extensions to the build target (for example,
for the Live target it would look for files with a ".LIVE" extension).
Those files would then be renamed to the file name without the
extension.

This provided a very flexible way of controlling even wildly differently
configured machines, with a built-in default for platforms that didn't
need customisation.
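
A hypothetical version of that rename pass, here for the Live target:

    #!/bin/sh
    # Strip the ".LIVE" extension off every flavoured file, so that
    # somefile.conf.LIVE replaces the generic somefile.conf.
    TARGET=LIVE
    find . -name "*.$TARGET" | while read f; do
        mv "$f" "${f%.$TARGET}"
    done

(A real version would want to check for collisions and strange file
names before moving anything.)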


------------

Hope this helps some people, if only for inspiration.

Cheers,

Schmolle


(1) For those not experienced in the joys of vi: you can send editing
commands to "ex" (the line editor that vi is built on top of) from the
command line, which will edit a file in place. Example:

$ (echo "/TAG/s/TAG/SiteSpecificTag/"; echo "w") | ex - /some/path/somefile.conf

This will look in the file somefile.conf in the directory /some/path for
the first occurrence of the three characters T, A and G, replace those
with SiteSpecificTag, and save the change.

By chaining the "echo" commands, you can do quite extensive editing from
a script. It helps a lot to be well versed in vi and ex, as well as
regular expressions.

The same can probably be done with Perl, though I have never actually
tried it. An untested sketch of what that might look like:
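
    $ perl -i -pe 's/TAG/SiteSpecificTag/ && ($done = 1) unless $done' \
          /some/path/somefile.conf

Here -i edits the file in place, -p loops over its lines, and the $done
flag makes sure only the first matching line is touched, mimicking the
ex example above.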



-- 
Brain for hire. Find out about it on:
http://www.schmolleworld.demon.co.uk/index.html
e: address@hidden  m: +44 (0)7980 511 8283
h: +44 (0)207 923 0769 w: +44 (0)207 653 2708


