Re: Overview of PG project infrastructure?
From | Dave Page
Subject | Re: Overview of PG project infrastructure?
Date |
Msg-id | 937d27e10810090137g13e1aab6u54fc088eb1cdacb0@mail.gmail.com
In reply to | Overview of PG project infrastructure? (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-www
On Thu, Oct 9, 2008 at 1:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I'm gearing up to give a talk about the organization and management of
> the Postgres project. One thing I'd like to spend a slide or two on is
> the project infrastructure --- how much hardware have we got, where is
> it located, who runs it, what's the history, what interesting management
> challenges have there been? I have some vague knowledge about the mail
> list and cvs servers but not about anything else. If anyone's got any
> information like that in their back pocket, I'd much appreciate whatever
> you can tell me.

The webarchive I'll send you offlist (which Safari on your Mac should be able to read) is the top-level list of physical servers and VMs that we run, along with brief descriptions. Some possibly useful general notes:

- We run most services inside FreeBSD jails, allowing us to easily back up and move any given service to a different host machine. We also run a VMware instance on a RHEL box which hosts a couple of legacy Linux servers (developer.pgadmin.org and our Nagios install) and an XP workstation for building win32 packages.

- Servers run a simple auto-backup/IDS system which periodically takes copies of key files on each system and commits them to a Subversion repo, giving us backups, change history (with 1-hour granularity), and an email when anything changes.

- A Nagios installation monitors the entire network (something like 330 services across ~50 hosts). As and when new problems occur, Nagios checks may be added to catch future occurrences - for example, the progress of the archives search indexer is now monitored following a case where it started hanging on some messages in an encoding it didn't like. Stefan's presentation at http://wiki.postgresql.org/images/2/2e/Fosdem08_pg_infra.pdf may be useful.

- The website is served from multiple static servers and one dynamic server (wwwmaster).
Content/code is pulled from SVN onto wwwmaster, whilst dynamically built pages are generated from a backend database. A spider crawls the site periodically, generating a static copy of the site (different parts of the site are regenerated on different timings). The static copy is pushed out to the static servers using rsync. A monitoring system tracks the freshness of each static server and will disable (within 5 minutes) any that fall behind or go offline.

- The ftp site is mirrored onto ~80 servers. These are monitored daily for freshness by the 'MirrorBot' and will be disabled if they become more than 48 hours out of date. The MirrorBot manages the DNS for the mirrors automatically, keeps the owners informed of any problems, and automatically removes any mirror that remains in an error state for more than 30 days.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
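[Archive note: the freshness rule described for the ftp mirrors - disable anything more than 48 hours behind - can be sketched as a few lines of Python. This is purely illustrative; the function and variable names are hypothetical, and the real MirrorBot also handles DNS updates, owner notification, and 30-day removal, which are not shown here.]

```python
from datetime import datetime, timedelta

# Illustrative sketch only: a mirror is considered stale (and would be
# pulled from rotation) once its last successful sync falls more than
# MAX_AGE behind the master. MAX_AGE mirrors the 48-hour rule described
# above; all names here are hypothetical, not MirrorBot's actual API.
MAX_AGE = timedelta(hours=48)

def mirror_is_fresh(master_ts: datetime, mirror_ts: datetime) -> bool:
    """Return True if the mirror's last-sync time is within MAX_AGE of the master."""
    return master_ts - mirror_ts <= MAX_AGE

def select_active(master_ts: datetime, mirrors: dict) -> dict:
    """Keep only mirrors fresh enough to remain in the active set."""
    return {name: ts for name, ts in mirrors.items()
            if mirror_is_fresh(master_ts, ts)}
```

A daily cron run comparing each mirror's published sync timestamp against the master's, then feeding the surviving set into DNS, is one plausible shape for this kind of check.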