
linux - Apache - High Availability



I'm looking for a way to set up Apache for high availability. The idea is to have a cluster of two or more Apache servers serving the same websites. I can publish the IP address of each server via round-robin DNS so that each request lands on a random server in the cluster (I'm not too concerned with load balancing just yet, though that may come into play later on).



I already have this set up and working: multiple Apache VM servers (spread across multiple physical hosts) serve the websites behind round-robin DNS, and this works fine. The SQL database runs on a high-availability MariaDB cluster, the web data (HTML, JS, PHP scripts, images, and other assets) is stored in LizardFS, and sessions are stored in a shared location as well. This all works well until one of the servers in the cluster becomes inaccessible for whatever reason; then a fraction of requests (roughly the number of downed servers divided by the total number of servers in the cluster) goes unanswered. Here are the options I've considered:



Automatic DNS Updates




Have some process monitor the health of the web servers and remove any downed servers from DNS. This has two issues:




  • First, even though we can set our TTL to a very low number (like 5
    seconds), I've heard that a handful of DNS servers will enforce a
    minimum TTL higher than ours, and some browsers (namely Chrome)
    will cache DNS for no less than 60 seconds regardless of TTL
    settings. So even though we're good on our end, some clients may
    not be able to reach the sites for some time after a DNS update.



  • Second, the program that monitors the cluster's health and updates
    DNS records becomes a new single point of failure. We may be able
    to get around this by running more than one monitor spread across
    multiple systems: if several of them detect the same problem and
    make the same DNS changes, that shouldn't cause any issues.
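For what it's worth, the monitor-and-update idea can be sketched in a few lines of shell. Everything here is a placeholder (the host names, the health-check URL, and the TSIG key path), and the zone would have to permit RFC 2136 dynamic updates for `nsupdate` to work:

```shell
#!/bin/sh
# Sketch of a DNS health monitor. Host names, URL, and key path are
# hypothetical placeholders, not part of the actual setup.

check_server() {
    # Healthy only if the server actually returns HTTP 200 for a page,
    # not merely if the host is pingable.
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "http://$1/") || code=000
    [ "$code" = "200" ]
}

remove_record() {
    # RFC 2136 dynamic update; the zone must allow updates with this key.
    printf 'update delete %s A\nsend\n' "$1" | nsupdate -k /etc/ddns.key
}

# Run this from cron (or a loop) on two or more monitor hosts, so the
# monitor itself is not a single point of failure:
monitor() {
    for host in web1.example.com web2.example.com; do
        check_server "$host" || remove_record "$host"
    done
}
```

Running the same script on several monitor hosts is safe because deleting an already-deleted A record is a no-op, so duplicate updates don't conflict.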



uCarp/Heartbeat



Make the IP addresses published in round-robin DNS virtual (VIPs), and have a VIP reassigned from a downed server to a surviving one when a server goes down. For instance, server1's VIP is 192.168.0.101 and server2's VIP is 192.168.0.102. If server1 goes down, then 192.168.0.101 becomes an additional IP on server2. This has two issues:





  • First, to my knowledge, uCarp/Heartbeat monitor their peers only
    for inaccessibility (for instance, whether the peer can be pinged)
    and take over the IP of a downed peer when that check fails. This
    is an issue because there are more reasons a web server may fail
    to serve requests than just being unreachable on the network:
    Apache may have crashed, a config error may exist, and so on. I
    would want the failover criterion to be "the server isn't serving
    pages as required" rather than "the server isn't pingable", and I
    don't think I can define that in uCarp/Heartbeat.



  • Second, this doesn't work across data centers, because each data
    center has its own block of IP addresses; I can't have a virtual
    IP float between data centers. Working across data centers (yes,
    my distributed file system and database cluster are available
    across data centers) isn't a requirement, but it would be a nice
    plus.
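For reference, the uCarp side of this scheme typically looks like the invocation below (a sketch; the interface, addresses, and password are placeholders). The up/down hooks only add or remove the VIP, which illustrates the first complaint above: the failover criterion is host reachability, not whether Apache is actually serving pages.

```
# On server1 (placeholder interface, addresses, and password)
ucarp --interface=eth0 --srcip=192.168.0.1 --vhid=1 --pass=secret \
      --addr=192.168.0.101 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

# /etc/vip-up.sh would contain something like:
#   ip addr add 192.168.0.101/24 dev eth0
# and /etc/vip-down.sh the matching:
#   ip addr del 192.168.0.101/24 dev eth0
```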




Question




So, any thoughts on how to deal with this? Basically, the holy grail of high availability: no single point of failure (in a server, the load balancer, or the data center), and virtually no downtime in the event of a switchover.


Answer



When I want HA and load sharing, I use keepalived configured with two VIPs. By default, VIP1 is assigned to server1 and VIP2 to server2. If either server goes down, the other takes over both VIPs.



Keepalived takes care of HA by watching its peer. If a server becomes unreachable or a tracked interface goes down, it transitions to the FAULT state and its VIP is taken over by the other server. To monitor the service itself (not just network reachability), you can use the track_script option.
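As a concrete sketch, server1's keepalived.conf might look like the following; the interface name, addresses, router IDs, and password are placeholders, and server2 would mirror this with the MASTER/BACKUP states and priorities swapped:

```
# /etc/keepalived/keepalived.conf on server1 (sketch; eth0, the
# addresses, and the password are placeholders)
vrrp_script chk_apache {
    script "/usr/bin/curl -sf http://127.0.0.1/"  # fails if Apache isn't serving pages
    interval 2
    fall 2
    rise 2
}

vrrp_instance VIP1 {
    state MASTER              # server2 uses BACKUP here
    interface eth0
    virtual_router_id 51
    priority 150              # server2 uses a lower priority, e.g. 100
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.0.101/24
    }
    track_script {
        chk_apache
    }
}

vrrp_instance VIP2 {
    state BACKUP              # server2 is MASTER for this VIP
    interface eth0
    virtual_router_id 52
    priority 100
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.0.102/24
    }
    track_script {
        chk_apache
    }
}
```

Because chk_apache actually fetches a page, a failed check puts the instance into FAULT and the peer takes over the VIP, so failover triggers when Apache stops serving, not merely when the host stops answering pings.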



If you want to add another cluster in a second data center, you can add two more servers there with the same configuration. You can then load-share traffic between data centers using round-robin DNS over the VIPs; no dynamic DNS updates are required in this case.
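The DNS side can then stay completely static: each data center's VIP pair is published as ordinary A records. A hypothetical zone fragment (placeholder names and documentation addresses) would look like this:

```
; one A record per VIP, across both data centers, with a low TTL
www  60  IN  A  203.0.113.101   ; DC1, VIP1
www  60  IN  A  203.0.113.102   ; DC1, VIP2
www  60  IN  A  198.51.100.101  ; DC2, VIP1
www  60  IN  A  198.51.100.102  ; DC2, VIP2
```

Within a data center keepalived keeps every published VIP answered, so a downed server no longer means a fraction of requests going unanswered; losing an entire data center still leaves its records in DNS, which is the remaining gap.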

