
High Availability with 3 servers: To virtualise or not?



We're changing hosts for our SaaS app (IIS + MSSQL) and have an opportunity to redesign the infrastructure. We can either stick with what we have (which works well) or virtualise with vSphere.



Current:




2x Web/DB Servers
Each has IIS and MSSQL installed. Windows Network Load Balancing distributes traffic between the two nodes via a virtual IP address, and MSSQL mirroring with automatic failover protects the DB.



1x MSSQL Witness Server (small VM)



If one server fails, NLB reroutes traffic to the other node and MSSQL automatically fails over. There's maybe 40 seconds of downtime while NLB redirects.
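For context, the automatic DB failover here is database mirroring in high-safety (synchronous) mode, with the witness breaking ties. A minimal T-SQL sketch of that configuration is below; the database name, host names and port are hypothetical, and it assumes the mirroring endpoints already exist on all three instances and that the database has been restored on the mirror WITH NORECOVERY.

    -- Hypothetical names: AppDb, node1/node2 (the Web/DB servers), witness1 (the witness VM).
    -- Assumes DATABASE_MIRRORING endpoints already exist on all three instances and
    -- AppDb has been backed up on the principal and restored on the mirror WITH NORECOVERY.

    -- On the mirror (node2): point it at the principal.
    ALTER DATABASE AppDb SET PARTNER = 'TCP://node1.example.local:5022';

    -- On the principal (node1): complete the partnership and add the witness.
    ALTER DATABASE AppDb SET PARTNER = 'TCP://node2.example.local:5022';
    ALTER DATABASE AppDb SET WITNESS = 'TCP://witness1.example.local:5022';

    -- Automatic failover requires high-safety (synchronous) mode.
    ALTER DATABASE AppDb SET SAFETY FULL;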



Possible:



2x vSphere Hosts





  • Firewall VM – 1 vCPU, 512MB RAM, 20GB HDD

  • Web Server VM – 1 vCPU, 2GB RAM, 50GB HDD

  • DB Server VM – 2 vCPU, 4GB RAM, 100GB HDD



1x CentOS Linux SAN (mounted as NFS shares)



My main concern is that there won't be enough resources for the DB and web VMs. Currently the Web/DB server makes full use of its node and only has to share a node if one fails. And what if the SAN fails? I was advised that the VMs' disks would reside on the hosts themselves, with the SAN acting as a redundant store. I presume this solution uses VMware High Availability, and data loss for the DB is unacceptable. Should there instead be 2x DB VMs with MSSQL mirroring set up, running on different host nodes?
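On the data-loss point specifically: with mirroring in high-safety mode a transaction only commits once the mirror has hardened it, so an automatic failover between two DB VMs should not lose committed data. A quick check that the pair really is running synchronously might look like this (the database name 'AppDb' is again hypothetical):

    -- Run on either partner; AppDb is a hypothetical database name.
    SELECT  d.name,
            m.mirroring_role_desc,           -- PRINCIPAL or MIRROR
            m.mirroring_state_desc,          -- expect SYNCHRONIZED
            m.mirroring_safety_level_desc,   -- expect FULL for automatic failover
            m.mirroring_witness_state_desc   -- expect CONNECTED
    FROM sys.database_mirroring AS m
    JOIN sys.databases AS d ON d.database_id = m.database_id
    WHERE d.name = 'AppDb';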




EDIT: The pros of virtualisation are the ability to clone machines, move easily to new hardware, and separate out the DB/Web servers. Any comments on this?



Any help would be GREATLY appreciated!


Answer



With vSphere, the SAN becomes a single point of failure (a theoretical one, because good SANs have built-in redundancy); but you need to place the VM disks there if you want to be able to move the VMs between hosts (this cannot be done with local storage on the hosts).



Also, your current solution protects you from problems inside the servers: should, for example, the OS of one of them become damaged, the other server would remain online; if instead your only DB VM had a problem, you would simply lose it.



I would suggest combining both solutions: build a virtualization environment with the two hosts, and then place redundant virtual machines inside them so that faults at the OS/application level can be handled. But if your hardware resources are limited and can't handle that, then just stick with the current solution.
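If you do run redundant DB VMs on separate hosts, it is worth rehearsing the failover so the OS/application-level redundancy is actually proven. A hypothetical drill against the mirrored 'AppDb' database (name assumed, as above) could be:

    -- Run on the current principal while the pair is SYNCHRONIZED:
    -- swaps the principal and mirror roles without losing committed transactions.
    ALTER DATABASE AppDb SET PARTNER FAILOVER;

    -- Afterwards, confirm the roles swapped and the pair resynchronised.
    SELECT mirroring_role_desc, mirroring_state_desc
    FROM sys.database_mirroring
    WHERE database_id = DB_ID('AppDb');

    -- (FORCE_SERVICE_ALLOW_DATA_LOSS also exists, but only as a disaster-recovery
    -- last resort on the mirror; as the name says, it can lose transactions.)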


