
routing - Amazon AWS: SSL in our testing environment



My company runs a business web service on AWS that uses a wildcard SSL certificate (valid for *.example.com). Multiple partners have subdomains and need SSL (https://partner1.example.com, https://partner2.example.com, etc.), and we serve either HTML or JSON responses to them.



Our production site has a web-facing ELB with our SSL certificate installed. The ELB terminates SSL and balances traffic across a cluster of production server instances within our VPC. Our web app knows how to serve the correct partner HTML/JSON based on the subdomain name.
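Roughly, the app does something like this (a minimal Flask-style sketch with illustrative names, not our actual code):

```python
# Hypothetical sketch: the ELB terminates SSL and forwards plain HTTP,
# but the Host header still carries the partner subdomain.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/")
def partner_home():
    subdomain = request.host.split(".")[0]  # e.g. "partner1"
    return jsonify({"partner": subdomain})  # or render that partner's HTML
```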




I want to replicate this for our test environments (e.g. QA, Staging, Demo) as simply as possible. But test and production can't share server instances; I need to know that I won't take down production if I mess something up in a test environment.



Ideally, the same ELB that handles production traffic could somehow route traffic to my test servers, perhaps if they used a known IP address or DNS subdomain? Am I correct that this isn't possible?



I suppose I could set up an ELB with the SSL cert for each test environment, each "balancing" to one server, but this seems overly complicated, and I would guess expensive too.



All instances, production and test, run within our VPC. But currently only the production cluster sits behind an SSL-enabled ELB, which terminates SSL and passes plain HTTP requests to the instances themselves.



The test servers accept HTTP. I have public elastic IPs and DNS names set up (staging.example.com, qa.example.com, demo.example.com) ... but they are not protected by SSL.




There's almost no critical or customer data on test server instances, and they are secured and patched properly as any web-facing server should be (I hope!!). Still, I would like SSL not only for additional security, but also to have my test environment match my production environment as closely as possible.



Except for our staging server, the test environments share a single instance (e.g. demo and qa on the same server) using Apache named virtual host configurations, roughly as sketched below. So, for test: lots of sites, but only a few servers.
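(Hypothetical paths and minimal directives; the real configs carry more.)

```apache
# One instance, many sites, selected by Host header via named virtual hosts.
NameVirtualHost *:80   # required on Apache 2.2; implicit on 2.4

<VirtualHost *:80>
    ServerName qa.example.com
    DocumentRoot /var/www/qa
</VirtualHost>

<VirtualHost *:80>
    ServerName demo.example.com
    DocumentRoot /var/www/demo
</VirtualHost>
```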



Is there a way to keep my wildcard SSL certificate installed only on my load balancer (or perhaps one more), or some other configuration that lets me detect and route each HTTP request to the right test server, based on IP, domain name, or even an HTTP header?



I know this is a complicated question. I understand AWS, security, and networking pretty well, but I am still trying to figure out routing and even internal DNS within the VPC environment, so I may be ignorant of my options.


Answer



Honestly? I'd say you're overthinking this. Just use multiple ELBs and put the *.example.com cert on all of them; an ELB can't route by Host header anyway, it only distributes traffic across its registered instances. This isn't really cost prohibitive: in your entire stack, the ELB is probably going to be one of the cheapest components. It also lets you spin up stacks from the same CloudFormation template or provisioning system, and you can use the cert on as many ELBs as you'd like.
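As a concrete sketch (boto3 against the classic ELB API; every name, ID, and ARN below is a placeholder), each test environment is just one more small ELB reusing the same wildcard cert:

```python
# Sketch only: one small ELB per test environment, terminating SSL with
# the shared wildcard cert and forwarding plain HTTP to the instance.
import boto3

elb = boto3.client("elb")  # classic ELB API

resp = elb.create_load_balancer(
    LoadBalancerName="staging-elb",
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",  # plain HTTP behind the balancer
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/star-example-com",
    }],
    Subnets=["subnet-aaaa1111"],
    SecurityGroups=["sg-bbbb2222"],
)

elb.register_instances_with_load_balancer(
    LoadBalancerName="staging-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)

# Then point staging.example.com at the balancer with a CNAME.
print(resp["DNSName"])
```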




We faced a problem very similar to yours and did exactly what I describe above. We ended up saving money, because the ELB just isn't a significant driver of our monthly cost, and it would have taken far more time in salary to hack up a single-ELB solution, even if one were possible. With ELBs you pay mostly for what you use, so QA and demo environments just aren't going to come anywhere near the cost of production load. For example, 100GB of traffic in a month runs about $18. That's like three Starbucks coffees.
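To make the arithmetic explicit (assuming the classic ELB rates of that era in us-east-1, roughly $0.025/hour plus $0.008/GB processed; check current pricing):

```python
# Back-of-envelope monthly ELB cost at the approximate classic rates.
hourly_rate = 0.025   # $/hour (assumed us-east-1 rate of the time)
per_gb_rate = 0.008   # $/GB processed (assumed)
hours = 720           # ~ 30-day month
monthly = hourly_rate * hours + per_gb_rate * 100  # 100 GB of traffic
print(f"~${monthly:.2f}/month")  # ~$18.80
```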



One added benefit: it's also way easier to debug. No need to worry about weird traffic-routing rules or the like.

