
Multiple SSL certificates on NGINX with shared settings?

I have an NGINX development server. I have a large number of configuration directives which drive various features of the server.

I want to be able to access the server via SSL. The problem is that I may access the server from different domain names. For example, inside my LAN I might use 192.168.1.100, but on the Internet (via NAT forwarding) I'd use my home domain name, or in some specific instances the server's external IP address.




Since SSL depends on the hostname the client requested, I want to be able to generate and serve up multiple SSL certificates based on how the server is being accessed. For example, one certificate's CN would be 192.168.1.100, another's would be www.example.com, and yet another's might be 12.34.56.78.



I think it may be possible to accomplish this by duplicating server blocks like this:



server {
    listen 443 ssl;
    server_name 192.168.1.100;
    ssl_certificate     /etc/nginx/192.168.1.100.crt;
    ssl_certificate_key /etc/nginx/192.168.1.100.pem;

    location / {
        root  /var/www/root;
        index index.html index.htm;
    }
}
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/www.example.com.crt;
    ssl_certificate_key /etc/nginx/www.example.com.pem;

    location / {
        root  /var/www/root;
        index index.html index.htm;
    }
}
...


The problem is that I have several (more than 10) location blocks inside my server configuration because I am testing multiple configurations, environments and web applications on the same server. Most of these locations include either a FastCGI pass-through and/or an alias or rewrite directive.

Duplicating all of the location blocks is not only tedious but could lead to inconsistencies if I forget to update every single one.



In addition, I also plan to eventually use this environment with subdomains, each with different location blocks. So now we end up with something like this:




  • www.example.com uses location parameter set 1
  • 12.34.56.78 uses location parameter set 1
  • test.example.com uses location parameter set 2
  • 192.168.1.100 uses location parameter set 2




As long as I don't use SSL, this isn't an issue, since the server will always serve out of the "default" server block. But SSL seems to require a separate server block for each domain name in order to serve the unique certificates...
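
For context, my current non-SSL setup is roughly a single catch-all server block along these lines (a sketch from memory, so the exact listen flags may differ):

server {
    listen 80 default_server;
    server_name _;

    location / {
        root  /var/www/root;
        index index.html index.htm;
    }

    # ...plus the ten-odd app locations (fastcgi_pass, alias, rewrite),
    # each defined exactly once
}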



Ideally, what I'd like is some kind of "named grouping/include" where I could write all the location blocks in a separate section and then include them within the server blocks. My "ideal" solution, if it existed:



config config1 {
    location / {
        root  /var/www/root;
        index index.html index.htm;
    }

    location /testapp1 {
        include fastcgi_params;
        fastcgi_split_path_info ^(/testapp1)(.*)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/run/testapp1.sock;
    }
}

config config2 {
    location / {
        root  /var/www/root2;
        index index.html index.htm;
    }

    location /testapp2 {
        include fastcgi_params;
        fastcgi_split_path_info ^(/testapp2)(.*)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/run/testapp2.sock;
    }
}


server {
    listen 443 ssl;
    server_name 192.168.1.100;
    ssl_certificate     /etc/nginx/192.168.1.100.crt;
    ssl_certificate_key /etc/nginx/192.168.1.100.pem;
    include config config1;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/www.example.com.crt;
    ssl_certificate_key /etc/nginx/www.example.com.pem;
    include config config1;
}

server {
    listen 443 ssl;
    server_name test.example.com;
    ssl_certificate     /etc/nginx/test.example.com.crt;
    ssl_certificate_key /etc/nginx/test.example.com.pem;
    include config config2;
}
...
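
The closest real thing I can find is nginx's file-level include directive, which accepts an arbitrary path, so the shared location blocks could at least live in their own files and be pulled into each server block. A rough sketch of what I mean (the /etc/nginx/includes/ layout is just an assumption, not something I have set up):

# /etc/nginx/includes/config1.conf -- shared locations, written once
location / {
    root  /var/www/root;
    index index.html index.htm;
}

location /testapp1 {
    include fastcgi_params;
    fastcgi_split_path_info ^(/testapp1)(.*)$;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/run/testapp1.sock;
}

# nginx.conf -- each SSL server block keeps only its own listen,
# server_name and certificate lines and includes the shared file
server {
    listen 443 ssl;
    server_name 192.168.1.100;
    ssl_certificate     /etc/nginx/192.168.1.100.crt;
    ssl_certificate_key /etc/nginx/192.168.1.100.pem;
    include /etc/nginx/includes/config1.conf;
}

That still leaves one server block per certificate, but the location blocks would only be defined in one place. Is a file-based include like this the intended approach, or is there a cleaner way to get a real named grouping?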
