
nginx domains using SSL cert listening but inaccessible



I have multiple domains running under nginx, several of which use SSL certs. All domains use essentially the same config, substituting names of course, except that the HTTPS-enabled domains have the SSL settings specified. Between those two domains the SSL config is also identical apart from the file names for keys and such. Each website also runs on its own dedicated IP (every one of them).
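For context, each non-SSL site uses a block along these lines (a minimal sketch; the IP, domain, and root path here are placeholders rather than my actual values):

server {
    # each site listens on its own dedicated IP
    listen 192.0.2.10:80;
    server_name example.com;

    location / {
        root  /srv/example.com/www;
        index index.html index.htm index.php;
    }
}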



All my non-SSL sites are working just fine; I can access them without any problems. All my SSL sites get a 521 error from CloudFlare. (Full SSL (Strict) is on, just FYI.)




One of the domains I had set up previously had been working just fine. Even if I remove the other SSL-enabled domain it still doesn't work now. The only config change I made was adding a new domain that also uses an SSL cert. When I test the config with nginx it says everything is fine. When I check netstat I can see those IPs listening on 443. I don't see any errors in /var/log/syslog or in nginx's access and error logs.



Main nginx.conf



user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}


Example SSL site conf in /etc/nginx/conf.d



server {
    listen [IPv6 address]:443;
    server_name domain;
    ssl on;

    ssl_certificate     /etc/nginx/domain.crt;
    ssl_certificate_key /etc/nginx/domain.key;

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 5m;

    ssl_protocols TLSv1.2;

    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    add_header Strict-Transport-Security max-age=31536000;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        root  /srv/domain/www;
        index index.html index.htm index.php;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # pass the PHP scripts to the FastCGI server listening on unix:/run/domain.sock
    #
    location ~ \.php$ {
        # root html;
        fastcgi_pass  unix:/run/domain.sock;
        fastcgi_index index.php;

        fastcgi_param SCRIPT_FILENAME /srv/domain/www$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
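(For what it's worth, depending on the nginx version in use, ssl on; is deprecated in favour of the ssl parameter on the listen directive; a sketch of the equivalent form, using the same placeholder names:)

server {
    # on newer nginx the "ssl" flag on listen replaces the deprecated "ssl on;"
    listen [IPv6 address]:443 ssl;
    server_name domain;

    ssl_certificate     /etc/nginx/domain.crt;
    ssl_certificate_key /etc/nginx/domain.key;
}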

Answer



Ok, so here's what the problem ended up being. I assumed that if you enabled Full SSL (Strict), CloudFlare would always connect to your website over HTTPS. Obviously that's not how it works. If a visitor hits the site over HTTP, CloudFlare will still connect to your server over HTTP.



All I had to do was add a page rule on each domain to force a redirect to HTTPS.
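An equivalent fix at the origin, if you'd rather not rely on a page rule, would be a catch-all port-80 server that bounces everything to HTTPS; roughly something like this (a sketch, with the listen address and server_name as placeholders):

server {
    # catch the plain-HTTP requests CloudFlare passes through and redirect them to HTTPS
    listen [IPv6 address]:80;
    server_name domain;

    return 301 https://$host$request_uri;
}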



So it wasn't nginx at all, it was just me being dumb.

