
Showing posts from August, 2018

ssl certificate mismatch issue and nginx

I have a website with an SSL cert. Let's say it's called example.com. I have a rewrite rule set to redirect any request for http://example.com to https://example.com/, and this works great. But every now and then someone types https://www.example.com/ and gets an SSL certificate mismatch, since the cert is only for example.com. What's the best way to do the redirect first, before the server presents the SSL certificate to the web browser?
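For context, the TLS handshake (and thus the certificate) happens before nginx can send any redirect, so redirect-first isn't possible; the usual fix is a certificate that also covers www.example.com (a SAN or wildcard cert) plus a dedicated server block. A minimal sketch, assuming such a cert exists at the hypothetical paths shown:

    # hypothetical nginx server block; assumes a cert whose SANs include www.example.com
    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/ssl/www.example.com.pem;
        ssl_certificate_key /etc/ssl/www.example.com.key;
        return 301 https://example.com$request_uri;
    }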

Sharing two SSL certificates as wildcard and root domains in nginx (same server)

I have purchased a wildcard certificate and a single certificate for my domain structure, which is: app.example.com => single, *.app.example.com => wildcard. Both of these routes should point to the same project directory on the same server. Note: RapidSSL support said I had to purchase two certificates because, unlike a classic domain such as example.com, mine is app.example.com, so the wildcard won't cover the three-label root. Now I need to set up the two different certificates in nginx. My default one works for the wildcard, but not for 'app.example.com', as its SSL certificate is not included.

    server {
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;
        ssl on;
        ssl_certificate /etc/ssl/ssl_certificate.cer;   # this is my wildcard cert
        ssl_certificate_key /etc/ssl/private.key;
        root /var/www/example/public;
        index index.php index.html index.htm;
        server_name .app.example.com;
        location / {
            try_files $uri $uri/ /index.php?$query_string;
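The usual pattern is one server block per certificate and letting SNI pick the right one. A sketch, with hypothetical paths for the single cert:

    # wildcard cert serves the subdomains
    server {
        listen 443 ssl default_server;
        server_name *.app.example.com;
        ssl_certificate     /etc/ssl/ssl_certificate.cer;
        ssl_certificate_key /etc/ssl/private.key;
        root /var/www/example/public;
    }

    # single cert serves app.example.com itself (hypothetical file names)
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/ssl/app.example.com.cer;
        ssl_certificate_key /etc/ssl/app.example.com.key;
        root /var/www/example/public;
    }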

ssh - Is this server hacked or just login attempts ? See log

Can someone tell me what this means? I ran lastb to see recent logins and I see some strange entries from China (the server is in the EU, and so am I). I was wondering whether these are failed login attempts or successful logins. They seem to be very old; I usually lock port 22 down to my own IPs, but I think the port was open for a while, and the last entry is from July.

    root     ssh:notty    222.92.89.xx    Sat Jul  9 12:26 - 12:26  (00:00)
    root     ssh:notty    222.92.89.xx    Sat Jul  9 12:04 - 12:04  (00:00)
    oracle   ssh:notty    222.92.89.xx    Sat Jul  9 11:43 - 11:43  (00:00)
    gary     ssh:notty    222.92.89.xx    Sat Jul  9 11:22 - 11:22  (00:00)
    root     ssh:notty    222.92.89.xx    Sat Jul  9 11:01 - 11:01  (00:00)
    gt05     ssh:notty    222.92.89.xx    Sat Jul  9 10:40 - 10:40  (00:00)
    admin    ssh:notty    222.92.89.xx    Sat Jul  9 10:18 - 10:18  (00:00)

Answer lastb only shows login failures. Use last to see successful logins.
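As a quick way to compare the two (standard util-linux tools):

    lastb -n 20    # recent failed logins, read from /var/log/btmp
    last -n 20     # recent successful logins, read from /var/log/wtmp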

Is this round-about migration to SBS2011 from 2008 feasible?

I run my own consulting software shop, and I'm currently running SBS 2008 Premium. I'm not utilizing the second server at the moment, but when I finally get to SBS 2011 Premium I'd like to. The server is already maxed at 4 GB of RAM, and I have six drives: one for C: and the rest part of a RAID 5, which is where all user data is stored (SQL dbs, Exchange data, file shares). I'd like to get some new hardware that will eventually run SBS 2011. My plan is to get the new hardware, which will include disks for a RAID 5 and hardware virtualization support. I'll then install the second Windows 2008 Server license that comes with SBS 2008. I'd like to then move the current SBS server from physical hardware to a virtual machine on the new secondary server. This is one area I'm not sure of. Is it possible, and also what about the data on the RAID disks? Will that pose a problem? Later, when I have enough to buy SBS 2011, I'd upgrade the secondary 2008 serve

performance - Load testing nginx inside AWS

I'm trying to load test nginx running on AWS. I need to try to optimise it to handle 1 Gbps of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it using ab with -i (for HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000 and -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s over the network. Are there any tools or approaches I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it throws away responses? I'm hitting a very small (40 byte) static asset, and have peaked at handling 50K concurrent connections and 25k reqs/s when just using a single load generator machine.
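One thing worth trying (my suggestion, not from the post): ab is a single-process generator and often tops out well before the NIC does, so a multithreaded tool such as wrk usually gets much further on the same hardware. A sketch with a hypothetical target URL:

    # 8 threads, 10k connections, 60 seconds against the same small asset
    wrk -t8 -c10000 -d60s --latency http://target.example/asset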

http - Apache prefork module. Processes not being forked under heavy load

I have an Apache prefork HTTP server running on a Linux machine. The machine has 8 GB of RAM. I have the following in my /etc/httpd/conf/httpd.conf:

    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         512
    MaxClients          512
    MaxRequestsPerChild 4000

The problem is that no more child processes are forked after 256, and requests get queued. I can see the number of child processes stuck at 256 under heavy load. The average memory of an httpd process is about 3.69 MB.
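Two common causes worth ruling out (general Apache behaviour, not from the post): prefork's compiled-in default ServerLimit is 256, and a changed ServerLimit only takes effect on a full stop/start, not a graceful restart. Also, for 2.2-era configs these directives normally live inside the prefork IfModule block, so check they are actually being applied:

    <IfModule prefork.c>
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         512
        MaxClients          512
        MaxRequestsPerChild 4000
    </IfModule>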

raid - What will happen if a URE is encountered?

About HDD UREs, I know these points: For some reason, when a hard disk reads a sector whose FEC (forward error correction) data cannot correct the errors, we encounter a URE. The rate at which we encounter a URE is very low, but it still exists. When reconstructing a RAID 5 array, it sometimes happens and the reconstruction stops. But I still have some questions: If there is a single disk, what will happen? Does the hardware/file system report an error and we lose a file, or do we get the file with wrong data? Will rewriting some data to that URE sector turn the sector back to normal, or must we use utilities provided by the HDD manufacturer to remap it to a reserve sector? If it happens when we mirror/re-mirror a RAID 1/10 array, what will the RAID controller do? Stop the mirroring, or just copy the incorrect data to the other disk? Thanks for the answers; questions 1 and 2 are solved. But for the 3rd question, I mean if we encounter a URE when converting a single
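For a sense of scale (a back-of-envelope figure under an assumed spec, not from the post): consumer drives are often rated at one URE per 10^14 bits read, so a RAID 5 rebuild that must read 12 TB from the surviving disks expects on the order of one URE:

    expected UREs ≈ bits read / 10^14
                  = (12 × 10^12 bytes × 8 bits/byte) / 10^14
                  ≈ 0.96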

tftp - Error loading my BCD when trying to PXE boot to Windows PE

I'm trying to set up an Ubuntu server with pxelinux so I can boot Windows PE using PXE. On the client machine I can see that pxelinux itself works, but the next screen is an error loading the BCD [screenshot]. Here is what I did: Step 1: Installed tftpd-hpa and dhcp3 on the server. The server is a fresh Ubuntu Server x86 virtual machine. Its static IP is 192.168.26.0. A Samba server is installed. dhcpd.conf contains:

    subnet 192.168.26.0 netmask 255.255.255.0 {
        range 192.168.26.10 192.168.26.40;
        filename "pxelinux.0";
        next-server 192.168.26.0;
    }

I have verified that TFTP and DHCP work. Step 2: Downloaded pxelinux.0 from the ubuntu repository, put it in the tftpboot directory and created pxelinux.cfg/default with these contents:

    DEFAULT winpe
    PROMPT 0
    TIMEOUT 300
    MENU TITLE PXE
    LABEL winpe
        MENU LABEL Windows PE
        KERNEL Boot/pxeboot.0

I've tried using Wdsnbp.0 (-> Wdsnbp.com) instead of pxeboot.0 (-> pxeboot.com); it made no difference. I want to make a real menu with Ubuntu options later.
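One detail worth ruling out (my observation, not from the post): with netmask 255.255.255.0, 192.168.26.0 is the network address rather than a usable host address, so next-server pointing at it is suspect. A sketch with a hypothetical host address for the TFTP server:

    subnet 192.168.26.0 netmask 255.255.255.0 {
        range 192.168.26.10 192.168.26.40;
        filename "pxelinux.0";
        next-server 192.168.26.2;    # hypothetical: the TFTP server's real host IP
    }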

exchange - New email domain - SMTP 550 5.7.1 Unable to relay

I have just added an MX record for mail.example.com. Previously there was an MX record for example.com. We have a default email address domain, @example.com, and it is working fine for send and receive. We need to get some admin-level email addresses working that end with @mail.example.com, for example: webmaster@mail.example.com, postmaster@mail.example.com, admin@mail.example.com, administrator@mail.example.com. So we added the MX record pointing to the same server. I tested sending email to webmaster@mail.example.com and postmaster@mail.example.com from inside the domain and I am able to receive emails at them, but from outside I am not able to send email to those addresses. It says: Message not delivered. Your message couldn't be delivered to webmaster@mail.example.com because the remote server is misconfigured. See technical details below for more information. The response from the remote server was: 550 5.7.1 Unable to relay. Now, I see there are some suggestions to create a new receive
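A 550 5.7.1 for outside senders usually means Exchange does not treat mail.example.com as one of its own domains, so it refuses to relay for it. A sketch of the usual fix, assuming on-premises Exchange and the Exchange Management Shell:

    # register mail.example.com as a domain this server accepts mail for
    New-AcceptedDomain -Name "mail.example.com" -DomainName "mail.example.com" -DomainType Authoritative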

networking - How to make Apache output packets through a certain network interface when connected to VPN?

I have an Apache server that works perfectly until I connect to a VPN, and then all connections to the server time out. To my understanding, the issue is that tun0 becomes the default output interface, so Apache's replies are sent out the wrong interface. I tried to fix it using control groups, marking packets going out from Apache and redirecting them through eth0 as described in this SU answer, but it doesn't work anymore after I upgraded my Ubuntu OS to version 16.04. This is my network diagram: [network diagram image]. And here are my network details:

    me@mypc:~$ ip route list
    0.0.0.0/1 via 10.132.1.5 dev tun0
    default via 192.168.0.1 dev eth0 proto static metric 100
    10.132.1.1 via 10.132.1.5 dev tun0
    10.132.1.5 dev tun0 proto kernel scope link src 10.132.1.6
    123.4.5.6 via 192.168.0.1 dev eth0
    234.5.6.7 via 192.168.0.1 dev eth0
    128.0.0.0/1 via 10.132.1.5 dev tun0
    169.254.0.0/16 dev eth0 scope link metric 1000
    192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.6 metric 100
    me@mypc:~$
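A cgroup-free alternative (a sketch on my part, not the linked answer) is source-based policy routing, so anything sourced from the LAN address always leaves via eth0 regardless of the VPN's default route:

    # table number 100 is arbitrary; addresses taken from the routes above
    ip rule add from 192.168.0.6 table 100
    ip route add default via 192.168.0.1 dev eth0 table 100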

raid - Slow writes Dell Server R720

I have a Dell R720 server with a PERC H310 hardware RAID 5 and 4 x Seagate Cheetah 15K.7 ST3300657SS 300 GB hard drives, running Windows Server 2008 R2. The reads look good, but the writes are painfully slow. Running the ATTO Disk Benchmark, I'm seeing read speeds of >500 MB/sec at transfer sizes from 128 KB to 8192 KB. For writes at the same transfer sizes, I'm seeing 20 to 22 MB/sec, about 10 times too slow. The same machine has a new Samsung SSD; writes to the SSD are >450 MB/sec. Besides checking the drivers, what might cause such bad performance? Where are good places to look? What additional tests are good to run? I was the only user during these tests, and no resource-hungry processes were running.
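Worth noting (general knowledge about this controller, not from the post): the PERC H310 has no onboard write cache, so RAID 5 writes fall back to synchronous read-modify-write cycles, and figures around 20 MB/sec are commonly reported for it. One way to inspect the cache policy, assuming MegaCli is installed:

    MegaCli -LDInfo -LAll -aAll    # look for the "Current Cache Policy" line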

linux - Should I run my own DNS recursor or local cache daemon?

I am on AWS EC2. As my server is going to make a lot of queries for third-party domains, I am considering the following options: install nscd on all servers; use the default EC2 name recursor; install my own name recursor; or just use 8.8.8.8. I am hesitant to install a centralized recursor, as it is a single point of failure and subject to attacks like: http://support.godaddy.com/help/article/1184/what-risks-are-associated-with-recursive-dns-queries Is it common nowadays that no one runs a name server supporting recursive DNS queries, as the above article suggests? In terms of security and performance, I am thinking of installing nscd; are there any drawbacks?
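A middle ground (a sketch under assumptions, not from the post) is a small caching resolver on each instance, such as unbound listening on loopback and forwarding to the VPC-provided resolver:

    # /etc/unbound/unbound.conf, minimal per-instance cache
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
    forward-zone:
        name: "."
        forward-addr: 169.254.169.253    # the Amazon-provided VPC resolver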

Debian server not booting after software RAID-1 array degraded

So I have a Debian 7 server with 3 hard drives. Its RAID-1 is basically configured this way: md0: sda1, sdb1 --> / (root); md1: sda5, sdc1 + sdb5 (spare) --> /data (sdc1 is on an SSD, and sda5 is marked 'writemostly'). Both sda and sdb have GRUB installed on them. When installing an extra network card, I messed up and unplugged sdc's data cable (note that sdc doesn't have GRUB or /, and should have nothing to do with booting). The system booted fine after that. I noticed my error, shut down the machine, and plugged sdc back in (while mdadm was rebuilding md1 on the spare). Now the system gives me either the dreaded GRUB shell or just a black screen with a blinking cursor, depending on which hard drive(s) I unplug, but no combination of hard drives gives me a successful boot. I also tried it with all 3 drives connected and telling the BIOS to boot from each of the boot drives manually. What I did in the end was to boot the Debian setup in rescue mode, assembled
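For anyone in the same spot, the usual repair once the arrays assemble in rescue mode is to reinstall the bootloader on both boot disks (a sketch, assuming a chroot into the assembled root and that sda/sdb are the GRUB disks):

    # inside the chroot
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub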

email - Adding an SPF record for a 3rd party, but don't have one for my own domain

We have a 3rd party service sending some email on our behalf. They are using our domain name in their outgoing emails. They have requested we configure an SPF record for them. We do not currently have an SPF record defined for our own domain, which is the same one the 3rd party is "spoofing". My concern is that if we add a record for a 3rd party without defining our own as well, mail originating from our servers could be rejected. Is my concern valid? Answer If you have no SPF record then receivers will generally fail safe and accept your email (although that's starting to change). As soon as you provide an SPF record you must include all legitimate mail senders, because otherwise the ones not listed could be treated as possible forgery sources. Strictly speaking, you can use ~all or ?all and avoid listing all your mail senders, but if you do that you won't get any benefit from the SPF record other than for testing that it's otherwise accurate.
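A sketch of a record covering both your own servers and the third party (the include name is hypothetical; use whatever the provider actually publishes):

    example.com.  IN TXT  "v=spf1 mx include:spf.thirdparty.example ~all"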

Linux vm in vmware "Error loading operating system"

I have a VM I cloned from a physical server using VMware's P2V converter, and after the clone the new VM won't load the OS. The error I get is "Error loading operating system". The cloned machine is a RHEL 3 32-bit server. I believe it's ESXi 5.5. I uploaded an Arch ISO to the VMware storage and booted with that. Once it started, lsblk shows the 3 partitions that were converted: sda1, 2 and 3 (/boot, swap and /). I was able to mount all of them and chroot into the root partition. parted -l shows the 3 partitions on /dev/sda, and the first partition has the boot flag. It seems like all of this is OK as far as Linux goes. I think if it were a Linux issue I would at least load GRUB and get an error about no drives, and it would drop to the rescue shell. I'm not sure if there is some VMware configuration I am missing or what I would need to check. What seems suspicious to me is that in the BIOS the only detected device in primary master is the CD-ROM when set to auto. When
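"Error loading operating system" comes from the MBR-level boot code rather than from GRUB itself, which after a P2V often points at a disk-geometry/MBR mismatch. One thing worth trying from the chroot (a sketch; RHEL 3 ships GRUB legacy, so the legacy shell is used):

    grub
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit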

linux - CRON starts skipping to the next minute

I am setting up per-minute crons in the crontab via a bash script, for load-testing purposes. There is no issue with the script executing; the crons get added and I can watch them being executed via /var/log/cron. The issue is this: the other day the script added 106 crons and they executed nicely, but today I reset the crontab from scratch and was only able to set up 85 crons. Then it starts to say

    crond[31243]: (root) INFO (Job execution of per-minute job scheduled for 08:32 delayed into subsequent minute 08:33. Skipping job run.)

and eventually all the crons get skipped and nothing is executed. My initial conclusion was that 106 is the maximum that can be set up on this server, but today it got reduced to 85. The server configuration wasn't changed; it's the same environment as when it was 106. Is this because of setting up crons too frequently, or something else? I'm new to cron and its workings. Please help.
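That INFO line is cronie reporting that a job could not be started within its scheduled minute, so the count you can reach depends on how fast jobs spawn, not on a fixed limit. One way to sidestep the per-minute start storm (a sketch, with a hypothetical job path) is a single entry that fans out itself:

    # one crontab entry instead of N; the jobs run concurrently in the background
    * * * * * for i in $(seq 1 100); do /path/to/job.sh "$i" & done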

Internal DNS Best Practice - should a slave resolve internet domain names?

I have inherited an internal DNS solution at my new company and I want to start improving its reliability! At the moment there is one master for the internal domain, which forwards external DNS lookups to our ISP. There is one slave which only seems to resolve internal requests. For improved resiliency, should I set the slave to forward internet lookups as well? Thanks for any help. Answer Yes, the slave should be the same as the master in all regards, except that it references the master as the feed for any zone changes. The purpose of a slave DNS server is exactly to continue doing what the master was doing if it fails.
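In BIND terms, that means giving the slave the same forwarding/recursion options as the master; only the zone stanza differs. A sketch with hypothetical names and addresses:

    options {
        recursion yes;
        forwarders { 203.0.113.53; };    # ISP resolver (hypothetical)
    };
    zone "corp.example" {
        type slave;
        masters { 192.0.2.10; };         # the internal master (hypothetical)
        file "slaves/corp.example";
    };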

amazon web services - AWS Elastic Load Balancer : white list only my IP

My objective: to make my AWS Elastic Load Balancer reachable only by traffic from my IP. What I have tried: created a security group in EC2 Security Groups; set an inbound rule that allows all traffic from my IP [all, all, all, /32]; assigned this ELB the newly created security group; attempted to hit the ELB from an IP outside my office. The result: all traffic, even from IPs other than mine, could still hit my ELB (and thus get through to my app servers). What am I doing wrong? How can I block inbound traffic to my ELB (and the EC2 instances behind it)?
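Two things worth checking (general ELB behaviour, not from the post): your own security group only attaches to an ELB inside a VPC (EC2-Classic ELBs use an Amazon-managed group you cannot edit), and the instances' security group should in turn only accept traffic from the ELB's group. A sketch of the ELB-side rule with hypothetical IDs and address:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 203.0.113.4/32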

nameserver - Vanity name servers with .no domains

I created vanity name servers, e.g. ns1.example.com and ns2.example.com, that are just masks pointed at Rackspace's free DNS service. In other words, ns1.example.com points to dns1.stabletransit.com and ns2.example.com points to dns2.stabletransit.com (stabletransit.com are Rackspace's nameservers). In general it works fine; I can point domains at ns1.example.com and they work correctly, redirected to dns1.stabletransit.com. However, for .no (Norway) domains I stumbled into the following error: The nameserver ns1.example.com is not correctly configured. It has the following NS records in the zone file for somedomain.no: dns1.stabletransit.com dns2.stabletransit.com This does not correspond with the nameservers you have entered, which are: ns1.example.com ns2.example.com The list you enter must be identical to the list of NS records returned by each nameserver on the list. For .no domains the nameservers must be configured correctly before the delegation can b
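The registry is comparing the delegation against the NS records inside the zone itself, so the zone served by Rackspace would need to list the vanity names. A sketch of the records it expects in somedomain.no's zone file:

    somedomain.no.    IN NS    ns1.example.com.
    somedomain.no.    IN NS    ns2.example.com.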

amazon ec2 - How to mitigate DDOS attacks on AWS?

I have a web application (NodeJS) and I plan to deploy it to AWS. To minimize cost it will run on a single EC2 instance. I'm worried, though, about what will happen if someone decides to bless me with a DDoS attack, and hence have a few questions. Now, I did quite a bit of research, but as my understanding is clearly lacking, I apologise if some of the questions are plain stupid: I want to avoid people flooding my site with layer 4 attacks. Would it be sufficient to set my security group to accept traffic only (in addition to SSH port 22) from: Type HTTP, Protocol TCP, Port Range 80? Would the above stop UDP floods and others from hitting my EC2 instance? Via the security group I would allow SSH connections to port 22 only from my static IP address. Would that keep attackers away from attacking port 22 completely? My EC2 instance will run Ubuntu. I want to avoid application-layer attacks (layer 7) and was planning to do it directly from my application, so somehow detect if a certain IP floods pa
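For the layer 7 part, an alternative to in-app detection (a sketch on my part, assuming you are willing to put nginx in front of Node) is nginx's built-in per-IP rate limiting:

    # 10 req/s per client IP with a small burst; zone name and port are hypothetical
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    server {
        location / {
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:3000;
        }
    }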

What type of SAS/SATA cables do Dell SAS6/iR and SAS5/iR controller cards use?

I'm considering buying a bunch of these to upgrade our servers at work (with SATA drives, not true SAS). We are out of ports to plug disks into, and these cards are decently supported and cheap. The one thing I can't seem to figure out is: what type of HBA-to-SATA cable/connector do these cards use? In the pictures they look like SFF-8484, but I'm not 100% sure, and after reading the specs, Googling around, and reading the manual, I haven't found out for sure. I'd rather not spend a lot of money on cables that don't work, so...does anyone have any experience with these SAS cards? What is their cable type? Cheers! Answer That is correct: the card's interface is SFF-8484. Depending on what your drive cage looks like, the other side of the cable needs to match. If you are doing this without hot-swap drives, you'll need to use 4-lane SAS breakout cables (e.g. SAS SFF-8484 to SFF-8482).

security - Switch the SSL provider after Heartbleed bug instead of revoking

I have a question regarding the Heartbleed problem and SSL certificates. About Heartbleed, many people say that admins should revoke their certificates and get new ones. I got my SSL certs from StartCom and, as you may know, they charge for revoking. I am very angry about that, but now my question(s): Is it possible to just switch from StartCom to another provider like Comodo, get new certs and change the certs on my server? Could there be any problems with the old certs if they are not revoked? Is it possible to "block" these old certs on my server (Ubuntu 12.04)? I don't think that my certs have been compromised, but this is a serious topic for me. Answer I got my SSL certs from StartCom and as you may know they charge for revoking. I am very angry about that
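On the first question: switching CAs is mechanically just generating a fresh key and CSR and installing whatever the new provider issues. A sketch, assuming OpenSSL and hypothetical file names:

    # new private key and CSR for the replacement certificate
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.com.key -out example.com.csr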