Posts

Showing posts from October, 2018

linux - Bind 9.2 server refuses to resolve CNAME from sub-zone

I've got an old master nameserver running Bind 9.2 and a newer slave running 9.8. Right now we've got a project going on where we're essentially splitting a cloud in two, and we're using sub-zones and CNAMEs to keep our services running smoothly. However, the cranky old 9.2 server doesn't seem to want to resolve the CNAMEs pointing into the sub-zone and returns "REFUSED: recursion requested but not available". The 9.8 server, on the other hand, serves the requests just fine. Disclaimer: I know these nameservers are horribly out of date, and even worse, the OS under the 9.2 box is way out of support as well, so I'm not likely to find a reputable package to upgrade it. The project immediately after this cloud split is rebuilding our DNS servers/services from scratch. How can I get the older server to resolve these CNAMEs properly?

dig results against NS1 [Bind 9.2]:

    # dig foo.domain.com @ns1.domain.com
    ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.3 <<>
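The "REFUSED: recursion requested but not available" answer typically means the old server is authoritative for the parent zone, has no working delegation for the sub-zone, and is not allowed to recurse for the querying client. A minimal sketch of what the delegation and recursion settings might look like; the sub-zone name, nameserver names and addresses below are placeholders, not taken from the post:

    ; in the domain.com zone on the old master: delegate the sub-zone, with glue
    sub.domain.com.      IN NS  ns1.sub.domain.com.
    sub.domain.com.      IN NS  ns2.sub.domain.com.
    ns1.sub.domain.com.  IN A   192.0.2.10
    ns2.sub.domain.com.  IN A   192.0.2.11

    // in named.conf on the old master: permit recursion for internal clients
    options {
        recursion yes;
        allow-recursion { 127.0.0.1; 192.0.2.0/24; };
    };

Following a CNAME into a zone the server is not authoritative for requires recursion, which would explain why the 9.8 slave answers while the locked-down 9.2 master refuses.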

apache 2.2 - Web Load testing a website

I have spent time trying to optimize my website but have never had a chance to test the maximum number of users it can take. I have been doing some reading and found that the best scenario would be to use cloud-based web load testing. The only catch is that it's ridiculously expensive. Is there any service that can be used to test this that is free and can simulate real browser users?
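For a free first pass, the Apache distribution's own ab (ApacheBench) and tools like siege can generate load from a single machine. They do not execute JavaScript or render pages the way a real browser does, but they are usually enough to find a rough concurrency ceiling. A minimal sketch, with http://www.example.com/ standing in for the real site:

    # 1,000 requests, 50 concurrent, with HTTP keep-alive
    ab -n 1000 -c 50 -k http://www.example.com/

    # 50 simulated users for 2 minutes with a short random think time
    siege -c 50 -t 2M -d 1 http://www.example.com/

Run these from a machine other than the web server itself, or the test just measures the server competing with itself for CPU.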

linux - /tmp used 100%, where are the files?

On a CentOS 6.3 server I noticed that /tmp no longer has free space to store files.

    [root@]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-lv_root   99G   11G   84G  12% /
    tmpfs                     16G     0   16G   0% /dev/shm
    /dev/sda1                194M   65M  120M  35% /boot
    /dev/mapper/vg0-lv_tmp    97M   92M  704K 100% /tmp
    /dev/mapper/vg1-lv0       50G  180M   47G   1% /mnt/ssd2

But there is nothing in /tmp at all:

    [root@]# ls -Sahl /tmp |more
    total 10K
    dr-xr-xr-x. 25 root root 4.0K Mar 16 04:29 ..
    drwxrwxrwt.  3 root root 3.0K Mar 16 03:32 .
    drwx------.  2 root root 1.0K Mar 16 04:28 mc-root

My question is: how can this be? What is using the space on the /tmp mount, and how can I clean it? Answer: Use lsof /tmp to see currently opened files there. If you delete a file while a piece of software still holds it open, you won't see it in the directory any more, but it will still have disk space assigned to it.
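To expand on the lsof suggestion: a file that has been deleted while a process still holds it open keeps its blocks allocated until the last file descriptor is closed, so it occupies space without appearing in ls. Standard lsof usage to find such files (nothing here is specific to this particular server):

    # open files on /tmp whose link count is already 0 (deleted but still held open)
    lsof +L1 /tmp

    # alternatively: everything open under /tmp, largest first
    lsof /tmp | awk 'NR>1 {print $7, $1, $2, $9}' | sort -rn | head

Restarting (or stopping) the process that holds the deleted file releases the space immediately.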

domain name system - DNS - wildcard vs. CNAME subdomains

Alright, I have to admit I'm confused about how DNS works. I've always just added things until they worked, and now it's time to learn how they actually work. One confusing thing to me is that there are sort of two places I can have records: I have an account with Rackspace Cloud Servers, and then there's the place I registered the domain, and both allow me to edit DNS records. Should I do everything in both places, is one better than the other, or am I missing the point? Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past); I just don't like the idea of adding a CNAME or A record every time I need a new subdomain. Then I read the Wikipedia article on wildcard DNS records, and it says: "The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used." Answer The first
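The wildcard versus per-name trade-off is easiest to see in a zone file. An illustrative snippet with placeholder names and the documentation address 203.0.113.10:

    ; wildcard: any name under example.com that has no record of its own resolves here
    *.example.com.     IN A      203.0.113.10

    ; explicit records: one entry per subdomain actually in use
    www.example.com.   IN A      203.0.113.10
    blog.example.com.  IN CNAME  www.example.com.

Note that a wildcard only matches names that do not otherwise exist in the zone, which is one of the RFC 1034 subtleties the quoted warning is about.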

storage - Our company has 100,000s+ photos, how to store and browse/find these efficiently?

We currently store our photos in a structure like this:

    folder\1\10000 - 19999.JPG|ORF|TIF (10 000 files)
    folder\2\20000 - 29999.JPG|ORF|TIF (10 000 files)
    etc...

They are stored on 4 different 2TB D-Link NASes attached and shared on our office network (\\nas1, \\nas2, and so on).

Problems:
1) When a client (Windows only, Vista and 7) browses, say, the \\nas1\folder\1\ folder, performance is quite poor: the file list takes a long time to generate in the Explorer window, even with icons turned off.
2) Initial access to the NAS itself is sometimes slow.

SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100 000+ files in one single folder shouldn't be a problem, but we don't dare go there now that we're seeing problems at the 10K level. All input greatly appreciated, /T
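If the files ever get re-organised, a common way to keep every directory small regardless of how many files arrive is to bucket them by a hash of the filename instead of by sequential ranges. A rough sketch only; the /photos paths are placeholders and not the poster's NAS layout:

    # spread files into 256 buckets (00..ff) based on a hash of the filename
    for f in /photos/incoming/*; do
        bucket=$(basename "$f" | md5sum | cut -c1-2)
        mkdir -p "/photos/archive/$bucket"
        mv "$f" "/photos/archive/$bucket/"
    done

This keeps directory listings fast, but it only helps if browsing goes through an index or search tool rather than people navigating folders by eye.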

HP ProLiant ML350 G6 powerup issue

My setup: HP ProLiant ML350 G6 (2x PSU, redundant). I present my issue in a few steps:
1. Power cables disconnected (no power coming to the server).
2. Re-connect the cables.
3. Press the power button -> I can see green lights, I can hear the fans spinning (it's actually quite loud) and the server powers up fine.
4. Now power the server down (but keep the power cables on the back as they are).
5. Press the power button -> I can see green lights, but there is no fan sound and the server will not power up at all. All seems fine, correct lights, but no beeps and no video output on the screen; it just stays black.
In order to power up the server again, I need to disconnect both power cables from the PSUs (on the back) and then re-connect them; upon pressing the power button we are back at step 3 and all is fine, until the next restart. I have tried the following: connect one power cable from mains and the other from the APC -> same result; connect both power cables from the APC -> same result; di

raid - Moving MegaRAID SAS 9240-8i to case with a backplane: anything to be scared of?

Currently we have a "homemade" ESXi server that has been put in a Supermicro SC842i-500B chassis; this is obviously suboptimal, given that we do have a RAID 10 setup (6 SATA disks, with a MegaRAID SAS 9240-8i RAID controller) but the chassis does not support hot-swappable disks. Finally, we got a more suitable case (Supermicro SC825TQ-R740LPB), which takes less space, should have better airflow and, most importantly, has hot-swappable disk support. Currently the disks are connected directly to the RAID controller through two SFF-8087 → 4xSATA cables; the new case introduces an extra layer - the backplane. Given that I have no experience with SAS, server-grade hardware and backplanes, I have some doubts: other backplanes I read about seem to connect to the RAID card directly through a single cable; here, however, the backplane has just 8 separate SAS ports and the controller has only 2 SFF-8087 ports. Is it correct to use the split cables we are already using, although t

ssh redirect without using port forwarding

I am looking for a way to redirect an ssh connection from one host to another. When a user creates an ssh connection to host foo, I would like the server to return some response which causes the ssh client to close the connection to foo and instead connect to host bar. Importantly, for the application I have in mind it is not okay to simply forward the ssh connection to bar via foo, so standard port forwarding is out of the question. Once the redirection occurs, the client should be sending TCP packets directly to bar, not to foo (and not to bar via foo). So, roughly, I'm looking for an SSH analogue of an HTTP redirect (which causes the client to hang up the original connection and connect instead to the host to which it was redirected). It is also important for the application I have in mind that this not require any client-side configuration. So is it possible to do this? Answer There is no provision in the SSH protocol for redirects. Read RFC 4251, RFC 4252,

Apache 2.4 using default DocumentRoot instead of VirtualHost DocumentRoot

Long-time listener, first-time caller... I've been running Apache for years and have set up multiple servers. This one is giving me a hard time and I just can't spot the issue. I've seen a number of threads here and elsewhere about VirtualHost problems and the wrong DocumentRoot being served, but none of them have helped me out. The server is running CentOS 7.5, SELinux enabled, Apache 2.4.33. I want to run two VirtualHosts, but for some reason the secondary VH isn't serving the right files. Changing the order of the VHs didn't matter. The last thing I tried was hard-coding a default DocumentRoot (/var/www/html) and then putting each VH in its own separate directory (/var/www/VirtualHost). Here is my current virtualhost.conf file:

    #Set a default DocumentRoot
    DocumentRoot /var/www/html

    ServerAdmin webmaster@example1.com

    DocumentRoot /var/www/example2.com
    ServerName example2.com
    ServerAlias www.example2.com
    Options -Indexes +FollowSymLink
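For reference, a name-based Apache 2.4 setup normally gives each site its own <VirtualHost> block with its own ServerName and DocumentRoot, plus a <Directory> block using the 2.4-style "Require all granted". The sketch below uses placeholder names and is not a reconstruction of the poster's file:

    <VirtualHost *:80>
        ServerName example1.com
        ServerAlias www.example1.com
        DocumentRoot /var/www/example1.com
    </VirtualHost>

    <VirtualHost *:80>
        ServerName example2.com
        ServerAlias www.example2.com
        DocumentRoot /var/www/example2.com
        <Directory /var/www/example2.com>
            Options -Indexes +FollowSymLinks
            Require all granted
        </Directory>
    </VirtualHost>

Two quick checks that usually pinpoint this class of problem: httpd -S shows which VirtualHost Apache will actually pick for each hostname, and on CentOS with SELinux enforcing, new DocumentRoot directories also need the right context (for example, restorecon -Rv /var/www).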

domain name system - Is it possible to change MX records without downtime/returning/dropping messages?

We're migrating e-mail hosting from SERVER-A (on HOSTING_CO-A) to SERVER-B (on HOSTING_CO-B). SERVER-A will continue to be functional until the transition is complete. HOSTING_CO-A runs its own DNS servers, since our records currently show something that looks like mx1.hosting_co-a.example and ns1.hosting_co-a.example, and HOSTING_CO-B presumably has its own DNS servers as well (it is Google/GSuite). I've read [1] that DNS changes may take up to 72 hours. What would be the worst-case scenario for an e-mail sent during this period?
- Would it go to either SERVER-A or SERVER-B (that is, would e-mail providers using non-updated DNS records still get through to SERVER-A, or would SERVER-A reject or forward the message based on the updated MX record)?
- Would it be 'dropped' (that is, neither the sender nor the recipient would be aware that the message didn't go through)?
- Would the sender be notified that the message was not delivered?
We are switc
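One common, low-risk pattern (not specific to either hosting company here) is to lower the MX TTL well before the switch and keep SERVER-A accepting or forwarding mail for the domain during the propagation window, so senders with stale caches still reach a working server. A sketch of what the zone might look like; hostnames, TTLs and priorities are illustrative:

    ; before the switch: shorten the TTL so cached MX records expire quickly
    example.com.  300  IN MX 10 mx1.hosting_co-a.example.

    ; after the switch: new primary, old host kept as a lower-priority backup
    ; until the old TTL has safely expired everywhere
    example.com.  300  IN MX 10 aspmx.l.google.com.
    example.com.  300  IN MX 20 mx1.hosting_co-a.example.

Sending mail servers also queue and retry for days when a destination is temporarily unreachable, so silent loss is not the normal failure mode; the realistic worst case is delivery to SERVER-A, which is why it should keep accepting (or forwarding) mail until the cut-over is complete.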

debian - Best Postfix spam RBL policy weight daemon?

I just heard about policyd-weight, so I did an apt-cache search policyd, which returned three options. Which one is the best, and do you have any tips on setting them up? Our current setup is whitelister plus postgrey to greylist RBL'd hosts, then fail2ban to block them for 10 minutes if they have 10 failures, followed by content filtering (Kaspersky Anti-Spam). The content filtering is pretty good, but there's still a lot of spam that gets through the RBL greylisting.
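For context, hooking policyd-weight into Postfix is a single check_policy_service entry; on Debian the daemon listens on 127.0.0.1:12525 by default (the $TCPPORT setting in /etc/policyd-weight.conf, worth verifying against the installed package). An illustrative main.cf fragment:

    # /etc/postfix/main.cf (fragment)
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        check_policy_service inet:127.0.0.1:12525,
        permit

Placing the policy check after reject_unauth_destination keeps the daemon from scoring mail Postfix would reject anyway.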

networking - DNS resolution failing over to secondary DNS - why?

We have a large number of branch offices connected via VPN, but without any kind of server infrastructure. The client machines in each office get their network configuration from an ASA 5505, which is also used for the VPN connection. The Windows XP client machines are configured to use one of our corporate DNS servers as the primary, with the ISP's DNS server as the secondary. The idea is that if the VPN connection fails for any reason, staff in the office will still be able to access the internet, and access our webmail and home access portal. In the majority of cases this works fine. However, for offices based in South America we are seeing DNS resolution on the client machines regularly being done against the ISP DNS server, which effectively makes our corporate resources unavailable to staff in the offices. The client machines are able to ping the corporate DNS server OK, and when doing an nslookup of a corporate hostname, I get a reply. I'm thinking one of the
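The Windows DNS client does not round-robin between the configured servers; it fails over to the secondary when the primary is slow to answer or times out, and can then keep preferring the secondary for a while. Intermittent latency or packet loss on the VPN is therefore enough to push clients onto the ISP resolver. Some quick comparisons from an affected XP client; the addresses and hostname below are placeholders:

    rem query the corporate DNS server directly
    nslookup intranet.corp.example 10.0.0.10

    rem query the ISP DNS server directly
    nslookup intranet.corp.example 198.51.100.53

    rem flush the cache and see which server answers by default
    ipconfig /flushdns
    nslookup intranet.corp.example

If the direct query to the corporate server is noticeably slower or occasionally times out, that points at the VPN path rather than the DNS server itself.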

domain name system - Google DNS is too slow connecting to Google APIs

I have a Linux server in Germany. The server is configured to use Google DNS. When I call some Google API from the server, the connection is very slow; it always takes 2 or 3 seconds to connect to the Google server. I have no problem connecting to other servers. Paradoxically, it seems there are problems resolving Google URLs with Google DNS. I have temporarily worked around it by inserting a line in the hosts file that maps the Google API hostname to its IP address. Can I resolve this problem in another (cleaner) way? Thank you!
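Before changing anything it is worth confirming whether the delay is name resolution or connection setup. Two standard checks, with 8.8.8.8 as the configured Google DNS and www.googleapis.com standing in for the API host actually used:

    # how long does the lookup itself take against the configured resolver?
    dig @8.8.8.8 www.googleapis.com | grep "Query time"

    # split the HTTPS call into DNS, connect and total time
    curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n" https://www.googleapis.com/

If the lookups really are the slow part, a small local caching resolver (for example dnsmasq or unbound forwarding to whichever upstream answers fastest) is a cleaner fix than hand-maintained hosts entries.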

routing - Getting Squid and TPROXY with IPv6 working on CentOS 7

I'm having trouble getting TPROXY working with Squid and IPv6 on a CentOS 7 server. I was previously using a generic intercept setup with NAT, but it was limited to IPv4 only. I'm now expanding the setup to include IPv6 with TPROXY. I've been using the official Squid wiki article on the subject to configure everything: http://wiki.squid-cache.org/Features/Tproxy4. Thus far the TPROXY config appears to be working for IPv4 with no issues; with IPv6, however, connections time out and don't work properly. I'll break down the setup for better understanding. Note that all firewall and routing rules are exactly the same as for IPv4; the only difference is inet6 and ip6tables for configuring the IPv6-based rules in the examples below.

OS and kernel: CentOS 7 (3.10.0-229.14.1.el7.x86_64); all packages are up to date according to yum
Squid version: 3.3.8 (also tried 3.5.9)
Firewall: iptables/ip6tables 1.4.21, libcap-2.22-8.el7.x86_64
IPv6 connectivity is currently through a 6in4 tunnel v
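For comparison, the IPv6 half of the wiki's TPROXY recipe mirrors the IPv4 rules one-for-one. The sketch below assumes Squid's tproxy port is 3129 and policy-routing table 100, both placeholders that must match the local config:

    # policy routing: deliver marked packets locally over IPv6
    ip -6 rule add fwmark 1 lookup 100
    ip -6 route add local ::/0 dev lo table 100

    # ip6tables mangle rules, same shape as the IPv4 ones
    ip6tables -t mangle -N DIVERT
    ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
    ip6tables -t mangle -A DIVERT -j ACCEPT
    ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    ip6tables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

On the Squid side the same http_port 3129 tproxy line serves both address families, so if the rules really are identical, the likely suspects for an IPv6-only failure are the policy-routing table and the 6in4 tunnel path rather than Squid itself.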

storage - ZFS over iSCSI high-availability solution

I am considering a ZFS/iSCSI-based architecture for a HA/scale-out/shared-nothing database platform running on wimpy nodes of plain PC hardware running FreeBSD 9. Will it work? What are the possible drawbacks?

Architecture:
- Storage nodes have direct-attached cheap SATA/SAS drives. Each disk is exported as a separate iSCSI LUN. Note that no RAID (neither HW nor SW), partitioning, volume management or anything like that is involved at this layer; just 1 LUN per physical disk.
- Database nodes run ZFS. A ZFS mirrored vdev is created from iSCSI LUNs from 3 different storage nodes. A ZFS pool is created on top of the vdev, and within that a filesystem which in turn backs a database.
- When a disk or a storage node fails, the respective ZFS vdev will continue to operate in degraded mode (but still have 2 mirrored disks). A different (new) disk is assigned to the vdev to replace the failed disk or storage node. ZFS resilvering takes place. A failed storage node or disk is always completely recycl
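To make the database-node layer concrete, the per-node ZFS commands on FreeBSD would look roughly like this; the da1..da4 device names are purely illustrative stand-ins for the iSCSI LUNs from different storage nodes:

    # three-way mirrored vdev, one LUN per storage node
    zpool create dbpool mirror da1 da2 da3
    zfs create dbpool/pgdata

    # after a disk or storage-node failure, swap in a LUN from another node
    zpool replace dbpool da2 da4
    zpool status dbpool     # watch the resilver progress

Every synchronous write has to reach all three LUNs across the network before it is considered committed, so iSCSI latency between database and storage nodes will largely decide whether this performs acceptably.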

Active Directory Domain where FQDN and NetBIOS name are the same

I have "inherited" an ancient domain (NT4, then upgraded to Win2000 mixed mode and now running on Win2003) where the NetBIOS name coincides with the DNS/FQDN name, and this is giving us problems with remote clients that need to be joined to the domain. Let the domain be called EXAMPLE: it is both the NetBIOS name and the DNS name, as can be seen by opening the DNS administration panel. Inside the local LAN, apart from some occasional confusion over which protocol is resolving a machine's name, this arrangement seems to work. On remote LANs (connected by VPN), problems happen: the remote client cannot connect to the domain. The error message states that the DNS query correctly returns the domain controller list, but no domain server can be contacted. From a connectivity standpoint, all ports
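Since the error says the DNS query returns the DC list but no DC can be contacted, the first things worth checking from an affected remote client are the locator SRV records and the domain-join ports across the VPN. Standard diagnostics, with EXAMPLE and dc1 as stand-ins for the real names (nltest ships with the Windows Support Tools, and portqry is a separate Microsoft download):

    rem does DC location work end-to-end, and which DC is chosen?
    nltest /dsgetdc:EXAMPLE

    rem are the locator SRV records resolvable from the remote site?
    nslookup -type=SRV _ldap._tcp.dc._msdcs.EXAMPLE

    rem are the join-related ports reachable across the VPN? (389 LDAP, 445 SMB, 88 Kerberos, 135 RPC)
    portqry -n dc1.EXAMPLE -e 389
    portqry -n dc1.EXAMPLE -e 445

Single-label DNS domain names like this also trip up DC location on newer clients by default, which Microsoft documents separately, so ruling out plain connectivity first keeps the two problems from being conflated.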