

Showing posts from March, 2015

configuration - apache spawning too many processes despite maxclient and other constraints

Here are my MPM constraints:

StartServers 10
MinSpareServers 10
MaxSpareServers 10
MaxClients 10
MaxRequestsPerChild 2000

However, despite this, I have over 20 Apache processes running currently, and in the past hour or two there have been as many as 40-50. Shouldn't MaxClients and MaxSpareServers keep the number of processes under control (i.e. about 10)? Is there something I'm missing?
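
A common explanation for this behavior is that the MPM directives live in an <IfModule> section that doesn't match the MPM actually compiled in (so they are silently ignored), or that a threaded MPM is in use, where the process count is governed by ServerLimit and ThreadsPerChild rather than MaxClients alone. A minimal sketch for the prefork case, using the asker's values plus an explicit ServerLimit (values hypothetical):

# Only takes effect if the prefork MPM is loaded - check with: httpd -V | grep -i mpm
<IfModule prefork.c>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      10
    ServerLimit          10
    MaxClients           10
    MaxRequestsPerChild  2000
</IfModule>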

virtualhost - Apache ip-based hosting setup and httpd.conf directives

In Apache, I would like to set up "ip-based" hosting for 2 sites and enable SSL for them. However, I'm not clear on how to configure the httpd.conf file. Questions: 1) Do I need a NameVirtualHost directive for an ip-based setup? On Apache's site, it says it's required for name-based hosting, but there's no mention of ip-based. 2) If NameVirtualHost is required, must its count and addresses match the VirtualHost directives? For example, can I say "NameVirtualHost *:80" and later use <VirtualHost IP_ADDRESS_1:80> and <VirtualHost IP_ADDRESS_2:80>? Or will I need "NameVirtualHost IP_ADDRESS_1:80" and "NameVirtualHost IP_ADDRESS_2:80"? 3) If ServerName were example1.com (without "www"), would it make a difference? 4) In VirtualHost, do I need to set a value for ServerAlias, such as the IP itself? One thing I'd like to share: if you have (and you likely do) ssl.conf, you should not add "Listen 443" to your httpd.conf, otherwise upon reload, apache will throw a "
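
For what it's worth, a minimal sketch of a pure IP-based SSL setup under Apache 2.2 (addresses and paths are placeholders; NameVirtualHost is only needed for name-based hosting, and Listen 443 should appear exactly once, e.g. in ssl.conf):

<VirtualHost 192.0.2.1:443>
    ServerName example1.com
    DocumentRoot /var/www/example1
    SSLEngine on
    SSLCertificateFile /etc/ssl/example1.crt
    SSLCertificateKeyFile /etc/ssl/example1.key
</VirtualHost>
<VirtualHost 192.0.2.2:443>
    ServerName example2.com
    DocumentRoot /var/www/example2
    SSLEngine on
    SSLCertificateFile /etc/ssl/example2.crt
    SSLCertificateKeyFile /etc/ssl/example2.key
</VirtualHost>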

filesystems - How does Linux handle concurrent disk IO?

When a Linux server is serving many concurrent requests to read many different files, does it: (1) seek to File_1, read the entire file, then seek to File_2, read the entire file, then seek to File_3, etc., or (2) seek to File_1, read part of it (up to the readahead value?), then seek to File_2, read part of it, then seek back to File_1 where it left off, read more of it, then seek to File_3, etc.? If it's the 2nd case, then the server is doing many more seeks than necessary, which would slow things down significantly. In that case, is there any tuning I could do? Answer In disk I/O there is a thing called an elevator. The disk subsystem tries to avoid thrashing the disk head all over the platters. It will re-order I/O requests (when not prohibited, e.g. by a barrier) so that the head will be moving from the inside of the disk to the outside, and back, performing requested I/Os on the way. The second thing is I/O request merging. If there are many requests within a short tim
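
Both the elevator and readahead are tunable at runtime. A quick sketch of how to inspect and change them (the device name /dev/sda is an assumption; the scheduler names available depend on the kernel):

# Show the current I/O scheduler; the active one appears in brackets
cat /sys/block/sda/queue/scheduler
# Switch to e.g. the deadline elevator
echo deadline > /sys/block/sda/queue/scheduler
# Inspect and raise readahead (in 512-byte sectors)
blockdev --getra /dev/sda
blockdev --setra 4096 /dev/sda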

domain name system - SPF softfail for forwarded emails to Gmail account

I've been able to make SPF pass on all the sent emails from my Postfix server. But for forwarded domains which simply redirect email to my gmail id, I see a softfail in the SPF. For example, if I send email from a hotmail account to contactus@workingwoman.org, it is forwarded to the test email id ragraggupta8899@gmail.com. I've added the SPF record "spf1 a mx -all" for my hostname (host.tariffplans.com) as well as for all domains. The A record of all domains/subdomains is correctly pointing to my server IP: 23.239.30.81. But in the forwarded email header, Google shows it as softfail. What could be the problem?

Delivered-To: rag.raggupta8899@gmail.com
Received: by 10.114.96.70 with SMTP id dq6csp51447ldb; Sat, 19 Jul 2014 23:05:03 -0700 (PDT)
X-Received: by 10.182.65.66 with SMTP id v2mr22896624obs.74.1405836302184; Sat, 19 Jul 2014 23:05:02 -0700 (PDT)
Return-Path:
Received: from host.tariffplans.com (tariffplans.com. [23.239.30.81]) by mx.google.com

domain name system - Protecting my Bind dns server from slow kaminsky-style cache poisoning attacks

Dan Kaminsky described how DNS servers could be poisoned with spoofed DNS responses [1]. As I understand it, the problem was that Kaminsky found a way to account for most other sources of randomness in a DNS query, such that the main barrier to an attacker was guessing the DNS query id (16 bits of entropy) when generating a spoofed response. An attacker could, on average, spoof the response within 32k guesses. So, the recommended mitigation was to randomize the source port, and everyone applied their patches and all was well. Except that this only raised the number of guesses from 32k to somewhere between 134m and 4b. Sure, it couldn't be done quickly, but a patient attacker could still do this slowly - in fact, Bert Hubert calculated that an attack at 100qps has a 50% chance of success within 6 weeks. [2] I don't have sufficient reputation to post more links. However, I see that many technical approaches have been considered, such as draft-wijngaards-dnsext-resolver-si
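
One concrete mitigation the question is circling around is DNSSEC validation, which removes the guessing game entirely for signed zones. A minimal sketch of enabling it in BIND (assuming BIND 9.8 or later, which ships a built-in root trust anchor):

options {
    // Validate responses using the built-in root trust anchor
    dnssec-validation auto;
};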

HP smart array p400 RAID 1+0; two disks crashed

I am running a RAID 1+0 on an HP ProLiant DL360 G5 with 6 disks (each 146 GB). The mirror groups look as follows:

146 GB 1-Port SAS Drive at Port 1I: Box1:Bay1
146 GB 1-Port SAS Drive at Port 1I: Box1:Bay2
0 GB 1-Port SAS Drive at Port 1I: Box1:Bay3
146 GB 1-Port SAS Drive at Port 1I: Box1:Bay4
146 GB 1-Port SAS Drive at Port 2I: Box1:Bay5
0 GB 1-Port SAS Drive at Port 2I: Box1:Bay6

Unfortunately the disks in bay 3 and bay 6 stopped working (at almost the same time). Is there a chance to get the system back online again by replacing disks 3 and 6 with new ones? * UPDATE * It's MS Server 2003 R2. I got the information that the disks are broken from the HP SmartStart CD diagnostics tool. The following versions are in place: ACU Version 8.60.7.0, Diagnostic Module Version 5.2.52.0, INFOMGR Version 5.9-29.0. The server was already off (via the power button) when people involved me. ROM/RAM firmware revision is 2.10
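
Before swapping drives, it's worth confirming from the controller's point of view which logical drive the failed bays belong to. A sketch using HP's CLI, hpacucli (the slot number is an assumption):

hpacucli ctrl all show config             # controllers, arrays, logical and physical drives
hpacucli ctrl slot=0 pd all show status   # per-bay physical drive status
hpacucli ctrl slot=0 ld all show status   # logical drive / RAID state

In a RAID 1+0, the array survives two failed disks only if they are not mirror partners, so this output is what tells you whether a rebuild after replacement is even possible.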

Why is IIS Anonymous authentication being used with administrative UNC drive access?

My account is a local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name, \\mymachine\x$ , my account would get locked out. I would also get the following warning (Event ID 100, Type "Warning") 5 times under the "System" group in Event Viewer on my box: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password. I would also get the following warning 3 times: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to. On the domain controller, Event ID 680 of type "Failure Audit" would appear 4 times under the "Security" group in Event Viewer: Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon account: myaccount Followed by Event ID 644: User Account Locked

sql server - What would cause a query run from SSMS on the local box to run slower than from a remote box

When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results seem to take extremely long (>45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (<60 sec). One thing I have noticed is that if the system was just rebooted, performance is good for a bit. It is hard to determine a time, as I have had it start running slow very quickly after a reboot, but at most it performed well for 20 min and then started acting up. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost. Specs for the server are: Windows Server 2003, Enterprise Edition, SP2; 4 x Intel Xeon 3.6GHz; 6GB System Memory; Active/Active Cluster; SQL Server 2005 SP2 (9.0.3239). Answer Have you compared the execution plans from both servers? Have you tried querying your production server locally, when the resu
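
When the same query is fast remotely and slow locally, comparing wait statistics on the server can narrow down where the local session is stuck. A small diagnostic sketch using a SQL Server 2005 DMV:

-- Top waits accumulated since the last service restart
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;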

windows server 2008 - DNS Cannot Resolve a Second IP to NS2 Record

I had a second IP address assigned to our dedicated Win2008 server. I went into DNS and added an ns2.ourdomain.com record using the second IP address. However, when I clicked Resolve, DNS could not resolve the IP address. Can anyone suggest possible remedies for this situation? Forward Lookup Zones in DNS:

SOA Primary Server ns1.floristshoppingcart.com
ns1.floristshoppingcart.com 173.201.33.152
ns2.floristshoppingcart.com 173.201.35.90
txt v=spf1 a mx -all
A records for *, ftp, mail, www, ns1 173.201.35.90
A records for ns1, ns2 173.201.33.152

Can someone identify what, if any, additional DNS entries should be created? Note that we are trying to run Website Panel. When we try to create a hosting plan in WSP, an error is thrown: "DNS zone already exists on the target service." Answer I'm guessing that you only added an NS record. You need to also add an A record for the server.
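
A sketch of what the relevant zone entries might look like once the glue A record is in place (names and IPs taken from the question; layout illustrative):

floristshoppingcart.com.      IN  NS  ns1.floristshoppingcart.com.
floristshoppingcart.com.      IN  NS  ns2.floristshoppingcart.com.
ns1.floristshoppingcart.com.  IN  A   173.201.33.152
ns2.floristshoppingcart.com.  IN  A   173.201.35.90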

apache 2.2 - Single domain SSL presented for all domains on Shared IP

I have a VPS running Apache/2.2.22 on Ubuntu Server 12.04 LTS. I have successfully installed an SSL certificate for domaina.com. Unfortunately, if I visit https://domainb.com , https://domainc.com , etc., I am presented with certificate warnings, as each domain is presenting domaina.com's certificate. How can I stop this? Can I stop Apache sending the certificate for all sites sharing the same IP? Can I block port :443 access using ufw for a domain name? Something else? Domain A configuration:

ServerName domaina.com
ServerAlias www.domaina.com
DocumentRoot /var/www/domaina.com/public

ServerName domaina.com
ServerAlias www.domaina.com
DocumentRoot /var/www/domaina.com/public
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/domaina.com.crt
SSLCertificateKeyFile /etc/apache2/ssl/domaina.key
SSLCertificateChainFile /etc/apache2/ssl/domaina.com.ca-bundle

Domain B, C… configuration:

ServerName domainb.com
ServerAlias www.domainb.com
Doc
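
One common workaround (assuming no SNI certificates exist for the other domains) is to add a catch-all :443 vhost that sorts first, serving a throwaway self-signed certificate and denying access, so domaina.com's real certificate is only presented to visitors whose request actually matches its vhost. A sketch in Apache 2.2 syntax, paths hypothetical:

# Must be loaded before the other *:443 vhosts (e.g. name the file 000-default-ssl)
<VirtualHost *:443>
    ServerName catchall.invalid
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/self-signed.crt
    SSLCertificateKeyFile /etc/apache2/ssl/self-signed.key
    <Location />
        Order deny,allow
        Deny from all
    </Location>
</VirtualHost>

Visitors hitting https://domainb.com still get a certificate warning (unavoidable without a valid certificate per name), but they no longer see domaina.com's certificate.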

tcpip - How can I add more than 255 machines to a single Class C network?

I'm mainly a programmer. I have no idea beyond some basic theory when it comes to networking/administration. My university feels that we should cover at least the basics of networking, and I'm psyched. It's something incredibly new to me and I'm enjoying the class a lot. Yesterday was my first day and the professor posed the following question: "Since each C-level network can only have a maximum of 254 IP addresses, how could you add 300 machines to a single C-level network?" I was thinking something like: 192.168.1.1, 192.168.1.2, 192.168.1.3 ... 192.168.1.254 (make this a ROUTER) // and inside this router's network I could repeat the same addresses, no problem: 192.168.1.1, 192.168.1.2 // etc. Is this what my teacher was talking about? My teacher said this has a special name and we should research what it is called. Anyone care to share some knowledge? :) Edit: Maybe I'm not expressing my question clearly enough; I need to have 300 machines al
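
If the intended answer is subnetting/supernetting (CIDR) rather than NAT, the arithmetic looks like this: widening the mask by one bit doubles the address space, so a /23 (255.255.254.0) gives 2^(32-23) - 2 = 510 usable host addresses, enough for 300 machines:

network:    192.168.0.0/23
hosts:      192.168.0.1 - 192.168.1.254   (510 usable)
broadcast:  192.168.1.255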

security - Linux: productive sysadmins without root (securing intellectual property)?

Is there any way to make a seasoned Linux sysadmin productive without giving him full root access? This question comes from a perspective of protecting intellectual property (IP), which in my case is entirely code and/or configuration files (i.e. small digital files that are easily copied). Our secret sauce has made us more successful than our smallish size would suggest. Likewise, we are once bitten, twice shy from a few former unscrupulous employees (not sysadmins) who tried to steal IP. Top management's position is basically: "We trust people, but out of self-interest, cannot afford the risk of giving any one person more access than they absolutely need to do their job." On the developer side, it's relatively easy to partition workflows and access levels such that people can be productive but see only what they need to see. Only the top people (actual company owners) have the ability to combine all the ingredients and create the special sauce. But I have
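
On the tooling side, one standard piece of this puzzle is sudo with narrowly scoped command grants instead of blanket root. A tiny illustrative sketch (group name and paths hypothetical; real policies need care, since many commands can be escaped to a shell):

# /etc/sudoers.d/ops - let the ops group manage one service and read its logs, nothing else
%ops ALL=(root) /sbin/service httpd restart, /sbin/service httpd status
%ops ALL=(root) /usr/bin/tail -n 100 /var/log/httpd/error_log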

flooding - How can I prevent apache DoS flood?

I've configured a server running Apache, and a couple of days ago I noticed in the logs that there are bots running endless queries against the site. The logs show that those bots are making about 60 requests per second for about 20 minutes, coming from the same IP address. How can I limit the queries, and what's the appropriate way of dealing with such bots? Thanks in advance. Answer Consider installing rate-limiting software; it will help you defend against more than just well-behaved bots. You can use mod_evasive for Apache, or you can install Nginx as a frontend and use its HttpLimitZoneModule, which is built in.
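
The request-rate counterpart in Nginx is HttpLimitReqModule; a minimal sketch (zone name and thresholds are hypothetical):

http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;
    server {
        location / {
            # Allow short bursts, then delay/reject the excess
            limit_req zone=perip burst=10;
        }
    }
}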

Windows Apache 2.2 painfully slow executing CGI

I've recently set up Apache 2.2 and git on one of our Windows XP PCs for gitweb access, using the setup at https://git.wiki.kernel.org/index.php/MSysGit:GitWeb As noted on the wiki, the only version of Perl that seems to work with gitweb the way it is coded is the one included with MSysGit. ActivePerl and StrawberryPerl don't implement a certain required feature, so another interpreter is not an option.

C:\Program Files\Git\bin>perl.exe --version
This is perl, v5.8.8 built for msys

In any case, it is set up and it works, but for some reason there is an approximately 10 second delay for every page load. To troubleshoot this I made a simple helloworld.cgi and placed it in the directory next to gitweb.cgi. It is set up to use the same perl interpreter as gitweb:

#!C:\Program Files\Git\bin\perl.exe
print "Content-type: text/html\n\n";
print "Hello, world!\n";

This script too takes over 10 seconds to execute on the server. If I fire up a command prompt and execut

Ubuntu 10.04 SSH becomes unresponsive

I have a fresh install of Ubuntu Server 10.04 and all is well when using the local terminal. However, when I use ssh to admin it remotely (it is on the same switch and in the same room), I can log in and work for about 2 min. Then the terminal just stops responding; no errors, nothing. The server is still working fine. If I kill the terminal, open a new one, and log in again, all is well for another 2 min. Any ideas? -Kerry Answer Type dmesg and look for any errors. Also look in /var/log/messages . I had something just like this happen many years ago. It turned out to be a bad network card, or a combination of a network card and a driver problem. When I replaced the card, the problem went away.

domain name system - configure DNS to access local servers using public address

I have a domain and DNS server set up using Windows Server 2012 R2. The local domain is a subdomain of my public one, and I have a forward lookup zone configured for it in my DNS server, i.e. local: lan.publicdomainname.com, public: publicdomainname.com. The DNS records for publicdomainname.com are stored with the public DNS at the registrar (GoDaddy in this case). lan.publicdomainname.com is not stored with that DNS server. I have several local servers that are listed on the public DNS as subdomains, for example: server1.publicdomainname.com, server2.publicdomainname.com. These can be accessed using those URLs from outside the local network just fine, but they don't work while connected to the LAN. Should I be adding a new Forward Lookup Zone to my internal DNS server named publicdomainname.com? edit: Seems like I should either be using hairpin NAT or split DNS. From what I understand, hairpin NAT causes extra processing to be done on the router for local traffic, and a split DNS require
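
If split DNS is the route taken, the internal publicdomainname.com zone would carry the records that must resolve differently inside the LAN, e.g. (LAN addresses hypothetical):

server1.publicdomainname.com.  IN  A  192.168.1.10
server2.publicdomainname.com.  IN  A  192.168.1.11

One caveat with making the internal server authoritative for publicdomainname.com: any public record not copied into the internal zone stops resolving on the LAN, so the zone has to mirror everything clients need.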

apache 2.2 - Global Redirection of port 80 to 443

I'd like to set up my Linux box so that anything hitting port 80 is simply told to ask that of 443. I want this regardless of domain, IP, or whatever specific details may exist. If it can be requested of port 80, it should be told: nope, we do that on 443. I'll be using Apache on 443, so I could bind it to 80 easily enough, but I don't see the solution as having to include Apache on port 80. To be clear, I'm looking for a solution that requires no changes to the vhosts. I understand global redirects that can be passed down with inheritance, but that requires vhost changes. I'm looking for something more all-encompassing and less prone to "Oops, I forgot that line and now port 80 is exposing my data unencrypted." How would you go about solving that problem? iptables, Apache, a custom shell script with netcat and some magic to make it go SSL? Answer Try adding this to your httpd.conf: RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https
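
The truncated rule above presumably rewrites to the HTTPS scheme while preserving host and path. A sketch of the complete, vhost-agnostic version placed in the global httpd.conf (mod_rewrite must be loaded):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]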

linux - CentOS CPU load and CPU Freq

I have a Dual Xeon (X5650 @ 2.67GHz) server, with 72GB of RAM and HT disabled, but I have a problem. I host srcds servers (game servers) and they are really CPU intensive. The CPU usage is usually over 50%, but the CPU load is 0.05~0.30 (even if I run 10 servers, each using 1 core at 100%, it stays at 0.05~0.30). The problem is that the CPU does not ramp up; it just stays at 1.5GHz forever, as there is no load registered by the system, when actually there is. As the game servers' load increases, they start lagging and dropping frames because of the low CPU frequency. I ran some benchmarks on the server, and the CPU load and frequency did ramp up to ~3GHz as they should, so I don't think it's a server problem. I used to use Ubuntu, and the CPU load was fine, but I don't want to reformat the server and set everything up again. Is there anything I can do to make CentOS display the right load and ramp up the CPU frequency as it should?
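
Assuming the culprit is the cpufreq governor (ondemand/powersave keeping clocks low because srcds's load pattern doesn't register), the governor can be inspected and pinned to performance. A sketch:

# Current governor per core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Pin all cores to the performance governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done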

Linux Centos Disk information on HP DL 160

I have an HP DL 160 and I would like to know all possible information about my physical disks. I want to know how many physical disks I have, whether they are in RAID 1 or RAID 5, etc., and whether I have a disk that I can add to the operating system and use. I'm not sure which commands or utilities can show me this info. When I do df -h I get:

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 898G 4.5G 847G 1% /
/dev/sda1 99M 24M 70M 26% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
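
df only shows mounted filesystems, not the physical layout. A few commands worth trying, hedged on what hardware the DL160 actually has (an HP Smart Array controller vs. plain onboard SATA):

cat /proc/scsi/scsi              # disk devices the kernel sees
smartctl -a /dev/sda             # per-disk details (smartmontools package)
hpacucli ctrl all show config    # RAID layout, if a Smart Array controller is present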

domain name system - IIS 502 error when using a CNAME

I have a cname record pointing to a dyndns address. This has worked fine in the past, but now if I use the cname I get a 502 error from IIS 7. The dyndns address works fine and so does the actual IP address. Any ideas on what would cause this? Answer Found the root cause. I had an old rewrite rule at the server level for that domain name; I had only checked the rewrites at the website level. The domain it was pointing to is now gone, hence the 502 bad gateway message. I disabled the rule and all is fine.

mysql - Can my server handle 12,000 database requests per minute?

First of all, apologies if this is a silly question. I've never really had to manage servers and databases on a "large scale", that is. ANYWAY, on to the question. I am trying to figure out if our current server can handle 12,000 database requests per minute. (Once again, I don't know if this is a lot; I assume it's mid-range.) I estimate that 2/3 of the 12,000 requests will be simple SELECT queries on super small tables, no more than 20,000 rows in a table - I've made a point to prune them on a regular basis. LAMP stack. Below are the server hardware and software specs:

Processors: Intel Haswell 2095.050 MHz
Memory: 7.45GB usable
Storage: 80GB SSD
OS: Ubuntu, CentOS 7
DB: MySQL v5.7.25
PHP: 7.2.7

The database is stored on the same server from which files are being served. If this server is capable of this, how much further can the server be pushed? Thank you in advance (and sorry if this seems to be a dumb question).
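
As a sanity check on the arithmetic: 12,000 requests per minute is only 200 per second, and roughly 133/s of those are the simple SELECTs - well within reach of MySQL on an SSD if the small tables fit in the buffer pool. A rough load-test sketch with the bundled mysqlslap tool (concurrency and counts hypothetical; run against a test schema, not production):

mysqlslap --user=root -p --concurrency=50 --iterations=10 \
          --number-of-queries=12000 --auto-generate-sql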

Is there a way to protect SSD from corruption due to power loss?

We have a group of consumer terminals that have Linux, a local web server, and PostgreSQL installed. We are getting field reports of machines with problems, and upon investigation it seems as if there was a power outage and now there is something wrong with the disk. I had assumed the problem would just be the database getting corrupted, or files with recent changes getting scrambled, but there are other odd reports:

files with the wrong permissions
files that have become directories (for example, index.php is now a directory)
directories that have become files
files with scrambled data

There are problems with the database getting corrupted, but that's something I could expect. What I'm more surprised about is the more basic file system problems - for example, permissions or changing a file into a directory. The problems are also happening in files that did not recently change (for example, the software code and configuration). Is this "normal" for SSD corruption?

ssl - How can I test HTTPS configuration in a LAN?

I'm developing a website that should be used with HTTPS. I am developing the site on my Windows machine and I have a Linux machine on the same LAN where I test my websites. I don't have a DNS server on the LAN, so I use IP addresses to access the website. On the server I use Ubuntu Server and the Nginx web server. Is there any easy way I can test the HTTPS configuration in my LAN? E.g. if I generate an SSL certificate myself, but I don't have a domain name to set it up with. Any recommendations for testing? Or should I use a subdomain test.example.com and point an A record at my test server's local IP address, e.g. 192.168.1.10? Answer Generate a self-signed certificate for Nginx. Configure Nginx to use it. Edit the hosts file on your computer and add the domain you're testing. On *nix it's /etc/hosts ; in Windows it's %systemroot%\System32\Drivers\Etc\hosts . Add a line similar to 192.0.2.5 www.example.com (substitute your server's IP and the domain).
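
A sketch of the self-signed certificate step (paths and the test name are placeholders):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/test.key \
    -out /etc/nginx/ssl/test.crt \
    -subj "/CN=www.example.com"

Then reference the pair from the Nginx server block (listen 443 ssl; ssl_certificate /etc/nginx/ssl/test.crt; ssl_certificate_key /etc/nginx/ssl/test.key;). The browser will only complain that the certificate is untrusted, which is expected for a self-signed test setup.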

What's the purpose of local storage hdd in Blade servers with virtualization?

I'm designing a lab with a Dell M1000e and 3 M620 blade servers. Each blade server has 2 local HDDs of 130GB in RAID 1. My first thought is to install ESXi on an SD card or USB key, so the local storage sits empty. The question: what's the purpose of the local HDDs in this blade server? I mean, I have a SAN with XX TB, so this 130GB looks ridiculous. Apart from installing ESXi to the local drive instead of the SD card, is there another use for this local storage? EDIT: I just inherited the devices from the previous tech guys. I'm just trying to understand why they bought local storage.

linux - Exim taking long time to send emails, how to decrease delay in Exim service?

The Linux server where the exim service is running is under no load. The system is sending email successfully but is taking a long time to send each email. Basically, if I telnet to localhost port 25 and then try to send an email from there, the response from the server is super slow. The mail application we have running there takes more than 8 minutes to send 4 emails. Has anyone confronted this issue before with Exim? Maybe you know what setting is making Exim wait so long for something. I am looking at the logs and I cannot see anything indicative of an error. Below is a sample of the mainlog:

2009-08-10 07:21:05 H=(aurl.domain.ni) [127.0.0.1] Warning: Sender rate 4.6 / 1h
2009-08-10 07:21:29 1MaTsX-0000mw-Oe <= stgbouncing@theperfectplace.net H=(aurl.domain.ni) [127.0.0.1] P=esmtp S=22003 id=67402024.1249906753667.JavaMail.root@aurl.domain.ni
2009-08-10 07:21:54 SMTP command timeout on connection from (domain.com) [127.0.0.1]
2009-08-10 07:22:42 1MaTsX-000
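
Two classic causes of exactly this symptom are reverse-DNS and ident (RFC 1413) lookups stalling at connect time. A sketch of the Exim main-configuration knobs that disable them (verify the option names against your Exim version before relying on this):

host_lookup =              # don't reverse-resolve connecting hosts
rfc1413_hosts =            # don't make ident callbacks
rfc1413_query_timeout = 0s # belt and braces: zero ident timeout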

configuration - Multiple domains (including www-"subdomain") on apache?

After messing around a while I decided to ask here: I have a vhost and want to use 2 domains on this server. My Apache configuration file looks something like this:

NameVirtualHost *
ServerName www.domain1.de
DocumentRoot /var/www/folder1/
ServerName www.domain2.de
DocumentRoot /var/www/folder2/

On the configuration page for the domains of my vhost, both domains are assigned to the server IP. The problem now is: www.domain1.de works, domain1.de works, www.domain2.de works, but domain2.de does not work. Does anyone have any idea why the second domain only works with the added "www"? Answer domain1.de works because www.domain1.de is the first VirtualHost and is served as the default. You need to add ServerAlias domain2.de to www.domain2.de for the shorter version to work as well (you should add a ServerAlias for www.domain1.de too). If you don't want www.domain1.de to be served as the default, add another VirtualHost at the beginning serving some simple HTML fil
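
Putting the answer together, a sketch of the corrected directives (as in the question, the surrounding <VirtualHost *> tags are implied):

ServerName www.domain1.de
ServerAlias domain1.de
DocumentRoot /var/www/folder1/

ServerName www.domain2.de
ServerAlias domain2.de
DocumentRoot /var/www/folder2/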

fedora - Recovering from su chown -R 770 x:x *

Like other questions lurking around here, e.g. this one and another, I've made a boo-boo. Specifically, I ran a command, in / , while rooted, like so: chown -R 770 x:x * where x was apache (I think! Maybe I was experimenting with nobody, or somebody else). Similar to others, I almost immediately noticed and killed the job, but it seems it was too little, too late. My problem diverges from the existing ones at the point where I set selinux to permissive and stupidly decided to reboot "to prove it". Now I can't boot - the screen remains black while the cursor is a constant loading animation after the GRUB selection. I've tried booting into 'recovery mode', which sounds hopeful, though it loses its promise when I follow the instruction to return to default mode (naturally), since (probably among other reasons) "etc/audit/somethingOrOther is not owned by root" and I'm back to perpetual loading. Obviously I need to fix the system before returning to d
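
On RPM-based distros like Fedora, the package database can restore ownership and permissions for everything a package installed, and SELinux labels can be rebuilt on boot. A recovery sketch to run from a rescue environment chrooted into the system (it won't fix files no package owns, e.g. data under /home or /var):

rpm --setugids -a    # restore user/group ownership for all packaged files
rpm --setperms -a    # then restore file modes (chown clears setuid bits, so run this second)
touch /.autorelabel  # force a full SELinux relabel on next boot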

dhcp - lan to wan connection refused by windows firewall

I have plenty of experience with LANs and bridges; however, this is my first time using a WAN-style setup... here is a diagram of my network: WAN (public ip & 10.1.10.1) --- COMP1 (10.1.10.2) --- LAN (10.1.10.3 & 192.168.1.1) --- COMP2 (192.168.1.2). The WAN gateway of LAN is set up as 10.1.10.1, and the port forward rules are: WAN:A -> COMP1:B; WAN:C -> LAN(10.1.10.3):D; LAN:D -> COMP2:D. From the public internet, connecting to WAN(public ip):C connects me to COMP2:D. From COMP1, connecting to LAN(10.1.10.3):D connects me to COMP2:D. However, netstat on COMP2 lists the connection as LOCAL(COMP2:D) REMOTE(COMP2:XYZ) when I was expecting REMOTE(COMP1:XYZ). Yet the connection COMP2->WAN:A never works... I don't know what's wrong. What would make COMP2 think a remote connection from an external network was from itself? Shouldn't LAN see a 10.1.10.? destination address and quickly route it to the WAN gate

performance - Why do servers have SAS instead of SSDs?

I was wondering why servers still come with SAS disks instead of SSDs. I know that SAS drives are faster than normal hard drives, but they are still much slower than SSDs. I think they are more expensive too :s so what's the deal here? Answer Cost, capacity and reliability are factors in why SSD adoption hasn't occurred at all levels. SSDs cost more than SAS disks for a given capacity. But in general, servers don't actually come with a particular type of disk. Storage is something that is configured afterwards. Some background information: Are SSD drives as reliable as mechanical drives (2013)? Edit: Also note that not every workload benefits from SSD. Provisioning 20TB of bulk storage where only a small subset needs to be active at a given time is best served by standard disks (plus tiering/caching). Purely sequential workloads are better served by spinning disks. And more than that, SSDs tend to do best outside of the disk form-factor. For the past yea

virtualization - What are the benefits of having local disks on hypervisor nodes running VMs off a SAN?

When a hypervisor like XenServer or vSphere can run on diskless nodes (e.g. by booting from flash cards or from the network) and VM storage is handled via a SAN, is there any good use for local disks? Would it be better to have those disks even if they're not used to boot the hypervisor or to hold VMs? What are the reasons, if any, to choose completely diskless servers vs. having some local storage? Answer I don't have a ton of XenServer experience, but here's some information coming from a VMware background. vSphere ESXi runs completely in memory following boot, so local storage JUST for the ESXi installation is generally considered overkill. Booting off SD card/USB stick/PXE (network) is supported under ESXi, so there are lots of options. Servers with no local storage have some benefits, primarily: lower cost, lower power consumption, and less heat generated by the server. However, this doesn't mean local storage can't be useful. First and foremost, you can configure

domain name system - DNS A record for sub-subdomain when subdomain is a CNAME

Is it possible to create an A record for a sub-subdomain when the subdomain has been CNAMEd? Something like this:

domain.com         A      198.51.100.1
my.domain.com      CNAME  othersite.com
www.my.domain.com  A      198.51.100.1

Will the CNAME on my.domain.com cause the lookup for www.my.domain.com to fail, or cause any other ill effects? Answer That wouldn't have any impact whatsoever. The DNS query will start by looking up the sub-domain in its cache and, if it's not found, it will look up the NS record for the zone and query it (your DNS server) for the subdomain. It won't even notice the CNAME that worries you.

networking - Very slow ssh, snmp, telnet network connections but http, sftp are fast

I have a small network with about 8 Linux servers, a Cisco 2600 router, and a Cisco 3500XL switch. The router and switch have been configured and working properly for years. About 6 hours ago, the time to establish a connection via certain protocols skyrocketed. Connecting to a server via SSH can take a couple of minutes to establish, but once the connection is made, it works normally. Copying files via scp is fast as well, but making the initial connection takes forever. Same with telnet. However, connections via HTTP or HTTPS are perfectly fine; they cruise along like normal. SFTP also seems to be fine. SNMP connections seem to be affected as well: my Cacti monitoring server has stopped working properly, with timeout errors in the logs. PHPSVR: Poller[0] Maximum runtime of 292 seconds exceeded for the Script Server. Exiting. It gets intermittent, but mostly failed results from the hosts; however, it is fairly reliable in reporting the router and switch CPU and memory. The strange thin
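
Slow-to-establish ssh/telnet/SNMP with normal HTTP is the classic signature of broken reverse DNS: those daemons try to resolve the client's IP and block until the lookup times out. A quick sketch of how to confirm and, for sshd, work around it (addresses hypothetical):

dig -x 192.0.2.25 @your-dns-server   # does a PTR lookup for a client IP hang or time out?
# /etc/ssh/sshd_config
UseDNS no                            # stop sshd doing reverse lookups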

domain name system - DNS Problems with .pt configuration

I have a hosting service with aplus.net; however, I needed to register a .pt domain, and aplus doesn't offer this service, so I contacted a .pt registrar, hostingbug.net, to do this. So now I'm the owner of a .pt domain, click.pt . I gave hostingbug the aplus nameservers needed for propagation. And here the problems began. When hostingbug tried to configure it, the following error was displayed:

<<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> @64.29.151.221 click.pt. NS +norecurse
(1 server found)
global options: printcmd
connection timed out
no servers could be reached

And they told me that aplus.net needed to create a new dns zone for .pt domains. So I contacted aplus.net, and they didn't understand this issue, told me that everything was fine with their servers, and sent me back to hostingbug. So I'm feeling like a ping-pong ball right now... How can I configure this "new dns zone" for .pt domains? Anyone have a clue of how to do this

ServerName on VirtualHost not working in Apache 2.4.6

I have the following VirtualHosts inside /etc/httpd/sites-enabled/domain.com.conf:

DocumentRoot /var/www/html/
ServerName www.domain1.com

ServerName test.domain2.com
ServerAdmin admin@domain2.com
DocumentRoot /var/www/html/domain2dir
ErrorLog /var/log/httpd/domain2.com-error.log
CustomLog /var/log/httpd/domain2.com-access.log combined

ServerName something.domain1.com
ServerAdmin admin@domain2.com
DocumentRoot /var/www/html/somethingdir
ErrorLog /var/log/httpd/something.domain1.com-error.log
CustomLog /var/log/httpd/something.domain1.com-access.log combined

When I access my server via IP, it shows me /var/www/html/; [OK] When I access my server via something.domain1.com, it shows me /var/www/html/somethingdir; [OK] But when I access test.domain2.com, it returns me to http://IP/ instead of taking me to /var/www/html/domain2dir. Why is that? PS: domain2dir is a wordpress site. EDIT: I changed
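
One thing worth checking besides the vhost definitions: WordPress itself redirects to the URL stored in its siteurl/home options, so if those are set to http://IP/, Apache can serve the right DocumentRoot and WordPress will still bounce you to the IP. A sketch of the vhost as it would normally look in 2.4:

<VirtualHost *:80>
    ServerName test.domain2.com
    DocumentRoot /var/www/html/domain2dir
</VirtualHost>

To rule out the WordPress side, check the siteurl and home rows in the wp_options table (table prefix assumed to be the default wp_) and make sure they say http://test.domain2.com.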

email - Can message go to spam because of X-AUTH and X-SID fail on hotmail

When email with the header below comes to hotmail, it is put into the spam folder. Maybe this is because of X-AUTH-Result: FAIL or X-SID-Result: FAIL? Or maybe the reason is different? How can I fix this?

x-store-info:4r51+eLowCe79NzwdU2kR3P+ctWZsO+J
Authentication-Results: hotmail.com; spf=softfail (sender IP is 114.112.255.122) smtp.mailfrom=www-data@uniquemobiles.com.au; dkim=none header.d=uniquemobiles.com.au; x-hmca=fail header.id=sales@uniquemobiles.com.au
X-SID-PRA: sales@uniquemobiles.com.au
X-AUTH-Result: FAIL
X-SID-Result: FAIL
X-Message-Status: n:n
X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0xO0Q9MjtHRD0yO1NDTD0w
X-Message-Info: 2etWe3f/w1cLzNMFGWUkxI4X8GWjUgRPldCSLaHlMPz8KnnMw4wBLDZs45EYPr3D2LbW9QLPCct0MQQSuVuU4zU05+QEV84llG4Dg802VOeHLX90x3RbeXG0tmVB1as7GPr5ogCj2Y5rfmkYkroQia15I9SlWXAaM4gmU/Jw3y3yzUlB8kZ3Ihqkx5o9o96DUIgQU8BZOI3s1cj++xdMrorb6Gh1fAHR
Received: from uniquemobiles.com.au ([114.112.255.122]) by BAY0-MC2-F21.Bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900

Backup Entire USB Drive Containing Bootable Partitions in Debian

What's the best way to back up a USB drive with bootable data on it? For instance, I have what's called an ESXi server on a bootable USB; it's basically a linux variant with multiple partitions. What's the best way to back it up in case the original USB drive fails and the server needs to be put back together in a hurry? It seems my desires are:

1. Keep the backup solution as simple as possible
2. Keep the system online while system files are routinely backed up
3. Make drive replacement easy

So with dd alone, criteria 1 and 3 are met, possibly with...

sudo dd if=/dev/sdc | gzip > /storage/backups/esxi-usb-backup-2014-nov.gz

but someone on the internet criticized using dd due to geometry not being guaranteed between drives. I didn't really understand what they meant because they were inarticulate, but I understand that a replacement drive of the same spec as the original may contain more bad sectors than the original and thus may not be able to fit all the partiti
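
For a raw-image backup, a slightly more defensive variant of that dd pipeline is often suggested (bs speeds it up; conv=noerror,sync keeps going past read errors, padding them - which also means a failing source drive yields a silently damaged image, so check dmesg afterwards):

dd if=/dev/sdc bs=4M conv=noerror,sync | gzip > /storage/backups/esxi-usb-backup-2014-nov.gz
# restore later (target device hypothetical):
gunzip -c /storage/backups/esxi-usb-backup-2014-nov.gz | dd of=/dev/sdX bs=4M

The geometry criticism mostly amounts to this: the image only restores cleanly to a drive at least as large as the original, so a nominally same-capacity replacement that is a few sectors smaller can fail at the tail end.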

linux - How to increase swap size?

Recently, I put more RAM into my server and now I have a total of 24GB of RAM. Originally, I set up the OS with a 2GB swap partition:

/dev/sdc1       1     281    2257101  82  Linux swap / Solaris
/dev/sdc2 *   282   60801  486126900  83  Linux

2GB is allocated for swap currently, but from reading around it seems that is not much. For a system with 24GB, I am thinking of allocating at least 10GB of swap. My questions: Can I do it while the OS is running? Do I have to reinstall? I am using OpenSuse 11.3. Answer You decided to create a separate swap partition upon installation. You can't resize it online - even an offline resize is going to take a considerable amount of time and bears the potential risk of damaging your subsequent filesystem on /dev/sdc2. The easiest option to work around this is to either create a new swap partition on a different disk you don't currently use (or can afford to take offline for re-partitioning) or simply use a swap f
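
The swap-file route the answer is heading toward avoids repartitioning entirely and can be done while the system is running. A sketch (size and path hypothetical):

dd if=/dev/zero of=/swapfile bs=1M count=8192          # create an 8 GB file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab  # persist across reboots
swapon -s                                              # verify both swap areas are active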

Proper SPF record

Our company emails are usually treated as spam. When we checked our score on SpamAssassin, it was pretty bad, 3.3, and the main reason was our SPF record. We would like to update our SPF record to resolve this issue. However, we are not very familiar with SPF records, so we would like your help. Our outside third-party mail server uses three SMTP servers (let's call them 1.1.1.1, 2.2.2.2 and 3.3.3.3). Our MX is forwarded to 4.4.4.4. Currently, our SPF record is the following: v=spf1 +a +mx +ip4:1.1.1.1 +ip4:2.2.2.2 +ip4:3.3.3.3 ~all When we tested our email, we received the following info from mail-tester.com: softfail domain.com.tr: Sender is not authorized by default to use 'aaa.aaa@domain.com.tr' in 'mfrom' identity, however domain is not currently prepared for false failures (mechanism '~all' matched) domain.com.tr: Sender is not authorized by default to use 'aaa.aaa@domain.com.tr' in 'mfrom' identity, however dom
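
For reference, the "+" qualifiers are the default and can be dropped, so an equivalent record is (a sketch; whether 4.4.4.4 also needs an ip4: entry depends on whether it ever originates mail for the domain):

v=spf1 a mx ip4:1.1.1.1 ip4:2.2.2.2 ip4:3.3.3.3 ~all

The softfail in the test output means the sending IP at test time matched none of the listed mechanisms, so the first thing to confirm is which IP actually delivered the test message and that it appears in the record.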

apache 2.2 - Two Server Names mapped to the same DocumentRoot

I have a server with two virtual hosts pointing to the same DocumentRoot folder. In that folder there is a Magento installation that properly manages both domains. Just in case this is important, the DocumentRoot for both domains is: /var/www/magento/htdocs/ Now I need to install a Wordpress site in a folder, but it should be visible only under one domain. That is: www.domain1.com/blog must show the wordpress blog; www.domain2.com/blog should not show anything. I'm a newbie in Apache configuration, so I was wondering if someone could point me in the right direction: where to put the files on the server, and how to avoid the blog being visible under the second domain. Thanks! Answer You should put Wordpress' files anywhere outside of the DocumentRoot. Then add something like the following configuration to the config of the virtualhost you want Wordpress on. Alias /blogs /path/to/wordpress # Put wordpress config here. The important directive here is Alias to point fro
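
A sketch of how the truncated answer likely continues, placed inside domain1's VirtualHost only (paths hypothetical; Apache 2.2 access-control syntax):

Alias /blog /var/www/wordpress
<Directory /var/www/wordpress>
    Options FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

Because the Alias lives only in domain1's vhost, www.domain2.com/blog simply doesn't exist there - the request is looked up under Magento's DocumentRoot instead.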

nginx redirect/rewrite hostname only

I'm having some trouble redirecting https requests to my Nginx server when only the short hostname of the server is used. I've tried every combination of examples out there, but nothing seems to trigger a redirect/rewrite. For instance, the server is called web01, and I want all requests to go to https://web01.domain.com . I have the http -> https rewrite working perfectly to the FQDN. However, if I try to go to https://web01 , it proceeds to try to load the page and obviously errors out on a certificate mismatch. I want it to rewrite the request to the FQDN. How can I configure the rewrite/redirect? What I have now:

server { listen *:80; rewrite ^ https://web01.domain.com$request_uri permanent; }
server { listen 443 default ssl; server_name web01.domain.com; }

Answer The issue here is that when you access the server with https://web01 , your browser expects to get an SSL certificate with web01 as the common name. However, if your certificate is for the full doma
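
One way to finish the thought: add a second :443 server that catches the short name and redirects. Note the TLS handshake happens before any redirect, so the browser still warns once about the name mismatch for https://web01 - that cannot be avoided without a certificate covering the short name. A sketch in the same style as the question's config (certificate paths hypothetical):

server {
    listen 443 default ssl;
    server_name web01.domain.com;
    # real site config here
}
server {
    listen 443 ssl;
    server_name web01;   # catch short-name requests
    ssl_certificate     /etc/nginx/ssl/web01.crt;
    ssl_certificate_key /etc/nginx/ssl/web01.key;
    rewrite ^ https://web01.domain.com$request_uri permanent;
}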

linux - Cannot bind to a specific IPv4 address when making outbound TCP connections, to hostnames that resolve to both IPv4 and IPv6 addresses

I just spent about 6 hours trying to figure this out, and I now believe CentOS/Linux is unable to bind to a specific virtual IPv4 address when connecting to a hostname that has an IPv6 address. This is a problem on servers that have multiple IP addresses. I am using CentOS 6 (Linux kernel 2.6.32-573.12.1.el6.x86_64). To reproduce this bug: Find a Linux machine with at least a /29 IPv4 public address space, and an IPv6 public IP. Alias at least one additional IPv4 address to the main interface (eth0 or otherwise). For this example, I will say 30.0.0.1 is the machine's primary eth0 IPv4 address, and 30.0.0.2 is an alias bound to eth0:2 on a network of 30.0.0.0/29. Find a hostname that has both IPv4 and IPv6 addresses, for example www.microsoft.com. telnet -b 30.0.0.2 www.microsoft.com 80 (This tests making an outbound connection using a specific IPv4 address.) The IPv4 request connects successfully after unsuccessfully trying the hostname's IPv6 addresses, but the TCP connection actual

domain name system - SPF record (DNS)

Please help me set up an SPF record. I have found several SPF record generators, but all the questions are too complicated and I am afraid of getting something wrong. I want to allow sending mail only: 1) from all IP addresses that are listed as A records for this domain; 2) from other servers in my data center in the same IP range; 3) from Gmail servers - my domain is set up to use Gmail (all MX records are Google's MX records). Google's instructions say to include include:_spf.google.com ~all in the SPF record. Do I need mx in this case? Which is correct: v=spf1 a ip4:111.222.333.0/24 include:_spf.google.com ~all or v=spf1 mx a ip4:111.222.333.0/24 include:_spf.google.com ~all Thanks.
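
Both records are syntactically valid; mx only matters if the hosts named in your MX records also send outbound mail for the domain. Since include:_spf.google.com already authorizes Google's outbound servers (which are distinct from the inbound servers your MX records point at), a sketch of the leaner record would be (keeping the question's 111.222.333.0/24 placeholder; note a real octet can't exceed 255):

v=spf1 a ip4:111.222.333.0/24 include:_spf.google.com ~all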

centos - Mounted disk disappears after reboot

I've got a CentOS 7 dedicated server with an SSD plus an NVMe SSD of 400GB. I followed a tutorial to mount and format (as an xfs filesystem) the NVMe disk so that I could set up my NoSQL databases on that drive. However, every time I reboot my server the disk disappears and the mounted folder is empty again. The commands I ran to mount the disk: fdisk -l returned:

Disk /dev/nvme0n1: 400.1 GB, 400088457216 bytes, 781422768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 275.1 GB, 275064201216 bytes, 537234768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b09ac

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2050047 1024000 83 Linux
/dev/sda2 2050048
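
The described symptom (mount works, then vanishes after a reboot) is what happens when the mount was done by hand and never added to /etc/fstab. A sketch, assuming the filesystem was made directly on /dev/nvme0n1 and mounts at a hypothetical /data:

blkid /dev/nvme0n1        # note the filesystem UUID
# /etc/fstab - prefer UUID so device renumbering can't break it
UUID=xxxx-xxxx  /data  xfs  defaults  0 0
mount -a                  # test the entry without rebooting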

linux - Ratio Recommended Paging Space

How would you find the approximate ratio of recommended paging space to RAM size? And does the currently allocated paging space equal the recommended amount? Where is the recommended paging space found? I used the top command and got:

KiB Mem: 1016476 total, 171668 free, 439328 used, 4054580 buff/cache
KiB Swap: 3999740 total, 3999740 free, 0 used, 399924 avail Mem
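
The currently allocated figures come from the kernel, e.g.:

free -h       # RAM and swap totals/used
swapon -s     # each active swap area and its size

There is no single "recommended" value stored anywhere; it depends on whose rule of thumb you follow. A common vendor guideline (roughly Red Hat's) is about 2x RAM below 2 GB of RAM, equal to RAM up to 8 GB, and 0.5x beyond that, adjusted upward if hibernation is needed. The machine above (~1 GB RAM, ~4 GB swap) is on the generous side of those ratios.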

Iptables port forwarding for specific host dd-wrt/tomato

I'm trying to open ports 5060 and 5004 (UDP & TCP) for a specific internal IP (192.168.1.5), but I only want communication over these ports to be between specific external host(s) and to deny everything else to this internal IP. I have tried various rules, but they either seem to open the port to any external source or block everything. Here is my -vL output:

Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP all -- any any anywhere anywhere state INVALID
19 2811 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
0 0 shlimit tcp -- any any anywhere anywhere tcp dpt:ssh state NEW
0 0 ACCEPT all -- lo any anywhere anywhere
3 156 ACCEPT all -- br0 any anywhere anywhere
0 0 ACCEPT udp -- any
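
A sketch of rules scoped to a single external host (203.0.113.10 is a placeholder for the allowed peer; the multiport match covers both ports, and the trailing DROPs catch everyone else - these would sit in the FORWARD chain ahead of any generic forwarding rules, with a matching DNAT in nat/PREROUTING):

iptables -I FORWARD -p udp -s 203.0.113.10 -d 192.168.1.5 -m multiport --dports 5004,5060 -j ACCEPT
iptables -I FORWARD -p tcp -s 203.0.113.10 -d 192.168.1.5 -m multiport --dports 5004,5060 -j ACCEPT
iptables -A FORWARD -p udp -d 192.168.1.5 -m multiport --dports 5004,5060 -j DROP
iptables -A FORWARD -p tcp -d 192.168.1.5 -m multiport --dports 5004,5060 -j DROP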

virtualization - How to [politely?] tell software vendor they don't know what they're talking about

Not a technical question, but a valid one nonetheless. Scenario: HP ProLiant DL380 Gen 8 with 2 x 8-core Xeon E5-2667 CPUs and 256GB RAM running ESXi 5.5. Eight VMs for a given vendor's system. Four VMs for test, four VMs for production. The four servers in each environment perform different functions, e.g.: web server, main app server, OLAP DB server and SQL DB server. CPU shares configured to stop the test environment from impacting production. All storage on SAN. We've had some queries regarding performance, and the vendor insists that we need to give the production system more memory and vCPUs. However, we can clearly see from vCenter that the existing allocations aren't being touched, e.g.: a monthly view of CPU utilization on the main application server hovers around 8%, with the odd spike up to 30%. The spikes tend to coincide with the backup software kicking in. Similar story on RAM - the highest utilization figure across the servers is ~35%. So, we've been do