

Showing posts from January, 2016

ubuntu 12.04 - Heartbleed not fixed by OpenSSL and server upgrade

I have inherited a server in one of our Dev environments and found out straight away that it was never patched when Heartbleed was discovered. Now I've upgraded it, including all SSL libraries, and regenerated the self-signed certificates, yet even after a full server reboot it still shows up as vulnerable against various Heartbleed checkers. This is the state of things.

Ubuntu/kernel version:

    root@server:~# uname -a
    Linux server.domain.com 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

OpenSSL lib version:

    root@server:~# dpkg -l|grep ssl
    ii  libio-socket-ssl-perl  1.53-1             Perl module implementing object oriented interface to SSL sockets
    ii  libnet-ssleay-perl     1.42-1build1       Perl module for Secure Sockets Layer (SSL)
    ii  libssl1.0.0            1.0.1-4ubuntu5.13  SSL shared libraries
    ii  openssl                1
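Worth checking beyond the package list (a sketch; the port and <pid> are placeholders): confirm that the process the checker is actually hitting has the upgraded system libssl mapped. A daemon that bundles or statically links its own OpenSSL stays vulnerable no matter what dpkg reports.

    lsof -i :443                  # which process is answering the Heartbleed checker?
    lsof -p <pid> | grep libssl   # is the mapped libssl the upgraded 1.0.1-4ubuntu5.13 one?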

iis 7 - Cannot 301 redirect with IIS URL Rewrite Module

I am trying to troubleshoot an issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I planned to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior, no redirection. I created a page, test.html, on my site, then a second page, test2.html. So my exact-match pattern is "http://www.mydomain.com/test.html" and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-t
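For reference, an exact-match redirect rule in web.config looks roughly like this (rule name and URLs are illustrative). Note that the match url is tested against the path relative to the site root, with no scheme, host, or leading slash, which is the usual reason an exact match against a full URL like the one above never fires:

    <rewrite>
      <rules>
        <rule name="Old test page" patternSyntax="ExactMatch" stopProcessing="true">
          <match url="test.html" />
          <action type="Redirect" url="http://www.mydomain.com/test2.html" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>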

apache 2.4 - Multiple virtualhosts with the same servername

I have an Apache server, a single domain name, and multiple Tomcat instances. Here is my current config:

    ServerName my.domain.com
    ...
    ProxyPass /sample_1 ajp://127.0.0.1:8009/sample_1
    ProxyPass /sample_2 ajp://127.0.0.1:8009/sample_2
    ProxyPass /sample_3 ajp://127.0.0.1:8014/sample_3
    ProxyPass /sample_4 ajp://127.0.0.1:8019/sample_4
    ...
    CustomLog logs/my_access_log combined
    ErrorLog logs/my_error_log

Now I would like to use a separate CustomLog for each Tomcat instance. I tried to put each ProxyPass directive in a separate VirtualHost, but I got 403 errors for everything except the VirtualHost on top according to the httpd -S output. I don't want to remove the ServerName directive, because then the service would be available through the IP address. Help please.

Answer

You can use the SetEnvIf directive in combination with conditional logging to log specific requests to different log files.

    ServerName my.example.com
    ...
    SetEnvIf Request_URI "/sample_1.*" sample_1
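The answer is cut off above; presumably it continues by pairing each environment variable with a conditional CustomLog, along these lines (a sketch using the names already shown):

    SetEnvIf Request_URI "^/sample_1" sample_1
    CustomLog logs/sample_1_access_log combined env=sample_1
    SetEnvIf Request_URI "^/sample_2" sample_2
    CustomLog logs/sample_2_access_log combined env=sample_2

Requests whose URI matches a SetEnvIf pattern set that variable, and each CustomLog with env= records only the requests where its variable is set.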

windows - Active Directory W7 Client - Primary/Secondary Failover of DNS not happening

I have 2 DCs, with DNS on both as well. DC1 = 10.0.100.1, DC2 = 10.0.100.2. All Windows 7 clients have primary/secondary DNS IPs pointing to DC1/DC2 respectively. For test purposes I shut down DC1 and rebooted a Win7 client. Then I launched nslookup, and it always selects DC1's DNS. Although it has DC2's DNS as secondary, it still chooses DC1's DNS every time. On the W7 client I tried echo %logonserver% and it correctly shows \\DC02, which means the client logged in to the secondary DC successfully. If I manually set DC2's DNS as primary, everything is OK; if I manually point nslookup at DC02 (nslookup - dc02) it works; and if I ping DC02 or another host, that's fine too. But a plain nslookup always selects DC1's DNS and gives a timeout on every query. I tried waiting for about an hour and rebooted the client machine many times, to no avail. So my question is: why is Win7 not switching to the secondary DNS after the primary has failed?

Answer

Nslookup is a specific DNS testing tool. It does not mimic the behavior
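Since nslookup talks to the listed servers directly rather than going through the Windows resolver (the point the truncated answer is making), test failover with commands that use the DNS Client service instead; a sketch, with dc02.example.com standing in for a real name:

    rem clear the resolver cache so the next lookup is real
    ipconfig /flushdns
    rem ping resolves through the DNS Client service, which does fall back to the secondary
    ping -n 1 dc02.example.com
    rem prove DC2's DNS works by querying it directly
    nslookup dc02.example.com 10.0.100.2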

splunk - Weird df output in Red Hat 5.4 - Used < Size, but 0 available?

I have a server with two LUNs mounted from a local SAN. I have a configuration file in place for the vendor software we're using (Splunk) that defines the size of the second LUN, but I had accidentally configured it as 6GB larger than it actually was. This morning I came in to see whistles going off about the error. It has been fixed, and the Splunk server process has been restarted to make use of it. It should be clearing out data, and it appears to be doing so. However, when I look at the output of df, I see something weird:

    Filesystem                    Size  Used Avail Use% Mounted on
    /dev/cciss/c0d0p3             507G  4.0G  477G   1% /
    /dev/cciss/c0d0p1              97M   19M   73M  21% /boot
    tmpfs                          36G     0   36G   0% /dev/shm
    /dev/mapper/hot_group-lvol0   148G  128G   14G  91% /splunk/hot
    /dev/mapper/cold_group-lvol0  837G  797G     0 100% /splunk/cold

As you can see, df is showing that the total size of the disk is significantly larger
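For what it's worth, a 40G gap between Used and Size with 0 Avail is exactly what ext3's default 5% reserved-block allowance looks like on an 837G volume (reserved blocks count against Avail for ordinary users but not against Used). Assuming the cold volume is ext3, this is how one could check and, if desired, shrink the reserve:

    tune2fs -l /dev/mapper/cold_group-lvol0 | grep -i 'reserved block'
    tune2fs -m 1 /dev/mapper/cold_group-lvol0    # reduce the root reserve to 1%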

Monitoring PostgreSQL replication with Nagios and check_postgres shows intermittent delay

I have a master and hot standby setup with PostgreSQL 9.3, and I'm attempting to monitor the state of replication on the standby using the check_postgres tool and the "hot_standby_delay" action. This seems to work by calculating the difference in bytes between the xlog position on the master and the standby. In numerous online examples I have seen warning and critical thresholds for this in the < 1MB range. The exact command we are using in Nagios is:

    /usr/local/bin/check_postgres.pl --action=hot_standby_delay --host=$HOSTNOTES$,$HOSTADDRESS$ --port=5432 --dbname=monitoring --dbuser=monitoring --dbpass=monitoring --warning=1000000 --critical=5000000

which should set a warning at roughly 1MB and an outage at roughly 5MB. However, on our servers we routinely see it spike to a high level, like this:

    [1417719713] SERVICE ALERT: host;PostgreSQL: Replication Delay;CRITICAL;SOFT;1;POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB "monitoring" (host:host.example.com) 12117588
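To sanity-check what check_postgres is computing, you can read the two xlog positions yourself; these are the function names as they exist on 9.3 (renamed to pg_current_wal_lsn and pg_last_wal_replay_lsn in PostgreSQL 10), with placeholder hostnames:

    psql -h master.example.com  -d monitoring -c "SELECT pg_current_xlog_location();"
    psql -h standby.example.com -d monitoring -c "SELECT pg_last_xlog_replay_location();"

The byte difference between the two positions is what the hot_standby_delay action compares against --warning and --critical.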

ftp proxy using FQDNs

I have two servers behind a WatchGuard: one is a Linux server, one is a Windows server. The WatchGuard forwards HTTP and FTP requests (ports 80 and 21) to a proxy server. I have configured Apache on the proxy server so I can proxy the HTTP requests to either server based on domain names, as below:

    ServerName mysite.com.au
    ProxyPreserveHost On
    ProxyPass "/" "http://10.0.2.21/"
    ProxyPassReverse "/" "http://10.0.2.21/"

    ServerName mysite.net.au
    ProxyPreserveHost On
    ProxyPass "/" "http://10.0.2.31/"
    ProxyPassReverse "/" "http://10.0.2.31/"

So .com.au goes to 10.0.2.21, and .net.au goes to 10.0.2.31. These are both internal servers. I want to do the same type of forwarding for FTP (port 21). So if I try to ftp to a site hosted on the Windows server, the proxy should know it is hosted on the Windows server (10.0.2.31) and forward the ftp requests to the correct
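A caveat worth knowing before going further: unlike HTTP, classic FTP never sends a hostname, so the proxy has nothing equivalent to the Host header to route on; name-based FTP forwarding of the kind above isn't possible, only per-IP or all-to-one forwarding. A sketch of the latter on a Linux proxy, assuming iptables:

    modprobe nf_nat_ftp   # connection-tracking helper so FTP data connections are NATed too
    iptables -t nat -A PREROUTING  -p tcp --dport 21 -j DNAT --to-destination 10.0.2.31:21
    iptables -t nat -A POSTROUTING -p tcp -d 10.0.2.31 --dport 21 -j MASQUERADE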

apache 2.2 - Tuning Apache2 prefork MaxClients ServerLimit

I have a machine with 128 GB RAM that is using Apache2 as a webserver (there is no database server on this machine; the database machine has 64 GB RAM and can handle 2000 max connections). I see with a monitoring tool that there are at the moment about 44 busy workers and 12 idle workers. What are the best theoretical values for my prefork module? I sometimes get blank pages loading websites during high-load hours, and I get this error in my Apache error log:

    [notice] child pid 13595 exit signal Segmentation fault (11)

How can I solve this issue too? My Apache2 prefork module configuration:

    StartServers 3
    MinSpareServers 3
    MaxSpareServers 5
    ServerLimit 3200
    MaxClients 3100
    MaxRequestsPerChild 0

free -h on the www machine:

    total: 128G, free: 97G (with apache2 running), shared: 0B, buffers: 1.9G, cache: 23G

RAM used by Apache2 and other programs:

    Private   +  Shared   =  RAM used    Program
    96.0 KiB  + 61.0 KiB  = 157.0 KiB    sh
    176.0 KiB + 26.0 KiB  =
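A common sizing approach (not from the excerpt): measure the average resident size of an Apache child, then divide the RAM you're prepared to give Apache by that figure. A rough sketch, assuming the children are named apache2:

    # average resident set size of the current children, in MB
    ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {printf "%.1f MB avg\n", sum/n/1024}'

If a child averages, say, 30 MB and you budget 100 GB for Apache, MaxClients lands around 3300, so the configured 3100 is plausible; with only 44 busy workers, the blank pages are more likely tied to the segfaulting children than to worker exhaustion.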

email - SPF and DKIM help: Do the FAIL reports from DMARC indicate an issue?

I am having trouble determining if my SPF and DKIM are configured properly. Here are the key details:

My domain is mysteryscience.com. We send mail from Google Apps, from SendGrid, and from Intercom. All seem to be working properly, although I do hear of cases of our emails getting flagged as spam, which is why I'm investigating this. I have enabled SPF, DKIM, and DMARC. My SPF record seems to be semantically correct (checked here: http://www.kitterman.com/spf/validate.html ). My SPF TXT record is:

    v=spf1 ip4:198.21.0.234 include:_spf.google.com include:spf.mail.intercom.io -all

198.21.0.234 is my dedicated IP address for sending through SendGrid (mail.mysteryscience.com is my CNAME pointing to them). I have enabled DMARC and I'm reviewing the emails I get from various mail servers. While reviewing my results from Google.com I noticed a bunch of SPF and DKIM fails. It looks like these may have been rejections of legitimate emails I sent, but I'm not sure how to read this file. Here a
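A quick way to see the records exactly as receiving servers do (the DKIM selector below is a guess; each provider publishes its own, e.g. google for Google Apps or smtpapi for older SendGrid setups):

    dig +short TXT mysteryscience.com                     # the SPF record
    dig +short TXT _dmarc.mysteryscience.com              # the DMARC policy
    dig +short TXT google._domainkey.mysteryscience.com   # a DKIM key; selector is a guess

Note that DMARC aggregate reports routinely contain some SPF/DKIM failures for mail forwarded by third parties, so fails alone don't prove a configuration problem.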

Two servers, same email sent from both, Gmail sees only one as spam

I've been working on this problem for years with no success (I gave up a while back and just hoped Gmail would eventually "learn" that messages from one of my servers weren't spam, but that apparently never happened). I'm a game developer who runs forums and download servers for my customers. As part of my operation, I need to send people emails, often at their request; for example, password-reset emails from my forums. This isn't a "bulk" mailing situation, nor is my server sending out lots of email. I have two servers, both with the same hosting provider. One is in a shared hosting environment, where I get a subdir and my domain name is resolved using virtual hosting. Emails from that server have always been received just fine (though I didn't set up the email system, nor do I have much control over it). The other is a VPS that I manage. I have my own IP address there, and have full control over everything. From the VPS, I've never be
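Given where the excerpt cuts off, the standard first checks for a self-managed VPS in this position (the IP below is a placeholder): the VPS IP needs a PTR record matching the name the server uses at HELO, and the domain's SPF record needs to authorize that IP. Gmail's "Show original" view on a received message then shows the resulting Authentication-Results.

    dig +short -x 203.0.113.10    # PTR for the VPS IP; should match the mail server's hostname
    dig +short TXT example.com    # SPF for the sending domain; should cover the VPS IP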

debian - my server was rooted via h00lyshit exploit, any good advice?

So yesterday I found out that my server was rooted via the h00lyshit exploit. So far I have deleted all the files that might be associated with the exploit. I also deleted all the SSH keys in ~/.ssh/authorized_keys. I changed the root password to a 25-character random password and changed the MySQL passwords as well. Also, I think the attacker was from Italy, and since I only need access from my own country, I blocked every IP range except my country's. Will this help? Do you guys have any good advice on what I should do? I plan to disable root login via SSH (I should have done it much sooner, I know :( ). And is there a way to check whether he can access my server again? Luckily no damage was done. Oh, and I'm running Debian Lenny with a 2.6.26 kernel if somebody is interested. PS: yay, my first question :D

Answer

You should restore the server from a known good backup. There's no real way to know that no other back doors were installed, is there?
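To underline the answer: checks like the ones below can surface obvious tampering, but a clean result proves nothing, which is exactly why a rebuild from known-good media or backup is the right call. Standard Debian tooling, offered as a sketch:

    debsums -c                                        # report files that differ from their packages
    last -f /var/log/wtmp | head                      # recent logins (an attacker may have scrubbed this)
    find /etc /usr/bin /usr/sbin -mtime -7 -type f    # system files modified in the last week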

installation - HP Smart Array B110i SATA RAID Controller drivers crash HP DL320 G6

I'm trying to install Windows Server 2012 R2 on an HP DL320 G6 with a Smart Array B110i SATA RAID controller. During the install it asks me to load the driver for the RAID; I load the drivers from cp022401 (also tried cp020545) and the machine promptly crashes with the HP BIOS frowny face. That particular server was running Hyper-V 2012 with no problem, so I know the hardware is fine; I'm just replacing the old hard drives with newer/bigger ones. Do you have any idea how to install the B110i drivers successfully on Win2012R2?

Answer

OK, finally figured out what was wrong! Thanks to Chopper3 for pointing me in the right direction. The problem was that the firmware was too old for the Windows Server 2012 R2 B110i driver. I had to upgrade the firmware using the Smart Update Firmware DVD ProLiant Support Pack v10.10 first; then I was able to run SPP to get the latest version. After all that, the B110i driver ran fine.

networking - Switching to IPv6 implies dropping NAT. Is that a good thing?

This is a Canonical Question about IPv6 and NAT. Related: So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues: Our office NAT router (an old Linksys BEFSR41) does not support IPv6. Nor does any newer router, AFAICT. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway. If we're supposed to just get rid of this router and plug everything directly into the Internet, I start to panic. There's no way in hell I'll put our billing database (with lots of credit card information!) on the Internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only 6 addresses to have any access to it at all, I still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to be even remotely comfortable with that. There's a few ol
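The usual resolution (hedged, since the excerpt cuts off before any answer): the protective part of NAT is the stateful firewall that comes with it, and you can keep that without address translation. A minimal ip6tables sketch of a default-deny border, with a documentation prefix standing in for a real one:

    ip6tables -P FORWARD DROP
    ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to outbound traffic
    ip6tables -A FORWARD -s 2001:db8::/64 -j ACCEPT                       # let inside hosts initiate outbound

Nothing unsolicited reaches inside hosts, which is the same guarantee NAT was providing incidentally.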

Simulate a hard disk fail on Virtual Box

I'm testing some NAS setups using VirtualBox, with several virtual hard drives and software RAID. I would like to test the behavior under certain failures, and I would like to simulate that one of the hard disks broke and the RAID needs to be rebuilt. Would it be enough to do a

    cat /dev/urandom > /virtualdisk

or, as the virtual disks are containers, would VBox be unable to use it and the VirtualBox machine break?

Answer

I don't know that you can fail a hard drive this way in VBox (or any VM; they're typically designed to pretend hardware is perfect). You can try it and see, but the results could be pretty awful... A better strategy might be to shut down the VM & remove the disk, power on & do stuff, then shut down & re-add the disk. Another option is to use the software RAID administration tools to mark a drive as failed (almost all of them support this AFAIK), scribble on it from within the VM, then re-add it & watch the rebuild
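A sketch of the mdadm route the answer suggests (array and member names are examples):

    mdadm /dev/md0 --fail /dev/sdb1      # mark the member as faulty
    mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the array
    # optionally scribble on the disk from inside the VM for realistic corruption
    mdadm /dev/md0 --add /dev/sdb1       # re-add it and trigger the rebuild
    watch cat /proc/mdstat               # watch the resync progress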