

Showing posts from December, 2015

disaster recovery - What's your checklist for when everything blows up?

Users can't get to their e-mail, the CEO can't get to the company's home page, and your pager just went off with a "911" code. What do you do when everything blows up? Answer The first answer is: stay calm! I learned the hard way that panicking often just makes things worse. Once that's achieved, the next step is to actually ascertain what the problem is. Complaints from users and managers will be coming at you from all angles, telling you what THEY cannot do, but not what the problem is. Once you know the problem you can start planning the fix and start giving your angry users a timescale!

domain name system - localhost in a DNS zone

Our ISP also hosts our external DNS. By default they include an entry for localhost. For example: localhost.example.com. 86400 IN A 127.0.0.1 When I've asked them to remove it they give me a hard time and say that it's just the way BIND works. I've tried to do some research on why I might want that included, but I couldn't find much. I did find at least one place that thought it might be a possible XSS attack vector. It does appear to be fairly common, so I did lookups on the top 20 website domains from Alexa: most don't have such an entry, but a couple do. A few others have an entry that, instead of pointing to 127.0.0.1, points to a world-routable IP address. So anyway, why would I want to have localhost in the zone for my domain? Are there any issues with not having it? Is there any kind of best practice concerning this? Is it indeed a default BIND thing that I'm not aware of? Thanks Answer localhost.example.com is sometimes include

apache 2.2 - Why can my servers not access their own addresses?

I have two VMs on CentOS 6.5, running Plesk 12, with Apache 2.2 as the webserver. One is a clone of the other, and I inherited them already set up by someone else. My problem is that I can access my sites hosted on the servers from any computer other than the servers themselves. Say example.com points to one of my servers. If I SSH into either server and run wget example.com I get back: --2014-10-20 18:01:42-- http://example.com/ Resolving example.com... Connecting to example.com| |:80... failed: Connection timed out. Retrying. The IP address it resolves to is correct. If I run wget on the servers using the IP address directly, I get the same negative result. If I run wget against the same domain on a computer outside these VMs, it resolves to the same correct IP and connects. Using localhost on the VMs does work fine: wget localhost --2014-10-20 18:12:35-- http://localhost/ Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... conn
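One common cause of exactly this symptom is a router or firewall in front of the servers that doesn't support hairpin NAT, so the servers cannot reach their own public IP from inside. A frequently suggested workaround (not confirmed as the fix here; the IP and names below are placeholders) is to map the public name to the server itself in /etc/hosts:

```
# /etc/hosts sketch -- hypothetical workaround if hairpin NAT is the culprit
127.0.0.1       localhost
203.0.113.20    example.com www.example.com   # or the server's own LAN IP
```

With that in place, wget example.com on the server connects locally instead of trying to loop through the public interface.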

spf - How to improve DMARC Compliance?

I've been monitoring our DMARC compliance with policy "p=none" for a month or two using both dmarcian and dmarcanalyzer. I've noticed that when we send a large email marketing campaign (10k+ emails), there is a spike in mail that fails DMARC that seems to be from the campaign. My company sends marketing emails to our clients using Pardot, and Pardot sends emails using a 5321.MailFrom address with a domain of "bounce.s7.exacttarget.com". We have set up our DKIM keys properly in Pardot and have SPF records on our domain that allow their servers to send mail on our behalf. I also know that since the Pardot emails are sent from "bounce.s7.exacttarget.com", we'll never be in DMARC alignment for SPF. So the problem is, if we send 10,000 emails to our clients, I'm only seeing DMARC aggregate report successes (using DKIM) for 1,000-1,500 emails. (I assume it's normal for only a percentage of mail servers to send aggregate reports?) And I see a
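For reference, a DMARC policy in monitoring mode is published as a TXT record on the _dmarc subdomain; a record of roughly the following shape (domain and report mailbox are placeholders) keeps p=none while still collecting the aggregate reports mentioned above:

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```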

Active Directory forest, root domain controller died

I have the following Active Directory structure:

                     Forest
                  +----------+
                  |          |
                  v          v
    domain1.example.com   domain2.example.com
         (root)
           |                    |
           v                    v
          DC1                  DC2
      (WinSrv2003)         (WinSrv2008R2)

DC1 died a long time ago without any backup and was the only domain controller for domain1.example.com. I think we transferred all of the FSMO roles to DC2 before DC1 died. All of the company users & computers are on domain2.example.com. I tried to add a new domain controller for domain2.example.com, but as its OS is Windows Server 2019, I receive an error saying that the forest functional level is Windows 2000. I would like to have a clean Active Directory structure (with replication DCs etc.) to avoid this kind of mess in the future, what

domain name system - How to configure DNS so that www.example.com goes to one server, *.example.com to another

I'm trying to set up my domain as follows, but I'm not actually sure if it's possible. I have a domain where I would like the base and www addresses to go to my static site, but all others to go to my application server. For example: My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers: ns1.dnswebby.com ns2.dnswebby.com ns3.dnswebby.com And in Webbynode I've set up a wildcard A record to point to the IP address of my VPS: * 1.2.3.4 A and this works nicely: if I go to app.example.com it resolves to my application server at Webbynode. However, what I'd like is to have example.com and www.example.com go to my static site, hosted back at Dreamhost, whilst still having any other subdomain go to my app. What I've done to try and achieve this is set up these DNS "NS" entries at Webbynode, trying to get Dreamhost to resolve these domain names: (empty) ns1.
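One way to sketch this in the zone at Webbynode, rather than delegating with NS records: explicit A records take precedence over the wildcard (a wildcard only matches names that have no records of their own), so the bare domain and www can point at the static site while everything else falls through to the VPS. The static site's IP below is an assumed placeholder; 1.2.3.4 is the question's own VPS address:

```
; hypothetical zone fragment for example.com
@    IN A  198.51.100.10   ; static site (Dreamhost IP, assumed)
www  IN A  198.51.100.10
*    IN A  1.2.3.4         ; wildcard: every other subdomain to the VPS
```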

Role of multiple domain controllers in a domain

I wanted to know the use of having multiple domain controllers for one domain. Some more questions related to it: If there are multiple domain controllers, is computer information in one domain controller visible to the others? If two computers show the same domain name, say xyz, in Control Panel, is there any possibility that they are in different domains? PS: I am new to AD and could not find a straightforward answer.

lamp - Website is down and unable to login via SSH / Putty on Ubuntu 14.04 x64 Digitalocean

I have a Magento website on digitalocean.com and I am unable to log in via SSH or SFTP. Almost every other day I face this problem and have to restart my droplet via my DigitalOcean account to get the website working again. A few days ago it was giving a MySQL connectivity error, which I resolved by increasing the droplet RAM to 2GB and creating a swap file. Can anyone please suggest a solution for this issue? Also let me know if you want to see any code files. I have added a few lines from error.log; please see if this helps: 150808 10:18:38 [Note] Plugin 'FEDERATED' is disabled. 150808 10:18:38 InnoDB: The InnoDB memory heap is disabled 150808 10:18:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins 150808 10:18:38 InnoDB: Compressed tables use zlib 1.2.8 150808 10:18:38 InnoDB: Using Linux native AIO 150808 10:18:38 InnoDB: Initializing buffer pool, size = 128.0M 150808 10:18:38 InnoDB: Completed initialization of buffer pool 150808 10:18:47 InnoDB: highest supported fil
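Symptoms like this on a small droplet often point to memory exhaustion, with the kernel OOM killer taking out mysqld or sshd; grepping syslog for "Out of memory" would confirm or rule that out. As a hedged sketch only (values are illustrative, not tuned for this site), capping MySQL's appetite in my.cnf is one commonly suggested mitigation:

```
# /etc/mysql/my.cnf sketch -- illustrative limits for a low-memory VPS
[mysqld]
innodb_buffer_pool_size = 128M
max_connections         = 50
```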

linux - Nagios NRPE on Ubuntu 12.04 "Unable to read output"

Right to the point. nrpe.cfg modifications: Added the Nagios host to allowed_hosts: allowed_hosts=127.0.0.1,192.168.1.10 Removed the # in front of command_prefix=/usr/bin/sudo After that I reloaded the service: /etc/init.d/nagios-nrpe-server restart I have also edited /etc/sudoers: # User privilege specification root ALL=(ALL:ALL) ALL nagios ALL=NOPASSWD: /usr/lib/nagios/plugins/ Running: $ ./check_users -w 5 -c 10 USERS OK - 1 users currently logged in |users=1;5;10;0 works and I get my results. Running: su nagios -c "./check_users -c 2 -w 2" returns nothing. From the Nagios host to the new remote system I can run: check_nrpe -H 192.168.1.20 and I get NRPE v2.12 as the result. I have also checked that nagios owns the plugin folder, but still no go. Any tips would be helpful. (And yes, I have googled and read 10-20 threads, but still no go.)
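For what it's worth, a common culprit for NRPE's "Unable to read output" when command_prefix is sudo is the requiretty default in /etc/sudoers, which silently blocks sudo for daemons that have no terminal. RHEL-family systems ship with it enabled; Ubuntu typically does not, but given the symptom it is worth checking. A hedged sketch of the lines often suggested (not verified against this setup):

```
# /etc/sudoers sketch -- let the nagios daemon use sudo without a tty
Defaults:nagios !requiretty
nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
```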

Server configuration + Memory + VMWare question

I'd like to buy this guy: DL 380 G7, http://www.pcconnection.com/IPA/Shop/Product/Detail.htm?sku=11578414&cac=Result — it comes with 24 GB. q1) What is the best way to upgrade the memory for the best performance? Right now it has 6 slots occupied with 4 GB modules. Is it possible to add a new 8GB module to this config, or should modules always be added in pairs of the same type (again, from a best-practice point of view)? q2) I want to set up VMware ESXi on it and put the following VMs on it: development VM (PHP/MySQL) - 2 GB; VM for production hosting of PHP 5.0 sites - 6 GB; VM for production hosting of PHP 5.3 sites - 6 GB; VM for production hosting of ASP.NET sites (a couple, low traffic) - 6 GB; mercurial repositories (source control) - 2 GB. I don't have too much load from the production VMs; approximate traffic per month across all three production VMs is planned at around 200-300GB, with 20-30k visitors across all sites deployed in these VMs. Anything you would suggest to make the system mo

hard drive - SATA Disk not recognized in brand new server

We bought an HP ProLiant ML350 G6 server for our workplace. It comes with RAID 5 supported by SAS disks. It also has a 750 GB SATA disk. We wanted to install Windows Server 2008, but the SATA disk is not detected. I thought it was a driver issue and installed the OS on the SAS disk, but even after installing the driver for the SATA controller, the SATA disk is not detected. I even tried a different slot for the SATA disk, but in vain. Has this got anything to do with the BIOS settings? I see that all the SATA PCI devices have IRQ 10. Should I change the IRQs? I think the DVD drive is also SATA. I have given all the info I know. Please help me out; I am dejected with this. Answer Using the array configuration utility in the boot loader to create a logical disk worked. With this the SATA disk was detected.

amazon web services - Difficulty Connecting To AWS EC2-VPC From Public Wifi

I've got an EC2 instance running on an AWS VPC (free tier), on top of which I'm running a website. I'm also using an RDS MySQL DB instance for my database needs, and have set up security groups to allow the following:

EC2 Security Group - Inbound:
  Allows all HTTP traffic via port 80
  Allows SSH traffic via port 22 from the 2 IP addresses I usually sit at
  Allows all MySQL traffic via port 3306
EC2 Security Group - Outbound:
  Allows all traffic on all ports
RDS Security Group - Inbound:
  Allows MySQL traffic from my EC2 security group over port 3306
RDS Security Group - Outbound:
  Allows all traffic on all ports

Usually, when I sit outside of the 2 IPs I mentioned in the security group, I create a new inbound rule in my default security group for the EC2 instance, which allows SSH access over port 22 from the IP at which I'm currently sitting. Today, for some odd reason, I cannot connect over SSH to the EC2 Inst

ZFS best practices with hardware RAID

If one happens to have some server-grade hardware at one's disposal, is it ever advisable to run ZFS on top of a hardware-based RAID1 or some such? Should one turn off the hardware-based RAID and run ZFS on a mirror or a raidz zpool instead? With the hardware RAID functionality turned off, are hardware-RAID-capable SATA2 and SAS controllers more or less likely to hide read and write errors than non-hardware-RAID controllers would be? In terms of non-customisable servers, if one has a situation where a hardware RAID controller is effectively cost-neutral (or even lowers the cost of the pre-built server offering, since its presence improves the likelihood of the hosting company providing complimentary IPMI access), should it be avoided at all? Or should it be sought after?

Linux Hosting: What is the purpose of setting hostname/FQDN in hosts file?

I just bought a Linode VPS hosting plan and was following this guide to set it up. In the "Setting the Hostname" section and the "Update /etc/hosts" section, it says the FQDN/hostname set here does not need to be related to the websites I am about to host, which confuses me. I did my own research by reading lots of articles but am still not very sure what role the hostname/FQDN plays in my web hosting business. Here are some basic facts I've managed to find out; feel free to correct me if anything is wrong: An FQDN must be something like xxx.somedomain.com; if the "xxx." is omitted then it is not an FQDN. The xxx, which I think could loosely be called a subdomain, can also be referred to as the "hostname", according to https://kb.iu.edu/d/aiuv . On my local machine, by adding the following line to the hosts file 63.117.14.58 www.yahoo.com whatever every network request for "www.yahoo.com" or "whatever" will be redirecte
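A concrete sketch of how those two settings usually fit together on a VPS (the names and the documentation IP below are placeholders): the hostname identifies the machine itself, independently of any sites it serves, and the hosts entry simply lets the box resolve its own FQDN without consulting DNS:

```
# /etc/hostname
myvps

# /etc/hosts
127.0.0.1      localhost
203.0.113.10   myvps.example.com  myvps
```

With this in place, `hostname` returns myvps and `hostname -f` returns myvps.example.com, regardless of which websites the machine hosts.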

php - Can't seem to get mod_rewrite to set an environment variable

This is just plain weird. I have put the following in an .htaccess file: RewriteRule ^a-file-on-the-server$ index.php [E=let_me_in:test] And in my PHP script, I have the following: print_r($_ENV); ...which prints out all the environment variables. When I go to mydomain.com/a-file-on-the-server, I get the output: Array ( [DOCUMENT_ROOT] => ******** [GATEWAY_INTERFACE] => CGI/1.1 [HTTP_ACCEPT] => application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 [HTTP_ACCEPT_CHARSET] => ISO-8859-1,utf-8;q=0.7,*;q=0.3 [HTTP_ACCEPT_ENCODING] => gzip,deflate,sdch [HTTP_ACCEPT_LANGUAGE] => en-US,en;q=0.8 [HTTP_CACHE_CONTROL] => max-age=0 [HTTP_CONNECTION] => keep-alive [HTTP_COOKIE] => ******** [HTTP_HOST] => ******** [HTTP_USER_AGENT] => Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.99 Safari/533.4 [PATH] => /bin:/usr/bin [QUERY_
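Two documented mod_rewrite/PHP details may explain this. In a per-directory (.htaccess) context, a matching RewriteRule triggers an internal round trip of the request, and variables set with the E= flag reappear with a REDIRECT_ prefix; separately, PHP only populates $_ENV when its variables_order setting includes E, so $_SERVER is the more reliable place to look:

```
# .htaccess sketch (same rule as the question); after the internal
# per-directory restart, the variable typically arrives as
# REDIRECT_let_me_in rather than let_me_in
RewriteRule ^a-file-on-the-server$ index.php [E=let_me_in:test]
```

In the PHP script, print_r($_SERVER) and looking for a REDIRECT_let_me_in key is the commonly suggested diagnostic.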

networking - Barriers to IPv6 deployment: addressing

There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about pa

monitoring - What Warning and Critical values to use for check_load?

Right now I am using these values:

# y = c * p / 100
# y: nagios value
# c: number of cores
# p: wanted load percent
#
# 4 cores
# time:      5 minutes  10 minutes  15 minutes
# warning:      90%        70%         50%
# critical:    100%        80%         60%
command[check_load]=/usr/local/nagios/libexec/check_load -w 3.6,2.8,2.0 -c 4.0,3.2,2.4

But these values are picked almost at random. Does anyone have some tested values? Answer Linux load is actually simple. Each of the load avg numbers is the summation of all the cores' average load, i.e.:

1 min load avg = load_core_1 + load_core_2 + ... + load_core_n
5 min load avg = load_core_1 + load_core_2 + ... + load_core_n
15 min load avg = load_core_1 + load_core_2 + ... + load_core_n

where 0 < avg load < infinity. So if the load is 1 on a 4-core server, it means either that each core is 25% used or that one core is under 100% load. A load of 4 means all 4 cores are under 100% load. A load of >4 means the s
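The y = c * p / 100 arithmetic from the config comments can be scripted; a small sketch (using the question's own percentage choices, which are a policy decision, not a standard) that derives the -w/-c arguments from the core count:

```shell
# Derive check_load thresholds from core count and target load percentages.
# The 90/70/50 and 100/80/60 splits are the question's choices, not a rule.
cores=4
awk -v c="$cores" 'BEGIN {
    printf "-w %.1f,%.1f,%.1f ",  c*0.90, c*0.70, c*0.50;  # warning  (5/10/15 min)
    printf "-c %.1f,%.1f,%.1f\n", c*1.00, c*0.80, c*0.60   # critical (5/10/15 min)
}'
# → -w 3.6,2.8,2.0 -c 4.0,3.2,2.4
```

For a machine with a different core count, changing `cores` regenerates matching per-core thresholds.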

email - What should I check in my server to be ready for sending newsletters to thousands of my site users?

I have a Windows 2008 dedicated server, and I am going to write a service to send newsletters from this server. Is it possible to send thousands of mails at the same time? What configuration should I check on the server to make sure things will work fine? Update: Send all these mails within 2 days. Answer I'm looking at your question from a slightly different viewpoint. One of our servers is a P4 2GB server running SBS2003 (Exchange), and it will diligently send 3-4 thousand emails in an afternoon. The emails simply get queued and delivered in turn. Technical/configuration tasks: You need a reverse DNS entry for the IP address that the mailserver sends from. Installing DKIM will help with delivery to some ISPs, including Yahoo. Adding an SPF record will also help (another user on Server Fault says this counts against you - he works for an anti-spam company and says they find spammers are more likely to use a correctly-defined SPF record than non-spammers, although
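For the SPF task mentioned above, the record is a TXT record on the sending domain; a minimal sketch might look like this (the domain, IP, and choice of mechanisms are placeholders to adapt, not a recommended policy):

```
example.com.  IN TXT  "v=spf1 mx ip4:203.0.113.25 ~all"
```

This authorizes the domain's MX hosts plus one explicit sending IP, and soft-fails everything else.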

mac osx - multiple port-based apache vhosts on osx 10.6 not resolving properly

I have a few local versions of development websites on my Mac, and want to provide access to them from a browser through vhosts, as well as use the live (web) versions from time to time. I have read many examples of people doing similar things by changing the URL and having Apache listen for the unique URL to serve from a local location. I have always done it using the same URL but a different port, and while it works seamlessly on Windows, I can't get it working on the Mac. (Let's say) I have two websites: amazingwebsite.com and facebookiller.org. I want to access the local versions using the same URLs, by enabling the browser's proxy (with one click), which I have set to 8080. Apache is set to Listen *:8080 in httpd.conf. In httpd-vhosts.conf (which is getting loaded) I have: NameVirtualHost *:8080 ServerAdmin webmaster@amazingwebsite.com ServerName amazingwebsite.com ServerAlias www.amazingwebsite.com DocumentRoot "/Users/username/Development

apache 2.2 - The A to Z of setting up a Linux box for secure local hosting

I am in the process of reinstalling the OS on a machine that will be used to host a couple of applications for our business. The applications will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but I am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer , if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 MySQL: 5.0.77 (

windows - Folder ACLs lockdown. Unable to take ownership

This should be a trivial issue, except this time it isn't. I have a folder on a non-boot drive on a Windows Server 2008 R2 machine which is completely stuck. Using the domain administrator account, I'm unable to: delete it; take ownership of it (takeown gives ERROR: Access denied!!); or change the permissions on it (access denied). Checking the effective permissions shows that: Administrator (Domain Admin) has FULL permissions on the folder; the Domain Admins group also has FULL permissions. But the Security tab shows that the Domain Admins group lacks the Special Permissions. The folder does not inherit its ACLs from its parent, while its children inherit theirs from it. The owner (which is not the domain admin but a random domain user) has the following rights on it: read permissions; change permissions. Yet not even logging onto the server with the owner's credentials allows me to change the permissions or the owner. I always get ACCESS DENIED when using icacls or ta

windows server 2008 - Disallow Non-authoritative requests to DNS caches

I need to configure the following setting for my DNS server, which is also my domain controller (Windows Server 2008 R2 Standard): A) Non-authoritative requests to DNS caches should not be allowed; configure DNS to prevent cache snooping by refusing to answer non-recursive queries as a server, and never consult the cache when responding to non-RD queries. Do you know what settings need to be changed to achieve the above? Thanks & Regards, Param

Sending 10,000 emails?

(I don't know whether to submit this to Super User or Stack Overflow.) My friend and I have a subscriber list -- strictly opt-in, for people interested in when we release new projects -- and we may want to send out a first email this year. How would one best send an email to over 10,000 people? Perhaps there is a good, presumably paid, service which would do this for one? The key being that the emails ought to arrive, with good chances of not being immediately labeled as spam due to the sender or something (as it's definitely not spam). When I simply send emails from my 1&1 server, they often go straight to the spam folder, even though the same email sent through, say, Google Appspot arrives fine. FWIW, the email addresses are not yet verified (people simply signed up for the list by providing an email, and optionally a name). Analyzing what bounced may be a plus. BTW, we will provide an opt-out link with every mail. Thanks for any pointers, and sorry if this does

nginx domains using SSL cert listening but inaccessible

I have multiple domains, several of them using SSL certs, running under nginx. All domains use basically the same config, substituting names of course, except that the HTTPS-enabled domains have the SSL settings specified. Between those two domains the config is also the same for SSL, except for the file names of keys and such. Each website also runs on its own dedicated IP (all of them). All my non-SSL sites are working just fine; I can access them without any problems. All my SSL sites get a 521 error from CloudFlare. (Strict SSL is on, just FYI.) One of the domains I had previously set up had been working just fine. Even if I remove the other SSL-enabled domain, it still doesn't work now. The only config change I made was adding a new domain that also uses an SSL cert. When I test the config with nginx it says everything is fine. When I check netstat I can see those IPs listening on 443. I don't see any errors in /var/log/sy
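For comparison, a minimal per-domain SSL server block of the shape described might look like this (names, IPs, and paths are placeholders, not the actual config); and since CloudFlare's 521 specifically means the origin refused the connection, it is also worth confirming that a host firewall isn't dropping port 443 traffic from CloudFlare's ranges even while nginx is listening:

```
# minimal sketch of one HTTPS vhost (placeholders throughout)
server {
    listen 203.0.113.30:443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    root /var/www/example.com;
}
```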

linux - packet queue performance discrepancies with BIND nameserver

Background: I've inherited a high volume caching nameserver environment (Redhat Enterprise Linux 5.8, IBM System x3550) that has inconsistent ring buffer settings: 1020 for eth0 and 255 for eth1. eth0 is connected to switch 1 of its local datacenter, eth1 is connected to switch 2 of the same. Every server in the cluster alternates between whether eth0 or eth1 is the active interface, and every cluster is located in a different region. The ring buffers obviously need to be made consistent. Here's where things get trickier: I discovered the problem above when researching why a number of the nameservers are frequently logging "error sending response: unset" errors, which the ISC knowledgebase suggests is related to outbound congestion . Servers with the higher ring buffer setting (1020) drop fewer packets on ifconfig (as one would expect), but tend to log the above error with great frequency, ~20k times a day in one of my highest load groups. We'll call this '

Apache "No Permission" - 403 forbidden

I accidentally ran a wrong chown this morning and now my /var/www permissions are all wrong. I'm unable to access anything anymore; Apache always says I do not have permission to view the page: You don't have permission to access / on this server. (even after chmodding everything to 777, or chowning it to www-data). Does anyone have any clue as to what's going wrong? Answer A number of things could be going wrong. The first thing is to look in your error log (maybe /var/log/apache2/error_log) for Apache's reason for failing to serve this location. Next, check your directory permissions up to your document root. E.g. if your document root is /var/www/htdocs then you need to ensure the Apache user has +x permission on the directories /, /var, /var/www, and /var/www/htdocs. Test whether you can access these directories yourself: su www-data ls / ls /var ls /var/www ls /var/www/htdocs exit Are you sure www-dat
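The directory-by-directory check can be scripted; a sketch (the docroot path is an assumption, substitute your own DocumentRoot) that prints the mode and owner of every path component from / down, so a missing execute bit for "other" stands out at a glance:

```shell
# Walk from / down to the docroot, printing mode and owner of each directory.
# Apache's user needs execute (+x) on every one of these to traverse into it.
docroot=/var/www/htdocs     # assumption: substitute your DocumentRoot
d=$docroot
dirs=""
while :; do
    dirs="$d $dirs"                 # prepend, so output runs / ... docroot
    [ "$d" = "/" ] && break
    d=$(dirname "$d")
done
for p in $dirs; do
    if [ -e "$p" ]; then
        stat -c '%A %U:%G %n' "$p"  # e.g. drwxr-xr-x root:root /var
    else
        echo "missing: $p"
    fi
done
```

Any line whose mode lacks the trailing x (or whose owner is unexpected after the bad chown) marks the directory blocking Apache.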