Posts

Showing posts from April, 2019

datacenter - What is the proper way to manage cabling behind patch panels?

I have a 4-post 19" rack with a 72-port 2U quickport patch panel where horizontal structured cabling terminates. The cables are bundled and enter the rack at the rear. From the back of the rack, they need to get near the front of the rack directly behind the patch panel, fan out to the whole width of the patch panel, and somehow have enough slack so that they can be terminated. What is the proper way to accomplish this? How should the bundle of cable be supported on its way from the rear to the front of the rack? What considerations determine the ideal distance from the front of the rack to begin the fan-out? How can I prevent droop in the fanned-out section directly behind the patch panel from interfering with the rack unit below?

Answer: I like to use 1U front and rear cable management above and below my patch panels. This mostly applies to 2-post installs... As for the cabling bundle, it should be secured (zip-tied) and routed appropriately... However, a 4-post example...

nginx - Custom domain name and wildcard DNS

I have a web application that uses subdomains for user profiles (e.g. noodles.example.com). I have wildcard DNS configured, and Nginx set up to match; everything is working fine with that. How would a user be able to point his own domain to my server's IP address and get his profile? How would such a scenario work?
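A minimal sketch of how this is commonly wired up, assuming the profile app can look a hostname up in its own database (all names and ports below are placeholders): the user points an A record (or a CNAME to a host you control) at your server IP, and a catch-all server block hands every unrecognized Host header to the application.

    # Catch-all vhost: any Host not matched by another server block lands here
    server {
        listen 80 default_server;
        server_name _;
        location / {
            # Forward the original Host header so the app can map
            # the customer's domain to a profile in its own lookup table
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;
        }
    }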

domain name system - Windows Server - DHCP/DNS Updates - purging outdated DNS records

We have Windows 2008 servers providing DHCP and DNS services to the network. So far everything is working great. The problem is that clients which get their IPs from DHCP are automatically listed in DNS with their respective hostnames, and clients end up listed 2-4 times with the same name but different IPs. There is always one correct IP/hostname combination and 2-4 outdated ones. Is there any easy, automated way to get rid of all the outdated ones? I assume there must be some way of setting an expiration time in DNS after which outdated records get purged?
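The usual fix for this is DNS aging and scavenging, which timestamps dynamically registered records and deletes the ones that stop refreshing. A hedged sketch using dnscmd on the DNS server (the zone name and intervals are examples; the refresh windows should roughly match your DHCP lease length):

    rem Enable record aging on the zone (no-refresh/refresh intervals default to 7 days)
    dnscmd /Config example.local /Aging 1
    rem Have the server run a scavenging pass every 168 hours (7 days)
    dnscmd /Config /ScavengingInterval 168
    rem One-off: trigger an immediate scavenging pass
    dnscmd /StartScavenging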

debian - systemd does not notice apache starts

My Debian system has been infected with the systemd virus... When I issue "service apache2 start" to start Apache, systemd thinks it failed, but Apache is running just fine. So "service apache2 stop" does nothing, because systemd thinks Apache never started. To stop Apache, I have to issue "killall apache2".

    # service apache2 start
    Job for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.
    # systemctl status apache2.service
    ● apache2.service - LSB: Apache2 web server
       Loaded: loaded (/etc/init.d/apache2)
       Active: failed (Result: exit-code) since Thu 2016-06-09 15:49:43 CEST; 32s ago
      Process: 7513 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
    Jun 09 15:49:43 apache2[7513]: Starting web server: apache2 failed!
    Jun 09 15:49:43 apache2[7513]: The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning).
    ...
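systemd only knows what the LSB init script's exit code tells it, so this usually means the script itself returns failure (a stale PID file, or the 20-second startup check) while the daemon keeps running. A hedged diagnostic sketch (the PID file path varies per setup):

    # systemd trusts the init script's exit code; see what it actually returns
    /etc/init.d/apache2 start; echo "exit code: $?"
    # Compare the recorded PID with the processes really running
    cat /var/run/apache2/apache2.pid
    ps -C apache2 -o pid,cmd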

HP Smart Array P812i and StorageWorks enclosure D2700: bad performance

I have an HP DL360 G6 with a Smart Array P812i and a StorageWorks D2700 enclosure. The enclosure has 25 x 148GB 15K SAS 6G drives in RAID 10. The P812i has 512 MB of cache (50% read / 50% write). CrystalDiskMark shows ~500 MB/s sequential read speed, and other tests show similar results. File copy speed is ~300 MB/s read and ~500 MB/s write. I expected better results from this hardware configuration, let's say 1500-2000 MB/s sequential read speed. The application which runs on this hardware needs lots of I/O performance (sequential read), and the current performance does not justify the price of the hardware. Can anyone tell me whether the current performance is normal, or whether the system can be tuned for better performance, and how? Thanks.
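One way to narrow this down is to separate the benchmark from the storage: measure raw sequential throughput with caches bypassed and inspect the controller's cache settings. A hedged sketch from a Linux environment (the device name is a placeholder; the hpacucli tool must be installed):

    # Raw sequential read from the logical drive, bypassing the page cache
    dd if=/dev/sdb of=/dev/null bs=1M count=8192 iflag=direct
    # Per-device throughput while the test runs
    iostat -mx 1
    # Controller, cache and logical drive details
    hpacucli ctrl all show config detail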

linux - How to quickly find out if a *nix server is running OK?

Often when I find myself in front of a unix/linux (or any other *nix variant) console and have to quickly diagnose the server's condition, I just can't remember everything that should be checked. I'll try a vmstat, some ps/top manoeuvres, read procinfo and some log files (boot & sys), but what I'd really like is a quick way to view CPU, hard disk and physical memory condition. I know a lot of it is already present in vmstat, but somehow I miss the ease of Server 2008, where you can find a nice resource monitor while even the Task Manager itself can provide a quick peek at the system condition (not even talking about Server 2008's monitoring graph tools). Any suggestion, or am I just being lame because vmstat really is the grail?

Edit: Well, thanks for the feedback, everyone. I should add that I'm not really talking about constant monitoring (where Nagios is a very good proposition), but about an occasional walk to a server - not necessarily mine...
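A minimal one-screen checklist sketch for exactly this situation - none of it replaces vmstat, it just bundles the usual first looks:

    uptime                       # load averages vs. number of CPUs
    free -m                      # memory use, minus buffers/cache
    df -h                        # any filesystem near full?
    vmstat 1 5                   # run queue, swapping, I/O wait
    iostat -x 1 3                # per-disk utilization (sysstat package)
    ps aux --sort=-%cpu | head   # current top CPU consumers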

amazon ec2 - Setting Up ELB with SSL - What is Backend Authentication?

I started setting up Amazon's Elastic Load Balancing service for my server pool and I need to set up HTTPS/SSL. I have all my SSL certificates set up, but then I come to the step for backend authentication and I'm unsure what certificate is required for "Backend Authentication". Is it my site's private key, its public key, or do I need to generate a new key on the server? Thank you for the assistance.
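Backend authentication refers to the public certificate that the backend instances present on their HTTPS port: ELB verifies it against the certificate you paste in, so it is neither your site's private key nor a new ELB-side key. A hedged sketch for creating a self-signed backend certificate (filenames and the CN are placeholders):

    # Generate a key + self-signed certificate on the backend instance;
    # the private key never leaves the server
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout backend.key -out backend.crt -subj "/CN=backend.internal"
    # backend.crt (the public certificate) is what goes into the ELB
    # backend authentication configuration
    openssl x509 -in backend.crt -text -noout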

io - Acceptable I/O speeds for 6 x 250GB SSDs in RAID 10

I'm running CentOS 7 (XFS filesystem) on a Dell server with a PERC H700 RAID controller. Inside this server I have 6 x Samsung 850 Evo 250GB SSDs (yes, they are consumer drives; however, this is a home server). In any case, I performed a dd test and am getting speeds of around 550 MB/s, which would be the approximate write speed of a single SSD, yet these drives are in RAID 10... where one would expect more.

Output of a write test:

    [root@localhost]# sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 1.95942 s, 548 MB/s

Output of a read test:

    [root@localhost]# dd if=tempfile of=/dev/null bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 0.171463 s, 6.3 GB/s

Would anyone be able to shed some light on whether this is an acceptable write speed? I'm rather puzzled as to what to do here. Appreciate your help :)

Answer: I could close this as a...
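The 6.3 GB/s read is a strong hint that the 1 GB test file was served from the page cache, and a 1 GB write can land mostly in the controller's cache; re-testing with caches out of the way gives more honest numbers. A hedged sketch (the file size should be several times RAM):

    # Write test bypassing the page cache, forcing data to media
    dd if=/dev/zero of=tempfile bs=1M count=16384 oflag=direct conv=fsync
    # Drop the page cache so the read must come from the disks
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=tempfile of=/dev/null bs=1M iflag=direct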

IPv6 AAAA vs. CNAME for same domain name

I reach my home site through a DynDNS name, and also have tunneled IPv6 there. In the DNS zone, I have:

    myhomesite CNAME example.dyndns.org.

How do I simultaneously point "myhomesite" to an AAAA record? If I trivially make it

    myhomesite CNAME example.dyndns.org.
    myhomesite AAAA 2001:db8::1:2:3:4

the zone is invalid (CNAME and other data). Can you suggest a way of having the CNAME record and the AAAA record visible behind the same domain name? What I'm not looking for is an ".ipv6."-infixed record, which is already in place.

Answer: I quote from DNS for Rocket Scientists: CNAME RRs cannot have any other RRs with the same name, for example, a TXT - well, that was true until DNSSEC came along, and in this case RRSIG, NSEC and certain KEY RRs can now occupy the same name. Therefore what you want to do cannot be done using a CNAME. Use the CNAME to access your homesite via IPv4 and have a myhomesite-v6 AAAA record point to the IPv6 address. If...
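Given that restriction, the usual workaround is to drop the CNAME at that name and publish explicit A and AAAA records side by side; the trade-off is losing the automatic tracking of the DynDNS target's IPv4 address. A hedged zone sketch (the IPv4 address is a documentation placeholder):

    ; instead of: myhomesite CNAME example.dyndns.org.
    myhomesite   A     192.0.2.10         ; whatever example.dyndns.org currently resolves to
    myhomesite   AAAA  2001:db8::1:2:3:4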

cisco - ASA 5505: How do I access the DMZ web server from the inside using the public IP?

We are using a 5505 ASA Sec+ (8.2). There are three interfaces: inside (172.17.0.0/24), dmz (172.16.0.0/24) and outside (1.2.3.4 for the example). There are static NAT rules set up translating 1.2.3.4 to servers on the dmz (including 1.2.3.4:80 to 172.16.0.10:80). These work from the outside. How do I let users on the inside access the DMZ servers using the outside IP in the same way as users on the outside? Because we are using port address translation (several different servers depending on the port number), I want to avoid DNS doctoring. It does not matter whether we use NAT or not for direct inside-dmz traffic (most traffic will be through the public IP anyway). The current NAT configuration:

    ASA Version 8.2(5)
    same-security-traffic permit inter-interface
    same-security-traffic permit intra-interface
    global (outside) 1 interface
    global (dmz) 2 interface
    nat (inside) 1 172.17.0.0 255.255.255.0
    nat (dmz) 1 172.16.0.0 255.255.255.0
    static (dmz,outside) tcp interface www 172.16.0.10 www
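On 8.2 the usual answer is one more static, this time between dmz and inside, presenting the outside IP's port to inside hosts; a hedged one-line sketch to add to the config above (verify against your exact version before applying):

    ! Inside hosts connecting to 1.2.3.4:80 get redirected to the DMZ server
    static (dmz,inside) tcp 1.2.3.4 www 172.16.0.10 www netmask 255.255.255.255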

Sendmail - Multiple Domains, One Box - Blocking One Or Two Domains

I have a number of domains hosted at a web hosting service. They use sendmail to handle incoming email. I have six domains on this service (which we can call aaa.com, bbb.com and so on). Each email account has the same name and one email box. In other words, tango@aaa.com, tango@ccc.com, tango@fff.com and all the others go into one box, /var/spool/mail/tango, where my email program on my desktop picks it up. I have done very little work in sendmail. I haven't had to, and I've been warned it's a steep learning curve. But now I'm running into an issue. I was in a business situation where, for years, my email address was on the website for aaa.com. (We won't go into why this was necessary - it wasn't my preference and it's in the past.) Now I'm using tango@ddd.com instead of tango@aaa.com. I was getting about 1,000 or more pieces of spam a day, but SpamAssassin and my own email program caught about 75% of that. (Which still left stuff to delete.) Now, after...
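One common sendmail mechanism for this is the virtusertable, which can reject mail to a single address or a whole domain while the other domains keep delivering to the shared box. A hedged sketch, assuming the host is built with FEATURE(virtusertable):

    # /etc/mail/virtusertable
    tango@aaa.com    error:nouser No such user here

    # rebuild the database and restart sendmail
    makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable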

postfix - Authenticated outgoing email is marked as spam by PBL on mailserver

Users are sending email, authenticated, through the submission port on my mailserver (their domain's MX record points to the mailserver; postfix). What's been set up:

    A record
    MX record (pointing to the same mailserver for all domains)
    PTR record resolving to the mailserver name
    DKIM: pass
    SPF: pass
    DMARC: pass
    MailScanner with clamd and spamassassin
    SASL authentication (mail headers mention the user is authenticated)
    No open relay
    ...

I see that mails are authenticated in the headers. However, spamassassin marks them as spam (it mentions that the IP of the client is on the RBL). When I query Spamhaus I see the client IP (which is dynamic, due to a mobile ISP) is on the PBL, so basically the mail is marked as spam as a policy decision based on the client IP. Apart from that there's nothing wrong with those emails. The other ISPs don't have this problem and the emails are then delivered properly. Now on to my questions... :) Is mail sent through the submission port supposed to go t...
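Mail from authenticated submission clients should normally not be scored on the client's dynamic IP at all. Two hedged knobs that address this (the IP is a placeholder; merge with your existing config):

    # /etc/postfix/master.cf - the submission service trusts SASL auth,
    # so the port-25 RBL-style restrictions don't apply here
    submission inet n - n - - smtpd
      -o smtpd_sasl_auth_enable=yes
      -o smtpd_client_restrictions=permit_sasl_authenticated,reject

    # SpamAssassin local.cf - list your own server as a trusted relay so the
    # client's dynamic IP is treated as a submission hop, not a spam source
    trusted_networks 203.0.113.25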

How to upgrade to OpenSSL 1.0.2 on Ubuntu 14.04 LTS

I need to upgrade OpenSSL to 1.0.2 to get a certain feature. This worked following this tutorial: http://www.miguelvallejo.com/updating-to-openssl-1-0-2g-on-ubuntu-server-12-04-14-04-lts-to-stop-cve-2016-0800-drown-attack/ However, HAProxy, for example, is still built with the old OpenSSL version and thus does not support the SSL feature I need. How do I upgrade without compiling? I tried apt-get update and upgrade, and also dist-upgrade. None of that brought me to version 1.0.2.
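Compiling a newer OpenSSL does not change what dynamically linked packages use; they stay on the distro's libssl until rebuilt against the new version. A hedged way to confirm what HAProxy actually links to:

    haproxy -vv | grep -i openssl      # the version HAProxy was built with
    openssl version                    # the CLI binary, which can differ
    ldd $(which haproxy) | grep ssl    # the libssl actually loaded at runtime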

Server configuration for small company

We're a small development company and we're currently looking for a server for our internal needs. The basic idea I came up with is that this machine will itself run nothing but VMware or Virtual Server (we'll be using Windows) and host virtual machines in various capacities. We need to run: a Domain Controller; Microsoft SQL Server (for testing purposes, nothing really heavy); a web server (again, for testing purposes); TeamCity, JIRA, Confluence; whatever else. I'm absolutely unaware of current hardware trends (but I do know how to assemble a PC, I just can't really tell the difference between all these Conroes, Meroms, Wolfdales, etc.), so I'm asking for a configuration for a machine capable of performing the said tasks. My personal wishes include lots of RAM, a multicore CPU (this is common nowadays, isn't it?) and probably RAID. Other than that I'm open to any ideas.

bash - Can't run AWS CLI from CRON (credentials)

Trying to run a simple AWS CLI backup script. It loops through lines in an include file, backs those paths up to S3, and dumps output to a log file. When I run this command directly, it runs without any error. When I run it through cron I get an "Unable to locate credentials" error in my output log. The shell script:

    AWS_CONFIG_FILE="~/.aws/config"
    while read p; do
      /usr/local/bin/aws s3 cp $p s3://PATH/TO/BUCKET --recursive >> /PATH/TO/LOG 2>&1
    done

I only added the AWS_CONFIG_FILE line after I started seeing the error, thinking this might fix it (even though I'm pretty sure that's where AWS looks by default). The shell script is running as root. I can see the AWS config file at the specified location. And it all looks good to me (like I said, it runs fine outside of cron).

Answer: If it works when you run it directly but not from cron, there is probably something different in the environment. You can save your environment interacti...
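Two things bite here: cron's near-empty environment, and the fact that "~" inside double quotes is never expanded, so AWS_CONFIG_FILE points at a literal ~/.aws/config. A hedged rewrite of the script with explicit paths (placeholders kept from the original):

    #!/bin/sh
    # cron provides almost no environment; set it explicitly, with absolute paths
    export HOME=/root
    export AWS_CONFIG_FILE=/root/.aws/config    # no "~" - it would not expand
    while read p; do
      /usr/local/bin/aws s3 cp "$p" s3://PATH/TO/BUCKET --recursive >> /PATH/TO/LOG 2>&1
    done < /PATH/TO/INCLUDE_FILE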

nameserver - CNAME domain to another domain, but keep different SPF records for the two?

SCENARIO: mydomain.com is the main website; we send/receive mail using address@mydomain.com. mydomain.com's DNS has an SPF record "v=spf1 a mx ~all". mydomain.net is just an alias for mydomain.com, and we do NOT send mail using address@mydomain.net. Therefore mydomain.net's DNS has an SPF record "v=spf1 -all" to tell everyone it does not send mail. Since mydomain.net is an alias for mydomain.com, I wanted to use CNAMEs in DNS, thus:

    mydomain.net -> CNAME -> mydomain.com
    www.mydomain.net -> CNAME -> mydomain.com

But by doing this I noticed that when testing SPF for mydomain.net with a DNS tool, the SPF returned is the one on mydomain.com, "v=spf1 a mx ~all", and NOT the "v=spf1 -all" I would expect. Is there a way to use a different SPF for the two domains while still using a CNAME?

Answer: A CNAME means that the hostname is exactly the same as the target hostname with respect to all record types. If this...
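The practical consequence: a name that needs its own SPF TXT record cannot be a CNAME, so the .net name gets real address records plus its own TXT. A hedged zone sketch for mydomain.net (the IPv4 address is a placeholder for whatever mydomain.com resolves to):

    @     A      192.0.2.10          ; the apex can't be a CNAME if it carries a TXT
    @     TXT    "v=spf1 -all"
    www   CNAME  mydomain.com.      ; www needs no SPF, so it may stay a CNAME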

ubuntu - Unknown modprobe causing high load average

For about the last 6 months, and for about a year before that (with a 6-month hiatus), one of my servers has had a consistently high load average:

    13:37:34 up 192 days, 5:44, 2 users, load average: 2.00, 2.01, 2.00

Per another answer, I checked the output of ps:

    $ ps -eo stat,pid,user,command | egrep "^STAT|^D|^R"
    STAT  PID USER COMMAND
    D<   3043 root /sbin/modprobe -Q pci:v00008086d0000293Esv000015D9sd0000D780bc04sc03i00
    D<   3150 root /sbin/modprobe -Qba pnp:dPNP0401

Checking the config & loaded modules:

    $ modprobe -c | grep "pnp:dPNP0401"
    alias pnp:dPNP0401* parport_pc
    $ sudo modprobe -l | grep parport_pc
    /lib/modules/2.6.24-29-server/kernel/drivers/parport/parport_pc.ko

So it appears to be a parallel port rule, but I can't think of what might be connected, or why. Physical access to the server is about a 2-hour drive away. The operating system is Ubuntu 8.04.4. I can't see anything obvious anywhere in /etc/, but I may not know wh...
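Those two modprobe processes are stuck in uninterruptible sleep (the D state), and each one adds 1.0 to the load average even though no CPU is consumed. A hedged way to keep the parallel-port driver from being auto-loaded on the next boot:

    # prevent udev from loading parport_pc at boot (effective after reboot)
    echo "blacklist parport_pc" | sudo tee /etc/modprobe.d/blacklist-parport.conf
    # see whether anything currently depends on it
    lsmod | grep parport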

load balancing - How to balance the root domain using NS records?

I have two load balancers that balance incoming traffic across multiple data centers. These work fine. I can test them out by doing an nslookup example.com xIP. I have now taken out DNS services with DYN.com to allow me to manage the DNS zone file, so that typing example.com will ask my load balancers what IP address to resolve to.

Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:

    ns1.example.com A [ip address of load balancer 1]
    ns2.example.com A [ip address of load balancer 2]
    www.example.com NS ns1.example.com
    www.example.com NS ns2.example.com

All is well - when I type www.example.com, the requests get delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.

Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'example.com'...
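One caveat worth checking before going further: NS records at the apex of example.com are the zone's own authority records, not a delegation, so the apex cannot be handed off the way www can. A hedged pair of checks to see what resolvers actually receive:

    dig +trace example.com A                       # follow the delegation chain from the root
    dig @ns1.example.com example.com A +norecurse  # ask a balancer directly, as a resolver would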

apache 2.2 - SSL on multiple subdomains with the same IP

I installed a wildcard SSL certificate on our server, on which we will run multiple subdomains. So I created sub1.domain.com, sub2.domain.com, etc. I created several vhost files and set NameVirtualHost *:443. So far all subdomains are running over an SSL connection. No problems arose. Every subdomain sees its own content and all browsers work perfectly. Is this a correct way to set up SSL on multiple subdomains? I know you should set an IP per domain with SSL, but what about subdomains? This seems to run without trouble. That was my first question. My second question concerns an error I received after libssl was updated on my Ubuntu server. Apache didn't start anymore and gave me the error:

    [error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile] ((null):0)

It sounds like something is wrong, but everything worked perfectly last month. As a fix I found that you can add "http" after the Listen in ports.conf, like this: Liste...
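For reference, a hedged sketch of the usual Apache 2.2 shape for this setup (paths and names are placeholders; name-based SSL vhosts on one IP rely on SNI, available in 2.2.12+ with a suitable OpenSSL):

    # ports.conf
    NameVirtualHost *:443
    Listen 443 https

    # one vhost per subdomain, each presenting the same wildcard certificate
    <VirtualHost *:443>
        ServerName sub1.domain.com
        DocumentRoot /var/www/sub1
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
        SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
    </VirtualHost>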

linux - Write permission to a specific file without changing ownership

As I understand it, to give write permission to a user I can either change the owner of the file to that user and give it "user write permission" (which I don't want to do), or keep the same owner but add this user to the file's group and give the group write permission. But the latter will give this user permissions on all other files associated with this group (whatever those permissions may be). So say the file is owned by user1 and group user1; most user1 files also have the user1 group. If I add user2 to group user1, user2 will have gained extra permissions. The only way I can think of is to create a group for this specific file, change the group with chown, and then add user2 to this group. Is this correct? It seems to me that this creates a lot of complexity if I have to do it for every file. I come from a Windows background, and over there you simply right-click the file and add the user to the file's permissions. So no need to create 20 groups for 20 differ...
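On Linux, the per-file grant described above is what POSIX ACLs provide, with no extra groups at all (the filesystem must be mounted with ACL support). A hedged sketch:

    # grant user2 read/write on this one file; owner and group stay unchanged
    setfacl -m u:user2:rw /path/to/file
    # inspect the resulting ACL
    getfacl /path/to/file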

linux - Cron job ignores part of trigger url

I created a cron job to trigger a URL at set times, which in turn starts a product import script. But for some reason, part of the trigger URL with parameters is stripped away. I set the cron job like this:

    /usr/bin/wget -O /dev/null http://domain.nl/wp-cron.php?import_key=XXXXXXXXXX&import_id=3&action=processing

But it only runs http://domain.nl/wp-cron.php?import_key=XXXXXXXXXX. Where is the last part that actually tells the script what to do? Who knows why it behaves like this and how to get it to work?

Answer: The ampersand character (&) actually means something in Linux (well, in a Bourne-compatible shell). It means: run the command as a background task. Because of that, you are actually telling cron to run /usr/bin/wget -O /dev/null http://domain.nl/wp-cron.php?import_key=XXXXXXXXXX in the background, and then to do action=processing. And that's what cron is doing for you - what you told it to do. To get around this, you need to escape the &...
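The fix is to keep the shell from interpreting the ampersands, most simply by single-quoting the whole URL. A hedged crontab sketch (the schedule is a placeholder):

    # quoted, the & characters reach wget instead of backgrounding the command
    */15 * * * * /usr/bin/wget -O /dev/null 'http://domain.nl/wp-cron.php?import_key=XXXXXXXXXX&import_id=3&action=processing'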

partition - Native ZFS Configuration on Ubuntu

I'm experimenting with native ZFS on Ubuntu right now. Here are the drives installed on the system: 2 x 2TB, 3 x 1TB, and a 200GB operating system disk. I've got the OS installed and the stable ZFS RC for 12.04 installed via the PPA. In terms of ZFS configuration, I'd like to get the maximum theoretical capacity with one-drive failure protection (so 5TB). I was planning on this configuration:

    1 zpool:
      1 x 4TB RAIDZ vdev:
        3 x 1TB drives
        2 x 1TB partitions, one from each of the 2TB drives
      1 x 1TB mirrored vdev:
        2 x 1TB partitions, one from each of the 2TB drives

First off, does this configuration make sense? Is there a better way to achieve 5TB (such as a 7 x 1TB RAIDZ2)? I'm not terribly concerned with performance (although I am somewhat concerned with upgradeability). Secondly, can anybody point me to a guide (or show me) the ZFS incantations to create such a (mildly complicated) pool? All of the guides I've found create a 1-1 zpool-vdev and use the entire raw disk, not partitions...
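A hedged sketch of the incantation for the layout above, assuming the 2TB disks have each been split into two 1TB partitions first (device names are placeholders; stable /dev/disk/by-id names are generally preferable, and -f is needed because the pool mixes raidz and mirror vdevs):

    # 4TB usable: raidz over three whole 1TB disks + one partition from each 2TB disk
    # 1TB usable: mirror over the remaining partition of each 2TB disk
    zpool create -f tank \
        raidz sdb sdc sdd sde1 sdf1 \
        mirror sde2 sdf2
    zpool status tank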

ssl - Apache Key: Which is it using?

I'm running an Apache server on Ubuntu. When I restart it, it asks me for a pass phrase; here's what the dialog looks like:

    Apache/2.2.16 mod_ssl/2.2.16 (Pass Phrase Dialog)
    Some of your private key files are encrypted for security reasons.
    In order to read them you have to provide the pass phrases.
    Server 127.0.0.1:443 (RSA)
    Enter pass phrase:

I've already worked out how to remove the pass phrase from the key file in question, but I can't find any information anywhere on how to determine which key file Apache is complaining about in the above dialog. I have dozens of key files on the server in question, although I don't know which ones are in active use (all I did was 'locate .pem' and ignore the false positives). Does anyone know how to track down which pem file I need to remove the passphrase from?

Answer: If you set up the server, you should know what keys are being used. Anyway, look out for SSLCertificateKeyFile directives. If not conta...
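A hedged way to track it down from the config rather than from the filesystem:

    # every key file the Apache config actually references
    grep -r SSLCertificateKeyFile /etc/apache2/
    # an encrypted key prompts for a passphrase here; an unencrypted one does not
    openssl rsa -in /path/to/key.pem -noout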

ubuntu - What is using up my RAM in VPS?

I'm running top and I see that out of 502968 KB, 48064 KB is used, leaving 16884 KB free. But then when I look at the individual processes, I see that mysql is consuming 9.4% of my RAM on occasion - but nothing else seems to be consuming anything. What is consuming all my RAM? Here is a screen capture from top:

    top - 20:46:07 up 1 min, 1 user, load average: 0.18, 0.05, 0.02
    Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 0.0 us, 0.4 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
    KiB Mem:  502968 total, 241236 used, 261732 free, 10488 buffers
    KiB Swap: 524284 total,      0 used, 524284 free, 106756 cached

      PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
     1584 dmackey  20  0 20508 1372 1000 R  0.4  0.3 0:00.01 top
        1 root     20  0 26664 2456 1340 S  0.0  0.5 0:00.69 init
        2 root     20  0     0    0    0 S  0.0  0.0 0:00.00 kthreadd
        3 root     20  0     0    0    0 S  0.0  0.0 0:00.01 ksof...
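Most of that "used" figure is buffers and page cache, which the kernel hands back on demand; subtracting them from the capture above (241236 - 10488 - 106756 ≈ 124000 KB) leaves roughly 121 MB genuinely held by processes. A hedged check:

    # the "-/+ buffers/cache" line shows memory actually unavailable to programs
    free -m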

security - Group Policy: Administrator Rights for Specific Users on Specific Computers

I'm a programmer stuck trying to administer an Active Directory setup for a small company. The Domain Controller is running Windows Small Business Server 2008. We have a staff of field workers using tablet PCs; configuration problems with the tablets' ThinkVantage bloatware require these users to have Administrator rights when using the tablets. That's alright – it's useful for them to have broad privileges when I'm walking them through a fix over the phone, so I'm not looking for a workaround there. I would like to use Group Policy to set up the following scenario: the users in a particular security group (or organizational unit) should be in the BUILTIN\Administrators group when logged in to computers in a certain security group (or organizational unit). It's okay if the computers have to be in an OU, but I'd prefer to assign users by group. Of course, the field workers shouldn't be Administrators on other workstations, and vanilla office sta...

Route 53 email spam

For the last two months, our email address has been abused by a spam bot. It turns out that emails are sent via our info@ address to thousands of recipients, many of which bounce back and land in our inbox. Our domain is hosted at the German provider 1und1. I use Route 53 nameservers in order to resolve the domain on Heroku. Here is my mail-relevant setup at Route 53:

    MX servers configured:
      10 mx01.kundenserver.de
      20 mx00.kundenserver.de
    SPF record: "v=spf1 a mx ~all"
    TXT record: "QH+******************"

After the first spam attempt I set up the SPF record, hoping this would solve the problem, but it did not. Also, according to SES statistics, it looks like the emails are not sent via SES, so I can also guarantee that our website is working correctly and mails are not sent via
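If the bounces are backscatter from forged From addresses, the published ~all softfail invites receivers to accept the forgeries anyway; tightening the policy can help once every legitimate sending host is covered. A hedged Route 53 sketch (the domain is a placeholder):

    ; hard-fail mail from hosts not listed in the record
    example.com.         TXT  "v=spf1 a mx -all"
    ; ask receivers to quarantine failures and send aggregate reports
    _dmarc.example.com.  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"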