

Showing posts from October, 2015

security - DDoS attack simulation using MININET

I'm developing some dynamic security policies using SDN; one of them defines how the network should respond when a DDoS attack is detected. I would like to test my policies, but I'm having trouble recreating an attack in a Mininet topology. Is there any documentation on how to simulate a DDoS attack with Mininet?
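One common way to generate attack traffic inside Mininet (not from the original post — the use of hping3 and the 10.0.0.x host addresses are assumptions based on Mininet's default topology) is to run a flooding tool from one or more Mininet hosts against a victim host:

```shell
# Inside the Mininet CLI, with a default topology where h1..h3 get 10.0.0.1..3.
# hping3 must be installed on the system running Mininet.
mininet> h1 hping3 --flood --syn -p 80 10.0.0.3          # TCP SYN flood toward h3
mininet> h2 hping3 --flood --rand-source --udp 10.0.0.3  # UDP flood with spoofed sources
```

Running the flood from several hosts at once approximates the "distributed" part, and the controller can then observe the traffic spike and exercise the mitigation policy.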

linux - Prevent outgoing spam

What are some ways to prevent spam from leaving your servers should a hosting account get compromised? I have a bunch of clients on a server with cPanel, and I'm wondering whether there is a way to limit the damage if an account is compromised. By compromised I mean: a client signs up, or an existing client account gets hacked, and the account is used to send spam. Couldn't you set up some kind of filter or blacklist of terms in Exim or SpamAssassin that would block outgoing mail if it matched?
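On an Exim/cPanel box, one commonly suggested mitigation (a sketch, not from the post — the exact limits and variable choices are assumptions) is to rate-limit authenticated senders in the RCPT ACL, so a compromised account can only trickle mail out instead of flooding:

```
# Hypothetical fragment for the RCPT ACL in the Exim configuration
# (on cPanel, local ACL changes belong in its custom ACL include files):
  deny    authenticated = *
          ratelimit     = 100 / 1h / per_rcpt / $authenticated_id
          message       = Hourly sending limit exceeded for $authenticated_id
          log_message   = outbound ratelimit hit by $authenticated_id
```

Exim's `ratelimit` ACL condition keyed on `$authenticated_id` counts recipients per authenticated account, so legitimate users are unaffected while a hijacked account hits the ceiling quickly and shows up in the logs.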

site to site openvpn with Merlin and DD-WRT

I am trying to set up an OpenVPN site-to-site link between site A (server, Merlin) and site B (client, DD-WRT). The tunnel comes up and both peers are able to ping each other, but when anyone on the client subnet (10.1.30.0/24) tries to ping a host on the server side (10.1.10.0/24), packets are dropped by the server: it doesn't know how to reach the client's subnet, even after adding the route. Here are the configs:

Server

    daemon
    server 172.16.254.0 255.255.255.248
    proto udp
    port 1198
    dev tun21
    cipher AES-256-CBC
    comp-lzo adaptive
    keepalive 15 60
    verb 3
    push "route 10.1.10.0 255.255.255.0"
    client-config-dir ccd
    client-to-client
    duplicate-cn
    ca ca.crt
    dh dh.pem
    cert server.crt
    key server.key
    status-version 2
    status status
    ifconfig 172.16.254.1 255.255.255.248
    management 127.0.0.1 5001
    auth none

Firewall - Server

    iptables -I INPUT 2 -p udp --dport 1198 -j ACCEPT
    ipt
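For routed site-to-site OpenVPN, adding a kernel route alone is usually not enough: the server also needs an internal route (`iroute`) telling it which connected client owns the remote subnet. A sketch of the typical fix (the ccd file name `siteB` is an assumption — it must match the client certificate's common name):

```
# Server config: kernel route for the client-side LAN
route 10.1.30.0 255.255.255.0

# ccd/siteB (file named after the DD-WRT client's certificate CN):
iroute 10.1.30.0 255.255.255.0
```

Without the `iroute`, the kernel hands packets for 10.1.30.0/24 to the tun interface, but the OpenVPN daemon has no idea which client they belong to and drops them — which matches the symptom described above.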

apache 2.2 - Can't start apache2 server

I'm getting the following error message while starting the apache2 server:

    $ sudo /etc/init.d/apache2 start
     * Starting web server apache2
    (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
    no listening sockets available, shutting down
    Unable to open logs
    Action 'start' failed.
    The Apache error log may have more information.
    [fail]

Here is the output of sudo netstat -lntup:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
    tcp   0      0      0.0.0.0:28017      0.0.0.0:*          LISTEN   941/mongod
    tcp   0      0      127.0.1.1:53       0.0.0.0:*          LISTEN   1355/dnsmasq
    tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN   687/sshd
    tcp   0      0      127.0.0.1:631
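The error means something else is already bound to port 443 (the netstat output above is cut off before any 443 line appears). A quick way to identify the conflicting process — the commands are standard, but any PIDs they print are specific to the machine:

```shell
$ sudo netstat -lntp | grep ':443'      # or: sudo ss -lntp 'sport = :443'
$ sudo fuser -v 443/tcp                 # shows the PID/owner holding the port
# Then either stop that service, or change Apache's Listen/ports.conf so the
# two no longer collide. A stale apache2 process left over from a crashed
# restart is a frequent culprit.
```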

rhel6 - Why is RAM usage so high on an idle server?

I'm investigating a server used for scientific data analysis. It's running RHEL 6.4 and has almost 200 GB of RAM. It's been running very slowly for users via SSH, and after some poking around I quickly noticed that the RAM usage was sky-high. What's odd is that even in an idle state it's still using a ton of RAM. I also looked via htop and can't see any running process using more than 0.1% of the RAM. So I wonder what's going on. Right now the only user-initiated process running is an rsync between two NFS-mounted shares. I tried rebooting the server and it was much more responsive for a few minutes, but then memory usage shot up again. Is there any way I can pinpoint why memory usage is so high? Answer It's high because that saves effort. It takes effort to make memory free, and if you do that, it just takes effort to make it used again. So, to save effort, modern operating systems only make memory free if they have absolu
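The usual way to confirm this is to look at the buffers/cache line of `free`: memory held by the page cache is reclaimed on demand and is not really "used". A sketch (the numbers shown are illustrative, not from this server):

```shell
$ free -g
             total       used       free     shared    buffers     cached
Mem:           189        185          4          0          2        170
-/+ buffers/cache:         13        176
# The "-/+ buffers/cache" row is what applications actually consume; the rest
# is reclaimable page cache (here, plausibly from the rsync over NFS).

# To prove the point (not recommended as a routine fix), the cache can be
# dropped manually as root:
echo 3 > /proc/sys/vm/drop_caches
```

If the slowness persists while the application-level "used" figure stays low, the cause is likely elsewhere (e.g. NFS latency or swap activity) rather than memory exhaustion.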

iptables - connect from the ssh server to the remote computer using a local ip address

I have a remote Linux computer connecting to a local ssh server, creating a reverse ssh tunnel on port 5051. On the ssh server itself I run the following two commands, in order to give the remote computer a local IP address: ip addr add 192.168.1.51/24 dev eth0 and iptables -t nat -A PREROUTING -d 192.168.1.51 -p tcp --dport 22 -j REDIRECT --to-port 5051. On the ssh server I have also configured GatewayPorts yes in sshd_config. From a third computer on my network, if I ssh to 192.168.1.51 I connect directly to the remote Linux computer. But from the ssh server itself, if I ssh to 192.168.1.51 I connect to the ssh server again — I don't reach the remote computer. The only way to connect to the remote computer from the ssh server is to use ssh root@localhost -p 5051, but I don't want to do that. I want to be able to ssh to 192.168.1.51 from the ssh server and connect to the remote computer. Answer IPTables NAT table's PREROUTING chain rules are only applied to IP packets arrivi
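Following on from the explanation above: packets generated on the ssh server itself never traverse PREROUTING, they go through the nat table's OUTPUT chain instead. So a sketch of the missing rule would be:

```shell
# On the ssh server itself: redirect locally originated traffic for
# 192.168.1.51:22 into the reverse tunnel's listening port
iptables -t nat -A OUTPUT -d 192.168.1.51 -p tcp --dport 22 -j REDIRECT --to-port 5051
```

With both rules in place, traffic from other LAN hosts is caught by PREROUTING and traffic from the server itself by OUTPUT, so `ssh 192.168.1.51` behaves the same from either location.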

hard drive - Reliability of S.M.A.R.T. Selftest

I let smartd check my hard drives regularly. Recently I was notified about a failed short selftest on one of my hard drives.

    smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-0.bpo.4-amd64] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family:     Western Digital RE4
    Device Model:     WDC WD1003FBYX-01Y7B0
    Serial Number:    WD-WCAW31677053
    LU WWN Device Id: 5 0014ee 25b0c013e
    Firmware Version: 01.01V01
    User Capacity:    1.000.204.886.016 bytes [1,00 TB]
    Sector Size:      512 bytes logical/physical
    Rotation Rate:    7200 rpm
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Mon Dec 7 13:13:22 2015 CET
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECT
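To re-run and inspect the self-test that smartd reported, the standard smartctl invocations are (/dev/sda is a placeholder for the affected drive):

```shell
$ sudo smartctl -t short /dev/sda     # queue a new short self-test (takes ~2 minutes)
$ sudo smartctl -l selftest /dev/sda  # self-test log, including the LBA of the first error
$ sudo smartctl -A /dev/sda           # attributes such as Reallocated_Sector_Ct and
                                      # Current_Pending_Sector
```

A repeatable failure at the same LBA, or a non-zero pending/reallocated sector count, is generally a stronger signal than a single failed self-test in isolation.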

remote desktop - Can internet connection performance on the client side of an RDP connection affect server application performance?

Our company develops a Winforms based desktop application that we host on our servers. Our customers use our software via RDP. A question was recently asked if the client's internet connection could be causing a performance issue at the server. To clarify, the claim is that an operation that happens in our application that only involves our on premise SQL server takes longer when the RDP client's connection is slower. Not perceived slower, but actually slower. My initial response is absolutely not but two senior developers have indicated they've seen several situations where they couldn't rule out that something to this effect was happening. None of us think it makes any sense but I'm asking this question to rule the possibility out. The only scenario I can think of where it seems this could make any sense is if the server is waiting on some input from the client side and the internet connection is delaying that. So just to restate: Is it possible for the internet

windows - In a multi-domain forest, what EXACTLY happens when some, but not all, of the Infrastructure Masters are on Global Catalogs?

There are plenty of TechNet articles, like this one, that say phantom objects don't get updated if an Infrastructure Master is also a Global Catalog, but other than that there isn't a lot of in-depth information on what actually happens in this configuration. Imagine a configuration like this:

    |--------------|
    | example.com  |
    |              |
    | dedicated IM |
    |--------------|
            |
            |
    |-------------------|
    | child.example.com |
    |                   |
    | IM on a GC        |
    |-------------------|

Where child has two DCs that are both global catalogs, meaning that the Infrastructure Master role is on a GC, and example has three DCs with the Infrastructure Master role on a DC that is not a GC. I understand that it's usually best to just make everything a GC and not have to worry about this sort of thing, but assuming that's not the case - what is the exact error behavior that can be expected from a setup like this, and which domain(s) would this behavior m

virtual machines - vSphere: performance impact when a large VM has more vCPU than cores on a single physical CPU?

I'm running a vSphere private cloud configuration at a managed hosting vendor. The physical hosts have dual 14-core CPUs and 128 GB of RAM each. An application that we run can multi-thread expensive computational tasks, and I have requested the vendor to create three VMs with 20 vCPU each, and 32 GB of RAM. Note that the vCPU to physical core ratio will remain extremely low, not much bigger than 1, and total RAM will be undersubscribed by a healthy amount. Engineers at the vendor say that a 20-vCPU VM will negatively impact performance because it spans more than one physical (14-core) CPU socket, even though there is a total of 28 physical cores available on each host. This makes no sense to me, but I don't know enough about this and generally rely on the vendor recommendations. Are they correct about this warning?

virtualization - How to monitor guest virtual system from KVM host (CPU, MEM, HDD, NET, ...)

How can I monitor statistics like CPU, memory, disk, or network activity for a guest system from the KVM host? It needs to be done from the command line of the host system. Is that somehow possible? Answer You can always use a virtualisation-agnostic method like munin or nagios: install an agent on the guest and poll it from - for instance - the host.
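If agentless, host-side numbers are enough, libvirt's own tooling can report most of this directly from the host command line (the domain name `guest1` and device names `vda`/`vnet0` are placeholders):

```shell
$ virsh list --all               # enumerate defined guests
$ virsh domstats guest1          # CPU time, memory balloon, block and net counters
$ virsh domblkstat guest1 vda    # per-disk read/write requests and bytes
$ virsh domifstat guest1 vnet0   # per-interface rx/tx packets and bytes
$ virt-top                       # top-like live view, if virt-top is installed
```

The trade-off versus an in-guest agent: the host sees resource consumption as the hypervisor accounts it, but not guest-internal detail such as per-process usage.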

HP SSD on ProLiant DL360p Gen8 p420i controller - no TRIM?

We have an HP ProLiant DL360p Gen8 server with the P420i 2GB disk controller (646905-421), and a couple of HP Gen8 200GB 6G SAS SLC SSDs (653078-B21) running in RAID 1. We run Debian 6 on this server, and HP says that the TRIM command is not supported on this controller. How will this affect the speed and lifetime of the SSDs? Does anyone know of a controller that might do a better job? Has anyone run a similar configuration - can you say anything about the lifetime of the disks? Answer You're using (expensive) enterprise SAS SSDs. This drive is OEMed from SanDisk, an LB206S, whose specifications show that it's a write-optimized drive. There is no need for TRIM. TRIM is for cheap consumer SATA disks. In addition, your drives are heavily overprovisioned and have their own wearout indicators available. This is visible from the controller using the Array Configuration Utility, the HP Smart Storage Administrator or the hpacucli and hpssacli command-line util

Globally-distributed load testing service providers

We are looking for a third-party service provider to load-test our website infrastructure, preferably across multiple geos. Any recommendations? We have done internal load testing with LoadRunner and JMeter, but we want to see the impact of global latency and higher loads than we can generate in-house. Answer We have a load testing tool built on Amazon EC2 that can generate up to 50,000 concurrent vusers from Singapore, California, Virginia, and Ireland. Would that be enough geo diversity? We are the lowest-cost tool that is cloud-based. 5,000 concurrent vusers costs $199 for one test or $999 for a month of unlimited test runs. If you want someone to actually create and run the tests, we have partners to help. Contact me at http://loadstorm.com via the Contact Us form or give me a call at 970-389-1899 if you are interested. Good luck with your testing and finding a fit with a vendor. Thanks.

Common wisdom about Active Directory authentication for Linux Servers?

What is the common wisdom in 2014 about Active Directory authentication/integration for Linux servers and modern Windows Server operating systems (CentOS/RHEL-focused)? Over the years since my first attempts at integration in 2004, it seems like the best practices around this have shifted. I'm not quite sure which method currently has the most momentum. In the field, I've seen:

    Winbind/Samba
    Straight-up LDAP
    Sometimes LDAP + Kerberos
    Microsoft Windows Services for Unix (SFU)
    Microsoft Identity Management for Unix
    NSLCD
    SSSD
    FreeIPA
    Centrify
    PowerBroker (née Likewise)

Winbind always seemed terrible and unreliable. The commercial solutions like Centrify and Likewise always worked, but seemed unnecessary, since this capability is baked into the OS. The last few installations I've done had the Microsoft Identity Management for Unix role feature added to a Windows 2008 R2 server and NSLCD on the Linux side (for RHEL5). This worked until RHEL6, where the lack of maintenan
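For what it's worth, on RHEL 7-era systems the widely adopted pattern from the list above is SSSD, with realmd handling discovery and the domain join. A sketch (the domain name and admin account are placeholders):

```shell
$ sudo yum install sssd realmd adcli samba-common-tools oddjob-mkhomedir
$ sudo realm discover example.com         # verify the domain is reachable via DNS
$ sudo realm join --user=Administrator example.com
$ id 'EXAMPLE\someuser'                   # confirm AD lookups now resolve via SSSD
```

realmd writes the sssd.conf, krb5.conf, and PAM/NSS wiring itself, which removes most of the hand-configuration that made the older LDAP+Kerberos setups fragile.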

How do I redirect subdomains to the root domain in Nginx on CentOS?

I'm using CentOS with Nginx and Puma. I would like to redirect all subdomains to my main root domain, so I was following the instructions here -- https://stackoverflow.com/questions/26801479/nginx-redirect-all-subdomains-to-main-domain . However, I can't get it to work. Below is my configuration:

    upstream projecta {
        server unix:///home/rails/projecta_production/shared/sockets/puma.sock;
    }

    server {
        listen 80;
        server_name mydomein.com;
        return 301 http://mydomein.com$request_uri;

        root /home/rails/projecta_production/public; # I assume your app is located at this location

        location / {
            proxy_pass http://projecta; # match the name of upstream directive which is defined above
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~* ^/assets/ {
            # Per RFC2616 - 1 year maximum expiry
            expires 1y;
            add_header Cache-Control public;
            # Some browsers still send conditional-GET requests if there's a
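A likely cause of the problem is combining the redirect and the application in a single server block: the unconditional `return 301` fires for the main domain too, and nothing matches the subdomains. The pattern from the linked answer uses two blocks — a catch-all that redirects, and one that serves the real site. A sketch using the names from the post:

```
# Catch-all: any hostname not matched elsewhere redirects to the root domain
server {
    listen 80 default_server;
    server_name _;
    return 301 http://mydomein.com$request_uri;
}

# The real site, matched only by the bare domain
server {
    listen 80;
    server_name mydomein.com;
    root /home/rails/projecta_production/public;
    location / {
        proxy_pass http://projecta;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

For this to work for arbitrary subdomains, DNS also needs a wildcard record (`*.mydomein.com`) pointing at the server.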

Apache Content-Type encoding changing from UTF-8 to iso-8859-1 from directory to directory

I have a site running on Apache 2.2.8 (Plesk 9.5.4). This site shows a strange behavior: the root directory only has HTML, and it is served with the following header, which is great.

    http://globalmit.com/
    Response Headers
    Date              Wed, 04 May 2011 00:57:26 GMT
    Server            Apache
    Last-Modified     Mon, 04 Apr 2011 21:09:05 GMT
    Etag              "15013bf-5a7-4a01e2b6efe40"
    Accept-Ranges     bytes
    Cache-Control     max-age=300
    Expires           Wed, 04 May 2011 01:02:26 GMT
    Vary              Accept-Encoding
    Content-Encoding  gzip
    Content-Length    564
    Content-Type      text/html; charset=utf-8

Then I have osTicket installed in this directory, and I made the translation to Spanish; for it to work, the content-type encoding needs to be set to UTF-8, which it is, and it is working great.

    http://globalmit.com/tickets/
    Response Headers
    Date           Wed, 04 May 2011 01:04:37 GMT
    Server         Apache
    Expires        Thu, 19 Nov 1981 08:52:00 GMT
    Cache-Control  no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    Pragma         no-cache
    Var

Static IPv6 address in Windows unused for outgoing connections

I'm running a Windows server and trying to get it to use a static IPv6 address for outgoing connections to other IPv6 hosts (such as Gmail). I need this because Gmail requires a PTR record, and I can't set one for random addresses. The static address is configured on the host, but it also has a temporary privacy address as well as, it seems, a random address from the router. By default Windows uses the privacy address; it seems this is the expected behavior (and it makes perfect sense for people/users that did not set a static address, but I did!). I've tried disabling the privacy address with: netsh int ipv6 set privacy disabled This indeed gets rid of the privacy address, but I still have the random address that the router assigned. To disable this, it was said I needed to disable "router discovery" using this command: netsh interface ipv6 set interface 14 routerdiscovery=disabled Upon doing this, all IPv6 connectivity is lost. If I do this while pinging Gmail, it w

timezone - Where does cron check the time zone on Debian?

I have seen a number of topics that are related to my question, but none of them answers it. Where does cron look up the time zone?

    root@awesome:~# date
    Fri Feb 17 14:02:02 EET 2012
    root@awesome:~# hwclock -r
    Fri 17 Feb 2012 14:03:39 EET  -0.815689 seconds

But cron still works in the GMT zone (I have to shift every cron job by +2h to make it run on time). Is there a mistake in my time-zone configuration? Or are there several time-zone configurations on Debian Linux, and I am configuring the wrong one? (I have configured my time zone via tzselect.) Answer You have to restart the cron daemon for the change in timezone to take effect for cron. Ref: http://wiki.debian.org/TimeZoneChanges Direct quote from the above link: After the zoneinfo files are updated, you may need to restart daemons and other long-running programs to get them to use the new zone information. Examples of such programs include apache, bind, cron, fetchmail -d, inetd, mailman, sendmail, a
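On Debian the system zone lives in /etc/timezone and /etc/localtime; note that tzselect only *prints* a TZ setting for your shell, it does not change the system zone. A sketch of the full sequence:

```shell
$ sudo dpkg-reconfigure tzdata     # sets /etc/timezone and /etc/localtime system-wide
$ sudo service cron restart        # cron reads the zone at startup, so restart it
```

Long-running daemons keep the zone they saw at startup, which is exactly why the +2h offset persisted here even though `date` already showed EET.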

mod rewrite - Apache mod_rewrite weird behavior in Internet Explorer

I'm attempting to set up redirection for a couple of root domains. Firstly, here is the code in my httpd-vhosts.conf file:

    ServerAdmin ****@example.com
    ServerName example.com
    ServerAlias example2.com
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^192\.168\.0\.1$   # This is our WAN IP
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteCond %{HTTP_HOST} !^$
    RewriteRule ^/?(.*) http://www.%{HTTP_HOST}/$1 [L,R,NE]

What this does is redirect the root domain of example.com or example2.com, or any host other than www, to www.example(2).com. The part I'm having a problem with is the RewriteRule itself: the $1 is supposed to match the pattern of the RewriteRule and add it to the substitution. For example, "http://example.com/test.html" should rewrite to "http://www.example.com/test.html". It works in all modern browsers like it's supposed to, except for IE8 or IE9 (I didn't test other IE versions). In IE, this works: "http://example.com" to "htt

How is raid implemented at the *disk* level?

If disks have 512-byte physical sectors, and you have 10 disks using RAID 50 with a 1MB stripe size, how does that work at the disk level? Correct me if I'm wrong, but conceptually there would be 2 spans, each consisting of a RAID-5 array of 5 disks, one mirrored to the other. Therefore, a "stripe" would consist of 4x256KB chunks of data, plus a single 256KB of parity data per stripe? Or does a "stripe" include the parity? What if you consider a 12-disk RAID 10 array? There would be 6 mirrored pairs of disks, with striping over those mirrors. So, for a 1MB stripe size, the stripe would be divided by 6, for 174,762.666 bytes per disk, which works out to 341.333 physical sectors per stripe. Is that really 342 physical sectors per stripe? For those who wonder why I'm asking: I am attempting to determine the most efficient number of disks relative to the type of RAID, with the best stripe size. Also, I have seen https://en.wikipedia.org/wiki/Nested_RAID_
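The per-disk arithmetic in the 12-disk RAID 10 case can be checked directly; a 1 MB (here taken as 1 MiB) stripe split across 6 mirrored pairs does not land on a whole number of 512-byte sectors, which is why real controllers define a per-disk chunk size that is a power of two rather than dividing an arbitrary stripe width:

```shell
# Split a 1 MiB stripe across 6 mirrored pairs, expressed in 512-byte sectors
awk 'BEGIN {
    stripe   = 1024 * 1024        # 1 MiB stripe, in bytes
    per_disk = stripe / 6         # bytes landing on each mirrored pair
    sectors  = per_disk / 512     # 512-byte physical sectors per disk
    printf "%.3f bytes/disk, %.3f sectors/disk\n", per_disk, sectors
}'
# prints: 174762.667 bytes/disk, 341.333 sectors/disk
```

So the answer to "is that really 342 sectors?" is that no controller actually does this division: with a power-of-two chunk size (e.g. 256 KB per disk), every chunk is an exact multiple of the sector size and the question never arises.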

Does a domain's Glue Records get transferred when I transfer the domain to another registrar?

I have a domain, call it DOMAIN.NET , which is an Internet service provider. DOMAIN.NET has Glue Records that I put in via the existing registrar, which enable the client domains like FOO.COM , BAR.COM , BAZ.COM , etc. to use NS1.DOMAIN.NET and NS2.DOMAIN.NET as their DNS servers. For anyone who doesn't know, Glue Records are essential for the functionality of NS1, NS2, etc., not going to explain it here, but these explain it I want to transfer DOMAIN.NET to another registrar. But, do the Glue Records get transferred? My guess is no, because I would think registrars all send and manage Glue Records themselves, and send the Glue IPs directly to the Internet root servers for .Net, so how would the new registrar know about it during a domain transfer? I need to know before I transfer, because if the Glue records disappear, all my client domains that have NS1.DOMAIN.NET and NS2.DOMAIN.NET as their dns servers will likely start failing for a period of time until I get the Glue r

windows - What (if any) are the risks of renaming and domain-joining a machine at the same time?

Just a quick, basic question, on account of a difference of opinion I'm having. When joining a new [Windows] machine to an Active Directory domain, what risks are there if the rename and the domain join are done at the same time? (As opposed to renaming the machine, rebooting, joining to the domain and rebooting). On the off chance it matters, this is a 2003 Functional Level domain and forest, and concerns client machines that are primarily XP and servers that are primarily Server 2008 R2. Bonus points (bounty!) if anyone knows of a documented Microsoft Best Practice or recommendation for whether or not to do this in one reboot or two. Answer I don't find a specific Windows Vista or newer related article, but I think this would count as canonical documentation: How to change a computer name, join a domain, and add a computer description in Windows XP or in Windows Server 2003 . This documentation indicates that you can modify the computer name and domain at the

Which FQDN hostname to use for SSL certificate signing request- when using a CNAME record?

We have a subdomain ( https://portal.company.com ) that is the alias for a different hostname (defined in a CNAME record). This dynamic DNS hostname ( https://portal.dlinkddns.com ) resolves to the public (dynamic) IP address of our office. At the office, the router is configured to forward port 443 to a server running a (Spiceworks) web portal that the staff can access from home. Even if the office's public IP address changes, the subdomain will still direct staff to the web portal. Everything works great- apart from the (expected) SSL certificate error staff see when they first connect to the site. I've just purchased an SSL certificate, and am now in the process of completing a certificate signing request on the server. Which leads me to my question... When completing the certificate signing request, for " Common Name (e.g. server FQDN or YOUR name) ", what should I enter? Should I enter the canonical name ( https://portal.dlinkddns.com ) or the alias ( https://por
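Whichever name is chosen, the CSR's Common Name has to be the exact hostname users type into the browser, since that is what the browser compares against the certificate. A sketch of generating such a CSR with OpenSSL (file names, key size, and the country/organisation fields are arbitrary placeholders):

```shell
# Generate a 2048-bit key and a CSR whose CN is the public-facing name
openssl req -new -newkey rsa:2048 -nodes \
    -keyout portal.key -out portal.csr \
    -subj "/C=GB/O=Company/CN=portal.company.com"

# Verify what ended up in the request before submitting it to the CA
openssl req -in portal.csr -noout -subject
```

Note the CN is a bare hostname, not a URL — no `https://` prefix belongs in the certificate subject.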

firewall - Setup 1:1 NAT using pfSense

pfSense box: Public IPs 208.43.30.118-.117, Private IP: 192.168.1.1. I need to provide 1:1 NAT mapping to a VM in the private network, 192.168.1.5. I am unable to get 1:1 NAT working though it should be direct... The output of $ pfctl -s rules:

    scrub in on em0 all fragment reassemble
    scrub in on em1 all fragment reassemble
    anchor "relayd/*" all
    block drop in log all label "Default deny rule"
    block drop out log all label "Default deny rule"
    block drop in quick inet6 all
    block drop out quick inet6 all
    block drop quick proto tcp from any port = 0 to any
    block drop quick proto tcp from any to any port = 0
    block drop quick proto udp from any port = 0 to any
    block drop quick proto udp from any to any port = 0
    block drop quick from <snort2c> to any label "Block snort2c hosts"
    block drop quick from any to <snort2c> label "Block snort2c hosts"
    block drop in log quick proto carp from (self) to any
    pass quick proto carp all keep state
    pass quick proto pfsync all keep s

Using IIS URL Rewrite, how to rewrite foo.bar.com -> bar.com/myapp

Our web app lies at bar.com/myapp. We'll use the HTTP Host header to work out the username, so we need to transparently rewrite foo.bar.com to bar.com/myapp using the URL Rewrite module in IIS, but still need to be able to go to www.bar.com and see the company website, and webmail.bar.com, etc. Got it working - add a server-wide Inbound Rule:

    Match URL: (.*)  - check every URL
    Condition {HTTP_HOST}: ^([^.]+)\.bar\.com  - must be a subdomain of bar.com
    Condition {HTTP_HOST}: Doesn't Match Pattern webmail\.|mail\.  - don't run for webmail. or mail.
    Action Rewrite: myapp/{R:1}  - rewrite to /myapp/ keeping all querystring data

Answer REMOVED NON-RELEVANT INFO AND EDITED: if you are using IIS7, download the URL Rewrite Module. All the instructions and info you need are included at that link. Hope that helps.

Tools for load-testing HTTP servers?

I've had to load test HTTP servers/web applications a few times, and each time I've been underwhelmed by the quality of tools I've been able to find. So, when you're load testing a HTTP server, what tools do you use? And what are the things I'll most likely do wrong the next time I've got to do it? Answer JMeter is free. Mercury Interactive Load Runner is super nice and super expensive.

high availability - Combination of ZFS and Hardware to gain Raid 51

I'm looking into RAID solutions for a very large file server (70+ TB, serving ONLY NFS and CIFS). I know using ZFS raid on top of hardware RAID is generally contraindicated; however, I find myself in an unusual situation. My personal preference would be to set up large RAID-51 virtual disks, i.e. two mirrored RAID-5s, with each RAID-5 having 9 data disks + 1 hot spare (so we don't lose TOO much storage space). This eases my administrative paranoia by having the data mirrored on two different drive chassis, while allowing for 1 disk failure in each mirror set before a crisis hits. HOWEVER, this question stems from the fact that we have existing hardware RAID controllers (LSI MegaRAID integrated disk chassis + server), licensed ONLY for RAID 5 and 6. We also have an existing ZFS file system, which is intended (but not yet configured) to provide HA using RFS-1. From what I see in the documentation, ZFS does not provide a RAID-51 solution. The suggestion is to use the hardware raid to create two

rsa - Can apache use a key agent to store private keys for SSL?

For mod_ssl in Apache to work, you need your RSA private key on the server. If the key is passphrase-protected, you have to enter the passphrase whenever you restart Apache. There is SSLPassPhraseDialog, so you can store the key encrypted and have a program pass it the phrase, but that really isn't any more secure than keeping it unencrypted. I'm wondering if Apache supports, or can be made to support, using a key agent for operations needing the private key, much like how ssh-agent works for OpenSSH. That way I would only need to type the passphrase to the key whenever the server itself reboots (assuming the agent doesn't die somehow during normal operations). I realize that the key is stored in memory inside the agent, and obtaining it from memory is possible, but it's hard to do. Also, if the agent is actually forwarded over ssh from another host and the key is in memory over there, then obtaining the private key is impossible if just the webserver is compromised. If the an

linux - Setting umask for all users

I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say: "It's NOT a good idea to change this file unless you know what you are doing. It's much better to create a custom.sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates." So I went ahead and created the file /etc/profile.d/myapp.sh with the single line: umask 002. Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache mod_wsgi application, or files created with sudo, still default to 644 permissions...

    $ touch newfile (as root):  Result = 664 (Works)
    $ sudo touch newfile:       Result = 644 (Doesn't work)
    Files created by Apache mod_wsgi app: Result = 644 (Doesn't work
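The umask mechanics themselves are easy to verify in a shell; the catch in the post is that sudo and the Apache service do not start login shells, so they never source /etc/profile.d/ and keep their own default umask (022). A quick demonstration of what umask 002 does where it *is* in effect:

```shell
# Non-executable files start from mode 666; umask 002 masks off the
# "other" write bit, yielding 664.
(
  umask 002
  dir=$(mktemp -d)        # fresh directory so the file doesn't already exist
  touch "$dir/demo"
  stat -c '%a' "$dir/demo"   # prints 664
  rm -rf "$dir"
)
```

For the failing cases, the umask has to be set where those processes actually start — e.g. sudo can be given a umask via pam_umask or the sudoers `umask` defaults, and mod_wsgi daemon processes accept a `umask` option on the WSGIDaemonProcess directive (both are suggestions to verify against your versions, not taken from the post).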

ubuntu - Samba Domain Controller - is DNS required?

I am looking into setting up a Samba domain controller using Ubuntu Server, for some Windows XP/7 clients, and I have one important question: given the importance of DNS in a Microsoft Active Directory infrastructure, why do none of the setup guides mention configuring DNS to support a Samba domain? I have installed an LDAP/Samba server in a test network that I will be attempting to join using an XP client very soon, but I'm just confused as to how the client will actually "discover" the domain - as I know this is how AD domains work. I hope someone can shed some light on this! Answer Samba 3, the current version, does not use the Active Directory protocols. Instead it uses the older NT4 domain protocols. As Jasper mentioned, this uses NetBIOS for lookups, which relies on broadcasts. You might want to consider setting up Samba as a WINS server; if you have more than one subnet, it's a requirement. If you are using a Samba 4 alpha, which does use the AD protocols, then

windows - How to setup NS Record to External Name Server from Internal DNS

We have the following scenario: our company has an Internal DNS and an External DNS server. Both of them hold the same domain (example.com). Our Internal DNS is a Windows Server that cannot access the Internet, but has forwarders for "All other DNS" set up to the External DNS. We need to set up a sub-domain (vendor.example.com) delegated to an authoritative name server (ns1.vendor.com) from a vendor, and the vendor will provide the IP address for this sub-domain. We have set up the following in our External DNS for Internet users who need to resolve the name (vendor.example.com): vendor IN NS ns1.outsider.com So that when Internet users query the sub-domain, nslookup vendor.example.com returns the corresponding IP address defined in the vendor's name server (ns1.vendor.com). Now we have encountered the problem that if we apply the same setting in our Internal DNS server, we get "Server fails" when an internal staff member uses 'nslookup' to query "vendor.e

linux - slow software raid

I've got software RAID 1 for / and /home, and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec; reading from sda or sdb I get around 95-105 MB/sec. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18.

    hdparm -tT /dev/md0

    /dev/md0:
     Timing cached reads:   2078 MB in  2.00 seconds = 1039.72 MB/sec
     Timing buffered disk reads:  304 MB in  3.01 seconds = 100.96 MB/sec

    hdparm -tT /dev/sda

    /dev/sda:
     Timing cached reads:   2084 MB in  2.00 seconds = 1041.93 MB/sec
     Timing buffered disk reads:  316 MB in  3.02 seconds = 104.77 MB/sec

    hdparm -tT /dev/sdb

    /dev/sdb:
     Timing cached reads:   2150 MB in  2.00 seconds = 1075.94 MB/sec
     Timing buffered disk reads:  302 MB in  3.01 seconds = 100.47 MB/sec

Edit: RAID 1 Answer Take a look at the following article at nixCraft, HowTo: Speed Up Linux Software Raid Building And

zfs - zpool import: volume FAULTED with corrupted data, is it possible to save some data?

I was using FreeNAS 8.2 and decided to upgrade to 9.2. All seemed to go well; I upgraded ZFS to version 28 and rebooted. The next boot seemed to take forever, and at some point I decided to turn off the machine. I guess this is what caused the problem. When I try to import, I get the following:

    [root@freenas] ~# zpool import
       pool: vol4disks8tb
         id: 12210439070254239230
      state: FAULTED
     status: The pool was last accessed by another system.
     action: The pool cannot be imported due to damaged devices or data.
             The pool may be active on another system, but can be imported using the '-f' flag.
        see: http://illumos.org/msg/ZFS-8000-EY
     config:

            vol4disks8tb                                    FAULTED  corrupted data
              raidz2-0                                      ONLINE
                gptid/3d316d16-f53e-11e1-9da5-080027dfca8a  ONLINE
                gptid/3df02143-f53e-11e1-9da5-080027dfca8a  ONLINE
                gptid/3eb99e55-f53e-11e1-9da5-080027dfca8a  ONLINE
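Given the "last accessed by another system" status, the usual escalation path (a sketch, each step more invasive than the last; the pool name is taken from the output above) is:

```shell
# 1. Plain forced import: clears the "active on another system" guard
zpool import -f vol4disks8tb

# 2. If that fails, recovery mode: discard the last few transactions and
#    roll back to the most recent consistent state
zpool import -fF vol4disks8tb

# 3. Safer variant of (2): rewind but mount read-only, so nothing further is
#    written while data is copied off
zpool import -fF -o readonly=on vol4disks8tb
```

Since the vdevs all show ONLINE and only the pool metadata is flagged, the read-only rewind import is a reasonable first attempt at salvaging data before anything destructive is tried.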

php fpm - Ubuntu - Nginx + php5-fpm - suddenly times out every request even after restart

Caveat - newbie with nginx/php5-fpm/Ubuntu. I inherited a couple of Linux boxes and have had the same thing happen on both. I assume it's user error, but for the life of me I don't know what I did. Symptoms: the server was working OK, nginx to php5-fpm and back. I run a cookbook (made by others) that does some git gyrations, symlinks, etc., and composer updates. Generally things just work. The most recent time, I wasn't seeing my latest code being served up (or perhaps it was cached somewhere??), and I did a sudo service php5-fpm restart. After this last time, nothing goes through. nginx complains of timeouts on every single call:

    2016/02/09 16:06:26 [error] 24102#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: yyy.yyyy.com, request: "GET /v2/phpinfo.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "yyy.yyyy.com"

I've restarted php5-fpm and nginx. I've updated c

linux - Server performance affected by MySQL memory consumption and CPU usage

Every day during peak time my server gets slow or goes down. Our hosting provider insists that we upgrade the server, but I think there is a performance tuning issue. Adding the process information, server configuration and my.cnf parameters below.

Process Information

    PID   USER  PR  NI  VIRT   RES  SHR   S  %CPU    %MEM  TIME+     COMMAND
    60848 mysql 20  0   34.8g  23g  6416  S  2196.2  82.1  16027:29  mysqld

Dedicated Server Configuration

    Size: 'Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz', 2600MHz, 6 Core, Sockets: 2
    Image: CentOS 6 64-bit with cPanel, fully managed
    CPU: Intel Dual Xeon E5-2620 v3
    Speed: 2600MHz
    RAM: 32067MB
    CPUs: 2 Physical CPUs
    Cores: 12 Total Cores
    RAID: Level 10
    Disks: 4
    Size: 917GB
    Type: SSD

MySQL Configuration

    [mysqld]
    slow_query_log = 1
    #long_query_time = 2
    long_query_time = 2
    slow_query_log_file = /var/lib/mysql/vps-slow.log
    performance-schema=0
    max_connections = 250
    max_allowe
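With mysqld at ~2200% CPU and 23 GB resident on a 32 GB box, the first step is to see what it is executing, rather than resizing hardware. A sketch of the usual checks (pt-query-digest assumes Percona Toolkit is installed; the slow-log path is taken from the my.cnf above):

```shell
$ mysql -e "SHOW FULL PROCESSLIST\G"           # what is running right now?
$ mysqladmin -i 10 extended-status | grep -E "Threads_(running|connected)"
$ pt-query-digest /var/lib/mysql/vps-slow.log  # rank the worst queries from the slow log
$ mysql -e "SHOW ENGINE INNODB STATUS\G"       # lock waits, buffer pool activity
```

If the slow log shows a handful of unindexed queries dominating the peak, an index or query fix will typically buy far more than an upgrade; if the workload is genuinely that heavy, the same numbers will justify the provider's recommendation.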