Posts

Showing posts from May, 2018

domain name system - Google apps : Unable to receive emails after changing hosting from godaddy to phpfog

I have configured my email in Google Apps and it was working fine until I changed my hosting from GoDaddy to PHP Fog. I created a new WordPress site there and added an 'A' record for my root domain pointing to the IP addresses specified by PHP Fog. Now I am unable to receive emails in Google Apps mail from accounts at Yahoo, MSN, etc., although I do get emails from Gmail and Google Apps accounts. Could anyone please help me resolve this issue? My domain is shameerc.com
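
If the MX records for the domain were lost or replaced during the DNS change, mail from outside Google's network would be the first thing to break. A quick way to check, assuming dig is available:

    # Verify the MX records survived the hosting move:
    dig +short MX shameerc.com
    # For Google Apps these should point at Google's mail servers, e.g.:
    #   1 aspmx.l.google.com.
    #   5 alt1.aspmx.l.google.com.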

linux - Ping to IPv4 works but IPv6 does not

I have a hosted VPS running Arch Linux. I'm trying to make outgoing connections from this server, but all of them fail. After a little debugging, I figured out that the reason the connections fail is that my server cannot reach IPv6 addresses. Ping to IPv4 addresses works, but not to IPv6. Here is a sample:

    [root@li863-18 /]# nslookup google.com
    Server:    103.3.60.20
    Address:   103.3.60.20#53

    Non-authoritative answer:
    Name:    google.com
    Address: 74.125.68.100
    Name:    google.com
    Address: 74.125.68.102
    Name:    google.com
    Address: 74.125.68.113
    Name:    google.com
    Address: 74.125.68.139
    Name:    google.com
    Address: 74.125.68.138
    Name:    google.com
    Address: 74.125.68.101
    Name:    google.com
    Address: 2404:6800:4003:c02::8a

    [root@li863-18 /]# ping 74.125.68.100
    PING 74.125.68.100 (74.125.68.100) 56(84) bytes of data.
    64 bytes from 74.125.68.100: icmp_seq=1 ttl=50 time=1.20 ms
    64 bytes from 74.125.68.100: icmp_seq=2 ttl=50 time=1.32 ms
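
When IPv4 works but IPv6 does not, the usual suspects are a missing global IPv6 address or a missing default IPv6 route on the interface. A few checks worth running (standard iproute2 commands):

    ip -6 addr show                       # is there a global (non link-local) IPv6 address?
    ip -6 route show                      # is there a default route for IPv6?
    ping6 -c 3 2404:6800:4003:c02::8a     # try the IPv6 address from the lookup directly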

sata - Smart Array P410 RAID controller fails creating logical disk with 2x1TB WD drives

I'm trying to build a new box with:

    an Intel DQ670W motherboard (8GB RAM + i5/2500)
    an HP Smart Array P410/512 RAID controller (with battery)
    2x WD10EALX 1TB SATA drives

I'm falling at (almost) the first hurdle. It boots the BIOS, and the Smart Array detects and shows the drives when it starts and goes into ORCA (the ROM BIOS RAID management tool). The ORCA utility reports the two drives correctly and allows me to select a logical disk as a RAID 1+0 disk of 985GB. BUT - when I hit ENTER to create the logical disk, I immediately get:

    ==Configuration Error==
    A fatal error has occurred.
    Command: 51h
    SCSI Status: 0000h
    Command Status: 0004h

Pressing ESC just takes me back. I've tried:

    other known-good SATA disks (also detected properly by ORCA)
    using different cables
    using the other microSATA slot

All with the same result - and I'm stumped. The microSATA fan-out cables are pukka HP ones - not cheapy ones off eBay. I'm at a loss... can anyone shed any light on what's wrong?
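
If the box can boot any OS from another disk, HP's command-line tool gives a second way to attempt array creation and tends to report more detail than ORCA. A hypothetical invocation, assuming hpacucli is installed and the controller is in slot 0 (the drive IDs are examples):

    hpacucli ctrl all show config                                       # confirm controller and drives are seen
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1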

linux - how to disable SSH login with password for some users?

On Linux (Debian Squeeze) I would like to disable SSH password login for some users (a selected group, or all users except root), but I do not want to disable their certificate-based login. Edit: thanks a lot for the detailed answer! For some reason this does not work on my server:

    Match User !root
        PasswordAuthentication no

...but it can easily be replaced by:

    PasswordAuthentication no
    Match User root
        PasswordAuthentication yes

Answer

Try Match in sshd_config:

    Match User user1,user2,user3,user4
        PasswordAuthentication no

Or by group:

    Match Group users
        PasswordAuthentication no

Or, as mentioned in the comment, by negation:

    Match User !root
        PasswordAuthentication no

Note that Match is effective "until either another Match line or the end of the file." (The indentation isn't significant.)
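
After editing sshd_config it's worth validating the syntax before reloading, since a typo can lock you out. A minimal check (standard OpenSSH options; the init script name assumes Debian):

    /usr/sbin/sshd -t          # exits non-zero and prints the error if the config is invalid
    /etc/init.d/ssh reload     # reload without dropping existing sessions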

Proxy HTTPS requests to a HTTP backend with NGINX

I have nginx configured as my externally visible webserver, which talks to a backend over HTTP. The scenario I want to achieve is:

    1. Client makes an HTTP request to nginx, which is redirected to the same URL but over HTTPS.
    2. nginx proxies the request over HTTP to the backend.
    3. nginx receives the response from the backend over HTTP.
    4. nginx passes this back to the client over HTTPS.

My current config (where backend is configured correctly) is:

    server {
        listen 80;
        server_name localhost;
        location ~ .* {
            proxy_pass http://backend;
            proxy_redirect http://backend https://$host;
            proxy_set_header Host $host;
        }
    }

My problem is that the response to the client (step 4) is sent over HTTP, not HTTPS. Any ideas?

Answer

The type of proxy you are trying to set up is called a reverse proxy. A quick search for reverse proxy nginx got me this page: http://intranation.com/entries/2008/09/using-nginx-reverse-proxy/
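
Worth noting: the config above only listens on port 80, so nothing is ever served over TLS at all. A minimal sketch of the missing piece, assuming certificate files at hypothetical paths:

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;          # send plain HTTP to HTTPS
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/site.crt;   # hypothetical path
        ssl_certificate_key /etc/nginx/ssl/site.key;   # hypothetical path

        location / {
            proxy_pass http://backend;                 # backend still speaks plain HTTP
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }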

Apache Named VirtualHosts with wildcards

I want to map www.example.com to a specific virtual host and then I want all other subdomains of example.com to go to another virtual host. To do this I created these hosts:

    <VirtualHost *:80>
        ServerName www.example.com
    </VirtualHost>

    <VirtualHost *:80>
        ServerName example.com
        ServerAlias *.example.com
    </VirtualHost>

Now the selection of which host is served seems rather random. If I restart Apache, sometimes I will get one host and other times another. What am I doing wrong? Thanks!

Update: If I run apache2ctl -S on this configuration I get this output:

    VirtualHost configuration:
    wildcard NameVirtualHosts and _default_ servers:
    *:80    is a NameVirtualHost
            default server www.example.com (/etc/apache2/sites-enabled/dev:3)
            port 80 namevhost www.example.com (/etc/apache2/sites-enabled/dev:3)
            port 80 namevhost example.com (/etc/apache2/sites-enabled/dev:22)

After much digging around I decided to disable the mono applications that I had running and, lo and behold, it started serving files from the correct site.
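
Since the apache2ctl -S output confirms name-based matching is in effect, the layout the question is aiming for would look roughly like this (Apache 2.2-era syntax; the DocumentRoot paths are hypothetical):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName www.example.com
        DocumentRoot /var/www/www
    </VirtualHost>

    # Requests with any other Host header fall through to this block
    <VirtualHost *:80>
        ServerName example.com
        ServerAlias *.example.com
        DocumentRoot /var/www/catchall
    </VirtualHost>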

PCIe SSDs vs RAID?

I am new to the whole SSD thing and I don't understand how the following works. I need to be very specific for this question to make sense.

    1 x LSI MegaRAID SAS 9361-8i: $570
    8 x Ultrastar SSD800MH MLC 200GB: 8 x $1,450 = $11,600
    Total cost: $12,170

Expected performance in RAID 0:

    read:  140,000 x 8 = 1,120,000 IOPS
    write: 100,000 x 8 = 800,000 IOPS
    space: 200GB x 8 = 1.6TB

On the other hand we have:

    ioDrive2 Duo 1.2TB SLC
    Total cost: $28,500

Expected performance:

    read:  580,000 IOPS
    write: 535,000 IOPS
    space: 1.2TB

One will say: RAID 0 will fail. But the truth is the ioDrive2 Duo can also fail, so you have to buy two and RAID 1 them. I understand the difference between SLC and MLC (performance and durability), but the Ultrastar drives seem really solid, and unless you torment them they won't die. All in all, what is wrong with my calculations? Why do people buy those PCIe cards instead of building arrays of drives? It's simpler to manage, but it costs more than double?

Answer
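
As a back-of-envelope check of the question's own figures (shell arithmetic via bc):

    echo "scale=4; 12170 / 1120000" | bc    # SSD array:    ~$0.011 per read IOPS
    echo "scale=4; 28500 / 580000" | bc     # ioDrive2 Duo: ~$0.049 per read IOPS

On raw cost per read IOPS the drive array wins by roughly 4.5x, so the real question is what the PCIe card buys you besides IOPS.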

Weird memory usage in Oracle Linux

I have an Oracle Linux 7 server running Pentaho (a Java application). The machine has 23 GB of RAM, and to improve Pentaho performance I'm trying to configure Java to allocate 10 GB of RAM to Pentaho (-Xms10240m). But after I start Pentaho I get this:

    biserver-ce]# free -g
                  total    used    free    shared    buff/cache    available
    Mem:             23       4       0         0            18            18
    Swap:             9       0       9

The math doesn't add up (4 GB used + 0 GB free is not equal to 23 GB). The swap is unused, and I don't understand why there is 18 GB of data in cache. On another machine with the same configuration, used for testing, this does not happen. The Java process running:

    root 12409 1 2 Jul26 ? 00:21:16 java -Djava.util.logging.config.file=/opt/pentaho/server/biserver-ce/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dserver -Xmx10240m -Xms10240m -XX:MaxPe
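
This is most likely the kernel page cache rather than a leak: Linux counts cached file data under buff/cache and releases it on demand, which is why the "available" column still shows 18 GB. A quick way to confirm, using standard interfaces:

    free -g                                    # "available", not "free", is the useful number
    grep -E 'MemAvailable|^Cached' /proc/meminfo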

high availability - Understanding the nameserver aspect of a DNS based failover system

As part of a project I'm involved in, a system is required with as close to 99.999% uptime as possible (the system involves healthcare). The solution I am investigating involves multiple sites, each with its own load balancers and multiple internal servers, and its own replicated database which is synchronised with every other site. In front of all of this sits a DNS-based failover system that redirects traffic if a site goes down (or is manually taken down for maintenance). What I'm struggling with, however, is how the DNS aspect functions without itself presenting a single point of failure. I've seen talk of floating IPs (which present that point of failure), various managed services such as DNSMadeEasy (which don't provide the ability to fully test their failover process during their free trial, so I can't verify whether it's right for the project or not) and much more, and have been playing around with simple solutions such as assigning multiple A records
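
It may help to note that the nameserver layer is redundant by design: a zone lists several NS records and resolvers will try all of them, so DNS itself need not be a single point of failure as long as the nameservers sit in independent networks. A quick way to see what a client sees (the names below are illustrative):

    dig +short NS example.com      # should list two or more nameservers
    dig +short A app.example.com   # the record a failover service rewrites when a health check fails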

hp - PostgreSQL server: 10k RPM SAS or Intel 520 Series SSD drives?

We will be expanding the storage for a PostgreSQL server, and one of the things we are considering is using SSDs (Intel 520 Series) instead of rotating disks (10k RPM). Price per GB is comparable and we expect improved performance; however, we are concerned about longevity, since our database usage pattern is quite write-heavy. We are also concerned about data corruption in case of power failure (due to the SSDs' write cache not flushing properly). We currently use RAID10 with 4 active HDDs (10k, 146GB) and 1 spare configured in the controller. It's an HP DL380 G6 server with a P410 Smart Array Controller and BBWC. What makes more sense: upgrading the drives to 300GB 10k RPM, or using Intel 520 Series SSDs (240GB)?

Answer

If you're using a server equipped with a Smart Array P400 controller, you're dealing with a G5-era 300-series ProLiant (DL360 G5, DL380 G5, etc.) or a G4/G5-era 500-series ProLiant (DL580, ML570). All of those systems were eclipsed in 2009 or before.
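
Given the power-loss worry, one concrete test before trusting consumer SSDs with a write-heavy database is to measure honest flush behaviour; pg_test_fsync (shipped with PostgreSQL's contrib tools) exercises the same sync calls the server uses. An invocation sketch, with a hypothetical mount point:

    pg_test_fsync -f /mnt/new-volume/testfile

A suspiciously high fsync rate on a drive without power-loss protection usually means the write cache is lying about durability.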

virtualization - Hosting a ZFS server as a virtual guest

I'm still new to ZFS. I've been using Nexenta but I'm thinking of switching to OpenIndiana or Solaris 11 Express. Right now, I'm considering virtualizing the ZFS server as a guest within either ESXi, Hyper-V or XenServer (I haven't decided which one yet - I'm leaning towards ESXi for VMDirectPath and FreeBSD support). The primary reason is that it seems like I have enough resources to go around that I could easily have 1-3 other VMs running concurrently - mostly Windows Server, maybe a Linux/BSD VM as well. I'd like the virtualized ZFS server to host all the data for the other VMs, so their data could be kept on physically separate disks from the ZFS disks (mounted as iSCSI or NFS). The server currently has an AMD Phenom II with 6 total cores (2 unlocked), 16GB RAM (maxed out) and an LSI SAS 1068E HBA with (7) 1TB SATA II disks attached (planning on RAIDZ2 with hot spare). I also have (4) 32GB SATA II SSDs attached to the motherboard.
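
A sketch of the pool described above, assuming the LSI 1068E is passed through via VMDirectPath so the guest sees the disks natively (device names are illustrative; seven disks = six in RAIDZ2 plus one hot spare):

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 spare c1t6d0
    zfs create tank/vmdata
    zfs set sharenfs=on tank/vmdata    # export to the other guests over NFS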

domain name system - Windows Server 2003 DNS

We use Windows Server 2003 for DNS on our network. The forward DNS entries ("A" records) for Windows machines on the domain are populated automatically. However, the reverse DNS entries ("PTR" records) are not. The reverse lookup zone exists, and I can add entries to it manually, but it doesn't populate automatically; the client IPs are still only in the forward lookup zone DC.local. Dynamic updates are enabled for both the forward and reverse zones. On the DHCP pool I have:

    "Enable DNS dynamic updates according to the settings below" set to "Dynamically update DNS A and PTR records only if requested by the DHCP clients"
    "Discard A and PTR records when lease is deleted" ticked

In DNS I set "Scavenge stale records" to every 3 days on both the forward and reverse lookup zones.
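
One quick test is to force a client to re-register and watch whether a PTR record appears, which separates client-side registration problems from zone permission problems. From a domain-joined client's command prompt:

    ipconfig /registerdns

If the A record refreshes but the PTR still doesn't, common culprits are the reverse zone not accepting the clients' dynamic updates or the DHCP server not being a member of the DnsUpdateProxy group.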

linux - server running out of memory

The server is running out of memory, to the point where it starts killing processes. The total PSS memory (the actual memory charged out of resident memory) consumed by the top applications is less than the total memory on the system. I want to find out where this extra memory usage is happening - any ideas? Below are the outputs from meminfo, smem and free -m; any suggestions will be really appreciated.

    cat /proc/meminfo
    MemTotal:        5976008 kB
    MemFree:          138768 kB
    Buffers:            2292 kB
    Cached:            57444 kB
    SwapCached:        85980 kB
    Active:           324332 kB
    Inactive:         121836 kB
    Active(anon):     309264 kB
    Inactive(anon):    77992 kB
    Active(file):      15068 kB
    Inactive(file):    43844 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:       8159224 kB
    SwapFree:        6836184 kB
    Dirty:               572 kB
    Writeback:             0 kB
    AnonPages:        372160 kB
    Mapped:            13976 kB
    Shmem:               472 kB
    Slab:
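
When per-process PSS doesn't account for the used memory, the remainder is often kernel-side (slab caches, vmalloc, page tables) or huge pages, none of which is charged to any process. Standard places to look:

    slabtop -o | head -20                                      # largest slab caches (dentry, inode, etc.)
    grep -iE 'slab|vmalloc|pagetables|hugepages' /proc/meminfo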

Classless Reverse DNS with Recursion - BIND

I'm running BIND 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6 on CentOS 6.6. The only zone is for classless reverse DNS, which has been delegated. I'm no BIND or DNS expert, but as I understand it, classless reverse DNS requires recursion. With recursion set to "any", the server returns correct PTR records, but also functions as an open DNS server, which is not desired. With recursion set to localhost, all queries are denied.

Recursion any:

    64.19.199.56
    Server:   slcdns1.redacted.com
    Address:  64.19.199.55
    Aliases:  55.199.19.64.in-addr.arpa

    Non-authoritative answer:
    56.199.19.64.in-addr.arpa        canonical name = 56.0-127.199.19.64.in-addr.arpa
    56.0-127.199.19.64.in-addr.arpa  name = slcdns2.redacted.com
    0-127.199.19.64.in-addr.arpa     nameserver = slcdns1.redacted.com
    0-127.199.19.64.in-addr.arpa     nameserver = slcdns2.redacted.com
    slcdns1.redacted.com             internet address = 64.19.199.55
    slcdns2.redacted.com             internet address = 64.19.199.56
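
A common way out of the any-vs-localhost dilemma is to keep answering authoritative queries for everyone while restricting recursion to known clients. A named.conf sketch (the trusted network below is illustrative):

    acl "trusted" { 127.0.0.1; 64.19.199.0/24; };

    options {
        recursion yes;
        allow-recursion { "trusted"; };   // recursive lookups only for our own clients
        allow-query { any; };             // authoritative answers for everyone
    };

Note that following the CNAME chain created by classless (RFC 2317-style) delegation is the querying resolver's job, so the authoritative side generally does not need to recurse for outside clients at all.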