Posts

Showing posts from November, 2017

exim4 reveals a mail alias when remote server rejects spam

I'm running exim4 (4.76) on Ubuntu 12.04.4. exim4 is set up to handle mail for mydomain.com. I have aliases set up that forward a@mydomain.com to b@gmail.com. I have SpamAssassin set up to work in conjunction with exim4 (via sa-exim.conf). Sometimes spam is sent to a@mydomain.com and SpamAssassin assigns it a low enough score that it forwards it to b@gmail.com. Gmail rejects the message as spam, so my exim4 server attempts to send a message back to the spam address saying:

This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:

b@gmail.com (generated from a@mydomain.com)

SMTP error from remote mail server after end of data: host gmail-smtp-in.l.google.com [2607:f8b0:4003:c02::1a]: 550-5.7.1 [xxxx:yyyy::zzzz:aaaa:bbbb:ccccc 12] Our system has detected that 550-5.7.1 this message is likely unsolicited mail.
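One hedged mitigation sketch: Exim's redirect router has a hide_child_in_errmsg option that stops the generated (child) address from being quoted in bounce messages. The router name and lookup below mirror a stock Debian-style exim4 config and are illustrative; your layout may differ.

```
# Sketch: in the router that expands /etc/aliases (Debian calls it
# system_aliases). hide_child_in_errmsg is a real redirect-router
# option; placement here is an assumption about your config layout.
system_aliases:
  driver = redirect
  data = ${lookup{$local_part}lsearch{/etc/aliases}}
  hide_child_in_errmsg = true
```

With this set, the bounce would report only a@mydomain.com rather than the b@gmail.com address it expanded to.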

samba - Shared Disk in modern data center (SAN, virtualization, etc)

I'm a developer... treading here on foreign terrain, so please pardon any naivete. I work on an application that stores data both in a database and on the filesystem.

Context: Clustering and Network Shares. In the past, when we ran the application clustered (i.e. multiple application servers fronting the data), we handled the filesystem as follows:

Node A: shares the "data directory" (via samba or nfs)
Nodes B, C, D, etc.: mount the "network share" and use that as their "data directory"

Reduced "disk speed" for nodes B, C, D was suboptimal, but not a big problem. Also note: the application uses its own file locking mechanism, so concurrent writes are not a problem.

The Questions: So in a modern data center, where fibre-channel connects servers to SANs, what's the best way to share a "hunk of disk" amongst several servers? Is such 'disk sharing' widely used? Any OS-specific concerns ("works on Linux, but not available on Windows"
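The Node A / Node B arrangement described above can be sketched with NFS; all hostnames and paths below are illustrative, not from the post.

```
# Node A: export the data directory (entry in /etc/exports)
/srv/appdata  nodeb(rw,sync) nodec(rw,sync)

# Nodes B, C, D: mount it as the application's data directory
# (entry in /etc/fstab)
nodea:/srv/appdata  /srv/appdata  nfs  defaults  0  0
```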

linux - Keep-Alive + HTTPClient

Is there a way to set keep-alive = false and close the connection using HTTPClient, and is this a best practice? By default in Apache, KeepAlive is on:

Accept-Ranges: bytes
Vary: Accept-Encoding
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html

From the client I want to set Connection: close.
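The question is about HTTPClient, but the underlying mechanism is just a request header. A minimal Python sketch (standard library only; a throwaway local server makes it self-contained) showing a client asking the server to close the connection:

```python
import http.client
import http.server
import threading

# Throwaway local server so the example runs anywhere.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# "Connection: close" asks the server not to keep the connection alive.
conn.request("GET", "/", headers={"Connection": "close"})
resp = conn.getresponse()
print(resp.status)
conn.close()
server.shutdown()
```

In Java's Apache HttpClient, setting the same header on the request object (e.g. setHeader("Connection", "close")) achieves the equivalent.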

filesystems - ZFS alternative for Linux?

I'm running OpenSolaris with ZFS for my main fileserver. I originally went with ZFS because I heard so many awesome things about it:

- Automatic disk spanning (zpools)
- Software RAID (RAID-Z)
- Automatic pool resizing by replacing RAIDZ'd disks
- Block-level checksumming
- No practical single-volume limits
- "Coming Soon" deduplication

After poking at OpenSolaris for a while, it really bugs me. I know Fedora/CentOS and Debian/Ubuntu far better, and I'm used to the Linux way of doing stuff vs the Solaris/BSD version. I want to switch to Linux, but I don't know what to use for my FS. I'm not willing to use FUSE or a pre-beta kernel to get ZFS. Btrfs has potential feature parity, but it's still not stable even now (months after I first looked into it). What do you recommend as an equivalent of ZFS (desired features noted above) for a Linux box?

Answer: Have you considered NexentaStor or Nexenta core? It's actively developed now that the OpenSolaris
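For comparison, the closest native-Linux approximations of the first few features on that list are mdadm + LVM (spanning, software RAID) and Btrfs (checksumming). A hedged sketch; device names are illustrative and these commands destroy data on the named disks:

```
# Software RAID + pooling with mdadm and LVM:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
pvcreate /dev/md0
vgcreate pool /dev/md0

# Or Btrfs, which adds block-level checksumming:
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
```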

centos7 - fail2ban running on CentOS 7 & getting “ssh connection refused”

Is anyone successfully running fail2ban on CentOS 7 and can tell me how to do it? I tried to install fail2ban with yum install fail2ban and run it (there are no extra rules in iptables -L, which seems odd according to what I found on the net). As soon as I reboot the server I can't log in as root or another user via ssh. The ports are not visible when scanning, and of course I get this error when I try to connect:

ssh: connect to host XXX.XXX.XXX.XXX port 12321: Connection refused

I changed the ssh port, but I also tried it with port 22 without luck. I wonder if someone knows a solution to this problem? It has to be a problem with fail2ban because I didn't install anything else.

UPDATE: I can log in via ssh after reboot, but no html page is served. Output of iptables -L:

Chain INPUT (policy ACCEPT)
target     prot opt source    destination
f2b-sshd   tcp  --  anywhere  anywhere     multiport dports ssh
ACCEPT     all  --  anywhere  anywhere     ctstat
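For reference, a minimal sshd jail on the non-standard port from the question would normally go in /etc/fail2ban/jail.local (a sketch; on CentOS 7 also verify that the configured banaction is compatible with firewalld or the iptables service, whichever you run):

```
[sshd]
enabled = true
port    = 12321
```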

Exchange not allowing emails to invalid addresses for SAP

I have a client who sends out emails from SAP via their Exchange server. Emails are received OK by recipients; however, if an email is sent to an invalid address there is no bounceback to the sender. Users need these bouncebacks to work. Bounceback emails are received OK when sending from Outlook. I have traced the issue to their Exchange 2007 server. When trying to send a test email to an invalid email address using their Acronis backup system, the following message is received:

The server cannot accept DATA command.

Other Acronis setups I work with can send emails to invalid addresses (i.e. bounceback emails are received). Any ideas?

CentOS php fopen permission denied error

I am not very familiar with CentOS and its specific permission issues with an httpd+php environment, so I got stuck with "failed to open stream: Permission denied" after installing a website on a production server. I have a directory layout like this:

/usr/local/project
...
/usr/local/project/../../public
/var/www/html -> /usr/local/project/../../public   (symlink)

The script tries to write to the project's sub directory and gets a permission denied error. I have tried:

1) setting 744 permissions for this folder
2) setting 777 for the entire project's tree
3) setting open_basedir to /usr/local/project in combination with the previous permission changes

Nothing helped. What can cause the permissions error?

Answer: Check if you have SELinux enabled. If yes then disable it: http://www.electrictoolbox.com/switch-off-selinux-centos-5/
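Disabling SELinux outright works but is heavy-handed; a hedged alternative is to confirm SELinux is the culprit and then label just the writable tree (the path follows the post's layout and is illustrative):

```
# Confirm SELinux is the cause
getenforce            # "Enforcing" suggests a labeling issue
sudo setenforce 0     # temporarily permissive; retry the script

# Targeted, permanent fix: allow httpd to write to that tree
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/usr/local/project(/.*)?'
sudo restorecon -Rv /usr/local/project
```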

My system generated emails[from php script] are going to spam folder in yahoo and hotmail

We have an application running on the biggerrole.com domain, hosted on a Hostgator VPS. The application sends transactional emails to our registered users. Currently the emails go to the spam folder for Yahoo/Hotmail users (Google users get them in the inbox). I have set up SPF and domain keys. Here are the headers for Yahoo:

From admin@biggerrole.com Fri Apr 8 10:18:30 2011
X-Apparently-To: piplayan@yahoo.com via 98.138.88.135; Fri, 08 Apr 2011 03:18:31 -0700
Return-Path:
X-YahooFilteredBulk: 174.122.51.197
Received-SPF: pass (mta140.mail.sp2.yahoo.com: domain of admin@biggerrole.com designates 174.122.51.197 as permitted sender)
X-YMailISG: dnBjapEcZAr94Z9Ovuwwtj_hhrCu9qv5Mf_A5UxIKsF3TbZh AKN4vekfWEmGa3Bygg9E89va3xgJ1GcDxcB5I7uzKvTO0rFkdoYOBBTDP6Ks KxktHdCQHSFsNJD.dp3ItrMLw3.BEeK1wwvHV_QZAldvO3yxcTqyrQRCCe14 1eHlvC0o2fkuW2i9s__Y.O2DXf9sjCs1mtcPsIaQUi.WnNQazqWy5O6NnUwO iT2juogJG4BLjC6Wb_FgzMf._XMEKtFjO5QApiKniiSl4crgP1XB3_UTLzwI 5CH4o7u4KY2BoJcPrXW9Yk_5l_JeIdDmA0Puvnhn4lGuk60CSO2gfCS
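For reference, SPF and DomainKeys/DKIM are published as DNS TXT records. An illustrative sketch using the sending IP from the headers above; the selector name and key are placeholders, not values from the post:

```
biggerrole.com.                     IN TXT "v=spf1 ip4:174.122.51.197 -all"
selector._domainkey.biggerrole.com. IN TXT "v=DKIM1; k=rsa; p=<public-key>"
```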

networking - How can I detect a DDoS attack using pfSense so I can tell my ISP who to block?

Last week my network was hit by a DDoS attack which completely saturated our 100 MBps link to the internet and pretty much shut down all the sites and services we host. I understand (from this experience as well as other answers) that I cannot handle a DDoS attack such as this on my end, because even if we drop the packets they have still been sent over our link and are saturating our connection.

However, when this happened my ISP was (strangely enough) unable to tell me where the attack was coming from. They said if I could determine the source (e.g. via tcpdump) I could give them IP addresses to block. But things were so overloaded that running tcpdump was impossible; I just couldn't view the output.

Nearly all our servers are behind a pfSense router. How can I detect a DDoS attack using pfSense so I can tell my ISP who to block? I don't want to block the attack myself, I just want to get alerts / be able to view a list of IP addresses that are using way more bandwidth tha
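A hedged sketch of what this can look like from the pfSense shell: pfSense is built on pf, so the state table can be inspected directly. Exact flags and output columns vary by version, so treat this as a starting point rather than a recipe:

```
# Interactive view of states ordered by bytes (pftop ships with pfSense)
pftop -o bytes

# Non-interactive: dump states and count them per remote address
# (the address field position depends on the state's direction)
pfctl -ss | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
```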

linux - 384 MB enough for a starter VPS?

I'm considering renting a VPS with 384 MB of memory. It would run CentOS and would have cPanel with Apache 2 / MySQL, and Phusion Passenger with nginx / SQLite. What do you think: will it have enough memory? It would serve around 10 small-traffic PHP/MySQL websites and 3-4 small-traffic Ruby on Rails apps. Thanks for your suggestions.
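A back-of-envelope budget suggests the answer. Every figure below is a rough assumption, not a measurement from the poster's setup:

```python
# Rough resident-memory guesses in MB (assumptions, not measurements)
apache_php = 10 * 25   # ~25 MB per prefork child serving PHP, 10 busy children
mysql      = 120       # MySQL with modest buffer settings
rails      = 4 * 80    # ~80 MB per Passenger-managed Rails instance
cpanel     = 100       # cPanel and its background services

total = apache_php + mysql + rails + cpanel
print(total)  # comfortably above 384
```

Even with generous tuning downward, the workload described would be squeezed in 384 MB.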

permissions - How to revert mass ownership change?

I accidentally executed chown someuser * -R while I was at / on my server. I thought maybe issuing chown root * -R would somehow fix it, but it seems that there are some problems. For example, DirectAdmin now acts weird when I try to log in. It says:

Unable to determine Usertype
user.conf needs to be repaired

Is there any way that I can fix the situation?

Update: I thought of maybe fixing the problem by writing a shell script which checks every file's group and sets the user accordingly, since to my knowledge files usually have the same values for user and group.

Answer: That is unfortunate! What the latter command you issued did was change the ownership of all files to root. So any files that were set up as suid (set UID, so that they run with the privileges of their owner) will run with root permissions, and also users will not own their home directories. Be very careful of this. Consider using ls to search for such files, a
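On RPM-based distributions, ownership of package-managed files can be restored from package metadata; a hedged sketch (this does not cover files outside packages, such as users' home directories, which still need manual repair):

```
# Restore the recorded owner/group and permissions for every package's files
for pkg in $(rpm -qa); do rpm --setugids --setperms "$pkg"; done

# Audit setuid binaries, since the accidental chown is most dangerous there
find / -xdev -perm -4000 -ls
```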

amazon web services - Does AWS Network Load Balancer prevent DDoS

AWS ALB does routing based on content, which means many common DDoS attacks like SYN floods and UDP reflection will be blocked. On the other hand, AWS NLB does not absorb any traffic, hence my backend EC2s are open to any DDoS. So should I pay for AWS Shield Advanced?

Answer: When you're looking at that sort of monthly spend (US$3K per month) you should have an AWS sales / technical person advising you. Based on EIPs only being part of Shield Advanced, you probably won't get DDoS protection without the advanced product. However, you can get DDoS protection MUCH more cheaply from providers like CloudFlare.

domain name system - Initial Windows 2008 VPS Setup

I've just got my first VPS server, yay! It'll primarily be used for my own hosting (I'm a Web Application Developer) and for friends & family. It's just been set up for me bare bones and I have RDP to jump on and play around. But now that I've installed the basic roles, database engine, hMailServer (not fully configured yet) etc., I'm feeling slightly in over my head.

When I signed up I provided these settings:

Host name: myhostname.co.nz
NS1 Prefix: barry
NS2 Prefix: terry

I then received my two IP addresses (say):

155.255.355.555
155.255.355.556

Confusion area 1: I think the main confusion is around DNS and how all that jazz works... I added the DNS role and followed some basic instructions from here: Install & Configure Windows DNS Service. Was I correct in following that? Is there a better tutorial out there? Note I replaced details in the tutorial with the settings above. So the DNS Manager looks something like: (same as parent folder) Start of A
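The NS1/NS2 prefixes from signup translate to zone records roughly like the sketch below (using the post's placeholder values; the nameserver hosts also need matching glue records registered at the registrar for the zone to resolve):

```
myhostname.co.nz.        IN NS  barry.myhostname.co.nz.
myhostname.co.nz.        IN NS  terry.myhostname.co.nz.
barry.myhostname.co.nz.  IN A   155.255.355.555
terry.myhostname.co.nz.  IN A   155.255.355.556
```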

security - What are main steps doing forensic analysis of linux box after it was hacked?

What are the main steps in doing forensic analysis of a linux box after it was hacked? Let's say it is a generic linux server running mail/web/database/ftp/ssh/samba, and it started sending spam and scanning other systems. How to start searching for how the hack was done and who is responsible?

Answer: Here are some things to try before rebooting:

First of all, if you think you might be compromised, unplug your network cable so the machine can't do further damage.

Then, if possible, refrain from rebooting, as many traces of an intruder can be removed by rebooting.

If you thought ahead and had remote logging in place, use your remote logs, not the ones on the machine, as it's all too easy for someone to tamper with the logs on the machine. But if you don't have remote logs, examine the local ones thoroughly.

Check dmesg, as this will be replaced upon reboot as well.

In linux it is possible to have running programs - even after the running file has been deleted. Check for these w
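The "running programs whose file has been deleted" check mentioned above can be sketched like this (run from known-good binaries if you suspect a rootkit, since ls and lsof themselves may be trojaned):

```
# Processes whose executable has been unlinked from disk
ls -l /proc/[0-9]*/exe 2>/dev/null | grep '(deleted)'

# Open files with a link count of zero (deleted but still held open)
lsof +L1
```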

What is the correct syntax to run cron every 4 hours?

I have the following syntax (which I think is correct?) but it runs the command every minute!

* */4 * * * /cmd.sh

Answer:

0 0,4,8,12,16,20 * * * /cmd.sh

That's probably how I would do it. This will run the job every 4 hours, on the hours of 00:00, 04:00, 08:00, 12:00, 16:00, 20:00. This is just a slightly more verbose way of writing */4, but it should work the same.
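The compact equivalent, for reference: the first field pins the minute to 0, which is exactly what the original line was missing (a leading * means "every minute" of every fourth hour).

```
0 */4 * * * /cmd.sh
```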

routing - multi-source monitoring companies

I've mentioned before on here that I'm using Pingdom, and am quite happy with it. For the price it's awesome. One of the features that took us to them is that they have monitoring sites all over the world. Our hope was that this would give us a cheap way to tell if something in our routing is b0rked, and some part of the world can't see us. Unfortunately, they'll only alert if two different sites can't see you.

What I'm looking for is a similar monitoring system that will tell me if any individual site can't get to me. Some logic on their side to tell the difference between me being out and them being out would be great, but I'll take it even without.

[Edit] Some clarification: I've only got one site (both logically and physically) that I want to monitor from many places.

[Edit 2] I'm happy to pay for this service. I'm already paying Pingdom, and probably would continue to do so even with this new service, assuming that they don'

Configure Apache VirtualHosts with a Load Balancer

I have two servers with private IPs, running Apache 2.4. I am serving the same content on both servers and there is a load balancer in front of them. The load balancer uses a public IP, and there is a domain (mycompany.com) associated with it. However, the client bought a new domain and wants to use the same servers to serve the new content.

As far as I understand I need to configure VirtualHosts. I've read the documentation regarding VirtualHosts and it seems to be a case for name-based virtual hosts. But since the public IP for the hostname is associated with the balancer, I do not know how I should configure the private servers so that they can work out which content to serve. I appreciate the guidance.

Answer: Apache does not need to resolve anything regarding DNS. Just make sure each new virtualhost for the new domains has the appropriate "ServerName" entry reflecting that new domain; this way Apache HTTPD will know where to deliver the req
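A minimal sketch of the answer's advice; the second domain name and both document roots are illustrative. Name-based matching works because the balancer forwards the client's Host header unchanged:

```
<VirtualHost *:80>
    ServerName mycompany.com
    DocumentRoot /var/www/mycompany
</VirtualHost>

<VirtualHost *:80>
    ServerName newdomain.example
    DocumentRoot /var/www/newdomain
</VirtualHost>
```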

domain name system - If DNS Failover is not recommended, what is?

As a followup question to this very popular question: Why is DNS failover not recommended?, I think it was agreed that DNS failover is not 100% reliable due to caching. However, the highest voted answer did not really discuss what the better solution is to achieve failover between two different data centers. The only solution presented was local load balancing (single data center). So my question is quite simply: what is the real solution for cross data center failover?

Answer: A whole data center would need to go down or be unreachable for this to apply. Your backup at another data center would then be reached by routing the IP addresses to the other data center. This would happen through the BGP route announcements from the primary data center no longer being provided. The secondary announcements from the secondary data center would then be used. Smaller businesses are generally not large enough to justify the expense of portable IP address allocations and their own autonom

Bandwidth with SAS expanders and RAID controllers

Having a RAID controller that has eight internal SATA3 lanes, you can get 6 Gb/s on all eight drives. What if I connect a 24 port SAS expander to an eight port RAID controller: do I still get a max throughput of 8 x 6 Gb/s, or am I able to get 24 x 6 Gb/s, assuming the expander is rated for 6 Gb/s on all ports? Of course the PCIe bandwidth is going to limit it, as well as the RAID controller, but is this right theoretically speaking? PCIe 2.0 x8 has a bandwidth of 4000 MB/s and PCIe 3.0 x8 has 7880 MB/s.

As an example, I was thinking of buying an LSI MegaRAID 9271-8i for my home server. It has eight internal SATA 6 Gb/s lanes. With that one I am able to connect eight hard drives and they can work at their limits in terms of transfer rates, because there is one 6 Gb/s lane available for each drive. But in the future the storage capacity might be too low. I thought I could just add a SAS expander, like the Intel RES2SV240. It is a 24 port expander rated for 6 Gb/s per port. So do I get the full poten
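The arithmetic can be made explicit. The key point is that an expander multiplies ports, not bandwidth, so the eight controller lanes remain the ceiling between expander and controller. Figures follow the post; 8b/10b encoding is assumed for 6G SATA/SAS:

```python
lane_gbps = 6          # per-lane line rate (SATA3 / 6G SAS)
controller_lanes = 8   # LSI 9271-8i internal lanes
drives = 24            # drives behind the Intel RES2SV240 expander

uplink_gbps = controller_lanes * lane_gbps
print(uplink_gbps)     # 48 Gb/s shared by all drives behind the expander

# 8b/10b encoding carries one data byte per 10 line bits,
# so usable MB/s = Gb/s * 1000 / 10
uplink_mbs = uplink_gbps * 1000 // 10
print(uplink_mbs)      # 4800 MB/s, above PCIe 2.0 x8's 4000 MB/s

per_drive_mbs = uplink_mbs / drives
print(per_drive_mbs)   # 200.0 MB/s each if all 24 drives stream at once
```

So the expander setup gives 8 x 6 Gb/s at best, shared across the 24 drives, not 24 x 6 Gb/s.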

ssl - Why Nginx calls for invalid certificate in non-existent subdomains just to redirect to 404?

Have two server blocks.

default_server block inside the http block of nginx.conf:

server {
    server_name _;
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 default_server;
    return 404;
}
include /etc/nginx/sites-enabled/*;

3. A working domain/website block inside sites-enabled:

server {
    listen 80;
    listen 443;
    server_name example.com;
    return 301 https://www.$server_name$request_uri;
}

server {
    listen 80;
    listen 443 ssl;
    root /var/www/example.com/htdocs/;
    index index.html index.htm;
    server_name www.example.com;
}

(I have this setup to redirect all non-www to www and all http to https)

4. I have a cert for both non-www and www for