security - Why should I firewall servers?



PLEASE NOTE: I'm not interested in making this into a flame war! I understand that many people have strongly-held beliefs about this subject, in no small part because they've put a lot of effort into their firewalling solutions, and also because they've been indoctrinated to believe in their necessity.



However, I'm looking for answers from people who are experts in security. I believe that this is an important question, and the answer will benefit more than just myself and the company I work for. I've been running our server network for several years without a compromise, without any firewalling at all. None of the security compromises that we have had could have been prevented with a firewall.




I guess I've been working here too long, because when I say "servers", I always mean "services offered to the public", not "secret internal billing databases". As such, any rules we would have in any firewalls would have to allow access to the whole Internet. Also, our public-access servers are all in a dedicated datacenter separate from our office.



Someone else asked a similar question, and my answer was voted into negative numbers. This leads me to believe that either the people voting it down didn't really understand my answer, or I don't understand security enough to be doing what I'm currently doing.



This is my approach to server security:




  1. Follow my operating system's security guidelines before connecting my server to the Internet.


  2. Use TCP wrappers to restrict access to SSH (and other management services) to a small number of IP addresses (see the sketch after this list).


  3. Monitor the state of the server with Munin, and fix the egregious security problems inherent in munin-node's default configuration.



  4. Nmap the new server (again, before connecting it to the Internet). If I were to firewall this server, the open ports it reports are exactly the set that incoming connections would be restricted to.


  5. Install the server in the server room and give it a public IP address.


  6. Keep the system secure by using my operating system's security updates feature.
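For concreteness, here is a minimal sketch of steps 2-4 on a typical Linux host. The IP addresses are documentation placeholders, and it assumes sshd is built with libwrap support and that munin-node reads /etc/munin/munin-node.conf; adjust to your environment.

    # /etc/hosts.allow - SSH reachable only from trusted management addresses
    sshd: 203.0.113.10 203.0.113.11

    # /etc/hosts.deny - refuse SSH from everywhere else
    sshd: ALL

    # /etc/munin/munin-node.conf (excerpt) - only the Munin master may connect
    allow ^198\.51\.100\.5$

    # Step 4: from another machine, verify that only the intended ports answer
    nmap -sS -p- -Pn 192.0.2.10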




My philosophy (and the basis of the question) is that strong host-based security removes the necessity of a firewall. The overall security philosophy says that strong host-based security is still required even if you have a firewall (see security guidelines). The reason for this is that a firewall that forwards public services to a server enables an attacker just as much as no firewall at all. It is the service itself that is vulnerable, and since offering that service to the entire Internet is a requirement of its operation, restricting access to it is not the point.



If there are ports open on the server that do not need to be reachable from the whole Internet, then that software should have been shut down in step 1, which step 4 verifies. Should an attacker break into the server through vulnerable software and open a port themselves, they can (and do) just as easily defeat any firewall by making an outbound connection on a random port instead. The point of security isn't to defend yourself after a successful attack - that has already proven impossible - it's to keep attackers out in the first place.



It's been suggested that there are other security considerations besides open ports - but to me that just sounds like defending one's faith. Any operating system/TCP stack vulnerabilities should be equally vulnerable whether or not a firewall exists - based on the fact that ports are being forwarded directly to that operating system/TCP stack. Likewise, running your firewall on the server itself as opposed to having it on the router (or worse, in both places) seems to be adding unnecessary layers of complexity. I understand the philosophy "security comes in layers", but there comes a point where it's like building a roof by stacking X number of layers of plywood on top of each other and then drilling a hole through all of them. Another layer of plywood isn't going to stop the leaks through that hole you're making on purpose.




To be honest, the only way I see a firewall being any use for servers is if it has dynamic rules preventing all connections to all servers from known attackers - like the RBLs for spam (which, coincidentally, is pretty much what our mail server does). Unfortunately, I can't find any firewalls that do that. The next best thing is an IDS server, but that assumes that the attacker doesn't attack your real servers first, and that attackers bother to probe your entire network before attacking. Besides, these have been known to produce large numbers of false positives.
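For illustration only, the kind of dynamic rule I mean could be approximated on Linux with ipset and a blocklist feed; this is just a sketch, and the feed URL below is purely a placeholder.

    # Maintain a kernel-level set of known-bad networks and drop them
    # before any service ever sees the traffic
    ipset create -exist blacklist hash:net timeout 86400
    iptables -I INPUT -m set --match-set blacklist src -j DROP

    # Refresh the set periodically from a reputation feed (placeholder URL)
    curl -s https://dnsbl.example.com/drop.txt | while read net; do
        ipset add -exist blacklist "$net" timeout 86400
    done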


Answer



Advantages of a firewall:




  1. You can filter outbound traffic (see the sketch after this list).

  2. Layer 7 firewalls (IPS) can protect against known application vulnerabilities.

  3. You can block a certain IP address range and/or port centrally rather than trying to ensure that there is no service listening on that port on each individual machine or denying access using TCP Wrappers.

  4. Firewalls can help if you have to deal with less security-aware users/administrators, as they provide a second line of defence. Without one, you have to be absolutely sure that every host is secure, which requires good security understanding from all administrators.


  5. Firewall logs provide a central place to detect vertical scans, for example whether some client is periodically trying to connect to the same port on all of your servers. To do this without a firewall, one would have to combine logs from the various servers/hosts to get a centralized view.

  6. Many firewalls also come with anti-spam/anti-virus modules, which add further protection.

  7. OS-independent security. Securing a host requires different techniques depending on its operating system; for example, TCP Wrappers may not be available on Windows machines.
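As a rough illustration of points 1 and 3, an outbound-filtering policy on a Linux gateway might look like the following iptables sketch. The addresses are placeholders, and the services allowed out obviously depend on the environment.

    # Default-deny forwarded traffic; allow replies to established connections
    iptables -P FORWARD DROP
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Inbound: the public web servers may accept new HTTP/HTTPS connections
    iptables -A FORWARD -d 192.0.2.0/24 -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT

    # Outbound: servers may originate DNS and SMTP, nothing else (point 1)
    iptables -A FORWARD -s 192.0.2.0/24 -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -s 192.0.2.0/24 -p tcp --dport 25 -j ACCEPT

    # Centrally drop a hostile address range (point 3)
    iptables -I FORWARD -s 203.0.113.0/24 -j DROP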



Above all, if you do not have a firewall and a system is compromised, how would you detect it? Running commands like 'ps' and 'netstat' on the local system can't be trusted, as those binaries can be replaced. Running 'nmap' from a remote system is no guarantee either, because an attacker can ensure that the rootkit accepts connections only from selected source IP addresses at selected times.



Hardware firewalls help in such scenarios because it is extremely difficult to tamper with the firewall's OS and files compared to the host's OS and files.



Disadvantages of a firewall:





  1. People feel that the firewall will take care of security, and so stop updating systems regularly and shutting down unwanted services.

  2. They cost money; sometimes a yearly license fee needs to be paid, especially if the firewall has anti-virus and anti-spam modules.

  3. They are an additional single point of failure. If all traffic passes through a firewall and the firewall fails, the network stops. Redundant firewalls are possible, but that amplifies the previous point about cost.

  4. Stateful tracking provides no value on public-facing systems that accept all incoming connections.

  5. Stateful firewalls are a massive bottleneck during a DDoS attack and are often the first thing to fail, because they attempt to hold state and inspect all incoming connections.

  6. Firewalls cannot see inside encrypted traffic. Since all traffic should be encrypted end-to-end, most firewalls add little value in front of public servers. Some next-generation firewalls can be given private keys to terminate TLS and see inside the traffic; however, this increases the firewall's susceptibility to DDoS even further and breaks the end-to-end security model of TLS.

  7. Operating systems and applications are patched against vulnerabilities much more quickly than firewalls. Firewall vendors often sit on known issues for years without patching, and patching a firewall cluster typically requires downtime for many services and outbound connections.

  8. Firewalls are far from perfect, and many are notoriously buggy. Firewalls are just software running on some form of operating system, perhaps with an extra ASIC or FPGA in addition to a (usually slow) CPU. Firewalls have bugs, but they seem to provide few tools to address them. Therefore firewalls add complexity and an additional source of hard-to-diagnose errors to an application stack.


