
Posts

Showing posts from November, 2014

Apache seems to be using the old expired certificate even though the new one is installed

Apache 2.2.3/mod_ssl/CentOS 5.5 VPS Our certificate expired on 2011-10-06, and even though we have seemingly installed the new one correctly, browsing to the site still shows an expired certificate! I've tried deleting my browser cache and using several different browsers. Relevant lines from the ssl.conf file (I've excluded those commented out.): Listen 127.0.0.1:443 SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) SSLSessionCacheTimeout 300 # Note - I tried disabling SSLSessionCache with the "none" setting but it didn't help. SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /var/certs/gentlemanjoe.com/new2011/gentlemanjoe.com.crt SSLCertificateKeyFile /var/certs/gentlemanjoe.com/new2011/gentlemanjoe.com.key SSLCertificateChainFile /var/certs/gentlemanjoe.com/new2011/gd_bundle.crt SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \
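A quick way to confirm which certificate Apache is actually serving, independent of browser caching, is to query the listener directly from any machine with openssl (a generic check, not part of the original post; the hostname is the one from the config above):

# print the subject and validity dates of the certificate the server hands out
openssl s_client -connect gentlemanjoe.com:443 -servername gentlemanjoe.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
# if the old dates still show up, check for another VirtualHost or a second ssl.conf
# still pointing at the old SSLCertificateFile, then restart Apache:
httpd -S            # lists parsed VirtualHosts and which config file defines them
service httpd restart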

kvm virtualization - Is it safe to boot a linux server from a USB drive?

I have an HP DL360 G7 server I plan to install KVM and ZFS on. The purpose is going to be a lab in a box. I have the 8 drive bays in the front loaded with 4 drives and an SSD (for the ZFS ZIL cache). My goal was to keep the disk array away from the actual OS disk. What I am wondering is whether or not it is safe to boot a Linux server installation from a USB drive for "production" use. The server has an embedded USB / SD card reader on the motherboard for VMware and other embedded solutions. This raises a question for me because once VMware is loaded it stays in memory. On the other hand a Linux install does not (at least not 100%). I am concerned that if I load the OS on a USB drive (or SD card) I will burn the card out. Can anyone please give me some insight on this? I am wondering what my options are. The way I see it, my options currently are to make Linux boot from the ZFS array or use a USB drive. The first option would be okay if I could make GRUB play nice with ZFS root booting.
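If the OS does end up on USB/SD media, a common way to limit flash wear is to cut down on small writes. A rough sketch with generic mount options (the device name and tmpfs sizes are placeholders, not specific to the DL360 setup in the question; anything kept in tmpfs is lost on reboot):

# /etc/fstab - reduce metadata writes on the flash-backed root
/dev/sdX1   /         ext4   defaults,noatime,commit=60      0 1
# keep frequently written paths in RAM
tmpfs       /tmp      tmpfs  defaults,noatime,size=512m      0 0
tmpfs       /var/log  tmpfs  defaults,noatime,size=256m      0 0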

linux - When IP aliasing how does the OS determine which IP address will be used as source for outbound TCP/IP connections?

I have a server running Ubuntu Server with four IP addresses aliased on a single NIC. eth0 192.168.1.100 eth0:0 192.168.1.101 eth0:1 192.168.1.102 eth0:2 192.168.1.103 (Using 192.168.x.x for sake of example, assume these are NAT-ed to a range of public IP addresses) One of our clients publishes their inventory via FTP, so we log in nightly to download a large file from their server. Their firewall expects our (passive) FTP connection to be made from 192.168.1.100. Given that my server logically has four IP addresses on a single adapter, how does the operating system determine which IP address is used as source for outbound TCP/IP connections? Let's say I ssh into my server on 192.168.1.101 and run FTP interactively. Will the outbound TCP/IP connection use 192.168.1.101 because the OS knows that's the interface over which my shell is connected? What if the FTP task is run non-interactively via a cron job where there is no shell? As you can probably tell, th
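For what it's worth, you can ask the kernel which source address it would pick for a given destination; a generic check (the destination below is just a documentation address, and the sample output is illustrative):

# show the route and the source address the kernel selects for an outbound connection
ip route get 203.0.113.10
#   203.0.113.10 via 192.168.1.1 dev eth0  src 192.168.1.100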

linux - Debian huge memory consumption

My Debian server has been up for 6 days now. It's consuming a huge amount of memory - 2 GB of the 4 GB available at the moment. It keeps reserving another 100 MB each day. Here is what the top command says (sorted by the RES column): top - 00:50:27 up 6 days, 8:27, 1 user, load average: 0.03, 0.04, 0.06 Tasks: 116 total, 1 running, 115 sleeping, 0 stopped, 0 zombie Cpu(s): 1.8%us, 0.1%sy, 0.0%ni, 98.0%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3972480k total, 1984072k used, 1988408k free, 356180k buffers Swap: 7815612k total, 0k used, 7815612k free, 1404292k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6790 mysql 20 0 147m 23m 5796 S 0 0.6 20:59.79 mysqld 6320 root 20 0 13496 9476 1700 S 0 0.2 0:12.96 miniserv.pl 26855 root 20 0 22636 7768 4684
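Worth noting: in the top output above, most of the "used" memory is page cache (1404292k cached, 356180k buffers), which the kernel reclaims on demand. A quick way to see memory actually held by applications is free's buffers/cache line (generic command; the figures below are approximate conversions of the top numbers above, not real output):

free -m
#              total    used    free   shared  buffers  cached
# Mem:          3879    1937    1941        0      347    1371
# -/+ buffers/cache:     219    3659   <- memory really used / really free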

linux - Weird SSH, Server security, I might have been hacked

I am not sure if I've been hacked or not. I tried to log in through SSH and it wouldn't accept my password. Root login is disabled so I went to rescue and turned root login on and was able to log in as root. As root, I tried to change the password of the affected account with the same password with which I had tried to log in before, passwd replied with "password unchanged". I then changed the password to something else and was able to log in, then changed the password back to the original password and I was again able to log in. I checked auth.log for password changes but didn't find anything useful. I also scanned for viruses and rootkits and the server returned this: ClamAV: "/bin/busybox Unix.Trojan.Mirai-5607459-1 FOUND" RKHunter: "/usr/bin/lwp-request Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: a /usr/bin/perl -w script, ASCII text executable Warning: Suspicious file types found in

linux - RAID iostat showing 0.00 for wait's and %util

I have an EC2 server running Ubuntu 11.10 with 6 RAID devices, 5 with 8 EBS drives each as part of a RAID 10 setup, the other one with 4 EBS drives as part of a RAID 0 setup. I am finding that even though each of the individual EBS devices shows correct iostat values, the md devices are showing 0.00 for avgqu-sz, await, r_await, w_await, svctm and %util. The other stats for the md devices (rrqm/s, wrqm/s, r/s, w/s, rkB/s, wkB/s, avgrq-sz) all seem to be correct. Any ideas on how I might be able to get stats for the missing columns?

routing - How can I route SSH traffic based on a subdomain?

I have a home server running Ubuntu 10.04 that is running two services: an SSH service and a dockerized Gogs service. I would like to essentially reverse-proxy incoming SSH connections based on the subdomain. For instance, I'd like SSH connections made via ssh user@mydomain.com to be forwarded to port 2222 and those made via ssh user@gogs.mydomain.com to be forwarded to port 10022. In essence, I'd like something analogous to nginx for SSH traffic. How can this be achieved? Answer This is impossible. SSH has no notion of a Host header as is present in HTTP. The best you can do is port-based routing.
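A hedged sketch of what port-based routing looks like from the client side: two host aliases in ~/.ssh/config pointing at different ports on the same box (hostnames and ports taken from the question; the alias names are made up):

# ~/.ssh/config on the client
Host myserver
    HostName mydomain.com
    Port 2222

Host gogs
    HostName mydomain.com
    Port 10022
# "ssh user@myserver" and "ssh user@gogs" then reach the two different services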

Stopping incoming spam with sendmail

I am having an issue due to a "smart" sysadmin who made some choices while I was away for two months: spam. I manage probably close to 10,000 web/mail sites. He decided to let all mail to every one of those domains go to /dev/null if the user did not exist, instead of bouncing it back. Which is OK in some cases, but the problem with that is that it says recipient OK for unknown users, which makes spammers believe they are hitting a valid address. So, with all that said, I am now seeing TONS of attempted spam coming into all of these sites and I can't figure out a fix on a server-by-server basis. Right now they are back to getting a user unknown, so bandwidth on the network has dropped a decent amount since the actual content is not being delivered; however, since the mail is still making it to me, I am losing a good amount of bandwidth on DNS lookups per message as well as my initial bounceback. Doesn't seem like it would take a lot but with the volume of sites we

high availability - How to set up Traefik for HA? Need a reverse-proxy in front of Traefik?

I am trying to set up Traefik on a production site, and I'm struggling with some high availability issues. I think we still need a reverse-proxy in front of the Traefik cluster. Here are the potential setups that I've considered, and why the reverse-proxy seems to be needed: Set up DNS A records to point to each of the Traefik nodes for load balancing and failover. This practice is discouraged according to multiple sites including this SO question and this SF question. Even using a service like DNSMadeEasy seems to be discouraged due to DNS caching and TTL issues. Point one DNS record to one of the nodes running Traefik. That node becomes a SPOF. My nodes are running on CoreOS, which reboots after every update, so we would be guaranteed a few minutes of downtime each week. We could move the DNS record to an alternate node whenever downtime is expected. This would be a pain to manage manually. I can envision a solution paired with locksmithd that handles this automatical

CentOS iptables configuration for WordPress and Gmail SMTP

Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to configure the server as close to default as possible, but I like it to be secure as well (no FTP, custom SSH port). While getting my WordPress site to run as desired, I'm running into some connection problems. Two things are not working: installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber), and sending emails from my site using the Gmail SMTP server (Failed to connect to server: Permission denied (13)). I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic for port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the ter

linux - Dynamic DNS for professional servers

This might sound like an unusual subject, but recently I had major issues with changing DNS servers for our web apps and certain nameservers not releasing our domain names quickly enough. So I've been thinking about either signing up for DDNS servers and putting all our IPs/domains on there so we can change quickly if needed - good idea? Secondly, is there any need or benefit to creating my own dynamic name server, and if so, what are the major benefits? Sorry if this sounds like an open question but I am interested in the concept of DDNS for all of our domains Answer DDNS is just the same as regular DNS; it just uses a very low time to live. When migrating a server you have to set your own TTL very low (a few days beforehand). One cannot guarantee fast propagation to every single DNS server on the WAN. There will always be a delay.
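For example, lowering the TTL ahead of a migration in a BIND-style zone looks roughly like this (an illustrative snippet, not from the answer; names and addresses are placeholders):

$TTL 300                          ; default TTL dropped from e.g. 86400 to 5 minutes
www    IN  A   203.0.113.10       ; old server
; after the move, change the A record; caches then expire within ~5 minutes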

storage - Are SSD drives as reliable as mechanical drives (2013)?

SSD drives have been around for several years now. But the issue of reliability still comes up. I guess this is a follow up from this question posted 4 years ago, and last updated in 2011. It's now 2013, has much changed? I guess I'm looking for some real evidence, more than just a gut feel. Maybe you're using them in your DC. What's been your experience? Reliability of ssd drives UPDATE: It's now 2016. I think the answer is probably yes (a pity they still cost more per GB though). This report gives some evidence: Flash Reliability in Production: The Expected and the Unexpected And some interesting data on (consumer) mechanical drives: Backblaze: Hard Drive Data and Stats Answer This is going to be a function of your workload and the class of drive you purchase... In my server deployments, I have not had a properly-spec'd SSD fail. That's across many different types of drives, applications and workloads. Remember, not all SSDs ar

Apache memory usage optimization

Apache is using too much of my server memory, causing it to crash. I have 4GB of RAM in the server. I'm trying to fine-tune Apache settings in order to improve its performance, but I'm quite new at this. I was trying to follow this article's advice but I'm not sure how to calculate things and it seems I'm making it worse. My top output reads like: 11697 apache 15 0 322m 37m 4048 S 0.0 0.9 0:00.52 httpd 13602 apache 15 0 323m 37m 3944 S 0.0 0.9 0:00.50 httpd 11786 apache 15 0 322m 36m 4052 S 0.0 0.9 0:00.50 httpd 12525 apache 15 0 322m 36m 4040 S 0.0 0.9 0:00.63 httpd 11806 apache 15 0 322m 36m 3952 S 0.0 0.9 0:00.42 httpd 11731 apache 15 0 322m 36m 4036 S 0.0 0.9 0:00.46 httpd 11717 apache 16 0 322m 36m 3956 S 0.0 0.9 0:00.54 httpd 11659 apache 15 0 322m 36m 3980 S 0.0 0.9 0:00.49 httpd So, it would be MaxClients = 3000 / (322-37) = 10. Is that right? Also, what should be the values f
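A commonly cited back-of-the-envelope approach (a hedged sketch, not a definitive tuning; the 3000 MB figure is an assumption about what is left over for Apache): measure the average resident size of an httpd worker and divide the memory you can spare for Apache by it:

# average resident memory (MB) per Apache worker
ps -o rss= -C httpd | awk '{sum+=$1; n++} END {print sum/n/1024 " MB"}'
# with ~37 MB resident per worker (the RES column above) and ~3000 MB left for Apache:
#   MaxClients = 3000 / 37 = ~80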

Does adding heaps of drives to a RAID 0 increase performance?

Does adding heaps of drives to a RAID 0 increase performance? I know that two drives in a striped RAID will usually be faster than a single drive, but will I notice a difference in performance between, say, 2 drives in a striped RAID and 8? Is there a general limit to the number of drives in the RAID before you really don't get any more benefit? A similar question has been asked here: Does adding more drives to a RAID 10 Array increase performance? But I'm really asking if adding many drives to a RAID 0 has improvements over just adding, say, 2 or 4. Does the performance keep increasing?

MySQL Tuning -- High Memory Usage

I'm trying to tune a MySQL db using mysqltuner. Mysqltuner is advising that I increase the join_buffer_size and the query_cache_size. At the same time, however, it is warning that my max memory usage is high - 200%+ of installed RAM (which is 2GB). The bind I'm in is of course that if I do what mysqltuner says, the memory usage will shoot up even higher. So what do I do here? Is the problem rather not with MySQL but with the apps running on this server that are evidently requiring MySQL to do a huge amount of caching? How would you MySQL administrator experts out there proceed from here? See the mysqltuner report below along with my current [mysqld] settings: MySQLTuner report: MySQLTuner 1.2.0 - Major Hayden Bug reports, feature requests, and downloads at http://mysqltuner.com/ Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped vers

ZFS: SAS v SATA - does it matter?

I understand from NEC's white paper, Silent data corruption in disk arrays , that: some SAS drives should have a "T10-DIF" feature to detect silent data corruption; whereas "There is no standard for ATA-based drives (including SATA) that protects against silent data corruption at the SCSI level in the storage technology stack." The point of that white paper is to inform people of NEC's proprietary technology for protecting against silent data corruption in SATA drives. However, ZFS seems to provide at least equivalent protection, and is preferable to me as it is not proprietary (except for Oracle's most recent ZFS revisions). I have two questions: Am I right in thinking that using ZFS with T10-DIF SAS drives would give an additional layer of protection against silent data corruption as compared to using just one of those two technologies alone? Given that T10-DIF SAS drives do not seem to be readily available, what reasons are there - if any - to prefer

apache 2.2 - RewriteCond not matching on my IP when matching on %{REMOTE_ADDR}

I want a mod_rewrite rule not to be executed when traffic is hitting the web server from the internal network. The web server is an Apache 2.2. The following RewriteCond is meant to guard the rewrite rule: RewriteCond %{REMOTE_ADDR} !=192\.168\.[0-15]\.[1-255] If I access the web server using IP 192.168.15.173, the rule doesn't seem to kick in and thus the rewrite rule is executed despite my internal address. Where is my mistake? Matching the simpler: RewriteCond %{REMOTE_ADDR} !=192\.168\.15\.173 fails as well. I used the "Blocking of Robots" example in http://httpd.apache.org/docs/trunk/rewrite/access.html to build the rule. Am I missing something? Edit: I already tried to investigate using rewrite logging, but that didn't bring up any useful information. This is what happens during the request: 192.168.15.173 - - [12/Jun/2013:13:50:17 +0200] [example.com/sid#7f3c6afb5e30][rid#7f3c6f864b68/initial] (2) init rewrite engine with requested ur
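For reference, a hedged sketch of a condition that treats its argument as a regular expression rather than the lexical "=" comparison; the address range covered is an assumption about what "internal network" means here:

# skip the rewrite for anything in 192.168.0.0 - 192.168.15.255
RewriteCond %{REMOTE_ADDR} !^192\.168\.(1[0-5]|[0-9])\.
# RewriteRule ...   (the existing rule follows here)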

Configure IPv6 - CentOS OVH - Can't ping

I can't ping my IPv6 address from outside or inside of the server; I can't ping any IPv6 address. My IPv6: 2001:41d0:2:XXXX::/64 I use CentOS 6.6 with a Xen kernel that supports IPv6. My provider is OVH and I followed several guides: http://guide.ovh.com/Ipv4Ipv6 and http://www.cyberciti.biz/faq/rhel-re...configuration/ and here is my config: cat /etc/sysconfig/network ... NETWORKING_IPV6=yes cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 BOOTPROTO=static ... IPV6INIT=yes IPV6_AUTOCONF=no IPV6ADDR="2001:41D0:2:xxxx::/64" IPV6_DEFAULTGW="2001:41d0:2:xxff:ff:ff:ff:ff" cat /etc/sysconfig/network-scripts/route6-eth0 ... net.ipv6.conf.all.autoconf = 0 net.ipv6.conf.default.autoconf = 0 net.ipv6.conf.eth0.autoconf = 0 net.ipv6.conf.all.accept_ra = 0 net.ipv6.conf.default.accept_ra = 0 net.ipv6.conf.eth0.accept_ra = 0 net.ipv6.conf.all.accept_redirects=0 net.ipv6.conf.all.router_solicitations=1 net.ipv6.conf.default.proxy_ndp=1 net.ipv6.conf.all.proxy_ndp=1 net.i
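Some generic checks that help narrow this kind of thing down (standard iproute2/ping6 commands; the gateway address is the one from the config above, and note it sits outside the assigned /64, so the default route usually can't be installed until the gateway is reachable on-link):

ip -6 addr show dev eth0                      # is the address assigned and not stuck in "tentative"?
ip -6 route show                              # is there a default route via 2001:41d0:2:xxff:ff:ff:ff:ff ?
ping6 -c 3 2001:41d0:2:xxff:ff:ff:ff:ff       # can the gateway itself be reached?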

storage - What are the options for attaching DAS units greatly exceeding the 62TB Virtual Disk max size limit on vSphere 6.0/6.5?

Our new DAS unit is on the horizon and I'm looking for the best way to configure the storage. We currently use vSphere 6.0 but will have to upgrade to 6.5 when Update 1 is released, as all HDDs in the new DAS are 512e (sector size is 4096 emulated as 512), which is only supported in 6.5. The total capacity of two Dell MD1280 units (populated with 10TB HDDs) will greatly exceed the 64 TB max Datastore size, many times over; it depends on the number of hard drives we decide to use/buy. Initially I thought about creating multiple Virtual Disks in iDRAC: RAID5 -> 7 x 10TB = 60TB with 1 drive failure tolerance. Then create Datastores and Virtual Disks with vCenter and then span them with LVM on the VM (all VMs are Linux). This is the same and only option I was given by Dell VMware tech support. The problem here is that if I fully populate both units, I would lose 24 drives (240TB) for parity and still only one of them can fail per Virtual Disk. Any other RAID level only adds to the losses. I used RDM appr

response time - Linux/Apache performance very slow even on local network

I have an Ubuntu server machine running Apache and MySQL. System and version info is as follows: Linux kernel 3.0.0-12 Apache/2.2.20 MySQL Ver 14.14 Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. The problem is that response time is very slow, on the static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given; this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and above all the problem is not solved by using IP numbers instead of URLs. So there is no DNS lookup issue. I have modified the log format to show the response times and sometimes a file takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but it is not even the only issue here; some other files take a long time to b
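To separate network/DNS time from server-side delay, timing a single request from another machine on the LAN can help (a generic curl invocation; the IP and file name are placeholders):

curl -o /dev/null -s -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" http://192.168.0.10/somefile.js
# a large gap between time_connect and time_starttransfer points at the server,
# not the network or name resolution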

firewall - ftp tls firewalled :(

My FTP(s) isn't working when my firewall is enabled. I have always had my iptables set up for me in the past, I learnt roughly how to set one up yesterday, but I've missed a rule that this requires. Here is my iptables.rules # Generated by iptables-save v1.4.4 on Tue Nov 16 23:23:50 2010 *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -i lo -j ACCEPT -A INPUT -m state -i eth0 --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 20:21 -j ACCEPT -A INPUT -p tcp -m tcp --dport 989:990 -j ACCEPT -A INPUT -p tcp -m tcp -i eth0 --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp -i eth0 --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp -i eth0 --dport 443 -j ACCEPT -A INPUT -p tcp -m tcp -i eth0 --dport 10000 -j ACCEPT -A INPUT -p icmp -i eth0 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-port-unreachable COMMIT # Completed on Tue Nov 16 23:23:50 2010 # Generated by iptables-save v1.4.4 on Tue Nov 16 23:23:50 2010 *mangle :PREROUTING ACCEPT [95811:
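With FTP over TLS the firewall cannot read the encrypted control channel to open data ports on the fly, so a fixed passive range usually has to be opened explicitly. A hedged sketch in the same iptables-save format as above (the 50000-51000 range is an assumption and must match whatever passive port range the FTP daemon is configured to use):

# allow the passive data connections (range must match the ftpd passive-port config)
-A INPUT -p tcp -m tcp -i eth0 --dport 50000:51000 -j ACCEPT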

firewall - Dedicated Server hit with viruses

Since setting up my dedicated server I have been hit with many viruses. One would eat up my bandwidth and another is currently sending out trojans with any outgoing mail from my mail server. Is there a way to set up a server to prevent this from happening? I have ClamAV installed, I have IP addresses blocked in my iptables. But that doesn't seem to be enough. I'm just wondering what other people do when they set up a dedicated server. Thanks! Answer Sounds like you're talking about rootkits, trojans and worms - not viruses (since this appears to be a Linux server, not an MS Windows box). ClamAV is an anti-virus tool; while it does go some way towards detecting other types of malware, its abilities are very limited. Indeed, unless you are running Samba on the server (which would be a really dumb thing to do) or are allowing anyone to upload files (again, dumb) there's no point in using ClamAV. The first thing to do is to get the server wiped clean and reinstalle

network attached storage - Building/Maintaining a Custom FreeNAS- and ZFS-based NAS

I need a NAS for a company of ~30 people. We make games (a lot of large files, large git repositories, not much else.) I'm thinking of buying cheap components and building it myself, using something like FreeNAS. (Assume that it is a viable option, price-wise.) I'll list my needs and concerns below, but my main question is this: is it easy enough to build and administer, or should I just buy a commercial NAS (I have used Synology and it fits all my needs.) My needs: Data integrity is paramount (obviously!) but I don't need 100% uptime. If the CPU dies, I can take 2 hours to replace it. It should not need constant tinkering and maintenance. I want to stick the box in a corner somewhere and just log into it once a month or so when I want to add a repo or user. That being said, I can tolerate a moderate amount of work and complexity to set up the box, or a new service; but after that I want it to "Just Work". I want to start off with a bunch of different sized hard

ubuntu - Tracking down rogue disk usage

I found several other questions regarding the theory behind my problem (e.g. this , this ), but I don't know how to apply the answers to my machine. # du -hsx / 11000283 / # df -kT / Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/csisv13-root ext4 516032952 361387456 128432532 74% / There is a big difference between 11G ( du ) and 345G ( df ). Where are the remaining 334G ? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains: # lsof -a +L1 / COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME zabbix_ag 4902 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4902 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906
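One generic check worth adding (not from the question itself): data sitting underneath a mount point is invisible to du on / but still counted by df; a bind mount exposes it:

mkdir /mnt/rootonly
mount --bind / /mnt/rootonly     # shows the root filesystem without anything mounted on top
du -hsx /mnt/rootonly
umount /mnt/rootonly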

kvm virtualization - Free up unused space on a qcow2 image file on kvm/qemu

We are using KVM/QEMU with qcow2 images for our virtual machines. qcow2 has this nice feature where the image file only allocates the space actually needed by the virtual machine. But how do I shrink the image file back if the virtual machine's allocated space gets smaller? Example: 1.) I create a new image in qcow2 format, size 100GB. 2.) I use this image to install Ubuntu. The installation needs about 10 GB; the image file grows to about 10GB. Nothing unexpected so far. 3.) I fill up the image with about 40 GB of additional data. The image file grows to 50GB. I am OK with that :-) 4.) This is where it gets strange: I delete all of the 40GB of data on the image, but the image still eats up 50GB. Question: how do I free up that 40GB of data and shrink the image to only the needed 10 GB? Thanks in advance, berni Answer The image will not shrink automatically, since when you delete files, you don't actually delete data (this is why undelete works). Qemu has a
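One commonly used approach (a sketch only; the truncated answer may continue differently, and the file names are placeholders): zero out the freed space inside the guest, then re-create the image so qemu-img drops the zeroed clusters:

# inside the guest: turn the deleted space into zeros, then remove the filler file
dd if=/dev/zero of=/zerofile bs=1M; sync; rm /zerofile
# on the host, with the VM shut down: rewrite the image, skipping zeroed clusters
qemu-img convert -O qcow2 old.qcow2 shrunk.qcow2
mv shrunk.qcow2 old.qcow2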

lamp - mysql always using maximum connection

I have a LAMP server with a 4-core CPU and 32 GB RAM. We are running a large website on it. I have the following issue on my server: when I use the mysqlreport tool to monitor the MySQL server I always see the connection usage as below, and users are reporting connection issues on the website. _ Connections _________________________________________________________ Max used 251 of 250 %Max: 100.40 Total 748.71k 3.5/s But when I use the "show process list" command it outputs nothing. We are using the MyISAM engine for all our DBs. My MySQL config file is pasted below: ###################### [mysqld] max_connections = 250 set-variable=local-infile=0 datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql skip-name-resolve skip-bdb wait_timeout = 60 thread_cache_size = 100 table_cache = 1024 key_buffer = 384M log_slow_queries=/mysql-log/mysql-slow.log query-cache-size=512M query-c
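A couple of standard checks that show where the connections are going (generic mysql client commands; SHOW FULL PROCESSLIST needs the PROCESS privilege, or the same account the connections belong to, to list all sessions):

mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
mysql -e "SHOW FULL PROCESSLIST;"     # run as a privileged account to see every session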

Active Directory Domain Naming

Planning a new domain, and I keep seeing that best practice is to name the forest/domain after a subdomain of our publicly registered domain. So if we own and use company.com publicly, we should use something to the effect of ad.company.com for our AD DS domain. The reasons I'm gathering for this: to avoid split-horizon DNS, and to avoid the requirement for "www" to access the publicly hosted website at www.company.com. But the problem I see with this is connecting to resources differently depending on whether users are onsite or offsite. So unless I'm missing something, when on the LAN, to connect to public "webapp-1" the users will use webapp-1.ad.company.com, and when offsite it would be "webapp-1.company.com". Do most environments use hair-pinning on the router so the users don't ever use the internal domain to access resources? Rely on the search domains? Managing split DNS doesn't bother me and the "www" isn't a big concern. Can someone put

linux - Cent OS: How do I turn off or reduce memory overcommitment, and is it safe to do it?

From time to time "my" server stalls because it runs out of both memory and swap space (it keeps responding to ping but nothing more than that, not even SSH). I'm told Linux does memory overcommitment, which as far as I understand is the same as what banks do with money: it grants processes more memory than is actually available, assuming that most processes won't actually use all the memory they ask for, at least not all at the same time. Please assume this is actually the cause of why my system occasionally hangs; let's not discuss here whether or not this is the case (see What can cause ALL services on a server to go down, yet still responding to ping? and how to figure out ). So, how do I disable or drastically reduce memory overcommitment in CentOS? I've read there are two settings called vm.overcommit_memory (values 0, 1, or 2) and vm.overcommit_ratio, but I have no idea where to find and change them (some configuration file hopefully), what values should I
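The two knobs live under /proc/sys/vm and are usually set persistently in /etc/sysctl.conf; a hedged example (the ratio of 80 is only an illustration, not a recommendation for this particular server):

# /etc/sysctl.conf
vm.overcommit_memory = 2     # 2 = never commit more than swap + ratio% of physical RAM
vm.overcommit_ratio = 80     # percentage of physical RAM counted toward the commit limit

# apply without rebooting
sysctl -p
# current values can be read from /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio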