
linux - Max file descriptor limit not sticking



Our cloud-based systems run Java and require a max file descriptor limit greater than 1024. On one of our virtual systems, every time we make this change it sticks and persists across the first reboot (I have not tested multiple reboots), but if we stop and start our Java app, the limit seems to get reset to 1024.



System info:
Linux mx.xen16.node01002 3.1.0-1.2-xen #1 SMP Thu Nov 3 14:45:45 UTC 2011 (187dde0) x86_64 GNU/Linux



Here are the steps I took:



edited /etc/sysctl.conf and appended fs.file-max = 4096
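A note worth adding here (not from the original post): fs.file-max is the kernel-wide cap on open file handles, which is separate from the per-process nofile limit that ulimit -n and the "Max open files" row in /proc/<pid>/limits report. A quick sketch to see both values:

```shell
# fs.file-max is the system-wide ceiling on open file handles;
# ulimit -n is the per-process soft limit, which is what the
# "Max open files" row in /proc/<pid>/limits reflects.
cat /proc/sys/fs/file-max   # kernel-wide ceiling
ulimit -n                   # per-process soft limit for this shell
```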




Before applying, checked the limit for the process (PID 1530):



root 1530 6.7 31.0 1351472 165244 pts/0 Sl 17:12 0:58 java



cat /proc/1530/limits



Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            1024                 1024                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16382                16382                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
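Rather than eyeballing the whole table, the relevant row can be pulled out directly (a sketch, using the shell's own /proc/self entry in place of a specific PID):

```shell
# Print the soft and hard "Max open files" values for a PID
# (here /proc/self, i.e. the shell running this snippet)
awk '/^Max open files/ {print $4, $5}' /proc/self/limits
```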


Now I apply the change with:



sudo sysctl -p




fs.file-max = 4096 (so the setting should now be in effect)



I then reboot the system.



When the system comes back up, the Java app starts automatically, and using its new PID I check the limits again.



Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             3818                 3818                 processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       3818                 3818                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us


The limit is set correctly. However, if I stop and start the Java app, the limit defaults back to 1024. There is no issue with the Java app itself: this is one of SEVERAL identical cloud-based systems around the world, each a copy of this one. This VM resides at Gigatux, and we have several other identical systems there running the same OS, same version, and the same app at the same version. Only this one behaves strangely. Please help.
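One way to narrow down where the limit is being lost (my suggestion, not from the post): compare the shell's own limit with what a freshly spawned child inherits. A child started from an interactive shell should report the same soft limit, whereas a daemon started by init at boot never inherits the shell's limits and can differ:

```shell
# The child process inherits this shell's limits, so the two
# numbers printed should match; a process started by init at
# boot does not inherit them and may show a different value.
ulimit -n
sh -c 'awk "/^Max open files/ {print \$4}" /proc/self/limits'
```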



* UPDATE *



I removed the statement from the end of sysctl.conf per David's recommendation. If I issue ulimit -n, the limit is indeed reported as 4096. And if I look in /etc/security/limits.conf, you can see the limits configured there as well.




*               hard    nofile          4096
*               soft    nofile          4096


Yet if I restart the Java process, it still defaults back to 1024.



agent@mx:~$ cat /proc/2432/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            1024                 1024                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16382                16382                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
agent@mx:~$ echo -n $SHELL ' ' && ulimit -n
/bin/bash 4096
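This behavior is consistent with how /etc/security/limits.conf works: it is applied by the pam_limits module at login time, so a daemon launched from an init script never passes through PAM and keeps the boot-time default, even though login shells (like the one above) see the configured limit. A common workaround (a sketch; the jar path is hypothetical) is to raise the limit inside the init script itself, before launching the daemon:

```shell
# limits.conf is enforced by pam_limits at login; init scripts
# bypass PAM, so set the soft limit explicitly before launching
# the daemon (requires the hard limit to be at least this high).
ulimit -n 4096
# exec java -jar /opt/app/app.jar   # hypothetical launch line
```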


* UPDATE *




Ok, I think this may be fixed. I changed the following:



*               hard    nofile          4096
*               soft    nofile          4096


back to the following:



root            hard    nofile          4096
root            soft    nofile          4096


and the issue appears to be resolved.


Answer



Ok, I think this may be fixed. I changed the following:



*               hard    nofile          4096
*               soft    nofile          4096



back to the following:



root            hard    nofile          4096
root            soft    nofile          4096


and the issue appears to be resolved.
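A likely explanation for why this change helped (my reading of the limits.conf(5) man page, not stated in the original answer): wildcard entries are not applied to the root account, and the ps output above shows the java process running as root, so root's limits must be set by name:

```
# /etc/security/limits.conf -- the '*' wildcard does not apply to
# the root account, so a root-owned daemon needs explicit entries:
root    hard    nofile    4096
root    soft    nofile    4096
```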

