
linux - Max file descriptor limit not sticking



Our cloud-based systems run Java and need a max file descriptor limit greater than 1024. On one of our virtual systems, every time we try to make this change, I can get it to take effect and it persists across the first reboot (I have not tested multiple reboots), but if we stop and start our java app, the limit gets reset to 1024.



System info:
Linux mx.xen16.node01002 3.1.0-1.2-xen #1 SMP Thu Nov 3 14:45:45 UTC 2011 (187dde0) x86_64 GNU/Linux



Here are the steps I took:



edited /etc/sysctl.conf and appended fs.file-max = 4096
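For reference, this is how the line sits at the end of /etc/sysctl.conf. Note that fs.file-max is the kernel's system-wide ceiling on open file handles; the per-process "Max open files" value shown below in /proc/&lt;pid&gt;/limits comes from a separate setting (RLIMIT_NOFILE), which is why both are checked in the steps that follow.

# appended to the end of /etc/sysctl.conf
fs.file-max = 4096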




Before applying, checked the limit for the process (PID 1530):



root 1530 6.7 31.0 1351472 165244 pts/0 Sl 17:12 0:58 java



cat /proc/1530/limits



Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            1024                 1024                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16382                16382                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us


Now I apply the change with:



sudo sysctl -p




fs.file-max = 4096 (OK, it should be set now.)
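To confirm the kernel actually picked up the new value, it can also be read back directly; both of the commands below report the same setting:

sysctl fs.file-max
cat /proc/sys/fs/file-max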



I now do a reboot



When the system comes back up, the java app starts automatically, and I check the limits again using the new PID:



Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             3818                 3818                 processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       3818                 3818                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us


The limit is set correctly; however, if I stop and start the java app, the limit defaults back to 1024. The java app itself is not the problem: this is one of SEVERAL identical cloud-based systems around the world, each a copy of this one. This VM resides at Gigatux, and we have several other identical systems at Gigatux running the same OS, the same version, and the same app and app version. Only this one is behaving strangely. Please help.
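For reference, a quick way to pull just the open-files row for the running app after a stop/start is a one-liner like the following (a sketch; it assumes a single java process, so pgrep returns exactly one PID):

grep "Max open files" /proc/$(pgrep java | head -n 1)/limits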



* UPDATE *



I removed the statement from the end of sysctl.conf per David's recommendation. If I issue ulimit -n, the limit is indeed set to 4096. Looking in /etc/security/limits.conf, you can see the limits configured there as well:




*               hard    nofile          4096
*               soft    nofile          4096


Yet if I restart the java process, it still defaults back to 1024.



agent@mx:~$ cat /proc/2432/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            1024                 1024                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16382                16382                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
agent@mx:~$ echo -n $SHELL ' ' && ulimit -n
/bin/bash 4096
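The contrast above is the interesting part: the interactive shell, which went through the login path where pam_limits applies /etc/security/limits.conf, reports 4096, while the java process still shows 1024. A side-by-side check, as a sketch: it assumes the app runs as root (as the earlier ps output suggests) and a single java process, and whether su itself applies pam_limits depends on the PAM configuration for su.

# limit a fresh root login would receive (applied via PAM at session setup)
su - root -c 'ulimit -n'

# limit the already-running java process actually has
grep "Max open files" /proc/$(pgrep java | head -n 1)/limits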


* UPDATE *




Ok, I think this may be fixed. I changed the following:



*               hard    nofile          4096
*               soft    nofile          4096


back to the following:



root            hard    nofile          4096
root            soft    nofile          4096


and the issue appears to be resolved.


Answer



Ok, I think this may be fixed. I changed the following:



*               hard    nofile          4096
*               soft    nofile          4096



back to the following:



root            hard    nofile          4096
root            soft    nofile          4096


and the issue appears to be resolved. (The app runs as root, and limits.conf(5) notes that wildcard entries are not applied to the root account, which would explain why the explicit root entries are needed.)

