
apache 2.2 - How Would I Restrict a Linux Binary to a Limited Amount of RAM?

I would like to be able to limit an installed binary to using only up to a certain amount of RAM. Ideally it wouldn't be killed for exceeding that amount; the limit would simply be the most RAM it could use. Failing that, I'd settle for the process dying once it reaches a certain amount of RAM, preferably before the server starts to swap heavily.



The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. Somewhere in their code they make a PHP exec call that launches ImageMagick's 'convert' to create a resized image file.



I'm not privy to many details of the project or the code, but I need to find a solution that keeps it from killing the server until they can find a way to optimize the code.



I had thought that I could do this with /etc/security/limits.conf by setting a limit on the Apache user, but it seems to have no effect. This is what I used:



www-data hard as 500




If I understand it correctly, this should have limited any process owned by the Apache user to a maximum of 500 KB (the 'as' value in limits.conf is in kilobytes). However, when I ran a test script that chewed up a lot of RAM, it actually got up to 1.5 GB before I killed it. Here is the output of 'ps auxf' after the setting change and a system reboot:




USER       PID  %CPU %MEM VSZ     RSS     TTY STAT START TIME COMMAND
root       5268 0.0  0.0  401072  10264   ?   Ss   15:28 0:00 /usr/sbin/apache2 -k start
www-data   5274 0.0  0.0  402468  9484    ?   S    15:28 0:00  \_ /usr/sbin/apache2 -k start
www-data   5285 102  9.4  1633500 1503452 ?   Rl   15:29 0:58  |   \_ /usr/bin/convert ../tours/28786/.….
www-data   5275 0.0  0.0  401072  5812    ?   S    15:28 0:00  \_ /usr/sbin/apache2 -k start
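As a sanity check on whether the limits.conf entry is being picked up at all, the effective limit can be queried from a shell with ulimit (a sketch; the 512000 figure below is just an illustrative value, in KB):

```shell
# Query the address-space limit the current shell actually inherited.
# limits.conf values for 'as' are in KB; 'unlimited' here would suggest the
# www-data entry was never applied to this process tree.
ulimit -v
# A child process inherits whatever its parent set, which is why a one-off
# cap can also be set without touching limits.conf at all:
( ulimit -v 512000; ulimit -v )   # prints 512000 in the subshell
```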



Next I thought I could do it with Apache's RLimitMEM directive, but got the same result: the process wasn't limited. Here is what I have in my apache.conf file:



RLimitMEM 500000 512000
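One thing I've also been looking at is ImageMagick's own resource limits, which can be set per invocation or through the environment; past the cap, its pixel cache spills to disk instead of growing without bound in RAM (a sketch, values purely illustrative):

```shell
# ImageMagick honors its own caps via environment variables; with these set,
# 'convert' spills its pixel cache to disk past 256 MiB instead of eating RAM.
export MAGICK_MEMORY_LIMIT=256MiB
export MAGICK_MAP_LIMIT=512MiB
# The same can be passed per invocation (file names illustrative):
#   convert -limit memory 256MiB -limit map 512MiB in.jpg -resize 800x600 out.jpg
```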



It wasn't until many hours later that I figured out that even if the limit had taken effect, a process that actually reached that amount would simply die with an OOM error.



I would love any ideas on how to set this limit so that everything else on the server can keep functioning and all the processes can play together nicely.
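For what it's worth, the stopgap I'm considering is a wrapper around 'convert' that sets a per-process ulimit before exec'ing the real binary (sketch only; the /tmp path and 512 MB figure are placeholders):

```shell
# Sketch of a capped wrapper; to use it for real it would be installed ahead
# of the real 'convert' in PATH. 'ulimit -v' takes KB, so 524288 KB = 512 MB
# of virtual address space per invocation.
cat > /tmp/convert-capped <<'EOF'
#!/bin/sh
ulimit -v 524288
exec /usr/bin/convert "$@"
EOF
chmod +x /tmp/convert-capped
```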
