
centos - How bad is it really to install Linux on one big partition?

itemprop="text">

We will be running CentOS 7 on our new server. We have 6 x 300 GB drives in RAID 6 internal to the server. (Storage is largely external in the form of a 40 TB RAID box.) The internal volume comes to about 1.3 TB if formatted as a single volume. Our sysadmin thinks it is a really bad idea to install the OS on one big 1.3 TB partition.




I am a biologist. We constantly install new software to run and test, most of which lands in /usr/local. However, because we have about 12 non-computer-savvy biologists using the system, we also collect a lot of cruft in /home as well. Our last server had a 200 GB partition for /, and after 2.5 years it was 90% full. I don't want that to happen again, but I also don't want to go against expert advice!



How can we best use the 1.3 TB available to make sure that space is available when and where it's needed, without creating a maintenance nightmare for the sysadmin?


class="post-text" itemprop="text">
class="normal">Answer



The
primary (historical) reasons for partitioning are:




  • To separate the operating system from your user and application data.
    Until the release of RHEL 7 there was no supported upgrade path, so a major
    version upgrade required a re-install. Keeping, for instance, /home and other
    (application) data on separate partitions (or LVM volumes) lets you easily
    preserve the user and application data and wipe the OS partition(s).


  • Users can't log in properly and your system starts to fail in interesting
    ways when you completely run out of disk space. Multiple partitions allow you
    to assign hard reserved disk space for the OS and keep it separate from the
    areas where users and/or specific applications are allowed to write
    (e.g. /home, /tmp/, /var/tmp/, /var/spool/, /oradata/, etc.), mitigating the
    operational risk of badly behaved users and/or applications.



  • Quota. Disk quotas allow the administrator to prevent an individual user
    from using up all available space and disrupting service for every other user
    of the system. Disk quotas are assigned per file system, so a single
    partition, and thus a single file system, means only one quota. Multiple
    (LVM) partitions mean multiple file systems, allowing more granular quota
    management. Depending on your usage scenario you may, for instance, want to
    allow each user 10 GB in their home directory, 2 TB in the /data directory on
    the external storage array, and set up a large shared scratch area where
    anyone can dump datasets too large for their home directory and where the
    policy becomes "full is full", but when that happens nothing else breaks
    either. (See the quota sketch after this list.)


  • Providing dedicated IO paths. You may have a combination of SSDs and
    spinning disks and would do well to address them differently. Not so much an
    issue in a general-purpose server, but it is quite common in database setups
    to assign certain spindles (disks) to different purposes to prevent IO
    contention, e.g. separate disks for the transaction logs, separate disks for
    the actual database data, and separate disks for temp space.


  • Boot. You may need a separate /boot partition. Historically this addressed
    BIOS problems with booting beyond the 1024-cylinder limit; nowadays it is
    more often a requirement to support encrypted volumes, certain RAID
    controllers, HBAs that don't support booting from SAN, or file systems not
    immediately supported by the installer, etc.


  • Tuning. You may need different tuning options or even completely different
    file systems.
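
As a minimal sketch of the quota point above, assuming the CentOS 7 default of XFS, a /home file system mounted with the usrquota option, and a hypothetical user alice (names and limits here are illustrative):

    # /etc/fstab entry enabling user quotas on /home (XFS):
    /dev/vg_system/lv_home  /home  xfs  defaults,usrquota  0 0

    # Set a 9 GB soft / 10 GB hard limit for user alice:
    xfs_quota -x -c 'limit -u bsoft=9g bhard=10g alice' /home

    # Review current usage against the limits:
    xfs_quota -x -c 'report -h' /home

Because limits are set per mounted file system, the same user can have a tight limit on /home and a generous (or no) limit on a scratch file system.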




If you use hard partitions, you more or less have to get it right at install time. A single large partition then isn't the worst choice, but it does come with some of the restrictions above.



Typically I recommend partitioning your main volume as a single large Linux LVM physical volume, then creating logical volumes that fit your current needs and leaving the remainder of your disk space unassigned until needed.
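
As a rough sketch, assuming the internal RAID volume appears as /dev/sda with the LVM partition on /dev/sda2 (the device name, volume names, and sizes are illustrative assumptions, not a recommendation):

    # Turn the partition into an LVM physical volume and a volume group:
    pvcreate /dev/sda2
    vgcreate vg_system /dev/sda2

    # Create modest logical volumes for current needs; leave the rest unassigned:
    lvcreate -n lv_root -L 50G  vg_system
    lvcreate -n lv_home -L 200G vg_system

    # Format them (XFS is the CentOS 7 default):
    mkfs.xfs /dev/vg_system/lv_root
    mkfs.xfs /dev/vg_system/lv_home

    # Confirm how much free space remains unallocated in the volume group:
    vgs vg_system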




You can then expand those volumes and their file systems as needed (a trivial operation that can be done on a live system), or create additional ones.
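
For instance, growing the hypothetical lv_home volume from the sketch above while the system stays online (volume name and size are assumptions):

    # Grow the logical volume by 100 GB and resize the file system in one step:
    lvextend -r -L +100G /dev/vg_system/lv_home

    # Equivalently, in two steps (xfs_growfs takes the mount point):
    lvextend -L +100G /dev/vg_system/lv_home
    xfs_growfs /home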



Shrinking LVM volumes is trivial, but shrinking the file systems on them is often not well supported (XFS, the CentOS 7 default, cannot be shrunk at all, and ext4 can only be shrunk offline), so it should probably be avoided.

