
amazon ec2 - How good is this MongoDB/EC2 setup?

Hi,

I wondered what a good AWS/MongoDB setup looks like in terms of machines and the sizes of their disks.



Current setup





  • 3 micro machines for the config servers, one mongos, and the arbiters. The 8 GB limit is almost reached (and I run the arbiters with --nojournal).

  • Per shard: a replica set of 2 m1.large machines, each with 8 GB for the system + 20 GB for data.

  • Everything is on EBS.



Questions




  1. Is 20 GB too big or too small? Should I go with 100 GB, for example?

  2. Am I supposed to inform MongoDB about the 20 GB (or other) disk limit?

  3. Do you see anything wrong that I don't see? I'm new to MongoDB and AWS, but I'm a reasonably experienced SWE.



Plan of use



My database should handle about 100 qps (mostly writes) and should grow to 1 TB over the next 3 years. The plan is to add as many shards as needed, more or less manually (with scripts), when we see that the database needs more memory.



We will also run a few map-reduce jobs over this data, and we have scripts that aggregate the data from the past 15 minutes, every 15 minutes.
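For the every-15-minutes aggregates, a cron entry driving a small script is usually enough. Here is a minimal sketch; the database, collection, and timestamp field names (mydb, events, ts) are made up, and the mongo command is printed rather than executed:

```shell
#!/bin/sh
# Sketch of a cron-driven 15-minute aggregate. mydb, events, and ts are
# hypothetical names; adjust them to your schema.

# Cutoff 15 minutes ago, as epoch milliseconds (GNU date).
CUTOFF_MS=$(( $(date -u -d '15 minutes ago' +%s) * 1000 ))

# The aggregation we would hand to the mongo shell; printed, not run here.
QUERY="db.events.aggregate([{\$match: {ts: {\$gte: new Date($CUTOFF_MS)}}}, {\$group: {_id: null, n: {\$sum: 1}}}])"
echo "mongo mydb --eval \"$QUERY\""
```

A crontab line such as `*/15 * * * * /path/to/aggregate.sh` would then run it on the 15-minute schedule described above.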



We are a very small company; spending up to a few hundred dollars per month on our servers would be fine, but we can't go crazy on cash.




We hope that we won't have to deal with too many machine failures by hand; manually taking care of things once a month would be fine.



Thanks for telling me what you think about this.



Thomas



Answer




First, your specific questions:






Is 20 GB too big or too small? Should I go with 100 GB, for example?




This depends entirely on your data requirements and how many documents you intend to insert. If you intend to have 5 GB of documents then you should be fine, even with the overheads for replication (the oplog defaults to 5% of free disk space) and storage (there is always an empty file pre-allocated for each database). If you plan to have 10-12 GB of data (and remember you have to store indexes, the journal, and logs as well), then I would go for a larger disk.



Since you say you plan to grow to 1 TB over three years, you will probably exceed 20 GB within the first month or two and need to increase the disk anyway, so it will probably be easier to go for 100 GB immediately. Even at 100 GB, assuming constant growth (1 TB over three years is roughly 28 GB per month), that only buys you about three months of room before you have to grow the disk again.
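The back-of-envelope math, taking the three-year horizon from the question, can be sketched as:

```shell
# Disk growth estimate: 1 TB over 3 years, assuming constant growth.
# All figures are rough integer arithmetic for illustration.
TOTAL_GB=1024
MONTHS=36
DISK_GB=100

PER_MONTH_GB=$(( TOTAL_GB / MONTHS ))    # new data per month, in GB
HEADROOM=$(( DISK_GB / PER_MONTH_GB ))   # months before a 100 GB disk fills
echo "${PER_MONTH_GB} GB/month; ${DISK_GB} GB lasts ~${HEADROOM} months"
```

This ignores index, journal, and oplog overhead, so the real headroom is smaller still.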




Am I supposed to inform MongoDB about the 20 GB (or other) disk limit?





No. There have been improvements in how it handles the situation, but MongoDB will currently just use all available space until there is none left; you need to monitor your disk space independently.
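That independent monitoring can be as simple as a cron job wrapping df; a minimal sketch, where the mount point and threshold are example values you would point at the EBS data volume:

```shell
#!/bin/sh
# Minimal independent disk-space check, suitable for cron.
# MOUNT and THRESHOLD are example values; use your EBS data volume mount.
MOUNT=/
THRESHOLD=80

# Portable df output; strip the "%" from the use-percentage column.
USAGE=$(df -P "$MOUNT" | awk 'NR == 2 { gsub("%", "", $5); print $5 }')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "WARNING: $MOUNT is ${USAGE}% full"
fi
```

In practice you would wire the warning into whatever alerting you already have (email, CloudWatch, etc.) rather than just echoing it.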




Do you see anything wrong that I don't see? I'm new to MongoDB and AWS, but I'm a reasonably experienced SWE.




Never use micro instances for anything in production; in particular, do not use them for config servers. Your config servers are critical to the operation of a sharded cluster. But no need to take my word for it; see page 6 of the updated Amazon whitepaper (http://info.10gen.com/rs/10gen/images/AWS_NoSQL_MongoDB.pdf):






T1.micro instances are not recommended for production MongoDB deployments, including arbiters, config servers, and mongos shard managers.




Generally, I would recommend reading through the whitepaper and following the guidelines therein; you'll find recommendations for Linux settings (readahead, huge pages, etc.), storage, PIOPS, and more. Also worth checking out are the Production Notes (http://docs.mongodb.org/manual/administration/production-notes/#production-notes); there is some duplication, but they are updated more often than a whitepaper.
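As one concrete example of those Linux settings, readahead on the data volume is tuned with blockdev. The device name below is hypothetical (check yours with lsblk), the value is just an example of the low readahead those guides favor, and the command is printed rather than run since it needs root:

```shell
#!/bin/sh
# Sketch of readahead tuning on an EBS data volume.
DEVICE=/dev/xvdf   # hypothetical device name; verify with lsblk
READAHEAD=32       # example low readahead value, in 512-byte sectors

# Print the command instead of executing it (running it requires root).
CMD="blockdev --setra $READAHEAD $DEVICE"
echo "$CMD"
```

You can inspect the current value with `blockdev --getra /dev/xvdf` before changing anything.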



Finally, get some idea of the working set size (http://docs.mongodb.org/manual/faq/storage/#what-is-the-working-set) for your database (per shard); that will dictate how much RAM you need, which is really the key to selecting an instance size for MongoDB on EC2. You may have enough with 8 GB, but if not you will see significant performance hits from going to disk.
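The sizing decision itself is just a comparison of that working-set estimate against instance RAM; a toy sketch with hypothetical numbers (and note that an m1.large actually ships with about 7.5 GB of RAM, not 8):

```shell
#!/bin/sh
# Toy check: does an estimated working set fit in instance RAM?
# Both numbers are hypothetical placeholders for your own measurements.
WORKING_SET_MB=6000    # your estimated working set
INSTANCE_RAM_MB=7500   # approximate m1.large RAM

if [ "$WORKING_SET_MB" -lt "$INSTANCE_RAM_MB" ]; then
  FIT=yes
else
  FIT=no
fi
echo "working set fits in RAM: $FIT"
```

If the answer is "no", that is the signal to move to a larger instance or add a shard, per the scaling plan above.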

