
What makes cloud storage (Amazon AWS, Microsoft Azure, Google Apps) different from traditional data center storage networking (SAN and NAS)?


There was some confusion about my question, so to make it simple:

"What kind of storage do big cloud providers use, and why?"




As far as I understand, all cloud providers use DAS, unlike typical data centers; however, I have not been able to find any official documentation of the storage networking differences between typical data centers and clouds.



Even though DAS has more disadvantages than SAN or NAS, I want to understand in detail why clouds use DAS, whether for storage or application purposes.



Any resource or explanation that clarifies this would be appreciated.



EDIT: While reading the paper "Networking Challenges and Resultant Approaches for Large Scale Cloud Construction" by David Bernstein and Erik Ludvigson (Cisco), I found the following:






Curiously we do not see Clouds from the major providers using NAS or SAN. The
typical Cloud architecture uses DAS, which is not typical of Datacenter storages
approaches.




But here there is a conflict: in my opinion, and as also stated later in the paper, clouds should use SAN or NAS, because DAS is not appropriate when a VM moves to another server yet still needs to access storage on the original server.



What other reasons lead clouds to prefer DAS, NAS, or SAN? What kind of storage do big cloud providers use, and why?



Answer




This answer has been edited after the question was clarified.






What other reasons lead clouds to prefer DAS




Where "DAS"
means Direct Attached Storage, i.e. SATA or SAS harddisk
drives.



Cloud vendors all use DAS because it offers order-of-magnitude improvements in price/performance. It is a case of scaling horizontally (http://en.wikipedia.org/wiki/Horizontal_scaling#Scale_horizontally_.28scale_out.29).



In short, SATA hard disk drives and SATA controllers are cheap commodities. They are mass-market products, and are priced very low. By building a large cluster of cheap PCs with cheap SATA drives, Google, Amazon and others obtain vast capacity at a very low price point. They then add their own software layer on top. Their software handles multi-server replication for performance and reliability, monitoring, re-balancing of replicas after hardware failure, and other things.
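To make that software layer concrete, here is a minimal, hypothetical sketch of the kind of logic it implements: write each blob to several commodity nodes, and re-replicate when a node dies. All names and the replication factor are assumptions for illustration; the real systems are of course far more elaborate.

```
import random

REPLICATION_FACTOR = 3  # assumed for illustration; real systems vary per file

class CommodityStore:
    """Toy model of a DAS-based storage layer over cheap nodes."""

    def __init__(self, node_ids):
        self.nodes = {n: {} for n in node_ids}  # node -> {key: blob}
        self.replicas = {}                      # key -> set of holding nodes

    def put(self, key, blob):
        # Write the blob to REPLICATION_FACTOR distinct nodes.
        targets = random.sample(list(self.nodes), REPLICATION_FACTOR)
        for n in targets:
            self.nodes[n][key] = blob
        self.replicas[key] = set(targets)

    def get(self, key):
        # Any surviving replica can serve the read.
        node = next(iter(self.replicas[key]))
        return self.nodes[node][key]

    def fail_node(self, dead):
        # Simulate a hardware failure, then re-balance replication.
        lost = self.nodes.pop(dead)
        for key in lost:
            self.replicas[key].discard(dead)
            source = next(iter(self.replicas[key]))
            candidates = [n for n in self.nodes if n not in self.replicas[key]]
            new_home = random.choice(candidates)
            self.nodes[new_home][key] = self.nodes[source][key]
            self.replicas[key].add(new_home)

store = CommodityStore([f"node{i}" for i in range(10)])
store.put("photo.jpg", b"...bytes...")
store.fail_node(next(iter(store.replicas["photo.jpg"])))
print(store.get("photo.jpg"))  # still readable after the failure
```

The point of the design is that no single disk or node matters: cheap hardware is expected to fail, and the software routes around it.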




You could take a look at MogileFS as a simpler representative of the kind of software that Google, Amazon and others use for storage. It's a different implementation of course, but it shares many of the same design goals and solutions as the large-scale systems. If you want to, here is a jumping-off point for learning more about GoogleFS: http://en.wikipedia.org/wiki/Google_File_System





stated later in the paper, clouds should use SAN or NAS, because DAS is not appropriate when a VM moves to another server




There are two reasons why SANs are not used.



1) Price.

SANs are hugely expensive at large scale. While they may be the technically "best" solution, they are typically not used at very large scale installations due to the cost.



2) The CAP Theorem

Eric Brewer's CAP theorem (http://www.julianbrowne.com/article/viewer/brewers-cap-theorem) shows that at very large scale you cannot maintain strong consistency while keeping acceptable reliability, fault tolerance, and performance. SANs are an attempt at making strong consistency in hardware. That may work nicely for a 5,000-server installation, but it has never been proved to work for Google's 250,000+ servers.
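One way to see the trade-off concretely is the classic quorum condition for a replicated datastore: with N replicas, a write quorum W and a read quorum R, reads are strongly consistent only when R + W > N, and relaxing R or W to stay available during failures is exactly what produces eventual consistency. A minimal sketch, with all names and parameters assumed for illustration:

```
# Quorum-replicated register: strong consistency requires R + W > N.
N, W, R = 3, 2, 2   # assumed parameters; R + W > N, so reads see the latest write

class Replica:
    def __init__(self):
        self.version, self.value = 0, None

replicas = [Replica() for _ in range(N)]

def write(value, version, available):
    acks = 0
    for r in available:
        r.version, r.value = version, value
        acks += 1
    return acks >= W   # the write "succeeds" only with a quorum

def read(available):
    if len(available) < R:
        raise RuntimeError("not enough replicas reachable: unavailable")
    votes = sorted(available, key=lambda r: r.version, reverse=True)[:R]
    return votes[0].value  # highest version among R replicas

write("v1", 1, replicas)    # quorum write succeeds
print(read(replicas[:2]))   # overlaps the write quorum -> "v1"
# During a partition that leaves only one replica reachable, a read
# (or write) must either fail -- sacrificing availability -- or
# proceed with a smaller quorum -- sacrificing consistency.
```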



Result: So far the cloud computing vendors have chosen to push the complexity of maintaining server state to the application developer. Current cloud offerings do not provide consistent state for each virtual machine. Application servers (virtual machines) may crash, and their local data may be lost, at any time.



Each vendor then has their own implementation of persistent storage, which you're supposed to use for important data. Amazon's offerings are nice examples: MySQL on RDS (http://aws.amazon.com/rds/), SimpleDB (http://aws.amazon.com/simpledb/), and Simple Storage Service (S3). These offerings themselves reflect the CAP theorem -- the MySQL instance has strong consistency, but limited scalability. SimpleDB and S3 scale fantastically, but are only eventually consistent.
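To illustrate what "eventually consistent" means for application code, here is a minimal sketch using the boto3 client (an assumption on my part; the bucket name is hypothetical). Under S3's original consistency model, a read racing an overwrite could legitimately return the old object, so code had to tolerate stale reads:

```
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket name

# Overwrite an existing key.
s3.put_object(Bucket=BUCKET, Key="config.json", Body=b'{"version": 2}')

# Under the eventually consistent model, this read could still return
# {"version": 1} for a while after the overwrite succeeded.
# Applications therefore versioned their keys, or simply tolerated
# staleness, rather than assuming read-after-write semantics.
resp = s3.get_object(Bucket=BUCKET, Key="config.json")
print(resp["Body"].read())
```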


