[Novalug] Server Recommendation!

Brian Steisslinger brian.steisslinger@gmail.com
Thu Nov 30 09:36:17 EST 2006


Yes, it is entirely dependent on your application, but if you're doing
video/large image streaming you may have high throughput but low I/O numbers,
whereas an OLTP database may have high I/O and low throughput. More often
than not, though, IOPS get ignored by purchasers, who size by capacity
instead, and that causes crappy performance.
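
To put rough numbers on that, here's a quick back-of-envelope sketch in
Python (the workload figures are made up for illustration, not measurements
from any particular array):

    # Throughput is just IOPS times I/O size, so the same array can look
    # fast or slow depending on which number you watch.

    def throughput_mb_s(iops, io_size_kb):
        """MB/s delivered at a given I/O rate and average I/O size."""
        return iops * io_size_kb / 1024.0

    print(throughput_mb_s(200, 1024))  # streaming: 200 x 1MB I/Os -> 200.0 MB/s
    print(throughput_mb_s(2000, 8))    # OLTP: 2000 x 8K I/Os -> ~15.6 MB/s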

I've seen HP EVAs with 20 servers on them push only 30 or 40 MB/s total
across Fibre Channel during normal operations, yet the disks are incredibly
busy because it's a lot of small-I/O database work, so you have thousands of
small I/Os, and then you have to worry about service times as well. The
only time I really worry about throughput is when I am dealing with backups,
especially when you consider that most enterprise tape drives can handle in
excess of 60 to 70 MB/s, and a disk subsystem that can't supply enough data
to keep the tape drive streaming can be detrimental to tape drive
performance.
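
To see why, work the tape numbers backwards (assuming, hypothetically, 64K
backup reads and the ~100 IOPS-per-spindle rule of thumb mentioned below):

    # How many disk I/Os does it take to keep a 70 MB/s tape drive streaming?
    tape_mb_s = 70.0
    read_kb = 64.0                              # assumed backup read size
    iops_needed = tape_mb_s * 1024.0 / read_kb  # = 1120 reads/sec
    disks_needed = iops_needed / 100.0          # ~100 IOPS per spindle
    print(iops_needed, disks_needed)            # 1120.0 reads/sec, ~11.2 disks

Shrink the read size and the spindle count climbs fast, which is exactly how
an array sized purely on capacity ends up starving a tape drive.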

As for random vs. sequential writes, it really depends on the application.
Take Exchange, for example: I see fairly sequential writes on the transaction
log LUNs but random writes on the database LUNs. That's because the database
files don't keep everything tidy; pages are freed from the database as items
are deleted, and instead of growing the data file, new data is stuffed into
old page locations. Most RDBMSs are like this, which is why Oracle says to
separate your logs from the data files and stripe and mirror everything
(SAME). As for caching, that is a whole other story, because you also need
to account for the fact that most open-systems platforms do a lot of odd
things at the file system level anyway, so the I/O you see at the physical
level may be entirely different from what the application produces.
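
Here's a toy sketch of that page-reuse behavior (my own illustration, not
Exchange's or Oracle's actual allocator):

    class DataFile:
        """Toy page allocator: reuse freed pages before growing the file."""
        def __init__(self):
            self.free_pages = []  # page numbers freed by deletes
            self.size = 0         # current end of file, in pages

        def allocate(self):
            if self.free_pages:
                return self.free_pages.pop()  # old offset -> random write
            self.size += 1
            return self.size - 1              # append -> sequential write

    f = DataFile()
    [f.allocate() for _ in range(10)]        # fill pages 0-9 sequentially
    f.free_pages.extend([2, 7, 4])           # deletes free scattered pages
    print([f.allocate() for _ in range(5)])  # -> [4, 7, 2, 10, 11]

The transaction log, by contrast, is pure append, which is why it looks
sequential on its own LUNs and why separating the two pays off.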

This is a really good sample chapter from a book that discusses all of this
much better than I ever could; a lot of the concepts carry down to even
low-end RAID arrays.

http://www.phptr.com/articles/article.asp?p=481867&rl=1

On 11/30/06, Michael Stone <mstone@mathom.us> wrote:
>
> On Thu, Nov 30, 2006 at 07:19:04AM -0500, Brian Steisslinger wrote:
> >Actually, don't trust potential sustained throughput as a metric of
> >storage performance. Most open-systems applications don't saturate an
> >Ultra320 bus, much less a SAS 3Gb/s link. Mostly you need to worry
> >about I/Os per second.
>
> It really depends on your application, doesn't it? I've got some that
> need all the bandwidth they can get (and others that need IOPS, and some
> that need both). Regardless of which you need, a crummy RAID controller
> won't deliver. :) What using sustained throughput as a metric gets you
> is a relatively quick and easy way to tell whether you've got really
> horrible issues. (If you're getting 150MB/s you might not know whether
> that's good for your application. If you're getting 30MB/s, something's
> wrong.)
>
> >A good rule of thumb is that each drive can do about 100 I/Os per
> >second, so in theory a 6-disk stripe on a Dell 2850 has a potential of
> >600 I/Os per second. Now if I drive 600 36K writes per second, my total
> >throughput is only about 21MB/s, a small fraction of a single U320 bus.
> >In reality the drives will more than likely be on separate buses, so you
> >can see the bandwidth move from U320 to SAS really doesn't help; of
> >course that depends on your workload. From my experience, though, you
> >don't see huge block writes unless you are working with large files.
>
> Or you're dealing with a decent caching controller, or you have enough
> memory to buffer writes. It's fairly unusual to be doing random
> synchronous writes. Reads are a different story, but writes should tend
> to be fairly sequential unless you're dealing with really low volume (in
> which case it doesn't matter).
>
> Mike Stone
>