a couple of RAID5 questions.

Heeter

Hi all,

I've got a couple of RAID5 questions. I am using a Promise FastTrak SX8300 RAID card. This is for a LAMP/VMware/Terminal/File server running Ubuntu 8.04 LTS Server.

1- If I have 8 x 500GB drives in RAID5, what will my actual storage capacity be?

2- Would it be more advisable to use a 2x500 RAID1 + 6x500 RAID5 setup, or a straight-up 8x500 RAID5?

The mobo is an Intel 5000SATAR. I already have my Intel server chassis and hot-swap cage.


Thanks

Heeter
 
Depends on how many disks you use as parity-only and hot spares. You can count on at most 7x500GB.
 
What would you recommend for a setup?

Thanks LofLA


Heeter
 
Depends on what you're going to use it for. Personally I would use 2 for parity/spare and the rest as a RAID5.
 
RAID5 by definition uses one disk's worth of capacity for (distributed) parity... thus an 8-drive array would have 3.5TB of raw space. If you choose to set another disk as a hot spare, you would be down to 3TB of raw space in a RAID5 w/hot spare.
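The arithmetic above is just (active disks - 1) x disk size. A quick sketch in Python, purely to show the math (the helper name is mine, not from any RAID tool):

```python
def raid5_usable(n_disks, disk_gb, hot_spares=0):
    """Usable RAID5 capacity, in the same unit as disk_gb."""
    # One disk's worth of capacity goes to distributed parity;
    # hot spares sit idle and add nothing to usable space.
    active = n_disks - hot_spares
    return (active - 1) * disk_gb

print(raid5_usable(8, 500))                 # 3500 GB = 3.5 TB
print(raid5_usable(8, 500, hot_spares=1))   # 3000 GB = 3.0 TB
```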

What are you going to be using this for? When you mention VMware, which version: Workstation/Server/VI3/ESX 3i? What will the I/O be like on the file-server part of it? Heavy write, heavy read, or a mix of both? Since you mention VMware, are these fileserver/LAMP pieces going to be running in a VMware guest or natively on the host?

How important is performance vs data protection?

If you want a "catch-all", go with a single RAID5. If you want performance, you can look at splitting I/O: VMware vmdk's on a RAID10, OS load on a RAID1, and everything else on a RAID5 - actually, you don't have enough disks for that, so scratch that.
 
1 disk parity, 1 disk spare is what I was referring to, just in case my above response wasn't clear ;)
 


Thanks Fitz,

I am running all the servers as guest appliances. My setup will be Linux Server on bare metal, then VMware Server on top of Linux. I cannot seem to install ESXi onto bare metal; been trying for a while.

The file server will experience heavy traffic in both directions. The present servers are starting to fail under load, leaving users hanging.

Heeter
 
If the servers are failing under load now, do you know why they are failing? Also, are the current servers virtualized already?
 
if they are failing on naked hardware they'll do worse on virtual machines :)
 
They are PII 455MHz / PIII 733MHz machines, put together when there were only 21 workstations in the office. Now there are 73 workstations and counting...


Heeter
 
Depends on if it is an upgrade or not :p If the old server was a P4 2.4 GHz and the new one is a dual quad-core Xeon, and VMware allows the guest to use 2 cores, it could be an improvement.

That being said, be aware of the write hole with RAID5: if anything does go wrong mid-write, your RAID controller may not be smart enough to catch it before it corrupts data.
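For anyone unfamiliar with the write hole: RAID5 parity is just the XOR of the data blocks in a stripe, and if a write is interrupted after the data hits disk but before the parity update lands, a later rebuild silently reconstructs garbage. A toy illustration in Python (single-byte "blocks", purely to show the mechanism, not any controller's actual layout):

```python
from functools import reduce

def parity(blocks):
    # RAID5 parity is the bytewise XOR of all data blocks in a stripe
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(survivors, par):
    # A lost block is recovered by XORing the surviving blocks with parity
    return parity(survivors + [par])

stripe = [b"\x01", b"\x02", b"\x04"]
par = parity(stripe)             # consistent parity: 0x07

stripe[0] = b"\xff"              # interrupted write: the data made it to
                                 # disk, the matching parity update did not

# The disk holding b"\x02" now dies; rebuild uses the stale parity
recovered = rebuild([stripe[0], stripe[2]], par)
print(recovered == b"\x02")      # False: the "recovered" block is garbage
```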
 
Are you going to be booting the RAID5 or will it be used purely for data with a separate OS disk? If you're booting the RAID5 are you using an EFI mobo or a BIOS one?
 
Bah..

If space is an issue, go with a single RAID5 array. Otherwise, I would say run your base OS load on a RAID1 mirror and place your vmdk's on a RAID10 array with the remaining disks.
 
I'd agree with fitz: if you can run your OS on a smaller RAID1 and then place your data on a separate array, you'll probably end up with fewer headaches.

If you're going for a single array from which you are going to be booting, and the array capacity is greater than ~2TB, then you have to "carve" out a smaller boot volume onto which you'll install the OS. This essentially makes the array appear as two drives to the OS. This is only really a concern if you are using a BIOS/MBR boot environment as opposed to an EFI/GPT one.
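The boot-volume carving comes from the MBR addressing limit: MBR partition entries store sector addresses in 32 bits, and with classic 512-byte sectors that caps an addressable MBR disk at 2 TiB. A one-liner shows the math (GPT uses 64-bit LBAs, so it has no comparable limit in practice):

```python
SECTOR_BYTES = 512    # classic logical sector size
MBR_LBA_BITS = 32     # MBR partition entries hold 32-bit sector addresses

limit_bytes = (2 ** MBR_LBA_BITS) * SECTOR_BYTES
print(limit_bytes / 2 ** 40)   # 2.0 (TiB) addressable at most
```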
 
RAID 10

3x500 RAID0 times two (mirrored), with two hot spares: faster for larger data writes and reads, just as redundant, plus faster rebuilds.
 
Thanks Guys,

I am going to put the ESXi and the vmdk's on the 2 x 500 mirror, then drop all the files onto the 4 x 500 RAID10.

Why do you need that much storage? I've never come close to hitting the cap on 120.

Why? Because that is all I have lying around here for SATA II drives: nothing but 500s, dozens and dozens of them. And the two HP ProLiant servers that I ordered have those as well, so I don't have to worry about mismatched hot-swaps in the future.

Thanks again,

Heeter
 
