Help choosing replacement Hyper-V host machines and connected storage
-
@scottalanmiller said:
@JohnFromSTL said:
The developers are afraid of running SQL and Oracle in a virtualized environment because our current VMs are very inefficient.
And developers make infrastructure designs or have input because...... why?
That's like asking a hotel customer what building material to use in the foundation of the building. It's none of his business.
Because the owner trusts them more. Developers/programmers spend < 5 minutes looking at results from a Google search and become instant experts, while I am the sole IT person who squeezes blood from turnips to keep things operational, and I am perceived as only wanting to spend money on "new" equipment and as someone who will only break things in the process.
-
Why does he have IT if he wants non-IT people doing IT instead? That's just weird. I wouldn't hire a mechanic and then have the receptionist fix the car while the mechanic explains that she doesn't know anything about cars!
-
If the developers are the experts, why are you employed?
Ask your boss the same thing (if you're gutsy enough), and follow up by asking that the developers stick to their profession.
-
@scottalanmiller said:
Why does he have IT if he wants non-IT people doing IT instead? That's just weird. I wouldn't hire a mechanic and then have the receptionist fix the car while the mechanic explains that she doesn't know anything about cars!
This is the same person who takes a $25,000 hit when trading in a car which he paid $100,000 for because the bumper-to-bumper warranty is expired and an oil change costs $250 at the Mercedes dealer. Oh yeah, the car was two years old and had fewer than 10k miles.
-
After talking with the boss, he is prepared to spend some money on these servers. I've decided to split the VMs between two hosts and have four total servers for load balancing/redundancy.
NewHost01
SQL 2005 -----------750GB
SQL 2008 R2 --------750GB
SQL 2012 -----------750GB
SQL 2014 -------------1TB
Oracle 11g ---------750GB
Oracle 12c ---------750GB
Usable Local Storage ----------------4.75TB
NewHost02
StorageServer (Server 2012 R2) ------2500GB
AppServices01 (Server 2012 R2) -------100GB
AppServices02 (Server 2012 R2) -------100GB
WebServices01 (Server 2012 R2) -------100GB
MSTeamServer01 (Server 2012 R2) ------100GB
ClientVM01 (Windows 10) --------------100GB
ClientVM02 (Windows 10) --------------100GB
ClientVM03 (Windows 10) ---------------80GB
ClientVM04 (Windows 10) ---------------80GB
ClientVM05 (Windows 10) ---------------80GB
ClientVM06 (Windows 7) ----------------80GB
ClientVM07 (Windows 7) ----------------80GB
ClientVM08 (Windows XP) ---------------40GB
ClientVM09 (Windows XP) ---------------40GB
Usable Local Storage ----------------3.58TB
NewHost01 needs to be the more powerful of the two machines. I really like the 730xd for the sheer power, but would the 720xd be a better option? I plan on installing Server Core for Windows Server 2012 R2 on two 400GB SAS SSD drives. I'll be using local storage to host the Hyper-V guests. I don't feel comfortable using SATA drives in these servers, and I have zero experience with NL-SAS drives. Any thoughts on this? I will have to justify why the cheaper SATA drives aren't a good idea and will just put my foot down if necessary. I'm trying to keep the total under $25k for the four servers. Thanks to all of you for the amazing advice.
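A quick back-of-the-envelope check on those usable-capacity figures and on what a RAID 10 array would need to cover them; the 1.2 TB drive size below is just a placeholder, not a quoted spec:

```python
# Back-of-the-envelope check: sum the planned VHD allocations per host
# and estimate how many drives a RAID 10 array needs to cover them.
# DRIVE_GB is a hypothetical 1.2 TB NL-SAS drive, not a quoted spec.

vm_gb = {
    "NewHost01": [750, 750, 750, 1000, 750, 750],           # SQL + Oracle guests
    "NewHost02": [2500] + [100] * 6 + [80] * 5 + [40] * 2,  # infra + client VMs
}

DRIVE_GB = 1200  # placeholder drive size

for host, sizes in vm_gb.items():
    total = sum(sizes)
    raw_needed = 2 * total                # RAID 10 mirrors everything
    drives = -(-raw_needed // DRIVE_GB)   # ceiling division
    drives += drives % 2                  # RAID 10 needs an even drive count
    print(f"{host}: {total / 1000:.2f} TB usable -> at least {drives} x {DRIVE_GB} GB drives")
```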
-
@JohnFromSTL said:
I plan on installing Server Core for Windows Server 2012 R2 on two 400GB SAS SSD drives.
SD card or on the main array, not on its own array, and definitely not on SSD. The hypervisor goes on the slowest, cheapest storage; every penny spent there on capacity or performance is totally lost.
-
@JohnFromSTL said:
I don't feel comfortable using SATA drives in these servers, and I have zero experience with NL-SAS drives. Any thoughts on this?
SATA is just a protocol, where are you getting a "concern" from?
NL-SAS is just a trade term for SAS at 7200 RPM, it's not something you have "experience with." It would be like saying you drive the highway regularly but don't have "experience driving at 40 MPH."
There are two types of drives, SAS and SATA. SAS are more efficient at mixed workloads, that is all. The speed of the spindles changes nothing but the speed. You no more need experience on a spindle speed than you do on a CPU frequency.
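To put rough numbers on that, here is a minimal sketch of how spindle speed and seek time bound random IOPS per drive; the seek times used are typical published figures, not measurements of any specific model:

```python
# Rough per-spindle random IOPS: spindle speed fixes rotational latency,
# and seek time plus rotational latency bounds small random I/O.
# Seek times here are typical published figures, not measurements.

def est_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for label, rpm, seek_ms in [
    ("7200 RPM (SATA or NL-SAS)", 7200, 8.5),
    ("10k RPM SAS", 10_000, 3.8),
    ("15k RPM SAS", 15_000, 3.4),
]:
    print(f"{label}: ~{est_iops(rpm, seek_ms):.0f} IOPS per spindle")
```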
-
@JohnFromSTL said:
I will have to justify why the cheaper SATA drives aren't a good idea and will just put my foot down if necessary.
Well, start by justifying why SAS are better here. If you can't articulate to techs why SAS would be better, maybe they are not. The value of SAS is determined by the IOPS that you need and the type of workload. NL-SAS is often so close in price to SATA that we generally start there because the price increase is often around 1% while the performance is generally closer to 10%.
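If it helps with the justification, a quick price-per-IOPS sketch along those lines; the dollar and IOPS figures are hypothetical placeholders to be replaced with real quotes:

```python
# Illustrative price-per-IOPS comparison following the "roughly 1% price
# premium for roughly 10% more performance" observation above.
# All dollar and IOPS figures are placeholders; plug in real quotes.

drives = {
    "SATA 7200 RPM":   {"price": 300.0, "iops": 75.0},
    "NL-SAS 7200 RPM": {"price": 303.0, "iops": 82.5},  # ~1% dearer, ~10% faster
}

for name, d in drives.items():
    print(f"{name}: ${d['price'] / d['iops']:.2f} per IOPS")
```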
-
Four total servers meaning two hosts with four VMs? I'm unclear why there are four hosts.
-
@scottalanmiller said:
@JohnFromSTL said:
I don't feel comfortable using SATA drives in these servers, and I have zero experience with NL-SAS drives. Any thoughts on this?
SATA is just a protocol, where are you getting a "concern" from?
NL-SAS is just a trade term for SAS at 7200 RPM, it's not something you have "experience with." It would be like saying you drive the highway regularly but don't have "experience driving at 40 MPH."
There are two types of drives, SAS and SATA. SAS are more efficient at mixed workloads, that is all. The speed of the spindles changes nothing but the speed. You no more need experience on a spindle speed than you do on a CPU frequency.
I simply haven't used them before.
-
@scottalanmiller said:
Four total servers meaning two hosts with four VMs? I'm unclear why there are four hosts.
Four servers total, two for redundancy.
-
@JohnFromSTL said:
I simply haven't used them before.
SATA drives? It's totally transparent to you. SATA is what is in desktops and laptops and nearly any SMB NAS device or SAN device. You'll normally encounter SATA at least ten to one over SAS. But other than the speed difference, they are the same drives. It's literally nothing but an "under the hood" protocol for the drives to talk to the RAID controller. Other than being listed as SATA instead of SAS in the RAID card's interface, you have no way to tell them apart.
-
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
-
@scottalanmiller said:
@JohnFromSTL said:
I simply haven't used them before.
SATA drives? It's totally transparent to you. SATA is what is in desktops and laptops and nearly any SMB NAS device or SAN device. You'll normally encounter SATA at least ten to one over SAS. But other than the speed difference, they are the same drives. It's literally nothing but an "under the hood" protocol for the drives to talk to the RAID controller. Other than being listed as SATA instead of SAS in the RAID card's interface, you have no way to tell them apart.
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
-
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
-
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
-
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
Cool, now I follow.
So the next question is.... why Prod/DR failover design instead of just "cluster" design? If you treat them as clusters you can load balance and get better performance "every day" and only go to the limitations of the design in cases where something has failed.
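A minimal sizing sketch of what the active/active cluster design implies, assuming hypothetical RAM figures; the rule is that either host must be able to absorb the whole load alone when its partner fails:

```python
# Sizing rule the active/active cluster design implies: each host carries
# roughly half the load day-to-day but must absorb all of it when its
# partner fails. RAM figures are hypothetical placeholders.

host_ram_gb = 256        # assumed RAM per host
total_vm_ram_gb = 200    # assumed sum of RAM assigned across all VMs

per_host_normal = total_vm_ram_gb / 2
print(f"Normal: ~{per_host_normal:.0f} GB per host of {host_ram_gb} GB available")
print(f"Failover survivable: {total_vm_ram_gb <= host_ram_gb}")
```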
-
@scottalanmiller said:
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
Cool, now I follow.
So the next question is.... why Prod/DR failover design instead of just "cluster" design? If you treat them as clusters you can load balance and get better performance "every day" and only go to the limitations of the design in cases where something has failed.
I'm not against it at all; I just haven't set one up before.
-
@scottalanmiller said:
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
Still running RAID-10 for the best performance?
-
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
Still running RAID-10 for the best performance?
Yup, that would only change if you were REALLY on the fence with RAID 6 and only needed "anything" to tip the scales. There's no significant change in performance, so RAID 10 would remain the choice 99% of the time (or more) if it was the choice with same-speed SATA drives before.
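For reference, a small sketch comparing the two levels at the same drive count, using the standard write penalties (2 for RAID 10, 6 for RAID 6); the per-drive IOPS and size are assumptions, not vendor specs:

```python
# RAID 10 vs RAID 6 for the same drive count, using the standard write
# penalties (2 for RAID 10, 6 for RAID 6). Per-drive IOPS and size are
# assumptions, not vendor specs.

N = 8               # drives in the array
DRIVE_IOPS = 80     # hypothetical 7200 RPM spindle
DRIVE_TB = 1.2      # hypothetical drive size

for level, usable_drives, penalty in [("RAID 10", N / 2, 2), ("RAID 6", N - 2, 6)]:
    read_iops = N * DRIVE_IOPS
    write_iops = read_iops / penalty
    print(f"{level}: {usable_drives * DRIVE_TB:.1f} TB usable, "
          f"~{read_iops:.0f} read IOPS, ~{write_iops:.0f} write IOPS")
```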