Ideas for how to use new, free gear from HPE?
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
It's obviously not a set of equipment meant to "improve" anyone's environment.
Speak for yourself. It would be very welcomed in my environment. With an extra chassis, 1TB of RAM per blade, and stack that baby out with 16 blades, probably can host another 500 or 600 VMs.
Hell, there's an option for this. If it's gonna be UAT, resell space on it. Stack it up, lease it out.
-
@NetworkNerd said in Ideas for how to use new, free gear from HPE?:
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
It's pretty hard to force you to keep equipment.
-
@Shuey said in Ideas for how to use new, free gear from HPE?:
@NetworkNerd said in Ideas for how to use new, free gear from HPE?:
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
I don't think management would have any issues at all being filmed. As far as getting rid of the equipment after being part of the video, based on the contest terms, it sounds like there's no way of selling it (at least not in the first 3 years of owning it).
No, you'd just have to return it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
It's obviously not a set of equipment meant to "improve" anyone's environment.
Speak for yourself. It would be very welcomed in my environment. With an extra chassis, 1TB of RAM per blade, and stack that baby out with 16 blades, probably can host another 500 or 600 VMs.
Hell, there's an option for this. If it's gonna be UAT, resell space on it. Stack it up, lease it out.
That's a lot of risk... using a very expensive and fragile platform that needs a big investment to be useful. Lease that out and you need to invest a ton to have the necessary environment to support that. That's the issue here, no matter how you use it you have to invest a lot of money and do so on equipment that is sub-par and over priced. Even if you were going to lease it out, you can get better gear at lower prices (or TCO at least) by not using this gear.
So the price here might be zero. But the cost seems too high to justify using it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Speak for yourself. It would be very welcomed in my environment.
Because..... you are willing to use blades, have already invested in them, and already own the storage, so for better or worse this fits your environment even if blades may not be ideal for you.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Speak for yourself. It would be very welcomed in my environment.
Because..... you are willing to use blades, have already invested in them, and already own the storage, so for better or worse this fits your environment even if blades may not be ideal for you.
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
If I went with even 1U units, I wouldn't have near the amount of processing power that the blade system would provide. A chassis is 10U. With a 1U unit all I could hope for is 10 to 20 sockets. With a blade chassis, I get 16 blades, 16 to 32 processors fully stacked. Not to mention the single management interface for networking, storage, and so forth. Fully racked and stacked cabinet, I get 64 blades. With 1U units, I get 42 at best. If I go IBM, I can get even more with a mix and match of i, z, and x86 all in one chassis. For HP, I can get x86 and Itanium blades. Cisco UCS, only 56 blades on a 42U cabinet total but with some integrated networking. All with a single storage fabric and super easy deployment.
Blades are inappropriate for lots of folks, especially the ones who have just one system right now. For service providers, like us, we need heavy density because cabinet space costs money. Power, cooling, and such are just side benefits.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't get higher density than we could get with traditional servers. They were sold on density, but the resulting density was decent, not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't get higher density than we could get with traditional servers. They were sold on density, but the resulting density was decent, not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
In a 42U standard cabinet, you can have:
64 two socket x86 blades with 2U to spare with Dell/HP for 128 processors. Plus the 2U can be used for networking gear.
42 two socket x86 1U servers for 84 processors. With no spare space for networking gear.
Right now, there are no real quad socket x86 1U servers. There were a few in the past, but expensive as shit. And they have been overtaken by the higher density core-per-socket processors for a while now.
This is just the x86 world. The ASIC-style devices that Google and Facebook use are not general purpose. Yeah, I can get more density of "servers" by using ARM for one-and-done kinds of workloads, but that's not general purpose. I would be surprised if anyone in SMB does anything like that. Specialty workloads can pack more and more into a single U of space, but when your application is SQL Server 2016 with a SharePoint frontend, you don't need fancy shit.
Most folks will never see that level of complexity.
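The rack math in the post above can be sketched out like this. These are the illustrative numbers from the discussion (42U cabinet, 10U chassis holding 16 two-socket blades, versus one two-socket 1U server per U), not vendor-verified specs:

```python
# Rack density comparison using the figures from the thread.
RACK_U = 42  # standard cabinet

# Blade option: 10U chassis, 16 two-socket x86 blades per chassis (Dell/HP style).
CHASSIS_U = 10
BLADES_PER_CHASSIS = 16
SOCKETS_PER_BLADE = 2

chassis_count = RACK_U // CHASSIS_U                # 4 chassis fit in 42U
blade_sockets = chassis_count * BLADES_PER_CHASSIS * SOCKETS_PER_BLADE
spare_u = RACK_U - chassis_count * CHASSIS_U       # leftover U for networking gear

# 1U option: one two-socket server per U, nothing left over.
pizza_box_sockets = RACK_U * 2

print(f"Blades:  {blade_sockets} sockets, {spare_u}U spare")   # 128 sockets, 2U spare
print(f"1U rack: {pizza_box_sockets} sockets, 0U spare")       # 84 sockets
```

Which matches the claim above: 64 blades for 128 processors with 2U left for switching, against 84 processors from a cabinet full of 1U boxes and no room for network gear.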
-
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, who practically pioneered density by changing switching gear for decades.
-
Sell it all for $30k and buy some gear that'll really fit and work in your environment
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment
That was covered, he's not allowed to sell it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, who practically pioneered density by changing switching gear for decades.
One of the big things that is often overlooked with blades is the extra gear needed to make them work. They move the storage elsewhere, so for the SMB you actually get better density of the entire workload without blades. Only tons and tons of blades connected to a few SANs get those high densities.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment
That was covered, he's not allowed to sell it.
Well that sucks
I'd need to spend a whole pile of cash just to get half that stuff into my server room, let alone the electrical!
-
One major problem is that by the time you are big enough for blades to deliver sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
-
So, congratulations here's $60k of gear, let us know when you've spent $30k to upgrade your electrical, UPS and we'll come film you
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations here's $60k of gear, let us know when you've spent $30k to upgrade your electrical, UPS and we'll come film you
And then buy a bunch of support gear (from us) so that it works.
-
@StrongBad said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations here's $60k of gear, let us know when you've spent $30k to upgrade your electrical, UPS and we'll come film you
And then buy a bunch of support gear (from us) so that it works.
LOL Scott said that earlier too
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that by the time you are big enough for blades to deliver sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
The only thing a blade brings is one place to view it all physically. The switch gear you plug in is managed the same old ways. Otherwise, it's pretty much the same equipment.
We separate out our teams to play to our strengths. I'm the Microsoft expert, we have a Linux expert, storage expert, networking experts and so on. We all can do some other job, but we focus on our strengths to keep things going.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that by the time you are big enough for blades to deliver sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and security separation between teams could not exist.