dyasny
    • Profile
    • Following 0
    • Followers 0
    • Topics 1
    • Posts 387
    • Groups 0

    Posts

    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Right, but we aren't talking about not sharing it. So, again, you are talking about something different. I'm not sure where you are getting lost, but you are talking about totally different things than everyone else. This has nothing to do with the discussion here.

      Good, then we are at least partially on the same page

      Again, didn't scale for you, but your failures do not extend to everyone else. I'm not sure why you feel it can't scale, but it does successfully for others. The common factor here is "your attempts have failed". You have to stop looking at that as a guide to what "can't be done."

      OK, what is the largest cluster size you can run with this StarWind solution reliably? Their best practice doc is very careful about mentioning scale being a problem, although they call it "inconveniences".

      Again... you are not understanding that because something can be bad doesn't mean it is always bad. These are basic logical constructs. You are missing the basic logic that absolutely no amount of observation of failures makes other people's observations of success impossible.

      I am talking about a very basic thing - storage tasks require resources. Those resources need to come from somewhere. If you don't use dedicated boxes, you have to take resources away from your VMs. It is extremely simple.

      You are assuming automated rebalance.

      Automated or manually triggered - it's a costly operation. Even if you don't run a sync cycle but do a dumb data stream from a quiesced source, you will be pushing lots of data over several layers of hardware and protocol, and that does not come for free. When you replace a disk in a RAID array, you are going to suffer from performance degradation until the RAID is in sync, because the hardware or software RAID system will be working hard to push all the missing data to the new disk in the best-case scenario, and will be generating a ton of parity and hashes in the worst. This does not come cheap.
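
      To put rough numbers on that, here is a back-of-the-envelope sketch; the disk size, rebuild throughput and array width below are illustrative assumptions, not measurements:

      ```python
      # Rough estimate of how long a RAID rebuild keeps the array degraded,
      # and how much data has to move. All figures are assumptions for
      # illustration, not benchmarks.

      disk_size_tb = 8                 # size of the replaced disk
      rebuild_throughput_mb_s = 150    # sustained rebuild rate while still serving I/O

      disk_size_mb = disk_size_tb * 1024 * 1024
      rebuild_hours = disk_size_mb / rebuild_throughput_mb_s / 3600
      print(f"Data to reconstruct: {disk_size_tb} TB")
      print(f"Time in degraded mode: ~{rebuild_hours:.1f} hours")

      # For parity RAID, every rebuilt stripe also requires reading the
      # surviving members and recomputing parity, so read traffic is roughly
      # (N-1) times the rebuilt capacity for an N-disk RAID5.
      n_disks = 6
      read_traffic_tb = (n_disks - 1) * disk_size_tb
      print(f"Approx. read traffic during a RAID5 rebuild: ~{read_traffic_tb} TB")
      ```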

      So you don't understand the pool risks and think that node risks alone exist and that the system as a whole carries no risks? This would explain a lot of the misconceptions around HC. The cluster itself carries risks, it's a single pool of software. Every platform vendor will tell you the same.

      I understand the risks, and losing just a storage node or just a hypervisor node is much less risk than losing both at once. I was hoping you would understand that, but I guess I shouldn't hope.

      Actually, that breaks the laws of physics. So obviously not true. SAN can't match speed or reliability of non-SAN. That's pure physics. You can't break the laws of math or physics just by saying so.

      Really? FC at lightspeed from a couple of yards away is significantly slower than local disk traffic? Are you sure we have the same physics in mind?

      HC has EVERY possible advantage of SAN by definition, it just has to, there is no way around it, but adds the advantages of reduction in risk points and adds the option of storage locality. Basic logic proves that HC has to be superior. You are constantly arguing demonstrably impossible "facts" as the basis for your conclusions. But everyone knows that that's impossible.

      You keep talking about your assumptions as if they are the one and only possible truth. They are not. HC cannot have the advantages of a SAN because the SAN is more than just a big JBOD (and even if it were, it has the advantage of being a much larger JBOD than you could ever hope to build on a single commodity server). A SAN has tons of added functionality which it deals with without loading the hosts. If you start implementing all of that in HC, you end up spending even more local host resources on non-workload needs. So either your "basic logic" is flawed, or you simply aren't able to accept that there might be points of view besides yours.

      basically we are having a time warp discussion back to 2007 when almost everyone truly believed that SANs actually were magic and did things that could not be explained or done without the label "SAN" involved and that physics or logic didn't apply.

      I'm not the one talking about "magic sauce" here, remember? I am actually talking about implementation specifics and how they are not simple (because I know these details and technologies well enough to discuss them and see no magic in them)

      I get it, storage can be confusing. But arguing against 15+ years of information that is well established and just acting like it hasn't happened and just reiterating the myths that have been solidly debunked and ignoring that this is all well covered ground just makes it seem crazy.

      Have you noticed how you never have any real arguments? Instead, what I see is "this has been known for N years!" and "this is the one and only logic!". I get it, being challenged with solid technical arguments can be confusing, but please try to bear with me here instead of just defaulting to the usual non-arguments. Can you explain to me how keeping large storage volumes synchronized over a network has no overhead and consumes no host resources, please? It's a simple question, and I will not accept "magic" as an answer. Saying that pushing large amounts of data across a network comes at no cost pretty much defies the laws of physics, so I'd like to know how exactly you expect to circumvent them.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Maybe you are running into these problems due to bad products, planning or design, but you are foisting problems you've had onto everyone else, where we aren't experiencing these problems.

      You are "projecting". I'm not saying that you, or people you know, haven't implemented HC and had it not work well. But you are confusing a bad implementation or planning with the architecture being bad. The two are different things.

      Or maybe I'm just speaking from experience, and there's plenty of it. Local storage that is not shared is great, but it doesn't scale and pretty much kills all the nice features you can have in a virtualized DC - live migration, HA, all those things you don't care about in SMBs, I suppose. Getting that storage replicated in a scalable fashion is hard; simple CBT pushed over the network (what the folks at Linbit basically do) does not scale. And hard tasks require resources, and those resources have to come from somewhere.

      Ceph and Gluster are both known to be bad for that, that should have been known going into the project. That someone didn't head you off at the pass shows that the mistakes and oversights were happening early on. We could easily have warned you that that wasn't meant to have good local performance.

      Mixing any distributed storage solution with any other workload is known to be bad, this is exactly what I'm saying. I've come into those projects when they were already implemented and got things working by breaking up those overloaded hosts into hardware that was doing one job and doing it well on either side.

      Okay... so the big question is... since this is not part of HC... why? Stop doing that. You can never have a rational, useful discussion about HC until you talk about HC and not something else.

      But I am, at least at scale. DRBD and any similar system does not scale. When things are small (SMB level again) this is peanuts, we can do anything because our tasks are smaller than the hardware we can get. What happens at scale though?

      No one anywhere recommends 200 nodes in a single cluster. If you think this is a good idea, SAN, SDS, HC, or otherwise, we are on different pages. That's a scale that literally no one, not MS, not StarWind, not VMware, not Red Hat recommends as a single failure domain. Do they support it? Yup. Do they think you are crazy? Yup.

      200 nodes is small for the scale I typically deal with. Red Hat has solutions that can deal with this kind of scale easily. I know of a few other companies that do. MS, VMW and probably StarWind do not, because of the nature of their clustering implementation, but that's basically all about how you manage locking.

      Even in the enterprise, which you claim to know, they often use workload scopes of this size for performance and safety. The larger the pool, the bigger the problems.

      Not really. In a large pool, a dead node simply gets easily replaced. The effect is very small.

      If you want to get into giant pools you have to pick your battles.

      I usually am in those numbers, but ok

      Let's talk reasonable size, like 10-80 nodes. If you need screaming performance that no SAN can match, then you are looking at StarWind network RAID which does that.

      OK, so we have a network RAID: a bunch of blocks get streamed to other nodes when writes occur on one. When all there is is pushing blocks across, things are simple. What happens when a node dies, and I have to suddenly rebalance the data distribution? How is consistency kept? How does the system decide which blocks get streamed where? Even in a 10 node cluster, it would be plain stupid to keep all the data replicated everywhere; 10x the data on local disks would be too expensive.
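
      To put some hypothetical numbers behind those questions (the node count, per-node capacity and replication factor are all assumptions for illustration):

      ```python
      # Capacity overhead and post-failure rebalance volume for a replicated
      # pool. The node count, per-node capacity and replication factor are
      # hypothetical values, purely for illustration.

      nodes = 10
      capacity_per_node_tb = 20
      replication_factor = 3            # three copies of every block

      raw_tb = nodes * capacity_per_node_tb
      usable_tb = raw_tb / replication_factor
      print(f"Raw capacity: {raw_tb} TB, usable at RF={replication_factor}: {usable_tb:.0f} TB")

      # Replicating everything to every node (the "10x the data" case):
      usable_everywhere_tb = raw_tb / nodes
      print(f"Usable if every block lives on every node: {usable_everywhere_tb:.0f} TB")

      # When a node dies, the copies it held must be re-created somewhere else,
      # so roughly one node's worth of data crosses the network again.
      print(f"Data to re-replicate after one node failure: ~{capacity_per_node_tb} TB")
      ```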

      If you just want cheap pooled storage, you look at CEPH (and there are accelerators for that to make it fast, if you need to).

      Here we have a distributed system, which needs at least a core per RBD and 32 GB of RAM to even get started properly. In SMB, I doubt you see many monstrous hypervisors with hundreds of cores, so what is there left to run your actual VMs?
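
      As a rough illustration of what that leaves for VMs on a hyperconverged host (the host spec is hypothetical, and the per-daemon figures are assumptions based on commonly cited Ceph sizing guidance, not vendor numbers):

      ```python
      # What is left for VMs on a hyperconverged host once the storage daemons
      # take their share. The host spec and per-daemon figures are assumptions
      # based on commonly cited sizing guidance, not vendor numbers.

      host_cores = 32
      host_ram_gb = 256

      storage_daemons = 8               # e.g. one per local data disk
      cores_per_daemon = 1
      ram_per_daemon_gb = 4

      storage_cores = storage_daemons * cores_per_daemon
      storage_ram_gb = storage_daemons * ram_per_daemon_gb

      print(f"Cores left for VMs: {host_cores - storage_cores} of {host_cores}")
      print(f"RAM left for VMs:   {host_ram_gb - storage_ram_gb} GB of {host_ram_gb} GB")
      ```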

      At truly giant scale, the only real benefit to totally external storage, is when speed and reliability are so unimportant that you are willing to sacrifice them to a huge degree to save a few dollars. But with HC's low cost today, even that is approaching the impossible.

      Only having a good storage fabric can give you excellent speed and very low latencies, and as for reliability - you can build whatever you want on the SAN side depending on your requirements. The only good thing about HC is local storage access, and it isn't really that far ahead of any decent fabric anyway, if at all.

      Because you don't have all those things. Large chunks over the network takes like no resources. No idea where you think the overhead comes from, but for most of us, things like copying data is not a high overhead activity. It's a dedicated network in most cases, with offload engines on the NICs, and things like tiering and such take extremely little overhead (if done well.) These just aren't CPU or RAM intensive activities.

      That is simply not true. Pushing large amounts of data over the network is not cheap, and that is in the case of simple streaming. When you start running synchronizations and tiering, stuff gets harder. And when you have to rebalance (which Ceph does often) you need even more resources. Yes, you can dedicate NICs to just that (and those NICs will not be there to provide more bandwidth to the workload traffic), but in order to push large amounts of data into the NICs you also need CPU cycles and RAM. It's CS 101, there are no free rides.
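
      A crude way to see that cost (the cycles-per-byte figure is a ballpark assumption for a TCP path without full offload; real numbers depend heavily on NIC offloads, MTU and checksumming):

      ```python
      # Rough CPU cost of just moving replication traffic. The cycles-per-byte
      # figure is a ballpark assumption for a TCP path without full offload;
      # real numbers depend heavily on NIC offloads, MTU and checksumming.

      replication_gbit_s = 10            # sustained replication traffic
      cycles_per_byte = 2.0              # assumed cost of copying/checksumming data
      cpu_ghz = 2.5

      bytes_per_s = replication_gbit_s * 1e9 / 8
      cores_consumed = bytes_per_s * cycles_per_byte / (cpu_ghz * 1e9)
      print(f"~{cores_consumed:.1f} cores busy just pushing {replication_gbit_s} Gbit/s of data")
      ```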

      Those aren't HA, so not applicable to the discussion. HCI is assumed to be replicating to other nodes, something those providers don't provide. They are stand alone compute nodes. Very different animal.

      My point exactly. If HC was so great, why wouldn't they be using it?

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      @dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HCI doesn't hold up in the real world. See, I can keep doing this too

      Except it does, tons of places are using it, and show me ANY example where it didn't shine... any. While somewhere, one must exist, dollars to donuts you can't find one.

      I've worked with several Openstack and K8s clusters where the storage was local to the hypervisors, served from Ceph/Gluster/AFS/SheepDog. Horrible experience each time.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      This just doesn't hold up in the real world.

      HCI doesn't hold up in the real world. See, I can keep doing this too 🙂

      Most companies and workloads are not trying to do things where this makes sense at all.

      Maybe in the SMB space there is less planning and more "let's just deliver something and let someone else support it", but it doesn't work in the enterprise.

      All this SDS like this is doing is making new SAN pools, right back to the complexity, costs, and risks that we had before.

      Exactly. SDS or SANs - you can pick whatever you want and suits you best. But if you are going to be running a replicated distributed storage service on hardware that is already quite busy running VMs, you'll end up in trouble as soon as you go anywhere near capacity.

      If your design creates all this overhead, whether you address it in one box or many, chances are that design itself is the flaw. Not always, but generally. But whether you have RAID or RAIN, the overhead to do this stuff just isn't there when implemented well.

      How exactly can there be no overhead, when you are synchronizing large chunks of blocks over the network? Especially with high replication factors? Add encryption to that, add all the extra logic for local tiering, and it's no wonder the minimum sizing for SDS is so high.
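
      For instance, a simple write-amplification estimate (the replication factor and guest write rate are hypothetical):

      ```python
      # Write amplification seen by a hyperconverged host when guest writes go
      # through a replicated, encrypted SDS layer. All inputs are hypothetical.

      guest_write_mb_s = 500             # what the VMs actually write
      replication_factor = 3

      # Each write is shipped to (RF - 1) other nodes, and every copy is
      # encrypted before it hits the wire or the local disk.
      extra_network_mb_s = guest_write_mb_s * (replication_factor - 1)
      cipher_mb_s = guest_write_mb_s * replication_factor

      print(f"Guest writes:              {guest_write_mb_s} MB/s")
      print(f"Extra replication traffic: {extra_network_mb_s} MB/s")
      print(f"Data through encryption:   {cipher_mb_s} MB/s")
      ```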

      Now sure, if we only look at totally garbage solutions, we can make any design seem like a problem. But you have to separate good design from good products. Bad products exist even in well designed solutions.

      Right, but we measure it and there isn't any, which means that there isn't. Just because you claim a load that no one has, which is all you are doing, doesn't make it real. You have created problems that no one faces and are acting like we are all impacted by them.

      More sophistry. Can you be more specific please, instead?

      You are literally claiming that contrary to all evidence, common sense, and industry knowledge, that software RAID is not just a huge load, but so large that we now need not only hardware RAID cards to do it, but entire hardware RAID servers!

      I'm not talking about simple software RAID, I'm talking about keeping a distributed storage system in sync, and then on top of that constantly rebalancing to satisfy the local tiering requirements. And while doing all that juggling, also ensuring the system remains resilient to node failure. This is a lot of work, unless, like @Dashrender says, there is magic at play. I don't believe in magic.

      So to skip the rest of your quotes, in general, what you are saying is that a system which is essentially something like a simple KVM with DRBD is the perfect solution. I am saying sure, for two nodes. How about 200?

      Do you think AWS/GCP/Azure are running HCI solutions for example?

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      This is the myth. In most HCI it adds no appreciable load. As long as you believe that things like storage and networking are going to create a lot of load, yes, this is going to seem like a point of risk, although even then things like RAID cards fixed that in the era when that was true.

      But since it doesn't add load, and actually adds less load than splitting it out, this logic is backwards.

      I already answered that above. Just because you say it doesn't add any load, doesn't mean it doesn't.

      SDS isn't part of HC. This might be a root of your confusion. This is why some HC, like the one this thread is about, doesn't do this and just does RAID. Overhead is ridiculously low.

      How exactly do they deal with the HA side of things? With RAID, and a host going down, all the VMs using that host go down, RAID or not.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      That's a joke?

      You should have seen my British friend tell it 🙂

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @Dashrender this is not about the basket/eggs thing, consolidation is well and good, but HCI adds a massive load on each host, and the resources for that load have to come from somewhere. SDS is not easy and it does demand CPU, RAM and network resources. SDN is just as bad. Lump it all into the same host, and you've got nowhere to run VMs adequately, that's my point.

      There's a very old joke - a man is pulled over by a policeman, as he was driving with one hand and hugging his girlfriend with the other. The policeman says "Sir, you are doing two things and both of them badly". This is exactly why HCI is wrong. Yes, if all you have is a single machine, you'll be lumping all your workloads on it, but if you are building a real datacenter, you had better do the networking stack properly, using the right hardware: even if it's going to be some opensource SDN like Calico and not a suitcase of money sent to Cisco, you should dedicate correctly spec'd hardware to that. The same goes for the storage stack - you want to run on commodity hardware using opensource SDS software, be my guest, but dedicate those hosts to SDS and spec them out to fit the task. And the same goes for the workload-bearing machines, whether they will be KVM hypervisors or a docker swarm or an overpriced vmware cluster - that's immaterial. If you do the HCI thing, you cannot spec the hardware to the task; you end up running all of those services and workloads on the same set of hosts, and all those tasks will be sharing that hardware, either competing for resources or cutting available, unutilized resources away from where they could be needed.
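
      To put hypothetical numbers on that resource competition (every figure below is illustrative, not a vendor sizing):

      ```python
      # Hypothetical resource budget for a hyperconverged host that runs the
      # SDS daemons, the SDN agents/controllers and the hypervisor stack next
      # to the actual VMs. Every figure below is illustrative, not a sizing.

      host = {"cores": 48, "ram_gb": 384}

      overhead = {
          "SDS (storage daemons)":   {"cores": 8, "ram_gb": 48},
          "SDN / control plane":     {"cores": 4, "ram_gb": 16},
          "hypervisor + monitoring": {"cores": 2, "ram_gb": 16},
      }

      used_cores = sum(v["cores"] for v in overhead.values())
      used_ram = sum(v["ram_gb"] for v in overhead.values())

      print(f"Cores left for VMs: {host['cores'] - used_cores} of {host['cores']}")
      print(f"RAM left for VMs:   {host['ram_gb'] - used_ram} GB of {host['ram_gb']} GB")
      ```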

      Yes, the nicer HCI systems can try to keep the data they serve balanced so that it is at least partially local to the workload, but in a properly built virtual DC this is not a problem. Infiniband, FC and even FCoE make latency moot, and throughputs can be much higher than over local SAS or even NVMe channels.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HC was always a thing, though, that's the thing. That it got buzz is different. We've had HC all along, just people didn't call it anything.

      OK, just so we're on the same page here, are you saying we should simply install a bunch of localhosts and be done, for all the types of workloads out there?

      No, it's separating it that is the bad idea.

      No, it's mixing it that is the bad idea. See, I can also do this 🙂

      Separate means less performance and more points of failure.

      It would seem so, but in fact, you already have to run those services (storage, networking, control plane) anyway, and they all consume resources, and a lot of them. And then you dump the actual workload on the same hosts as well, so either you simply have much less to assign to the workload and the services, or they have to compete for those resources. Either is bad, and when one host fails, EVERYTHING on it fails. So you have to not just deal with a storage node outage or a controller outage, or a hypervisor outage, but with all of them at the same time. How exactly is that better for performance and MTBF?

      It's just like hardware and software RAID... when tech is new you need unique hardware to offload it, over time, that goes away. This has happened, at this point, with the whole stack. And did long ago, there was just so much money is gouging people with SANs that every vendor clung to that as long as they could.

      I'm not saying SANs are the answer to everything, I'm saying loading all the infrastructure services plus the actual workload on a host is insane. If you have a cluster of hosts providing FT SDN, another cluster providing FT SDS, and a cluster of hypervisors using those services to run workloads on the networking and storage provided, I'm all for it. This system can easily deal with an outage of any physical component, without triggering chain reactions across the stack. But this is just software defined infrastructure, not HCI.

      But putting those workloads outside of the server makes it slower, costlier, and riskier. There's really no benefit.

      Again, I don't care much for appliance-like solutions. A SAN or a Ceph cluster, I can use either, hook it up to my hypervisors and use the provided block devices. But if you want me to run the (just for example here) Ceph RBD as well as the VMs and the SDN controller service on the same host - I will not take responsibility for such a setup.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HCI isn't just shared storage. It's shared everything.

      Great, so we are also running the SDN controllers on all the hosts. Even an OVN controller is a huge resource hog. A Neutron controller in Openstack is even worse. And then the big boys come in, have you tried to build an Arista setup?

      I am not talking theory here, I'm talking implementation, as someone who built datacenters and both public and private clouds at scale. Running the entire stack on each host, along with the actual workload is a horrible idea.

      What do you mean, mixing everything? The magic sauce is what makes tools like Starwinds vSAN an amazing tool.

      Sounds like marketing bs to me, sorry 🙂 Magic sauce? Really?

      It works with the hypervisor to manage all of your hosts from a single interface. Should any host go down, those resources are offline, but the VMs that may have been on there are moved to the remaining members of the HCI environment (of multiple physical hosts).

      Sounds like any decently built virtualized DC solution, from proxmox to ovirt to vcenter and xenserver. How is it "magic" exactly?

      The easiest way I can think to explain your rationale @dyasny is to pretend I'm building a server, but because I don't trust the RAID controller that I can purchase for my MB, I purchase a bunch of external disks, plug those into another MB and then attach that storage back to my server via iSCSI over the network.

      This is a ridiculous example. What you describe is, instead of having a server with a disk controller, disks, GPU and NICs, installing a single card that is a NIC, a GPU and can store data - so that instead of the PCI bus accessing each controller separately with better bandwidth, all the IO and different workloads are driven through a single PCI bus channel. And then using "magic" to install several of those hybrid monster cards in the hopes of making them work better.

      How is this safer, more reliable and cheaper than just adding all of the physical resources into a single server? Then combining 2, 3 or however many of the identical servers together with some magic sauce and managing it from a single interface?

      There you go with the magic sauce koolaid again.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Hell your desktop or laptop is hyperconverged.

      Everything is self contained.

      Yup, this is all just marketing hype. In the real world, a standalone host is just a standalone host, it was before HCI was a thing and will be after.
      Also note, I always use the term HCI, not just HC, and I always mean it to be exactly what it is being sold as - a way of building virtualized infrastructure so that the shared storage in use is provided by the same machines that host the workloads, off of their internal drives. I could get into the networking aspect of things, but that would only make my point stronger - mixing everything on a single host is a bad idea.

      posted in Starwind
      dyasny
    • RE: What is the Latest With SodiumSuite?

      BTW, I would love to hear about your team's experience with Scylla so far, this is an interesting use case

      posted in SodiumSuite
      dyasny
    • RE: What is the Latest With SodiumSuite?

      @scottalanmiller said in What is the Latest With SodiumSuite?:

      So the new rewrite is to use a wide column DB, ScyllaDB that you might have heard of

      hehe, nice 🙂

      And a very real consideration that Salt may not be used under the hood at all, or only temporarily, as more power and functionality is desired with more control.

      I'm no fan of Salt, but this is quite often a tradeoff. I see a lot of products trying to implement their own DSL, and failing, while others simply give the user a means to just insert their own code, be it regular bash/python/whathaveyou or a DSL - basically, like a standard command window in jenkins. This seems to work and raises fewer questions about the choices made.

      posted in SodiumSuite
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Actually, it basically is. Because HCI is essentially just "logical design". It's not some magic, it's just the obvious, logical way to build systems of any scale. One can easily show that every stand alone server is HCI, too. Basically HCI encompasses everything that isn't IPOD or just overbuilt SAN infrastructure, which has a place, but is incredibly niche.

      HCI is the only logical approach to 95% of the world's workloads. Just loads and loads of people either get by with terrible systems, or use HCI and don't realize it.

      But the real issue is that HCI alternatives come with massive caveats and have only niche use cases that make sense.

      Thanks for proving my point 🙂 When all you have is a hammer, everything starts looking like a nail, eh?

      Absolutely. So having the storage be local, not remote, carries the real benefits. HCI doesn't imply replication any more than SAN does. Most do, of course, and if you want FT that's generally how you do it.

      Now you are confusing basic local storage with HCI. If I install a bunch of ESXi servers using their local disks, with local-only VMs, am I running an HCI setup?

      For databases that do need the platform, rather than the application, to handle HA or FT, then HCI with more than one node is the best option.

      No, for those, it definitely makes more sense to use an addon that enables replication, sharding and other horizontal scaling techniques.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @Dashrender said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      I've been wondering about this very point. Clearly the CPUs in systems have gotten better and better - hell, we know because of crypto mining that ASICs are getting better and better (job specific). So why is hardware RAID slower than software?

      Because these ASICs aren't a priority - mining ASICs and speed trading ASICs make money, so they're a worthwhile investment. A RAID controller ASIC does a job and sells a controller for $200 once, with the customer grumbling about being able to do it all in software for free anyway.

      The only thing I can come up with is trace length latency in the system. Assuming the storage is local in both cases, I would expect a modern, currently developed RAID ASIC would match or trash a CPU doing the same task - the difference then being that the RAID controller has to then hand the data off to the RAM and CPU for actual processing - so there 'might' be a step saving by having the CPU doing it all.

      Not really. Depending on the RAID level, there are a few things to do - mirroring writes and balancing reads for RAID1(+N), and calculating parity for striped arrays. None of this is very specific, and it wouldn't be much better in a separate ASIC, given a powerful enough generic CPU. The operations happen below the driver level in any case, transparently to the IO-issuing layer.
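
      The parity math itself really is that simple - e.g. RAID5-style parity is just XOR across the data strips (a toy sketch of the concept, not how an actual RAID driver implements it):

      ```python
      # Toy illustration of RAID5-style parity: parity is just the XOR of the
      # data strips, and any one lost strip can be rebuilt from the others.
      # A sketch of the concept, not how md or a controller implements it.

      from functools import reduce

      def xor_strips(strips):
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

      data = [b"strip-A.", b"strip-B.", b"strip-C."]   # equal-sized data strips
      parity = xor_strips(data)

      # Lose one strip and reconstruct it from the survivors plus parity.
      lost = data[1]
      rebuilt = xor_strips([data[0], data[2], parity])
      assert rebuilt == lost
      print("rebuilt strip:", rebuilt)
      ```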

      posted in Starwind
      dyasny
    • RE: What Are You Watching Now

      The Chernobyl series. Damn that thing takes me back to when I was a kid

      posted in Water Closet
      dyasny
    • RE: What is the Latest With SodiumSuite?

      @scottalanmiller said in What is the Latest With SodiumSuite?:

      No, just in a total rewrite.

      Interesting, why rewrite and what is it being ported to now?

      posted in SodiumSuite
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      That's not why. They are flunking because their product sucks, doesn't compete with anyone else in the market, is on shaky legal ground, and they threaten their customers and are an evil organization that threatens lawsuits against anyone that exposes them.

      Yeah, I am quite aware of how bad they are as a company, but their marketing is basically "we are HCI" (just like Mirantis used to be "we are Openstack"), and HCI is, well, not there. It is yet another niche approach to doing specific things, not the solution to everything under the sun, like the people pushing it claim.

      HC is without a doubt the best thing out there, the concept is straightforward and obvious and nothing comes close, it's just the market has calmed down and people care about products, not marketing hype.

      No, it is just another thing and nothing more. In a perfect world, all applications would be distributed and SANs or HCI would not be required, so all we'd have is a bunch of servers with local storage, running local workloads that are able to multi-master and replicate across those hosts nicely. This is the ideal workload for all the modern stuff, managed by k8s/dcos/mesos/swarm/etc. For everything else, in some cases you are much better off running on a massive SAN or a distributed SDS, and in some you can benefit from using replicated local storage; however, replicated local storage will always consume resources that will be taken away from the actual workloads. It will also add a lot of complexity to the overall system - after all, if a host goes down you get both a migration/restart storm and a storage rebalancing storm at the same time, hitting the same blocks of data and the machines using them.

      If you choose to swear by HCI and see it as the one and only solution for everything, you either can only see a very narrow set of tasks for infrastructure, which fit your world view, or you are taking marketing too much to heart.

      The workloads are almost entirely separate, so having them separate actually requires more work from both components, plus introduces latency

      Thanks, that's another problem with HCI, I agree.

      Storage doesn't eat up RAM or CPU

      Let's take a look at the CPU and RAM requirements for, say, ZFS. How many cores per node and how much RAM per core are required? All that for a dumb local server, before we even start dealing with replication, self healing, all the madness behind Raft, etc.
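
      For reference, applying the commonly cited ZFS memory rules of thumb to a hypothetical node (the pool size is made up, and the ratios are community guidance rather than hard requirements):

      ```python
      # Commonly cited ZFS memory rules of thumb applied to a hypothetical
      # node. The ratios are community guidance, not hard requirements, and
      # real ARC sizing depends on the working set.

      pool_tb = 40
      base_ram_gb = 8                  # often quoted baseline for ZFS itself
      arc_gb_per_tb = 1                # ~1 GB RAM per TB of pool, commonly cited
      dedup_gb_per_tb = 5              # often quoted if deduplication is enabled
      dedup_tb = 10                    # portion of the pool with dedup on

      ram_gb = base_ram_gb + pool_tb * arc_gb_per_tb + dedup_tb * dedup_gb_per_tb
      print(f"Suggested RAM just for storage duties: ~{ram_gb} GB")
      ```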

      Think about a normal server, the reason that software RAID outperforms hardware RAID is because the overhead of RAID got so tiny that extra hardware for it made things slower, not faster, and that was by 2000 with the Pentium IIIS processor. Today the system performance and overhead take that to many orders of magnitude higher.

      The reason software RAID outperforms hardware these days is much simpler - hardware RAID ASICs never got as much investment and boosting as regular CPUs, so what we have is modern massive CPUs vs RAID controllers that haven't seen much progress since the late 90s. And since nobody cares enough to invest in them or make them cheaper, they simply die out, which is well and proper.

      Virtualization (and containers too) came about because servers were getting too big for a single workload and people wanted to actually utilize their hardware better. Which led to massive workloads running on single machines, maxing them out. This is still the case: a normal hypervisor will easily see 100% utilization, will have to do all the usual resource sharing tricks, and will juggle tasks from the VMs and containers competing for CPU time and RAM pages. And now you come in and dump yet another massive workload on that same machine, and tell me it will have no impact? Don't be ridiculous.

      Here are some figures for commonly used distributed storage:
      https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/red_hat_ceph_storage_hardware_selection_guide/recommended-minimum-hardware-requirements-for-the-red-hat-ceph-storage-dashboard-hardware
      https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/quick_start_guide/system_requirements
      https://storpool.com/system-requirements

      Works for all workloads, VDI is one of the worst cases for HC. Still shows how HC is better every time, but it's where it is "better the least." VDI has almost no storage dependency, whereas something like a database has a huge storage dependency.

      VDI using a pool of stateless desktop VMs temporarily snapshotted from a single base image is the perfect use case for HCI. If you have the base image replicated across the cluster, all the VMs will be doing their reads locally.

      It's a database that shows where HC isn't just cheaper and safer, but way faster and reduces total hardware needs.

      Databases don't (or rather shouldn't) need storage replication in 2019. There are plenty of native tools for that, which are safer, cheaper and more efficient.

      Not the only one, but the assumption is that any HC system worth its salt does this essentially all of the time. It's not technically a requirement for being HC, but it would be downright idiotic for it to do anything else (except for in a failover state.) The problem with HC alternatives is that they all do the thing that would be idiotic for HC to do as their only option.

      Not the only one, but it's an obvious example. In any case, there is plenty of tech out there that makes network latency a non-issue, if need be, and the added complexity and risk of HCI is usually not worth it.

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      And just in case: https://seekingalpha.com/article/4284137-nutanix-one-oversold-stock

      posted in Starwind
      dyasny
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      AFAIK other HCI systems are growing because Nutanix are flunking. And Nutanix are flunking because HCI as a technology is not proving to be worth the effort. Distributed storage is not easy on resources, and neither are VMs or containers. Having it all on a single machine is basically overloading the hardware: you either hurt the VMs, or the storage, or both. The only viable use case is having those VMs accessing blocks stored locally, which can work for stuff like VDI and actually improve things a bit. However, VDI is also not as popular as everyone hoped it would turn out to be, when it was all starting a decade ago.

      posted in Starwind
      dyasny
    • RE: Create my own Stock,Inventory Software

      Have you tried the existing free tools like odoo or partkeepr?

      posted in Developer Discussion
      dyasny