
    Posts

    • When Cloud is Not What You Signed Up For

      The AWS S3 outage last Tuesday confirmed the worst fears of many: bigger is not better. Roughly 150,000 websites and other services were down for about three hours because of internal issues at S3. What we saw yet again was that a massive cloud service like S3 is no more reliable than the private data centers that happily achieve five 9s.

      The real issue here is not that there was an outage. The outage was unfortunately just an inevitability that proves no infrastructure is invulnerable. No, the real issue is the perception that a cloud service like AWS can be made too big to fail. Instead, what we saw was that the bigger they are, the harder they fall.

      Now, I like public cloud services and I use them often. In fact, I used Google Docs to type a draft of this very blog post. However, would I trust my business critical data to public cloud? Probably not. Maybe I am old-fashioned, but I have had enough issues with outages of either internet services or cloud services to make me a believer in investing in private infrastructure.

      The thing about public cloud is that it offers simplicity. Just log in and manage VMs or applications without ever having to worry about a hard drive failure or a power supply going wonky. That simplicity comes at a premium, sold on the idea that you will save money by paying only for what you use, without the over-provisioning you would expect when buying your own gear. That seems like wishful thinking to me, because in my experience managing cloud costs is a tricky business, and it can be a full-time job to make sure you aren’t spending more than you intend.

      But isn’t the cost of managing private infrastructure even higher? You must buy servers, storage, hypervisors, management solutions, and backup/DR, right? Not anymore. Hyperconverged infrastructure (HCI) delivers infrastructure that is pre-integrated and so easy to manage that using it feels the same as using cloud. In fact, just last week I talked about how it really is a private cloud solution.

      What is the benefit of owning your own infrastructure? First, control. You get to control your own fate, with the ability to better plan for and respond to disaster and failure, mitigating risk to your level of satisfaction. No one wants to sit on their hands, waiting, while their cloud provider is supposedly working hard to fix an outage. Second, cost. Costs are more predictable with HCI, and there is less over-provisioning than with traditional virtualization solutions. There are also no ongoing monthly premiums paid to a third party who is supposed to be eliminating the risk of downtime.

      Cloud just isn’t the indestructible castle in the sky that we were meant to believe it was. Nothing is, but with HCI, you get your own castle and you get to rule it the way you see fit. You won’t be stuck waiting to see if all the king’s horses and all the king’s men can put Humpty back together again.

      posted in Scale Legion
    • Scale's Kevin Greenwood Recognized as CRN Channel Chief

      https://www.scalecomputing.com/press_releases/kevin-greenwood-senior-director-global-channels-at-scale-computing-recognized-as-2017-crn-channel-chief/

      Scale Computing, the market leader in hyperconverged solutions for midsized companies, announced today that CRN®, a brand of The Channel Company, has named Kevin Greenwood, Senior Director of Global Channels, to its prestigious list of 2017 Channel Chiefs. The executives on this annual list represent top leaders in the IT channel who excel at driving growth and revenue in their organizations through channel partners.

      Channel Chief honorees are selected by CRN’s editorial staff on the basis of their professional achievements, standing in the industry, dedication to the channel partner community and strategies for driving future growth and innovation. Each of the 2017 Channel Chiefs has demonstrated loyalty and ongoing support for the IT channel by consistently promoting, defending and executing outstanding channel partner programs.

      Greenwood was selected as a Channel Chief for his role in helping Scale double sales of its HC3 virtualization platform over the past year, exclusively through its channel of solution providers. The company continues to find success providing midmarket organizations with game-changing hyperconverged solutions while offering competitive margins and marketing support to its resellers. Greenwood is also part of Scale’s effort to launch a multi-tier channel program that aims to expand the number of partners by 500 percent this year.

      “The executives on our 2017 Channel Chiefs list have distinguished themselves by building strong partner programs, developing and executing effective business strategy and helping to advance the channel as a whole,” said Bob Faletra, CEO of The Channel Company. “They represent an extraordinary group of individuals who lead by example and serve as both invaluable advocates and innovators of the IT channel. We applaud their achievements and look forward to their successes in the coming year.”

      “We couldn’t be happier for Kevin and his recognition among this year’s Channel Chiefs,” said Jeff Ready, CEO and co-founder of Scale Computing. “Kevin represents our company extremely well with his hard work and dedication. Since joining the company in 2009, Kevin has helped ensure Scale’s success through building an effective sales channel and we look forward to his continued contributions in helping expand those ranks – and the reach of our solutions – with the launch of our new multi-tiered partner program.”

      The 2017 CRN Channel Chiefs list is featured in the February 2017 issue of CRN and online at www.crn.com/channelchiefs.

      posted in Scale Legion scale kevin greenwood crn
    • Partner Recruitment A Big Deal for Scale Computing in Canada

      Scale Computing is making a big push into the Canadian market. Canadian VARs and MSPs might be interested in learning more about what we are doing up north.

      The restructured Scale program accommodates managed service providers for the first time, and also makes provision for referral partners.

      [Photo: Jason Collier]
      Indianapolis-based hyper-converged vendor Scale Computing has announced a major restructuring of its channel program. While the program, which had not been tweaked since 2013, was ripe for updating, a major goal of the changes is to significantly increase the number of partners Scale works with. That number is approximately 250 in North America today; Scale would like to get it to close to 1,000.

      “It was time for our partner program to grow up,” said Jason Collier, Scale Computing’s co-founder. “We sell entirely through channel partners, and have had success with the program, but we needed to make it much more robust than what it was.”

      When Scale launched its flagship HC3 hyper-converged offering at the very beginning of 2013, a new partner program launched alongside it. No significant changes had been made since then, and the evolution of the partner community over the last four years made it imperative that the program be updated to reflect those changes.

      “We have made accommodation in the new program for both managed service providers and referral partners, neither of which were accommodated in the old program,” said Kevin Greenwood, Senior Director of Global Channels at Scale. “We have also added an extra tier for our most productive partners to provide them with more financial rewards.”

      The 2013 program was a two-tier program, on paper.

      “Our first program was flat, with just one tier,” Collier said. “Later on, in 2013, we added a second tier, but there wasn’t much of a distinction between the two.”

      The new program has three tiers – the archetypal Silver, Gold and Platinum – and the distinctions are designed to be meaningful.

      “The differentiation is how much investment is put into the relationship,” Greenwood said.

      The Silver tier is the entry level, and is pretty much open to anyone willing to go through Scale’s training. The formalization of the training and certification is a new and important part of the program, and is a critical part of the partner investment. Even the Silver tier requires training, and the required number of people trained goes up at Gold and Platinum.

      The exception to the training requirement is Referral Partners.

      “We believe these partners, who are typically technology consulting organizations, are a part of our channel,” Greenwood said. “Before, we did not have a model for them to participate, and now we do.”

      The top Platinum tier is invitation only.

      “This is significantly less than 10 per cent of our total partner base, and 10 per cent would be the logical maximum in a fully built-out program,” Greenwood said.

      The way the program works has changed significantly as well.

      “In the past, the program was not heavily managed,” Greenwood said. “We are now actively focusing more on activities that will generate the right opportunities for both Scale and partners, including more focus on marketing, lead generation and professional services.”

      [Photo: Kevin Greenwood]
      “Professional services are a huge value-add that the channel delivers,” Greenwood indicated. “Partners are already selling a lot of those services. We want the program to bring us into alignment with what partners are already doing in services around installation, migration, disaster services, and data centre optimization.”

      “For example, we are looking to franchise out our DR services to our MSP partners,” Collier said.

      The ultimate goal of the new program is to significantly increase the number of strong partners doing business with Scale.

      “We currently have about 250 partners in North America, of which around 40 to 50 per cent are active, and 20 per cent are highly active,” Collier said. “In Canada, we are crossing over the 50 mark in partners. While we have a sales and technical team in Canada, we don’t have nearly the name recognition in Canada that we do in the U.S., so lead generation activity is proportionately even more important in Canada. We also just hired a channel development manager for Canada to help enable channel partners.”

      Collier said that they are looking to get to close to 1000 partners in North America.

      “We’ve always been looking to add new partners, but this program is heavily focused on the recruitment of new partners, and we are actively seeking them,” Collier said. “We certainly see that the channel is changing, and managed services in particular are becoming critical. Part of this launch is accommodating the realities of today’s channel market.”

      Collier described Scale as a ‘VMware killer’ that doesn’t use VMware at all, something that is very much reflected in its partner base.

      “It very much impacts channel recruitment,” he said. “We are not looking to recruit those national partners heavily entrenched with the tier one vendors.”

      posted in Scale Legion scale scale hc3 msp var reseller partner
    • Is Hyperconvergence the Private Cloud You Need?

      If you are an IT professional, you are most likely familiar with at least the term “hyperconvergence” or “hyperconverged infrastructure”. You are also undoubtedly aware of cloud technology and some of the options for public, private, and hybrid cloud. Still, this discussion merits a brief review of private cloud before delving into how hyperconvergence fits into the picture.

      What is a Private Cloud?

      The basic premise behind cloud technology is an abstraction of the management of VMs from the underlying hardware infrastructure. In a public cloud, the infrastructure is owned and hosted by someone else, making it completely transparent to you as the user. In a private cloud, you own the infrastructure and still need to manage it, but the cloud management layer simplifies day-to-day operation of VMs compared to traditional virtualization.

      Traditional virtualization is complicated by managing hypervisors running on individual virtual hosts and managing storage across hosts. When managing a single virtual host, VM creation and management is fairly simple. In a private cloud, you still have that underlying infrastructure of multiple hosts, hypervisors, and storage, but the cloud layer provides the same simple management experience of a single host but spread across the whole data center infrastructure.

      Many organizations who are thinking of implementing private cloud are also thinking of implementing public cloud, creating a hybrid cloud consisting of both public and privately hosted resources. Public cloud offers added benefits for pay-per-use elasticity for seasonal business demands and cloud-based applications for productivity.

      Why Not Put Everything in Public Cloud?

      Many organizations have sensitive data that they prefer to keep onsite or are required to do so by regulation. Maintaining data onsite can provide greater control and security than keeping it in the hands of a third party. For these organizations, private cloud is preferable to public cloud.

      Some organizations require continuous data access for business operations and prefer not to risk interruption due to internet connectivity issues. Maintaining systems and data onsite allows these organizations to have more control over their business operations and maintain productivity. For these organizations, private cloud is preferable to public cloud.

      Some organizations prefer the Capex model of private cloud vs. the Opex model of public cloud. When done well, owning and managing infrastructure can be less expensive than paying someone else for hosting. The costs can be more predictable for onsite implementation, making it easier to budget. Private cloud is preferable for these organizations.

      How does Hyperconvergence Fit as a Private Cloud?

      For all intents and purposes, hyperconverged infrastructure (HCI) offers the same experience as a traditional private cloud, or a better one. You could even go so far as to say it is the next generation of private cloud, because it improves on some of the shortcomings of traditional private clouds. The simplicity of managing VMs in HCI is the same as in traditional private clouds, and HCI brings an even simpler approach to managing the underlying hardware.

      HCI is a way of combining the elements of traditional virtualization (servers, storage, and hypervisor) into a single appliance-based solution. With traditional virtualization, you were tasked with integrating these elements from multiple vendors into a working infrastructure, dealing with any incompatibilities, managing through multiple consoles, and so on. HCI is a virtualization solution that has all of these elements pre-integrated into a more or less turnkey appliance. There should be no need to configure any storage, configure any hypervisor installs on host servers, or manage through more than a single interface.

      Not all HCI vendors are equal; some rely on third-party hypervisors, so there are still elements of multi-vendor management. True HCI solutions, however, own the whole hardware and virtualization stack, providing the same experience as a private cloud. Users are able to focus on creating and managing VMs rather than worrying about the underlying infrastructure.

      With the appliance-based approach, hyperconvergence is even easier to scale out than traditional private clouds or even the cloud-in-a-box solutions that also provide some level of pre-integration. HCI scalability should be as easy as plugging a new appliance node into the network and telling it to join an existing HCI cluster of appliance nodes.

      HCI is generally more accessible and affordable than traditional private clouds or cloud-in-a-box solutions because it can start and then scale out from very small implementations without any added complexity. Small to midmarket organizations who experienced sticker shock at the acquisition and implementation costs of private clouds will likely find the costs and cost benefits of HCI much more appealing.

      Summary

      Private cloud is a great idea for any organization whose goals include the control and security of onsite infrastructure and simplicity of day-to-day VM management. These organizations should be looking to hyperconverged infrastructure as a private cloud option to achieve those goals rather than traditional private cloud or cloud-in-a-box options.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged cloud computing iaas private cloud
    • Love Your Scale? Drop Us a Review!

      Gartner (I know, they don’t get much love around here) has a new online review system and we’d love it if some of our Mango Fans (we can’t safely abbreviate that) would post some reviews of your experiences with the Scale HC3!

      Of course, we'd love to see reviews on MangoLassi as well: there is a reviews section, you know.

      posted in Scale Legion scale scale hc3 gartner review
    • What Is New in the Scale HC3 PDF

      To coincide with our webinar that will start in a few minutes, we have a PDF that highlights the latest “What’s New in the HC3.”

      Newest HC3 Models
      Over the past year we have introduced a number of new models to our HC3 family. First, we introduced the HC2150 and HC4150 which were the first to include flash storage. Then we retooled our HC1000 family with the HC1100 and HC1150 models to bring you HC3 and flash storage at an even more affordable price. Then we increased maximum storage capacity on nearly all of our models with new 8TB SAS drives. Most recently, we announced the new HC1150D which includes dual processors and increased capacity over the HC1150.

      Disaster Recovery Planning Service
      If you want a comprehensive disaster recovery plan for your HC3 system, this new service is for you. Including planning, identifying dependencies, prioritizing workloads, and DR testing, this service will make sure you are meeting or exceeding your SLAs for recovery and minimizing data loss. It never hurts to have expert help when planning for disaster.

      Premium Installation Service
      HC3 is known for being extremely easy to install and use, but for those who want to hit the ground running with HC3, we now offer a Premium Installation Service facilitated by our ScaleCare support engineers. This service includes planning, prerequisites, priority scheduling, and deep-dive technical training. With this training, you will be an HC3 expert by the time installation is complete.

      Single Node Appliance Configuration
      HC3 used to require a 3-node cluster as the minimum configuration but not anymore. Now, a single appliance can be deployed all by itself. While a cluster is preferable for high availability and primary production, a single appliance can be more than sufficient for disaster recovery, remote or branch offices, or even for the small “s” of the SMB.

      Bulk Actions
      You can now perform a number of actions against groups of VMs in the HC3 web interface. The action will be performed against all VMs displayed, so you can use the tagging and filtering capabilities to create groups that you commonly need to perform the same action against. The available actions include clone, snapshot, delete, and power options (power on, shutdown, etc).
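
      Conceptually, a bulk action is just the filter-then-apply pattern you might script yourself. Here is a minimal sketch of that idea (the VM records, tags, and print-based actions are hypothetical illustrations, not the actual HC3 interface or API):

      ```python
      # Conceptual sketch of the filter-then-apply pattern behind bulk actions.
      # The VM records, tag names, and print-based "actions" are hypothetical
      # examples for illustration, not the actual HC3 interface or API.

      vms = [
          {"name": "web01", "tags": {"web", "prod"}},
          {"name": "web02", "tags": {"web", "prod"}},
          {"name": "dev-sql", "tags": {"sql", "dev"}},
      ]

      def filter_by_tag(vms, tag):
          """Return only the VMs carrying the given tag."""
          return [vm for vm in vms if tag in vm["tags"]]

      def bulk_action(group, action):
          """Apply one action (snapshot, clone, shutdown, ...) to every VM in the group."""
          for vm in group:
              print(f"{action} -> {vm['name']}")

      # Snapshot every VM tagged 'prod' in a single operation.
      bulk_action(filter_by_tag(vms, "prod"), "snapshot")
      ```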

      SSD Retrofit Service
      You can now retrofit your HC2000/2100 and HC4000/4100 nodes with SSD storage to take advantage of faster I/O and HyperCore Enhanced Automated Tiering (HEAT) technology. Our ScaleCare Support Team will assist you in replacing spinning HDD drives with new SSD drives to scale up your nodes and turn the I/O up to 11.

      What’s New in HC3

      Workspot VDI 2.0 Integration
      We are pleased to announce that we have teamed up with Workspot to offer a validated VDI solution with integration for HC3. Both solutions focus on removing complexity from infrastructure, and with Workspot, VDI can be deployed on HC3 in as little as 60 minutes, with as many as 175 desktops on an HC1150 cluster.

      HyperCore Enhanced Automated Tiering (HEAT)
      In order to make the most efficient use of the new flash storage within HyperCore, we designed an automated, intelligent mechanism to allocate storage across the tiers. Not only do we begin allocating to flash by default for each virtual disk, but we also let you tune the relative priority with a simple, intuitive slider bar. This tuning capability allows you to increase the allocation of flash for workloads that have higher I/O requirements while decreasing the allocation for virtual disks and workloads that have minimal I/O needs.
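
      One way to think about the slider is as a relative weight: each virtual disk’s share of the flash tier scales with its priority. A toy sketch of that weighting idea follows; the names, numbers, and proportional-share formula are our own illustrative assumptions, not the actual HEAT heat-mapping logic:

      ```python
      # Toy illustration of priority-weighted flash allocation.
      # The disk names, weights, and proportional-share formula are illustrative
      # assumptions only, not the actual HEAT heat-mapping logic.

      flash_capacity_gb = 1000  # hypothetical flash tier size

      # virtual disk -> slider priority (higher = more flash-worthy)
      priorities = {"sql-data": 8, "file-server": 2, "archive": 0}

      total = sum(priorities.values())
      for vdisk, weight in priorities.items():
          share_gb = flash_capacity_gb * weight / total if total else 0
          print(f"{vdisk}: ~{share_gb:.0f} GB of the flash tier")
      ```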

      ScaleCare Remote Recovery Service
      We are pleased to announce a new service that allows (for a fee) VMs from an HC3 cluster to be replicated and failed over to a remote, secure datacenter for disaster recovery. For those customers without a second site or second cluster who need reliable DR for their critical workloads, this service addresses that need on a per-VM basis. It starts as low as $100 per VM per month.

      Per VM Real-Time Statistics
      Not only have we added a new IOPS statistic to the primary web interface dashboard for your cluster, but we have added per VM storage statistics for capacity utilization and IOPS that are updated in real-time. You’ve always been able to see the storage capacity usage for the entire cluster, but now you can see that statistic for each individual VM right on the top layer interface view. The new IOPS statistic will be useful not only for new and existing users on our traditional cluster nodes, but even more so for users with flash storage, helping them tune virtual disks for maximum IOPS.

      posted in Scale Legion scale scale hc3 whitepaper
    • RE: Scale Webinar: What's New in HC3

      Twenty minutes, see y'all there!

      posted in Scale Legion
    • 5 things to think about with Hyperconverged Infrastructure

      1. Simplicity

      A hyperconverged infrastructure (or HCI) should take no more than 30 minutes to go from out of the box to creating VMs. Likewise, an HCI should not require that the systems admin be a VCP, a CCNE, and a SNIA-certified storage administrator to effectively manage it. Any properly designed HCI should be able to be administered by an average Windows admin with nearly no additional training. It should be so easy that even a four-year-old should be able to use it…

      2. VSA vs. HES

      In many cases, rather than handing disk subsystems with SAN flexibility built in at the block level directly to production VMs, HCI vendors choose to simply virtualize a SAN controller into each node in their architecture, pulling the legacy SAN and storage protocols up into the servers as a separate VM. This creates several I/O path loops, with I/Os having to pass multiple times through VMs in the system and in adjacent systems. The approach of using Storage Controller VMs (sometimes called VSAs, or Virtual Storage Appliances) consumes so much CPU and RAM that it redefines inefficient, especially in the mid-market. In one case I can think of, the VSA running on each server (or node) in a vendor’s architecture BEGINS its RAM consumption at 16GB and 8 vCores per node, then grows from there based on how much additional feature implementation, I/O loading, and maintenance it is doing. With a different vendor, the VSA reserves around 50GB of RAM per node on their entry point offering, and over 100GB of RAM per node on their most common platform, meaning a 3-node cluster reserves over 300 GB of RAM just for I/O path overhead. An average SMB to mid-market customer could run their entire operation in just the CPU and RAM resources these VSAs consume.
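
      To put those reservation figures in perspective, the math is simple: a per-node reservation multiplied across the cluster. A quick sketch of that arithmetic (the per-node values are the ones cited above; the function is just our own illustration):

      ```python
      # The arithmetic behind the figures above: a per-node VSA reservation
      # multiplied across the cluster. Per-node values are the ones cited in
      # the examples above; the 3-node cluster is the case mentioned there.

      def cluster_vsa_ram_gb(nodes, ram_gb_per_node_vsa):
          """RAM a cluster gives up just to run its storage controller VMs."""
          return nodes * ram_gb_per_node_vsa

      print(cluster_vsa_ram_gb(3, 16))   # 48 GB at the 16GB-per-node starting point
      print(cluster_vsa_ram_gb(3, 100))  # 300 GB before a single application VM is powered on
      ```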

      There is a better alternative, called the HES approach. It eliminates the dedicated servers, storage protocol overhead, resource consumption, multi-layer object files, filesystem nesting, and associated gear by moving the hypervisor directly into the OS of a clustered platform as a set of kernel modules, with the block-level storage function residing alongside the kernel in userspace. This completely eliminates the SAN and the storage protocols, rather than just virtualizing them and replicating copies of them over and over on each node in the platform. The approach simplifies the architecture dramatically while regaining the efficiency originally promised by virtualization.

      3. Stack Owners versus Stack Dependents

      Any proper HCI should not be stack dependent on another company for its code. To be efficient, self-aware, self-healing, and self-load-balancing, the architecture needs to be holistically implemented rather than piecemealed together using different bits from different vendors. By being a stack owner, an HCI vendor is able to do things that weren’t feasible or realistic with legacy virtualization approaches: things like hot and rolling firmware updates at every level, 100% tested rates on firmware vs. customer configurations, and 100% backwards and forwards compatibility between different hardware platforms. That list goes on for quite a while.

      4. Using flash properly instead of as a buffer

      Several HCI vendors use SSD and flash only (or almost only) as a cache buffer to hide the very slow I/O paths they have chosen to build on VSAs and erasure coding (formerly known as software RAID 5/6/X) between virtual machines and their underlying disks. The result is a Rube Goldberg machine of an I/O path, one that consumes 4 to 10 disk I/Os or more for every I/O the VM needs done. The better approach is to use flash and SSD as proper tiers, with AI-based heat mapping and a QoS-like mechanism in place to automatically put the right workloads in the right place at the right time, with the flexibility to move those workloads fluidly between tiers and to dynamically allocate flash on the fly to workloads that demand it (up to putting the entire workload in flash). Any architecture that REQUIRES the use of flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in I/O speeds best described as glacial, then the vendor is hardly being efficient in their use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
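
      That amplification figure is worth quantifying, because every guest I/O that fans out into several backend I/Os shrinks the IOPS the VMs actually see by the same factor. A rough sketch (the backend IOPS number is a made-up example; the 4x and 10x factors are the range quoted above):

      ```python
      # Effect of I/O amplification on the IOPS a VM actually sees.
      # The raw backend figure is a hypothetical example; the 4x and 10x
      # factors are the amplification range described above.

      raw_backend_iops = 100_000  # hypothetical aggregate capability of the disks

      for amplification in (1, 4, 10):
          usable = raw_backend_iops / amplification
          print(f"{amplification}x amplification -> ~{usable:,.0f} IOPS left for the VMs")
      ```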

      5. Future proofing against the “refresh everything every 5 years” spiral

      Proper HCI implements self-aware, bi-directional live migration across dissimilar hardware. This means the administrator is not boat-anchored to the technology of a single point in time of acquisition; rather, they can avoid over-buying on the front end and take full advantage of Moore’s law and technical advances as they arrive and as the need arises. As lower-latency, higher-performance technology comes to the masses, attaching it to an efficient software stack is crucial to eliminating the “throw away and start over” refresh cycle every few years.

      Bonus number 6: Price

      Hyperconvergence shouldn’t come at a 1600+% price premium over the cost of the hardware it runs on. Hyperconvergence should be affordable: more affordable than the legacy approach was, and far more affordable than the VSA-based approach is.

      These are just a few points to keep in mind as you investigate which hyperconverged platform is right for your needs.

      This week’s blog is brought to you by @Aconboy

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence hyperconverged
    • Behind the Scenes: Architecting HC3

      Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

      The Whole Enchilada

      With HC3, we own the entire software stack which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means we have no technology barriers based on access to other vendor technologies to develop the solution. This allows us to build the storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

      Storage

      At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
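
      As a mental model only (this illustrates RAID 10-style wide striping in general, not SCRIBE’s actual implementation), you can picture each block being striped round-robin across the cluster-wide pool and mirrored onto a disk of a different node; the different-node placement rule is an assumption we make just for the sketch:

      ```python
      # Conceptual model of RAID 10-style wide striping across a cluster-wide pool.
      # This illustrates the general technique only, not SCRIBE's implementation;
      # the "mirror on a different node" placement rule is an assumption of the sketch.

      disks = [(node, disk) for node in range(3) for disk in range(4)]  # 3 nodes x 4 disks each

      def place_block(block_id, disks):
          """Stripe blocks round-robin, mirroring each onto a disk of another node."""
          n = len(disks)
          primary = disks[block_id % n]
          for offset in range(1, n):
              candidate = disks[(block_id + offset) % n]
              if candidate[0] != primary[0]:  # keep the mirror copy on a different node
                  return primary, candidate
          raise RuntimeError("mirroring needs at least two nodes")

      for block in range(6):
          print(f"block {block}: copies on disks {place_block(block, disks)}")
      ```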

      The ease of use of HC3 storage is not even the best part. What is really worth talking about is how the virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct-attached storage consumed on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, nor any other protocol or file system. There is also no need in SCRIBE for any virtual storage appliance (VSA) VMs, which are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of these storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

      Without owning both the storage and the hypervisor, and without building our own SCRIBE storage management system, no existing storage layer would have allowed us to achieve this level of efficient integration with the hypervisor.

      Hypervisor

      Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

      One of the ways we were able to improve upon existing standard virtualization features was through our thin cloning capability. We took the advantages of linked cloning, a common feature in other hypervisors, and eliminated the disadvantages of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to dependency issues with parent VMs.

      Ownership of the hypervisor allows us to continue to develop new, more advanced virtualization features, as well as giving us complete control over management and security of the solution. One of the ways hypervisor ownership has most benefited our HC3 customers is our ability to build in backup and disaster recovery features.

      Backup/DR

      Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

      Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
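
      Since replication ships the most recent snapshot, the worst-case RPO is roughly the snapshot interval plus however long that snapshot takes to land at the remote site. A small sketch of that relationship (the transfer times are placeholder assumptions; the 5-minute minimum interval is the one mentioned above):

      ```python
      # Rough worst-case RPO for an interval-based snapshot/replication schedule.
      # The formula and the transfer-time figures are illustrative assumptions;
      # the 5-minute minimum interval is the one described above.

      def worst_case_rpo_minutes(snapshot_interval_min, replication_transfer_min):
          """Data at risk = time since the last snapshot + time for it to land remotely."""
          return snapshot_interval_min + replication_transfer_min

      print(worst_case_rpo_minutes(5, 2))    # ~7 minutes of potential data loss for a critical VM
      print(worst_case_rpo_minutes(60, 10))  # an hourly schedule for a less critical workload
      ```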

      Management

      By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

      The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.

      Summary

      Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

      The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged
    • Scale Webinar: What's New in HC3

      2016 was an exciting year for hyperconverged infrastructure and an amazing year for Scale Computing. We’re working hard to continue making HC3 the most advanced IT infrastructure solution on the market. Join us on Thursday, February 16th at 2:00 PM (EST) to discuss what’s new including:

      • New Releases
      • Recent Announcements
      • Our Newest Features
      • New Services
      • A Preview of What is Coming Soon

      Not able to make the webinar? Register anyway to receive a link to the recording delivered to your inbox following the event!

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged webinar
    • The Hyperconverged Tipping Point with Scale Computing

      Have all your wildest dreams come true? Have you found the meaning of life? Has a wave of serenity that puts you in tune with the eternal rhythm of the universe overwhelmed you, making all previous concerns look petty by comparison?

      No? Then you must have never used hyperconverged infrastructure before!

      That may be an exaggeration, but having been the new hotness in the enterprise for a while now (so I guess it’s really the old hotness), HCI certainly seems to make a lot of lofty claims. In many ways, HCI is a response to the supposed simplicity of the cloud. For organizations that are not ready to move to the cloud, or for which the cloud is impractical, HCI is a great way to simplify provisioning and managing physical infrastructure...

      Youtube Video

      posted in Scale Legion hyperconvergence virtualization
    • Groundhog Day

      Today is Groundhog Day, a holiday celebrated in the United States and Canada where the length of the remaining winter season is predicted by a rodent. According to folklore, if it is cloudy when a groundhog emerges from its burrow on this day, then the spring season will arrive early, some time before the vernal equinox; if it is sunny, the groundhog will supposedly see its shadow and retreat back into its den, and winter weather will persist for six more weeks. (Wikipedia)

      Today the groundhog, Punxsutawney Phil, saw his shadow. Thanks, Phil.

      [Image: Bill Murray in Groundhog Day]

      Groundhog Day is also the name of a well-loved film starring Bill Murray (seen above) where his character, Phil, is trapped in some kind of temporal loop repeating the same day over and over. I won’t give the rest away for anyone who has not seen the movie, but it got me thinking. What kind of day would you rather have to live over and over as an IT professional? I’m guessing it does not include the following:

      • Manually performing firmware and software updates to your storage system, server hardware, hypervisor, HA/DR solution, or management tools.

      • Finding out one of your solution vendor updates broke a different vendor’s solution.

      • Having to deal with multiple vendor support departments to troubleshoot an issue none of them will claim responsibility for.

      • Dealing with downtime caused by a hardware failure.

      • Having to recover a server workload from tape or cloud backup.

      • Having to deal with VMware licensing renewals.

      • Thanklessly working all night to fix an issue and only receiving complaints about more downtime.

      These are all days none of us want to live through even once, right? But of course, many IT professionals do find themselves reliving these days over and over again because they are still using the same old traditional IT infrastructure architecture that combines a number of different solutions into a fragile and complex mess.

      At Scale Computing we are trying to break some of these old cycles with simplicity, scalability, and affordability. We believe, and our customers believe, that infrastructure should be less of a management and maintenance burden in IT. I encourage you to see for yourself how our HC3 virtualization platform has transformed IT with both video and written case studies here.

      We may be in for six more weeks of winter but we don’t need to keep repeating some of the same awful days we’ve lived before as IT professionals. Happy Groundhog Day!

      posted in Scale Legion
    • 5 Things You Might Not Know About HCI

      Hyperconverged Infrastructure (HCI) is still an emerging technology and there are a variety of approaches vendors are taking. For many IT professionals, there is still an air of mystery and misconception around HCI. Below are 5 things you might not know about the current state of HCI.

      The Meaning of Hyper

      The “hyper” in hyperconverged means hypervisor. The term hyperconvergence was intended to refer to solutions that included a virtualization hypervisor in addition to the combination of server and storage, often referred to as converged infrastructure. Well, since hyperconverged sounds so much cooler than converged, every converged infrastructure vendor, whether they included their own hypervisor or not, started adopting hyperconverged to refer to their solution. Many of these solutions still rely on third party hypervisors and don’t really meet the hypervisor criteria for hyperconverged infrastructure.

      Is it More Efficient?

      Is HCI more efficient than traditional infrastructure? It depends on the vendor. For example, some HCI vendors are still using the same inefficient virtual storage appliance (VSA) models that became popular in adapting traditional SANs to virtualization. These VSAs are notorious resource hogs often consuming large amounts of RAM and CPU that would otherwise be available for application VMs. Other vendors have brought real innovation to HCI to build new storage architecture that is designed specifically to deliver storage efficiently to the hypervisor and make the most efficient use of the hardware.

      Improved Data Protection?

      While there haven’t yet been any studies specifically on HCI vendor solutions for data protection, a study by EMC (view here) found that the more vendors involved in data protection, the more data loss. Many HCI solutions include comprehensive backup, replication, and disaster recovery tools to protect data, making them a single vendor for data protection. The HCI architecture lends itself to better data protection by virtue of converging so many solution components into one.

      Is HCI Less or More Expensive than Traditional Infrastructure?

      The cost of acquisition can be high with HCI and the price varies from vendor to vendor, but there is a premium added to the traditional hardware cost for the software components and convenience of the various solutions being converged. Even if the price tag is higher than a traditional infrastructure, looking at operational expenses reveals the true savings. First, it saves enormously on implementation and scaling costs, since most of the traditional integration has already been done within the architecture. Then, depending on the vendor solution to varying degrees, management and maintenance costs are reduced over the lifecycle of the solution. The ROI of an HCI solution will usually be dramatically higher than a traditional server/SAN/hypervisor solution.
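
      One way to ground that comparison is to total acquisition, implementation, and yearly operating costs over the solution’s lifecycle for each option. A minimal skeleton of that structure follows; every figure in it is a made-up placeholder, and your own quotes would go in their place:

      ```python
      # Skeleton for a lifecycle cost comparison. Every figure below is a made-up
      # placeholder to be replaced with your own quotes; only the structure
      # (acquisition + implementation + operations over the lifecycle) is the point.

      def lifecycle_cost(acquisition, implementation, ops_per_year, years=5):
          """Total cost of ownership over the planning horizon."""
          return acquisition + implementation + ops_per_year * years

      # Illustrative shape of the comparison: a higher price tag can still mean
      # a lower total once implementation and ongoing management are counted.
      traditional = lifecycle_cost(acquisition=100, implementation=40, ops_per_year=30)
      hci = lifecycle_cost(acquisition=120, implementation=10, ops_per_year=15)
      print("traditional:", traditional, " hci:", hci)
      ```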

      You have Flexibility to Scale Capacity

      This does vary by vendor, but some vendors provide the ability to customize each appliance in an HCI cluster. With these vendors, when you add a new node, you can build it for the resources you need. The most common need is to increase cluster storage capacity, so you can bring in a new node with high storage capacity but with lower RAM and CPU to fulfill that need. With vendors that support this flexibility, you can customize resource capacity each time you add a new cluster node.

      Summary

      HCI is a solution that should be looked at more closely for the value proposition it represents as a replacement for traditional IT infrastructure. Mystery and misconception can always be eliminated with research and dialogue with the vendors that are pushing forward HCI as a real, next-generation infrastructure.

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence virtualization
    • Vanee Foods - Scale HC3 Video Case Study

      https://youtu.be/MPBDTwk1Dlg
      “With Scale, you’re an immediate product matter expert.”

      Blake Vanee, Director of Business Process, Vanee Foods, Chicago

      posted in Scale Legion youtube scale scale hc3 case study hyperconvergence
    • How Important is DR Planning?

      Disaster Recovery (DR) is a crucial part of IT architecture but it is often misunderstood, clumsily deployed, and then neglected. It is often unclear whether the implemented DR tools and plan will actually meet SLAs when needed. Unfortunately it often isn’t until a disaster has occurred that an organization realizes that their DR strategy has failed them. Even when organizations are able to successfully muddle through a disaster event, they often discover they never planned for failback to their primary datacenter environment.

      Proper planning can ensure success and eliminate uncertainty, beginning before implementation and continuing through testing and validation of the DR strategy, all the way through disaster events. Planning DR involves much more than just identifying workloads to protect and defining backup schedules. A good DR strategy includes tasks such as capacity planning, identifying workload dependencies, defining workload protection methodology and prioritization, defining recovery runbooks, planning user connectivity, defining testing methodologies and testing schedules, and defining a failback plan.

      At Scale Computing, we take DR seriously and build in DR capabilities such as backup, replication, failover, and failback to our HC3 hyperconverged infrastructure. In addition to providing the tools you need in our solution, we also offer our DR Planning Service to help you be completely successful in planning, implementing, and maintaining your DR strategy.

      Our DR Planning Service, performed by our expert ScaleCare support engineers, provides a complete disaster recovery run-book as an end-to-end DR plan for your business needs. Whether you have already decided to implement DR at your own DR site or to utilize our ScaleCare Remote Recovery Service in our hosted datacenter, our engineers can help you with all aspects of the DR strategy.

      The service also includes the following components:

      • Setup and configuration of clusters for replication
      • Completion of Disaster Recovery Run-Book (disaster recovery plan)
      • Best-practice review
      • Failover and failback demonstration
      • Assistance in facilitating a DR test

      Youtube Video

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence disaster recovery youtube
    • HC3 VM File Level Recovery with Video

      Many of you have asked us recently about individual file recovery with HC3 and we’ve put together some great resources on how it works. We realize file recovery is an important part of IT operations. It is often referred to as operational recovery instead of disaster recovery, because the loss of a single file is not necessarily a disaster. It is an important part of IT and an important function we are able to highlight with HC3.

      First off, we have a great video demo by our Pontiff of Product Management, Craig Theriac. @craig-theriac

      Youtube Video

      Additionally, we have a comprehensive guide for performing file level recovery on HC3 from our expert ScaleCare support team. This document, titled “Windows Recovery ISO”, explains every detail of the process from beginning to end. To summarize briefly, the process involves using a recovery ISO to recover files from a VM clone taken from a known good snapshot. As you can see in the video above, the process can be done very quickly, in just a matter of minutes.

      Full Document

      Full disclosure: We know you’d prefer to have a more integrated process that is built into HC3, and we will certainly be working to improve this functionality with that in mind. Still, I think our team has done a great job providing these new resources and I think you’ll find them very helpful in using HC3 to its fullest capacity. Happy Scaling!

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence youtube
    • New! – Premium Installation Service

      2017 is here. We want to help you start your new year and your new HC3 system with our new ScaleCare Premium Installation service. You’ve probably already heard about how easy HC3 is to install and manage, and you might be asking why you would even need this service. The truth is that you want your install to go seamlessly and to have full working knowledge of your HC3 system right out of the gate, and that is what this service is all about.

      First, this premium installation service assists you with every aspect of installation starting with planning, prerequisites, virtual and physical networking configuration, and priority scheduling. You get help even before you unbox your HC3 system to prepare for a worry-free install. The priority scheduling helps you plan your install around your own schedule, which we know can be both busy and complex.

      Secondly, ScaleCare Premium Installation includes remote installation with a ScaleCare Technical Support Engineer. This remote install includes a UI overview and setup assistance and if applicable, a walkthrough of HC3 Move software for workload migrations to HC3 of any physical or virtual servers. Remote installation means a ScaleCare engineer is with you every step of the way as you install and configure your HC3 system.

      Finally, ScaleCare Premium Installation includes deep-dive training on everything HC3 with a dedicated ScaleCare Technical Support Engineer. This training, which normally takes around 4 hours to complete, will make you an HC3 expert on everything from virtualization, networking, and backup/DR to our patented SCRIBE storage system. You’ll basically have a PhD in HC3 by the time the install is done.

      Here is the list of everything included:

      • Requirements and Planning Pre-Installation Call
      • Virtual and Physical Networking Planning and Deployment Assistance
      • Priority Scheduling for Installations
      • Remote Installation with a ScaleCare Technical Support Engineer
      • UI Overview and Setup Assistance
      • Walkthrough of HC3 Move software for migrations to HC3 of a Windows physical or virtual server
      • Training with a dedicated ScaleCare Technical Support Engineer
        • HC3 and Scribe Overview
        • HC3 Configuration Deep Dive
        • Virtualization Best Practices
        • Networking Best Practices
        • Backup / DR Best Practices

      Yes, it is still just as easy to use and simple to deploy as ever, but giving yourself a head start in mastering this technology seems like a no-brainer.

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence
    • RE: Scale Computing CEO On Attacking VMware's Virtualization Licensing Model

      CRNTV has a segment with @jeffready as well:

      http://www.crn.com/crntv/index1.htm?searchVideoContent=5303250057001

      Sorry that we can't embed it to watch here.

      posted in Scale Legion
    • Scale Computing CEO On Attacking VMware's Virtualization Licensing Model

      @JeffReady

      CRN Exclusive: Scale Computing CEO on Attacking VMware's Virtualization Licensing Model and Saving Customers $32M

      [Photo: Jeff Ready]

      The Red-Hot Hyper-Converged Market

      Jeff Ready, the CEO of Scale Computing, the hyper-converged virtualization appliance maker that just launched its first broad-based, multi-tiered channel program, spoke with CRN about the company's relentless drive to get customers to dump VMware, the astronomical savings in VMware licensing fees that Scale has delivered to customers, and the stark differences between Scale and competitors SimpliVity and Nutanix.

      Ready, a serial entrepreneur, has over the last two decades started a number of companies that reduce the complexity and cost of computing including Corvigo, an anti-spam filtering company, and Scale, which was built from the ground up as a hyper-converged, virtualization game-changer for small and midsize customers.

      Under Ready's leadership, Scale sales more than doubled over the last year with more than 1,500 customers now running the company's HC3 appliances which integrate storage, server and virtualization under its patented HyperCore Software platform.

      Read more on CRN!

      posted in Scale Legion scale scale hc3 jeff ready
    • RE: IOT Security Challenge

      @RojoLoco said in IOT Security Challenge:

          @scale said in IOT Security Challenge:

              @BRRABill said in IOT Security Challenge:

                  I already have my submission ready.

                  It's a pretty box that you can throw all that insecure crap in, and then set on your curb.

                  Now, what will I do with my $25K?

              That's exactly enough for a small HC3 cluster!

          Or a small new car...

      Sure, if you want to be boring.

      posted in IT Discussion