
    Posts

    • In a Box

      “In a box” has become a marketing phrase used to imply a lot of different elements combined together in a neat, single package. The phrase has been used in many other ways before, often implying a restrictive situation or feeling “boxed in.” Then there is the infamous skit from Saturday Night Live with a more literal meaning. In IT, though, "in a box" has a bit of a history.


      Brief IT History Overview

      Information technology can be extremely complex, even looking at the basic hardware components on which it runs: the infrastructure. Integrating the most basic compute elements such as CPU and RAM with storage, networking, operating systems, hypervisors, and security elements can often require multiple technical experts and days, if not weeks, of work. The costs can be very high, which is great news for the system integrators and service providers often employed to assist with these projects. But does it have to be this way?

      No. And it wasn’t always this way. Mainframes used to rule the IT roost before the rise of the standalone server. The mainframe was the massive, powerful machine that could handle all the computing needs of an organization. It wasn’t perfect, of course, and it ultimately gave way to the greater flexibility of the server. The limits of the server then led to storage area networks, clustering, and more advanced concepts like grid computing.

      The server model eventually required virtualization to alleviate the burdensome cost of maintaining physical hardware for each server. Virtualization helped, but it was only the beginning. The complexity of combining all of the various components was still a huge cost sink for organizations. The cloud looked promising but was still very expensive. The only real answer seemed to be the “in a box” movement, otherwise known as converged infrastructure.

      The Variations of “Converged”

      The concept of converged or “in a box” infrastructure has been around for some time without really catching on, mainly because it failed to deliver on promised value. Here are some of the variations:

      The Pre-Configured Rack

      This is often all of the usual components (mainly servers and storage) plugged into a rack and pre-integrated with updated drivers, operating systems, and hypervisors. It’s really no different from what you might put together yourself; someone has simply assembled it for you and priced it as a package. It saves you the trouble of choosing the individual components separately and wondering whether they are compatible.

      Cloud in a Box

      This is built on the pre-configured rack concept but goes beyond the hypervisor in pre-configuring cloud services in addition to the virtualization layer. This is designed to allow organizations to easily implement a private cloud on-prem. Like the pre-configured rack, this is more or less what you would build yourself from various vendor components, just pre-configured for you.

      Converged Infrastructure

      This is a broadly used term but most often refers to some of the datacenter components being combined into a single appliance. This could be a combination of simple server and software-defined storage (SDS) or perhaps networking as well. What separates these from the pre-configured racks is that they are generally sold as a single vendor appliance rather than a collection of different vendor components. That being said, “converged” solutions are generally designed to be a hardware platform for a virtualization hypervisor from a different vendor.

      Hyperconverged Infrastructure

      Like converged infrastructure, hyperconverged combines various components into a single appliance but also adds the hypervisor. The hypervisor is not a separate vendor component as in converged infrastructure, but is a native component to the single-vendor solution. Hyperconverged provides the most complete single-vendor appliance delivering out-of-the-box virtualization with single-vendor support. Both the converged and hyperconverged appliance-based solutions usually have the added benefit of being easier to scale out as needed.

      HC3 Hyperconverged Infrastructure

      The HC3 solution from Scale Computing is a true hyperconverged infrastructure that is often referred to (sometimes by us) as a “datacenter in a box”. I've often talked about it as a private cloud solution (cloud in a box), satisfying most if not all of the private cloud/hybrid cloud requirements of most organizations. It combines servers, storage, virtualization, and disaster recovery into a single appliance that can be clustered for high availability and easily scaled out. It is the easiest infrastructure solution to deploy and manage, which is why it is consistently rated and awarded as the best solution for the midmarket (where ease-of-use is so highly valued).

      Even with as much of the datacenter as we have fit into the HC3 architecture, we haven’t combined every possible component. We still rely on additional components, such as physical network switches and power supply systems, that are probably best left separate. It is these additional components that complete the datacenter picture, and they are why we have partnered with other technology vendors like APC by Schneider Electric.

      Schneider Electric is offering pre-validated and pre-configured datacenter solutions combining Scale Computing HC3 with APC Smart-UPS. This solution provides both the award-winning ease-of-use of HC3 combined with the award-winning reliability of APC power. You can read more about the partnership between Scale Computing and Schneider Electric in our press release and more about the reference architectures on the Schneider Electric website.

      While a complete “datacenter in a box” solution may or may not become a reality in the future, we believe hyperconverged infrastructure like HC3 is the right next step toward the future of IT. We’ll continue to partner with excellent vendors such as Schneider Electric to keep providing the best datacenter solutions on the market.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged
    • RE: Scale Computing General News

      Scale and Schneider Electric form an Alliance from DataCenterNews Asia

      Scale Computing announced that it has been chosen as an Alliance Partner by APC by Schneider Electric supporting its award-winning hyperconverged Micro Data Centre Xpress solution.

      Designed to simplify physical deployments for Edge environments, the Micro Data Centre Xpress delivers a resilient solution for channel partners, MSPs and end users, as well as meeting the challenges of Big Data and IoT.

      Scale’s portfolio of HC3 hyperconverged solutions combines storage, servers and virtualisation in one system to automate overall management.

      This allows IT to focus on managing applications rather than infrastructure.

      posted in Scale Legion
    • RE: Scale Computing General News

      Scale Teams Up with Unitrends to Drive Channel HC Sales from CRN

      Scale Computing's flagship HC3 hyper-converged platform is getting a shot in the arm with the addition of Unitrends disaster recovery and backup technology for both on-premises and in the cloud.

      "This partnership is really all about the channel and our commitment to showing partners how they can make money," said Jeff Ready, CEO and co-founder of Scale Computing, in an interview with CRN.

      "With our partnership with Google and our Cloudy Unity product, Unitrends does that granular backup and recovery on the cloud basis. If you already have Scale and Cloud Unity, you might be thinking, 'I've got applications running in the cloud on part of my Scale deployment, how do I back those up?' Unitrends is a very good solution for that,'" said Ready. "We're giving partners more tools for their tool chest."

      posted in Scale Legion
    • Understanding Edge Computing


      I blogged about edge computing briefly in November when we announced HC3 Edge, but I wanted to return to that topic to provide more insight into the ins and outs of edge computing.

      Edge computing is a new term for a computing need that has been around for a long time. It encompasses the commonly used remote office/branch office (ROBO) use cases but also includes many other types of remote, on-prem computing needs, including:

      • point-of-sale locations
      • manufacturing facilities
      • vehicles (ships, trains, planes, etc.)
      • medical facilities
      • IoT
      • and many others.

      Basically, edge computing is any computing that takes place outside your datacenter, away from your IT staff. Edge computing could involve only a few remote sites or it could be hundreds or thousands of sites, such as retail locations. Remote sites may be across town or around the world. Regardless of the distance, these sites all share some of the same needs and requirements, including:

      • Easy, rapid deployment

      For any remote sites, but particularly when there are dozens or hundreds of sites, you need a solution that can be deployed easily and rapidly. If it will take days or weeks to deploy at each site, it may not be viable.

      • High availability

      The solution must be resilient when it comes to hardware failures and other types of outages. You’ll want systems that can continue operating, for example, if a drive fails or if internet connectivity is lost. You want technology to enhance your operations, not slow them down or stop them.

      • Disaster recovery

      Your data is important and you need the ability to protect it should disaster strike. Loss of data at a single remote site can come at a very high cost to your business and operations. Make sure your solution has the ability to protect data to a DR site.

      • Ease of use

      It is unlikely that remote sites will have IT staff on-prem, so the easier the systems are to use, the more on-site staff can assist in managing them. When there are dozens or hundreds of sites, every task that can be handled by on-site, non-IT staff makes the systems easier to manage overall.

      • Remote management

      With only non-IT staff on-prem at remote sites, trained IT pros will need to perform some of the management tasks. Being able to do most, if not all, of these tasks remotely is critical, not only because of the cost of traveling to these sites but also to minimize the downtime caused by travel-delayed response times.

      Edge Computing with Micro-Datacenters

      Micro-datacenters are a big part of fulfilling edge computing needs. Not all edge computing use cases are the same, but it is common to need a number of server/application workloads per site. A micro-datacenter should encompass all of the requirements listed above, and it just so happens that hyperconverged infrastructure is a great fit for the micro-datacenter model.

      With simplicity, scalability, and high availability as core concepts, hyperconverged infrastructure meets the edge computing profile, but not all hyperconverged solutions can actually scale down to fit the micro-datacenter model. This is largely because those solutions are designed around enterprise-scale architectures and use resource-heavy storage architectures that become even more inefficient as they scale down. The resource consumption of a virtual storage appliance, for example, can steal too many compute resources from the hypervisor to efficiently run VMs on a smaller system.

      Why Not Cloud Computing?

      Cloud computing is great for many purposes and can be part of an edge computing plan. However, the key factors to think about with edge computing are performance and network connectivity.

      Remote sites will likely not have the same levels of network connectivity as the main office/datacenter. Also, the more widespread the remote sites are, the more likely that connectivity issues will affect sites. If remote sites are dependent on cloud computing to operate, then network outages or cloud outages will kill those operations.

      Some edge computing use cases have very specific performance requirements that are not always compatible with cloud computing performance capabilities. On-prem computing resources can provide more fine-tuned and reliable performance for these edge computing needs.

      An edge computing strategy may well include some cloud computing services but it will most certainly include on-prem compute resources like micro-datacenters.

      HC3 Edge

      Scale Computing announced HC3 Edge in 2017 to provide custom-sized hyperconverged infrastructure systems for micro-datacenter implementations. As one of the lowest-cost and easiest-to-use infrastructure solutions on the market, HC3 has already been deployed in distributed enterprise environments that fall under the edge computing definition. HC3 Edge extends Scale Computing's hyperconverged offering with systems sized specifically for an organization's edge computing use cases.

      The HC3 HyperCore operating system is lightweight, as is the storage architecture, which allows efficient computing performance across a variety of micro-datacenter sizes and configurations. Partnering with hardware providers such as Lenovo, Dell, and Supermicro allows a variety of hardware options that can be right-sized for nearly any use case.

      HC3 Edge may be only one of many options for implementing a micro-datacenter but it excels particularly in rapid deployment, high availability, ease-of-use, scalability, and remote management.

      Summary

      Edge computing is an area of IT infrastructure that is getting a lot more attention as IT continues to grow and encompass every area of business and operations. With IoT on the rise, edge computing will continue to grow as an area of hardware and software solutions built to meet these use cases.

      posted in Scale Legion scale scale hc3 edge computing
    • RE: Scale Computing General News

      Scale Forms Partnership with APC for Micro Data Center in a Box from CRN

      Scale Computing has formed a new partnership with APC by Schneider Electric to create a turnkey hyper-converged solution that executives say will lead to higher margins for channel partners and enable solution providers to scale capacity and computing power at the edge quickly.

      "This is pretty powerful," said Bill Barnier, sales manager at Bloomfield Hills, Mich.- based solution provider Data Partner, who partners with Scale. "They can do power back up, the compute and the storage, all in one box. That's powerful to be able to put all of that in one appliance at the remote edge. That will definitely be something that customers would have interest in, especially those who have a lot of remote sites."

      Hyper-converged specialist Scale Computing has been chosen as an Alliance Partner by APC by Schneider Electric to support its hyper-converged Micro Data Center Xpress solution, which combines a purpose-built infrastructure with a physical management wrapper for hyper-converged architectures. Xpress is a complete and energy-efficient IT solution that is pre-tested, optimized, and able to be rapidly deployed. APC said it creates a reliable and robust environment to leverage the best of on-premises and multi-cloud infrastructures.

      posted in Scale Legion
    • RE: Virtualization and HA, Scalability

      @kelsey our latest blog post might be useful for your class as well:

      https://mangolassi.it/topic/16198/4-it-pitfalls-to-avoid-in-2018

      posted in IT Discussion
    • 4 IT Pitfalls to Avoid in 2018

      Technology can be a great investment if you invest wisely. As technology changes, it is always a good idea to check whether the ideas you had a year ago are still valid in the coming year. Here are a few ideas to think about in 2018 so you can dodge pitfalls just like Pitfall Harry in the classic Activision game Pitfall, pictured below.

      [image: A2600_Pitfall.png]

      SAN Technology

      Don’t buy a SAN. I repeat. Do not buy a SAN. Whether you’ve bought a SAN in the past or not, it is now a dying technology. SANs have been a staple of datacenter infrastructure for the last couple of decades, but technology is moving on from the SAN. A big part of the reason is the rise of flash storage, with storage speeds overtaking what controller-based SAN architectures can provide.

      NVMe is the new generation of flash storage and is designed to allow storage to interact directly with the CPU, bypassing controllers and storage protocols. We are entering territory where storage is no longer the slowest resource in the compute stack, and architectures will need to clear the compute path of controllers and protocols for optimal speeds.

      Whether the SAN is physical or virtual, it still has controllers and protocols weighing it down. Even many new virtual or software-defined storage architectures still follow the SAN model and have virtualized the controller as a virtual storage appliance (VSA), which is a VM acting as a storage controller. You may not be ready for NVMe right now, or in 2018, but don’t let a 2018 investment in dying SAN technology keep you from moving to NVMe when you may need it in 2-3 years.

      Instead, look for controller-less storage architectures like SCRIBE from Scale Computing. In testing, Scale Computing was able to achieve storage latency as low as 20 microseconds (not milliseconds) with NVMe storage. Controller-based SAN technologies could never come close to these speeds.

      Going All-In on Cloud

      One of the recurring themes I heard in 2017 was, “Everyone should have a cloud strategy.” That is still true in 2018, but from what I saw in 2017, many interpreted this as abandoning on-prem and migrating entirely to cloud computing. There are clearly many cloud providers pushing this notion of an all-in cloud strategy, but the reality is that organizations already executing cloud strategies are largely landing on some kind of hybrid cloud architecture.

      The cloud is a beautiful resource and most organizations are probably already using it in one way or another, whether for Salesforce, Office 365, web scaling, a few VMs in AWS or Azure for dev and test, or IoT. The benefit versus cost varies not only by service but also by how each business uses those services. It can be easy to jump into a cloud-based service without fully understanding its cost or performance characteristics, and in many cases it may not be easy to escape once you’ve committed.

      If you are considering cloud, it is important to evaluate the solution thoroughly for each aspect of your IT needs. Understand not only the cost but also the performance capabilities versus on-prem solutions. There are many systems, such as manufacturing, that don’t easily tolerate the latency and occasional outages that can come with cloud computing. On-prem solutions for these systems, which we refer to as edge computing, may be a requirement.

      It is very likely that a combination of on-prem solutions (like hyperconverged infrastructure) and cloud-based solutions may be the best overall strategy for your IT department. Cloud is just one more tool in the IT toolbox to provide the services your business needs.

      Over-Centralizing the Datacenter

      The pendulum always seems to swing back and forth between centralized datacenters and distributed datacenters. When cloud computing was becoming more mainstream, the pendulum seemed to swing toward the centralized approach. As I just discussed about cloud computing, the pendulum now seems to be swinging back away from centralization with the rise of edge computing and micro-datacenters. These on-prem solutions can provide greater availability and performance than cloud for a number of use cases.

      The benefits of centralizing are attractive because it can lower operational costs by consolidating systems under one roof. However, far better remote management systems are available these days that can also lower the operational costs of remote site infrastructure. In addition, simplified solutions like hyperconverged infrastructure make micro-datacenters much easier to deploy and manage than traditional infrastructure.

      As the pendulum continues to swing, we will likely see most organizations landing closer to the middle with a combination of solutions. The IT department of the near future will likely include IoT devices, micro-datacenters, cloud-based computing, and more traditional datacenter components all combined in an overall IT infrastructure strategy.

      Premium (and Legacy) Hypervisors

      As virtualization continues to evolve with technologies like cloud, containers, IoT, hyperconvergence, and beyond, the need for the hypervisor as a premium solution is diminishing. Hypervisor licensing became a big business with high licensing costs, and those initial hypervisors did make virtualization mainstream by pulling together traditional infrastructure components like servers and SANs. That traditional approach has now reached a plateau.

      For cloud, hyperconverged infrastructure, and containers, hypervisors have become a commodity and big premium hypervisors with features you may never need are often not the best fit. Hypervisors that have been designed specifically to be lightweight and more efficient for technologies like hyperconverged infrastructure or cloud are part of a growing market trend. Traditional or legacy hypervisors that were designed to work with servers and SANs over a decade ago are not necessarily the best investment for the future.

      Summary

      Unlike Pitfall Harry, you most likely won’t be eaten by an alligator if you misstep, but a misstep may end up costing your organization in the long run. Only you know what is best for your organization, but it is important to consider your strategies carefully before blowing your IT budget. The experts at Scale Computing will be happy to help you understand the benefits of hyperconverged infrastructure and datacenter modernization into 2018 and beyond. For more information contact us at [email protected].

      posted in Scale Legion scale scale hc3 san virtualization hyperconvergence hyperconverged hypervisor scale blog
    • RE: Virtualization and HA, Scalability

      @kelsey said in Virtualization and HA, Scalability:

      @scale this is written work but we have done a presentation to

      If you needed presentation materials, I'm sure that we could find some for you.

      posted in IT Discussion
    • RE: Virtualization and HA, Scalability

      @scottalanmiller thanks for the mention. As was mentioned, Scale Computing makes scalable, highly available hyperconverged virtualization platform solutions. If there are any questions that we can answer about our products or HC concepts, we're here to assist. Sounds like a good class project and a chance to provide a lot of material for your class.

      Is this written work or do you get to give a presentation?

      posted in IT Discussion
    • Restoring Files and Folders out of Scale HC3 VM Snapshots

      HC3 snapshots capture an entire VM and all of its data at a single point in time, and restoring (aka cloning) a snapshot creates a new bootable VM copy of that data as it existed at that point in time. But there are many ways to use those snapshots to restore older versions of corrupted or deleted files and folders, including database files, etc.

      Here is just one documented example that uses a Linux Mint Live CD to boot a VM and access NTFS file systems from previous snapshots: https://www.scalecomputing.com/files/support/public/Win_File_Recovery_ISO.pdf

      Youtube Video

      Note that a similar process could also be done using a Windows PE-based recovery ISO instead of Linux Mint, as long as that environment can manually load VirtIO disk drivers (and VirtIO network drivers, if needed), either when the ISO is created or at boot time. Many tools, such as the commercial Windows-based bare metal recovery ISOs included with many full system backup products, do allow this.

      Because of the legality and licensing requirements of Windows PE and the various tools involved, we don't want to recommend any specific Windows recovery ISO or Windows "PE builder" tool.

      With the Linux Mint ISO, if for some reason you need or want to mount the NTFS volume in read/write mode and get an error about an "unclean file system," you can resolve it by first running the ntfsfix command, which is included in the Linux Mint ISO. One case where you might want this is installing Samba to share the mounted volume so it can be accessed directly across the network.
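
      As a minimal sketch of that repair-and-mount flow from the Mint terminal (the device name /dev/sdb1 is an assumption; check yours with lsblk first):

      lsblk                                            # identify the NTFS partition (assumed /dev/sdb1 below)
      sudo ntfsfix /dev/sdb1                           # clear the "unclean file system" flag
      sudo mkdir -p /mnt/recovery
      sudo mount -t ntfs-3g /dev/sdb1 /mnt/recovery    # mount read/write with ntfs-3g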


      Lastly, don't overlook the option of cloning the VM snapshot: disconnect the virtual NIC from the clone so you can boot it without an IP address or name conflict, change the name/IP, then shut down, re-attach the NIC, and reboot the clone with a completely different VM name and IP. Copy the files you need from that VM to wherever you need them, and when you are done, shut down the cloned VM and delete it.

      posted in Scale Legion scale scale hc3 youtube snapshots
    • 3rd Party Applications on Scale HC3: How Support Works

      Abstract: One of our most common questions is "Will my application run on Scale?" If it runs on Windows or Linux, then 99% of the time the answer is yes. Still, information from some application vendors on support can be confusing. This paper explains how your HC3 system and applications are supported by the multiple vendors involved.

      Third Party Apps on HC3

      posted in Scale Legion scale scale hc3
    • Scale HyperCore HC3 Native Replication Feature Note

      A great resource to start with to learn about HC3's built-in replication:
      https://www.scalecomputing.com/files/support/public/Replication_Feature_Note.pdf

      It's also important to plan for the capacity requirements of retaining point-in-time snapshots for replication and backup purposes: HyperCore HC3 Capacity, Clone and Snapshot Management
      https://www.scalecomputing.com/files/support/public/HCOS_Capacity_Management.pdf

      Youtube Video

      posted in Scale Legion scale scale hc3 replication youtube hypercore
    • RE: How Can I Convert My Existing Workloads to Run on Scale HC3

      qemu-img is actually what HyperCore OS uses internally for both import and export of VMs to/from HC3. As a result, if you use the "foreign VM import" process referenced above, you can simply rename a VMDK, for example, to the qcow2 file extension that HC3 expects and then import it; qemu-img will detect that the disk contained in the file is really a VMDK and do the conversion automatically for you, saving a step!

      One other benefit of letting HC3 do the conversion is that it will automatically convert to the right qcow2 format for that HC3 version. If you are doing the pre-conversion with qemu-img on Windows (or Linux, for that matter), you may want to run qemu-img info on an empty HC3-exported qcow2 to see what flags it has and try to match them. Depending on the version of HC3 and the version of qemu for Windows you are using, you may need to specify the compat version; I have seen this mostly with older versions of HC3, where it looks something like this:

      qemu-img convert -p -O qcow2 -o preallocation=metadata,compat=0.10 source-image.qcow2 output-image.qcow2 
      

      (this was an older version of HC3)

      On a very new version of HC3, as of this post, it looks like compat: 1.1. I got tired of messing with details like that and the extra step, so now I always start by renaming the virtual disk files to a .qcow2 extension and letting HC3 figure it out first, which generally works. (VHDX may be the exception... and of course you have to get into the "right" VMDK format in some cases, as there are lots of different VMDK formats.)
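
      For example, checking the flags on an empty exported disk looks something like this (the filename is hypothetical, and the exact output fields vary by qemu and HC3 version):

      qemu-img info empty-hc3-export.qcow2
      # sample output:
      # image: empty-hc3-export.qcow2
      # file format: qcow2
      # Format specific information:
      #     compat: 1.1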

      Another tip/FAQ: if you ever have a .OVA file, generally a virtual appliance, it is just a tar archive that you can expand. Inside there will be a virtual disk file, usually in .vmdk but sometimes .img format, that you can convert/import into HC3 using the above processes.

      Of course, ALL of this is just getting HC3 to see the virtual disk. The OS on that virtual disk has to have the right drivers active to be able to boot on HC3, which means either that it has VirtIO drivers pre-installed and set to boot (if "performance" drivers are selected when creating the VM) or IDE drivers (if "compatible" drivers are selected... and, for Windows, that mergeide.reg was run before migration). Linux is generally just automatic, but Windows will result in a 7B BSOD if a driver for the boot disk isn't active on the imported virtual disk.

      posted in Scale Legion
    • HC3 Move Powered by Double-Take / Carbonite Quick Start Guide

      For most customers and workloads, HC3 Move is the preferred method to move running, in-use systems onto the HC3 platform with minimal downtime. Because HC3 Move uses continuous data replication to synchronize the OS, applications, and data from running systems onto the HC3 platform, those systems can remain running and in use right up to the point of "switchover" to HC3, which can be performed at any time. Further, the switchover process is orchestrated to safely shut down the original workload, sync last-minute changes into the target VM, and quickly restart the full system on HC3, maintaining its original name, full software stack, and even original IP if desired. Total downtime may be as little as 10 minutes and is generally well under 1 hour.

      Be sure to check the HC3 Move documentation as well as the HC3 Support Matrix for the latest on which OS versions are supported by HC3 Move... for example, desktop OSes and very old versions of Windows Server may not be supported.

      https://scalecomputing.com/files/support/public/HC3_Move_Powered_with_Double-Take.pdf

      posted in Scale Legion scale scale hc3 carbonite doubletake
    • Create a WinPE ISO with VirtIO Drivers included for Recovery or Restore Processes

      In situations where it is necessary to boot a VM to a rescue environment, and a Windows recovery environment is preferred, Microsoft has made it extremely easy to create a CD image that can be uploaded to an HC3 cluster and used as a boot drive for a VM.

      These steps were used on a Windows 10 host, and Microsoft will likely have much more comprehensive information and would be better suited for assistance in case of issues or disparities...

      It is assumed that these steps will be run on a Windows VM on an HC3 cluster, with the Scale Tools CD mounted and accessible to that VM.

      First, download and install the Windows Assessment and Deployment Kit (ADK) as described in Microsoft's WinPE walkthrough.

      According to that walkthrough, the Deployment Tools and Preinstallation Environment components are required for installation.

      Once complete, start the "Deployment and Imaging Tools Environment" application that was installed with elevated privileges (Start -> type 'deployment', right click and select "Run as administrator") and use the following commands:

      1. copype amd64 C:\WinPE_amd64
      2. dism /mount-image /imagefile:"c:\winpe_amd64\media\sources\boot.wim" /index:1 /mountdir:"c:\winpe_amd64\mount"
      3. dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\net\w10\netkvm.inf"
      4. dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\serial\w10\vioser.inf"
      5. dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\stor\w10\viostor.inf"
      6. dism /unmount-image /mountdir:"c:\winpe_amd64\mount" /commit
      7. MakeWinPEMedia /ISO C:\WinPE_amd64 C:\WinPE_amd64\VirtIO-WinPE_amd64.iso

      The machine architecture, filenames and paths above are all dependent upon the environment and configuration choices.

      Lastly, upload the created ISO (C:\WinPE_amd64\VirtIO-WinPE_amd64.iso) to the HC3 cluster, insert the ISO into a VM's empty CDROM, and start the VM.

      With the VM booted to the PE ISO, an SMB share can be mounted and files copied as needed, or other recovery operations completed. For example, to mount an SMB share from a remote host:

      net use * \\fileserver\share * /user:USERNAME@DOMAIN
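
      Once the share is mapped (assuming it landed on drive Z:; all paths below are placeholders), files can be copied in either direction with the standard command-line tools, for example:

      xcopy C:\data Z:\recovered /E /H /C /I    # copy a folder tree, including hidden files, continuing on errors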
      

      Further customization and capabilities can be applied and configured prior to the above step 7, as needed, but extend beyond the initial needs of this post.

      Information for the above was gathered from the following Microsoft pages:

      WinPE: Create a Boot CD, DVD, ISO, or VHD
      WinPE: Add drivers

      posted in Scale Legion scale scale hc3 virtio kvm winpe
    • New Scale Computing Website

      Scale Computing's new website is now live. The old one was due for an overhaul and we are happy to announce that the new, modern one is now available. Come check us out and let us know what you think!

      posted in Scale Legion scale
    • How do I backup my VMs on Scale HC3?

      A: There are several options available to HC3 users, including the native HC3 backup capabilities.

      HC3 features a full set of native capabilities that allow users to back up, replicate, failover, restore, and recover virtual machines. Snapshot-based, incremental backups can be performed between HC3 systems without any additional software or licensing. Many HC3 users implement a second HC3 cluster or a single node to serve as a backup location or failover site. The backup location can be a second HC3 system that is onsite or remote, and it can be used just to store backups or to fail VMs over to if the primary HC3 system fails. HC3 VM backups can be restored to the primary HC3 system by sending only the data that is different. Backup scheduling and retention can be configured granularly for each VM to meet SLAs.

      Scale Computing also offers the ScaleCare Remote Recovery Service as a cloud-based backup for HC3 systems, supporting all of the native HC3 features. For users who lack a secondary backup site, the Remote Recovery Service acts as a backup site for any VMs that need protection. VMs can be recovered instantly on the remote recovery platform and run in production until they can be restored back to the primary site. The Remote Recovery Service also includes a runbook to assist in DR planning and execution from implementation to recovery, and ScaleCare engineers assist with planning, implementation, DR testing, and recovery.

      HC3 VMs can also be backed up using virtually any third-party backup software that supports your guest operating system and applications. If you are migrating an existing physical machine to a VM, you likely don’t need to change your backup at all. Backup solutions that use agents, including Veeam, can run in the guest operating system, allowing VMs to be backed up over the network to a backup server or other location, depending on the solution. (Other popular solutions we see, and in some cases have tested, include Unitrends, Acronis, StorageCraft, and Barracuda.)

      Some HC3 users choose to use the HC3 native export features to export VM snapshots or backups to store on third-party backup servers or storage. This extra backup method can be useful for long-term storage of VM backups, and these exported backups can be imported into any other HC3 system for recovery. (Note: while exports currently can't be scheduled in the UI, they can be done on live machines at any time, and the ScaleCare support team may be able to set up a simple scheduling process for these "under the hood"... contact support to discuss.)

      You can read more about HC3 backup and disaster recovery in our whitepaper,
      Disaster Recovery Strategies with Scale Computing.

      posted in Scale Legion backup scale scale hc3 disaster recovery
    • RE: Scale Computing General News

      Scale teams up with VAR Copaco

      Delivering Scale's data centre in a box on Lenovo hardware to provide high performance, simplified IT, and flexible 'pay as you grow' infrastructure, ideal for organisations of all sizes.

      London -- Scale Computing, the market leader in solutions for midsized and enterprise companies, today announced a strategic partnership with value-added distributor Copaco. As a leader in delivering IT services, Copaco will be reselling Scale's innovative HC3 product on Lenovo servers in the Netherlands.

      With over 400 qualified staff, Copaco's specialist team is experienced in reselling a range of quality IT services and solutions to suppliers across the region. Copaco is constantly looking to include new leading-edge products, providing its customers with the most up to date solutions available, and will now be offering Scale's hyperconvergence solution as part of its portfolio.

      Scale Computing provides a complete data centre in a box with servers, storage, virtualisation, and high availability combined in one easy-to-use appliance. The hyperconverged technology is sold on Lenovo servers, helping organisations achieve higher performance and reduce costs, with the flexibility of 'pay as you grow' IT infrastructure. In addition, with no additional software to license and no external storage to buy, the Scale solution enables lower out-of-pocket costs.

      As a distributor, Copaco will also have access to the Scale Partner Community which provides partners and distributors with the products, programmes and services required to address customer infrastructure needs. The Scale Partner Community is growing across Europe and was transformed earlier in the year to provide partners with greater incentives, helping members to grow their business.

      Sacha Wingers, Business Director Enterprise Solutions at Copaco commented: "Our partners are shifting away from traditional IT and are moving towards more modern infrastructures. As part of this we look to resell the latest solutions in order to meet these requirements. The combination of Lenovo and Scale Computing's HC3 software is ideal for our customers who are looking to overcome the challenges around virtualisation and need the flexibility and scalability to grow, allowing them to meet future needs."

      "Copaco offers a comprehensive range of solutions that meet customer requirements and we are pleased that Scale Computing can be a part of this," commented Johan Pellicaan, Managing Director and VP EMEA at Scale Computing. "The company is committed to delivering leading services and incorporating the HC3 solution is an exciting partnership for us. This move is in-line with our strategy to grow across Europe and we are pleased that we can work with Copaco to expand our services. Copaco will also join our Scale Partner Community and we look forward to working in close partnership with them to help grow our businesses alongside each other."

      Tom Sluys, General Manager Benelux at Lenovo Data Center Group (DCG) commented: "We teamed up with Scale Computing to offer customers a new hyperconverged offering and we are pleased that Copaco is already seeing the value for this in the Netherlands."

      posted in Scale Legion
    • Lights Out Management with Scale HC3

      Ever wished you had the ability to remotely monitor, manage, and power on or off your HC3 nodes outside the HC3 web interface? Also known as “out-of-band” or “remote” management, many nodes in the HC3 lineup have Lights Out Management (LOM) capabilities.

      The IPMI and iDRAC features are not available on all node lines and may require some disruptive and/or non-disruptive firmware updates and BIOS changes in order to access their management capabilities.

      IPMI (Intelligent Platform Management Interface) is a specification that provides management and monitoring capabilities independently of the firmware and operating system of the host. As an example, a node that may be powered off or otherwise unresponsive across a normal network connection could be remotely managed through IPMI instead.
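
      As a sketch of how that works in practice, a standard utility like ipmitool (run from any Linux admin machine; the address and credentials below are placeholders for your node's configured IPMI interface) can query and control a node out-of-band:

      ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P PASSWORD chassis power status
      ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P PASSWORD sensor list     # temperatures, fan speeds, voltages
      ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P PASSWORD chassis power on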

      Dell’s iDRAC (integrated Dell Remote Access Controller) is a platform that uses the same ideas as IPMI but is proprietary to Dell. It is available on certain Dell lines and, in the case of Scale Computing’s HC3 node lines, is integrated into the motherboard. As with IPMI, an administrator has access to power management, system monitoring like temperature and fan speeds, remote console access, and more.

      More information and details on specific Scale HC3 models are available on our support portal using the following links (or by logging in and searching for "lights out"):

      Customer portal
      Partner portal

      For convenience, I've attached the Nov 2017 version of the doc here, but it's recommended you check the portal for any recent updates.

      posted in Scale Legion ipmi lights out management scale scale hc3
    • Intel Meltdown and Spectre Vulnerabilities and the Scale HC3

      A group of platform vulnerabilities has been identified affecting many CPUs, including the Intel x86 class of processors. These vulnerabilities exploit flaws in the processor itself, affecting all Intel-based servers, including the Scale Computing HC3 platforms. They have been publicized as Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753, CVE-2017-5715). Many technical details are publicly available here:

      https://meltdownattack.com/

      How Vulnerable is HC3?

      Meltdown, as described in the research paper[1], does not affect our Hypercore Operating System (HCOS) directly due to our use of hardware virtual machines (HVM). Additionally, because the host OS is locked down, and users do not have access to introduce or run arbitrary code on the host, an ordinary user cannot read host kernel or physical memory. The operating systems of guest VMs, however, are vulnerable, and must be patched using the recommendations of the OS provider to mitigate against this threat.
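
      As a quick way to verify guest-side patching on Linux VMs (this relies on the generic sysfs interface added in kernel 4.15 and vendor backports, not anything HC3-specific; output will vary by kernel and patch level):

      grep . /sys/devices/system/cpu/vulnerabilities/*
      # sample output on a patched guest:
      # /sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
      # /sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
      # /sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable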

      Spectre[2], on the other hand, comprises multiple vulnerabilities which are more difficult to exploit but remain dangerous. One of these techniques is demonstrably able to read host memory from within a guest VM[3]. This is a serious threat to security.

      Addressing both of these vulnerabilities is currently our top priority.

      When Will an Update be Available?

      The Scale Computing Software Engineering team has been closely monitoring all available information to make the best decisions for mitigating and correcting these issues with the Scale HC3 platform. We have made this our top priority and are currently testing our initial patch for the core issues and plan to have a release available in the coming days. Our Engineering and Quality Assurance teams are working diligently to fully test and verify the stability and viability for production use. We will update with a more accurate time frame as it is available or as new information is released.

      As a best practice, and at all times, Scale Computing recommends[4] proper planning, testing, and implementation of infrastructure backups and security access control mechanisms, and that regular software updates be applied to all guest VM software and operating systems.

      [1] Meltdown Paper https://meltdownattack.com/meltdown.pdf
      [2] Spectre Paper https://spectreattack.com/spectre.pdf
      [3] Google Project Zero Blog https://googleprojectzero.blogspot.co.at/2018/01/reading-privileged-memory-with-side.html
      [4] Information Security with HC3 https://www.scalecomputing.com/wp-content/uploads/2017/01/whitepaper_information_security_hc3.pdf

      posted in Scale Legion meltdown spectre intel scale scale hc3