ML

    Posts

    • Home Lab Ideas from SAMIT Video

      Youtube Video

      I want to talk about something that comes up pretty often and is a tough one to address, well it's easy to address in some ways and hard in others, and that is: what should I be doing in my home lab?

      It's actually kind of tough. I mean, the high level, simple answer is "do anything and everything you would do in a business", but of course if you're new to IT you may not know what everyone does in a business or have any idea how to access it or exactly what it would look like. So that becomes a bit of a challenge. I was lucky, I worked in business when I was much younger and so already had a decent view of internal business IT when I got into shifting my career from engineering to IT, and that gave me a good head start for my home lab. But there are a lot of basics you can pick up, especially if you're doing certification courses and the like; just as an example, Microsoft's MCSE coursework is going to cover an awful lot of what a business looks like.

      In many cases you can kind of figure out what a business looks like: we know that their desktops are going to be heavily managed, we know that they're going to have central password and authentication management systems, we know that they're going to do everything with automation and centralization, and centralized storage and backups and protection, and lots of visibility into what they're doing. We may not know exactly what those things look like, but we kind of have an idea; even if you've never worked in a business you can probably picture a lot of these things, maybe just because you've been exposed to it in school or you've seen businesses working casually as a customer or whatever. So in tackling that wide range of things there are two things we have to consider. One is what your educational goals are: whether you're looking to become a DBA, a systems admin, or a network admin, you can create very different educational paths as to what you want to look at, but that's a separate conversation.

      If we're looking at things that a business does, well, this is where the sky's kind of the limit and you have to play around and just start coming up with big lists of things. Some basic things that you would do at home that you would also do in a business could include managing your desktops with central authentication, even if you only have one or two desktops, your desktop and your laptop; go get some VMs and put some desktop environments there, whatever virtualization. All businesses have virtualization, you should too. Maybe try different kinds; variety is also an important thing to look at.

      Think about normal usage activities in a business: are they going to send email? Of course they are. Are you running your own email system? You can do that at home. Everything you can do in a business you can do at home, reasonably. Now, a lot of businesses go with hosted email, and you're free to play with that at home as well, but you're not going to get the same education playing with hosted email that you would with your own email. If it's just one account you may want to do both and get really good exposure to the differences: "oh, this is how much work it is to run email, and I know how to run email, but I would normally make the business decision to go hosted." Those kinds of things can be incredibly useful, and you can talk about them in an interview: I ran my own email at home, I ran three different kinds of servers, I know how much work each of them takes and I know why I probably wouldn't do that.

      I did that. I did it, unfortunately, in a corporate environment: built my own email server from the ground up and realized how incredibly labor-intensive it was to build and how dangerous it was to support alone, and moved to hosted because it made sense for the business. So the technical work led me to business decisions, and you can make business decisions at home. You just have to learn to weigh the factors: how much is your time worth, how much investment capital do you have, those kinds of things. And you can get hosted email for two dollars a month, four dollars a month for the top end enterprise stuff. We're not talking about huge investments, but you don't necessarily have to make them either; hosted email you can probably figure out how to use even if you haven't used it yourself.

      Instant messaging: businesses tend to use messaging of some sort. Deploy some at home, whether it's OpenFire or Microsoft Communicator / Lync, or Rocket.Chat, Mattermost, things like that. These are not super complicated things to get set up and working. You can run them at home. Sure, it might just be you talking to your parents or a spouse or kids or whatever, but why not, right? Or you can take your home lab and expand it beyond your home. You want to make email for your family? Go for it. You want to make instant messaging for your family? Go for it. Get family and friends who don't live in the house to also use your instant messaging. This might actually be a better way to learn because you actually have users to support, maybe only five, but that's perfect. Get five users, get them to complain about things when they don't work, ask them to use a ticketing system, build a ticketing system, whether that's Spiceworks or osTicket or SodiumSuite or whatever.

      There are products out there that are free or low cost. All of those are free and you can just install them on your own servers or start using them, have people put in tickets, respond to tickets, and keep good notes. Have good documentation, and that doesn't mean writing it in Notepad; a proper documentation system, maybe DokuWiki or MediaWiki or a SharePoint site, all of those are great options. Run your own internal house like you would imagine a great business would: with good documentation, solid password control using KeePass or LastPass or some kind of password management system, and maybe your own storage, whether that's a NAS in your house or a file server or more modern storage like a cloud storage type thing, NextCloud, for example, or a product like that. Share it with your family. Something like a file server might be a little bit hard to share with people outside your house, but something like NextCloud could be perfect for sharing with someone outside your house. And then you can do things like integrate that with the email, maybe use it as your email client, or have central authentication between all of these systems.

      The more you build up, the more you'll have to support. As you do that you'll find that you're offering benefits to your family and to other people. Maybe not big benefits, and maybe it's kind of silly, but you can transfer files, you can email each other, you can communicate really easily. Build a PBX and make voice communications available between members of the family; this is something that I do. I do it less as a home lab and more so as home production, if that makes sense, but I run a PBX for home, hosted, and in my home we have multiple phone lines. We can call between rooms, we can call outside, it's our landline. We can also call other members of the family who are on the same system; we don't have to make an outside call, we can call them on an internal extension. We have voice calling, we have video calling, we have lots of flexibility. We can do conference rooms, IVRs, all kinds of things, and we can share one incoming number throughout the family, if we want to, and just know our extensions. There is lots of power and flexibility in a system like this; sure, you don't need it, but it's a great way to learn, a great way to get experience, and it actually produces some neat value for your family and friends. It could be fun, right? And it doesn't require family; you can get all your friends involved, get everyone to use a softphone, put it on your Android or iPhone and just make calls from there over your PBX. Lots of great experience for you, it could be interesting for them, and maybe you'll find some benefit or cost savings, or whatever, by doing this.

      You could also make sensors for your house and measure temperature or humidity and record the readings somewhere. Build your own application; of course, now we're getting out of IT and into software engineering, but there are lots of good opportunities there, and there's a lot of crossover.

      Automate all of your systems. Don't spend time manually managing them all. I mean, maybe do that at first, because it'll help you learn how things work, but then learn how to automate them. Once you get past traditional automation, first the command line and then scripting the command line, maybe move on to state systems like Ansible or SaltStack, or Puppet or Chef. There are lots and lots of cool technologies, most of them free, that you can jump in and start using immediately.
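
      As a hedged, minimal illustration of that state-based approach (not from the video; it assumes Ansible is installed and that a hypothetical inventory.ini file lists your home lab hosts), a couple of ad-hoc commands might look like this:

      # Check that Ansible can reach every host in the (hypothetical) inventory.ini
      ansible all -i inventory.ini -m ping
      # Ensure a package is installed on every host, using sudo (-b for become)
      ansible all -i inventory.ini -b -m package -a "name=tmux state=present"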

      Build your own internal websites. Make not just a website, but an application. So instead of just a static HTML site, do a WordPress. Make a blog about the things that you're doing inside the house and host it on your own servers. When we say home lab, in many cases that means inside your house, and that's often the most cost-effective, but we could also mean hosting it on a cloud host like Vultr or DigitalOcean, Amazon Lightsail, or someone like that; Linode is another great example.

      If you're going to do something like hosting your own blog, maybe that's the better way to go, because you want it to be publicly facing and highly reliable since people might be reading it. With things like that you'll be getting exposure to managing databases, building applications, installing and configuring and connecting to databases, and taking backups. Make sure you're taking backups of everything. Do centralized backups, do backup management, test your backups. These are all great practices that you can do at home and get value from. If your data at home was backed up and tested, would you sleep better at night? Sure. A lot better? Maybe not, but a little, yes. Get some advantage out of it. These are things you can do, and you can just keep thinking about the different things that businesses would do.
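
      As a minimal sketch of that backup habit (not from the video; the paths and the backupserver host name are hypothetical), a nightly job could archive a directory, verify the archive is readable, and keep an off-box copy:

      # Create a dated archive of /home (example source path)
      tar -czf /backups/home-$(date +%F).tar.gz /home
      # "Test your backups": confirm the archive can actually be read back
      tar -tzf /backups/home-$(date +%F).tar.gz > /dev/null && echo "archive readable"
      # Keep an off-box copy on a (hypothetical) backup server
      rsync -a /backups/ backupserver:/srv/backups/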

      Central logging. Have a central server that collects the logs from every device that you have, whether it's a VM or physical or whatever, and actively look at those logs. Make dashboards, pay attention to your logs. Learn how to look through logs and learn what good baselines look like.
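
      For example, a minimal sketch (not from the video; the 192.168.1.50 collector address is a placeholder) of shipping a Linux client's syslog to a central server with rsyslog over UDP:

      # Forward all facilities and priorities to the central log server (UDP 514)
      echo '*.* @192.168.1.50:514' > /etc/rsyslog.d/50-central.conf
      systemctl restart rsyslog
      # Generate a test message, then look for it on the collector
      logger "central logging test from $(hostname)"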

      Central monitoring and management. Something like Zabbix or Zenoss that looks at all of your machines, collects data on them, and tells you if they're up or down or CPUs are spiking or whatever.

      Use a desktop management system. There are products out there like SodiumSuite or SW that are free, and you can use those to do more traditional SMB management, as opposed to what we think of as enterprise or server-side management where you're really looking at up and down status. With servers you mostly watch up and down; SodiumSuite or SW tend to look more towards things like "here are desktops that we expect to go up and down, but we need to have lots of information about them," and it might be part of your documentation system to make all of this work. So there are lots and lots of different things; you can just keep looking at different products and different approaches and different styles, and of course do all of this with variety.

      If you do one web server in one case, maybe use a completely different web server with a completely different database and a completely different application for a different task. By splitting those things up you'll get more and more exposure to different things, and of course keep it focused on the things that matter to you. If you're really heavily working on the networking side, maybe you'll be more interested in publishing a VPN for people to connect to and having some complex routing that you'll have to build inside your network. If you're looking at systems or application management, you'll be looking at what applications you can provide or the platforms for providing them. If you're looking at virtualization, you may focus a lot on disaster recovery, high availability, and single pane of glass management interfaces. Maybe you're going to be focused on storage and want to have a SAN and a NAS and some object storage all in your network, storing things for people in different ways and working heavily with backup, those kinds of things.

      There is lots and lots of variety, lots of ways that you can approach it, and really an unlimited number of things that you can build in your home. Never feel that you have run out of things you can do at home; you should only run out of time in which to do them.

      I hope this was helpful and gives you lots of great creative ideas so you can run out and start experimenting in your home lab, or your hosted home lab, and find lots of interesting problems to tackle. And of course always go out and document what you're working on and write about it, especially in online communities. They are a great place to get people to talk to you about decisions you've made: you can say why you're trying an application, and people can give you feedback about the way in which you're trying it, whether it's a good application, how it might be used in business, factors you may not have considered, things that you might be doing differently at home than what people would do in a business, things like that. There's a lot of good feedback you can get that way that might be really useful. In some cases, of course, you're going to have to buy equipment, but by and large you're going to be able to do most of this stuff for free or nearly for free, which gives you a lot of flexibility, and of course you can build things and tear them down. But one of the biggest advantages is if you run things in production from home: treat it like it's production, keep it up, monitor it for uptime, do regular maintenance on it, keep it secure. Secure everything, of course; approach everything with a security and reliability mindset.

      Think about it from a business operations perspective and that will help you get maximum value from your home lab and how you approach it. And of course all of this is for the purpose of, one, producing online documentation that people can look at and say "wow, look at the dedication, look at the things that this person is doing"; it's a great insight into their motivation, their activity, their interests, their areas of expertise. And it also becomes an amazing talking point in your interviews, right? You go in and you talk to someone about what you do at home and they're going to have a completely different perspective of you. If there are two people they're going to hire and one has only touched Exchange, while you have not only touched it but run it in production at home for multiple users and taken it through migrations and different versions, you can tell them all about your home experience of doing things cradle to grave with email. It's not just something you installed someplace because you were told to; you made decisions about it, you made it work, and you have a type of experience that most people are going to lack, even though they had access to it. And of course then use your home lab to go get certifications on the things that you want to take further. You will just build upon that experience.

      posted in IT Careers samit youtube scott alan miller home lab it education
    • RE: What Are You Doing Right Now

      Rare picture of @JaredBusch yelling at a client

      0_1519873167957_064FA63D-2037-4AB4-BC52-4B591CE81D30.jpeg

      posted in Water Closet
    • Why Dual Controllers is Not a Risk Mitigation Strategy Alone

      It has become a mantra of storage salespeople to say that storage devices with dual controllers are high availability and, more or less, magic black boxes that won't fail. Nothing, of course, could be further from the truth. Common sense as well as observation say that not only do they fail, they fail more often than even a comparable level server would fail. Why? Let's investigate.

      First of all, let's set the stage. The idea with most storage in these discussions is that it is used to back VMs or physical servers, and so is an additional point of failure and must make up for that by being insanely reliable on its own. SANs come in a ridiculous variety, from the silly Netgear SC101 on the low end to things like the EMC VMAX on the high end, exactly the same way that computers come in the form of a Raspberry Pi Zero or an IBM z Series mainframe. While both are SANs and both are computers, they are very, very different animals.

      There are many facets to this discussion in general but here we will look at the dual controller issues specifically. But here are the quick basics...

      • Normal servers have dual "everything" if you want them to.
      • Servers at the same "tier" as storage have the same redundancy, including controllers.
      • Servers sell at high volume and get more testing than their storage counterparts.
      • Having two of something and having them be high availability are not the same thing.

      So let's break this down.

      Dual controllers just means that there are two, it does not mean that one takes over when the other one fails. It's that simple. The assumption is that this is why two controllers are present, but that is a marketing gimmick. There are two controllers because it raises the price and makes it easy to get people to buy, not because it provides a demonstrable value.

      High end, mainframe class SANs like those available from EMC, HDS and HPE 3PAR offer what we call active/active or "highly decoupled" controllers. The controllers are independent, do not share firmware, both see the disks all of the time and essentially have no dependencies on each other. Because high end SANs offer this, it is easy to lump all dual controller SANs into this category, but that isn't how things work, any more than your Raspberry Pi has dual failover motherboards like your HPE Integrity does. In a high end, mainframe class SAN like the EMC VMAX you indeed get high availability in a single SAN chassis. You also pay for it.

      It cannot be missed that if you buy an EMC VMAX, as an example, you could have purchased one of several similarly classed servers from companies like IBM, HPE and Oracle that offer the same high reliability in a single chassis. So the dual controllers of a SAN in this range still do not increase its reliability with respect to servers of the same class. Even here, the dual controller system exists to maintain reliability at its tier; SANs are never more reliable than their server compatriots at a given tier.

      When we drop to the next tier down of storage, most SANs continue to come with dual controllers in a single chassis. However, these are not active/active controllers; they are instead systems like active/passive and are tightly coupled. The tight coupling is where we start to see issues. It means that the controllers have some amount of dependency upon each other, often rather a lot of it. This is often in the form of firmware dependencies, but can be electrical or mechanical as well. These systems often perform flawlessly in non-failure modes, such as in demos where one controller is removed from the chassis, so they look great in showrooms. But in the real world, when a controller fails the overall system failure rate is extremely high, and since there are twice as many controllers as necessary, the chance of there being "a" failure is much higher than if there were only a single controller.

      Tightly coupled dual controller SANs are probably the riskiest form of storage device. Because the controllers tend to "shoot each other" and because they often have high failure rates on firmware patching, they are, from long observation across many product lines, dramatically less reliable than a normal server or even a cheaper single controller SAN device.

      And, of course, there remain the risks of a shared chassis and backplane and a single disk array. Those are major components to share.

      And there is "HA compatibility". It is not uncommon for a storage device with dual controllers to fail over from one controller to the other successfully - according to the storage device itself but for systems connected to it to fail because the failover happened too slowly. The speed of failover is often overlooked in these scenarios and systems can fail due to the dual controllers while the vendor gets to report that no failure took place. A tricky reporting strategy.

      Dual controllers sound wonderful, but as a concept on their own they are meaningless. How they are implemented and how they are intended to work matter greatly. By and large, dual controllers are a negative, as they are costly and risky and exist mostly as a marketing strategy to make SANs seem to have a degree of plausible "magic", or to give IT buyers plausible deniability that they felt confident the system could not fail. The reality is that buying a non-active/active dual controller SAN is the riskiest storage move.

      posted in IT Discussion san storage risk risk analysis
    • Solus 1.1 Linux on Scale HC3

      Solus is a new Linux distribution that includes a fresh, new desktop environment called Budgie. While Budgie is available on some other distributions this is its native platform (much like Cinnamon on Linux Mint.) Finally a chance to check it out for myself.

      [Screenshots: the Solus 1.1 installer and the Budgie desktop running on Scale HC3, captured 2016-05-02]

      As you can see, the installer still leaves a bit too much to the end user; it is easy for an experienced Linux user but would be a little daunting to a newbie looking to use this as an easy end user desktop. If this is being used in a company and being installed for staff, that would not be an issue, of course.

      posted in IT Discussion solus linux linux desktop scale scale hc3 ntg lab budgie
    • Scale HC3 HC2150 ServerBear Results

      The Scale HC3 HC2150 officially released today and, as we have cluster numero uno, we got to run the first tests from ServerBear to see how it stacks up to the slightly older, non-tiered HC2100. In this test we are running a CentOS 7 VM with 2GB RAM and 2 vCPU, with storage tiering priority turned up to eleven. So this is the most comparable to tests that we have done in the past against many online cloud providers and the HC2100.

      Scale HC3 HC2150 ServerBear 2x2

      UnixBench (w/ all processors): 2644.8
      UnixBench (w/ one processor): 1202.5
      Read IOPS: 10715
      Write IOPS: 6947
      Storage Throughput: 441 MB/s

      posted in IT Discussion scale scale hc3 scale hc3 hc2150 serverbear ntg lab
    • UNIX Scheduling with cron

      In most UNIX systems (definitely including all major Linux distributions, the BSD family and the Solaris family) the main system task scheduling system is the cron dæmon and utility, named after Chronos, the Greek god of time. Cron is a robust scheduling system that can be used by any user on the system.

      Cron is a dæmon that reads a set of simple text files to determine what scheduled tasks it is to execute and when.

      While it is possible to edit cron's text files directly, this is discouraged. Under most circumstances, we instead manage cron through the crontab command line utility. This tool works just like editing the files with vi manually would, but it automatically manages the files on our behalf so that we do not need to know where they are, handles user management, and has a syntax checker so that we are much more likely to avoid simple mistakes in our cron entries.

      The cron files managed by crontab are located at /var/spool/cron/crontabs on most systems, but they are not meant to be edited manually.

      There are just two common options when using the crontab command, crontab -l will list the cron table of the current user and crontab -e will open the crontab editor. If you need to remove a cron table completely for the user, it is crontab -r.

      In nearly all cases, you will start without any scheduled tasks. We can verify like this:

      $ crontab -l
      no crontab for scott
      

      If we run crontab -e we can create a cron table and add in some entries. Cron is an incredibly simple system to use. However, the complexity with it comes in the format of its scheduling syntax.

      The cron table format is pretty easy to use, once you get used to it, but the fields can be difficult to remember so it is common and expected that you will normally look them up when making an entry unless you do this very commonly.

      The cron table is based on a "column" system. Here is the format:

      min hour day(month) month day(week) command
      

      We fill in each of the first five fields with a single number, a list of comma delimited numbers, or a hyphen denoted range of numbers, and the final field is the command that we want to run, exactly as we would run it normally. Each numeric field will also take an asterisk (*), which is the standard wildcard meaning "any".

      • Min: The minute of the hour, just like on the clock.
      • Hour: The hour of the day in 24 hour format.
      • Day of the Month: The calendar number.
      • Month: The month of the year.
      • Day of the Week: 0 is Sunday, as is 7.

      The fields can be in these forms:

      • Single Number: 5
      • List of Numbers: 0,15,30,45
      • Range of Numbers: 1-5
      • All: *

      So let's say that we want to run a custom script that we have written that emails out a notification every day at five in the afternoon that tells everyone that it is time to go home. We would do it like this:

      0 17 * * * /opt/scripts/gohomeremindermail.sh
      

      The zero is the minutes of the hour and the 17 is 5PM on a 24 hour clock. The next three asterisks denote that this is to go out every day of the month, every month of the year, any day of the week.

      Now wait a minute, what if we want to limit that to weekdays? Easy enough, try this:

      0 17 * * 1-5 /opt/scripts/gohomeremindermail.sh
      

      You add one line for each item that you wish to schedule. Each item is known as a "cron job". The cron jobs in your personal cron table run with your permissions; they act as if they were run by you. Cron scheduling is actually very simple and straightforward.

      There are some special case items that you can put into the cron table that do not exactly follow the above format. The @reboot directive tells cron to execute the cron job immediately following a system reboot, but at no other time. This can be very handy for special case jobs.

      The format looks like this:

      @reboot /home/scott/mystuff/mycoolscript.sh
      

      If you have the necessary permissions, like if you are the root user, you can use the crontab command to modify the cron tables of any user as well by adding a -u and the user's name like this:

      crontab -e -u scott
      

      Advanced Crontab Entries with Fractions

      One quirk of the cron table format is that it allows the use of "fractions" for the numbers. They are not exactly fractions, but rather a step or "every" value. So in this example...

      0-30/5 * * * * /opt/apps/sendanalert
      

      The sendanalert program would fire every five minutes from the start of every hour until the half hour, then wait half an hour. It's a shorthand for writing 0,5,10,15,20,25,30.

      */10 * * * * /opt/apps/sendanalert
      

      This one would send the alert every ten minutes.

      Also available are some other handy shorthands that are rarely used; an example follows the list...

      @yearly
      @weekly
      @daily
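
      For example, a minimal sketch (the script path is hypothetical): @daily runs the job once per day at midnight, equivalent to a 0 0 * * * entry.

      @daily /opt/scripts/nightlycleanup.sh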

      Part of a series on Linux Systems Administration by Scott Alan Miller

      posted in IT Discussion linux unix sam linux administration cron cron job crontab bsd solaris
    • Hanging at Scale Today

      In Indianapolis. Haven't seen their California facility yet.

      0_1472656260808_image.jpeg

      posted in IT Discussion scale
    • Windows Administration: NTFS and ReFS Filesystems

      Since Windows NT was first released, one of its key components has been the ever evolving NTFS filesystem. NTFS and its features have been core to the Windows NT experience, which includes family members like Windows NT 3.1, NT 4, Windows 2000, XP, 2003, 2008, 2012 and the current Server 2016. Starting in 2012, Microsoft added an additional enterprise filesystem to the Windows NT family: ReFS. With the addition of ReFS, Windows is now much more like most UNIX operating systems in providing administrators with multiple filesystems filling different needs. This, however, is a new technical challenge for Windows administrators because there has never been such a choice in the Windows world previously, and it is very common for ReFS to mistakenly be thought of as a successor to NTFS when, in fact, it is not at all and is actually a complementary filesystem.

      NTFS is the standard filesystem for Windows and the appropriate choice for most use cases. ReFS cannot even be used for the main "C" drive as it is not bootable like NTFS. ReFS was designed to be the Windows answer to UNIX filesystems like ZFS, BtrFS and HammerFS.

      NTFS is the more general purpose, more broadly applicable filesystem while ReFS is designed for use for Hyper-V storage and is expected to be used in conjunction with Storage Spaces.

      The most common use cases are for NTFS to be used on top of hardware RAID while ReFS has built in software RAID that is often assumed to be used.

      NTFS has more features than ReFS; big unique features include deduplication, compression, hard links, transactional NTFS, certain encryption types (EFS) and quotas. Some of these are rather significant. In most use cases, NTFS is also more performant than ReFS. The benefits of ReFS are few and are only applicable under very specific conditions, conditions which at this time are generally warned against as being rather nascent and untested. It is not intended to replace or compete with NTFS but to fill a specific role for which NTFS is poorly suited.

      For all intents and purposes, Windows Admins should be using NTFS in nearly all use cases, so much so that ReFS has little need to even be considered or discussed outside of pure curiosity in that world. Windows desktops and servers must run off of NTFS primarily and have little, if any, reason to even look to ReFS as it is often a pure negative in that environment.

      In the Hyper-V virtualization world, ReFS has an important role to play but even there, only in specific circumstances and conditions - ones that remain the exception, not the rule. But it would behoove Hyper-V Administrators to familiarize themselves with ReFS to prepare them for an understanding of when ReFS is appropriate and when NTFS is appropriate.

      When in doubt, when there is any question, simply choose NTFS. NTFS is the safe choice for speed, reliability, features and maturity.

      Also, because ReFS is so new and because it trusts itself and other layers to never fail, it is poorly prepared to handle situations in which there has been a failure. From Wikipedia:

      Issues identified or suggested for ReFS, when running on Storage Spaces (its intended design), include:

      • Adding thin-provisioned ReFS on top of Storage Spaces (according to a 2012 pre-release article) can fail in a non-graceful manner, in which the volume without warning becomes inaccessible or unmanageable.[7] This can happen, for example, if the physical disks underlying a storage space became too full. Smallnetbuilder comments that, in such cases, recovery could be "prohibitive" as a "breakthrough in theory" is needed to identify storage space layouts and recover them, which is required before any ReFS recovery of file system contents can be started; therefore it recommends using backups as well.

      • Even when Storage Spaces is not thinly provisioned, ReFS may still be unable to dependably correct all file errors in some situations, because Storage Spaces operates on blocks and not files, and therefore some files may potentially lack necessary blocks or recovery data if part of the storage space is not working correctly. As a result, disk and data addition and removal may be impaired, and redundancy conversion becomes difficult or impossible.

      • Because ReFS was designed not to fail, if failure does occur, there are no tools provided to repair it. Third party tools are dependent on reverse engineering the system and (as of 2014) few of these exist.

      These issues are incredibly serious as it means that there is significant concern as to the viability of ReFS in the one scenario under which it is intended to be used.

      posted in IT Discussion windows windows server 2012 windows server 2016 windows server 2012 r2 windows administration refs ntfs filesystems storage spaces raid sam windows administration
    • Best Ad Ever

      No idea what they were meaning to advertise, but I approve this message...

      0_1478295514747_Screenshot from 2016-11-04 17-36-59.png

      posted in IT Discussion
    • SpiceWorld 2017 Austin Grey Breakfast

      We do a terrible job of making these decisions once SpiceWorld rolls around, so I'm getting this set in stone now so that we are ready. The Grey Breakfast, which ran from 2009 to 2015 and then skipped 2016 because of the wedding, will be on again in 2017 and will be at the Cafe Crepe located under the Hampton Inn Downtown Austin, next door to the Hyatt Place and one block from the conference center. This is on San Jacinto and 2nd.

      posted in IT Discussion spiceworld spiceworld 2017 grey breakfast spiceworld austin
    • Installing Salt Master

      Salt Open is the free, open source component of Salt Stack. Salt can be easily deployed to many different operating systems. In this example we will use CentOS 7, but essentially the same instructions will work across many operating systems.

      First we will start with a basic CentOS 7 build. In this example I start with a minimal build, plus firewalld, on a Scale HC3 cluster.

      0_1481434162771_Screenshot from 2016-12-11 00-05-40.png

      Now we can easily get Salt Master installed.

      cd /tmp; curl -L https://bootstrap.saltstack.com -o install_salt.sh
      sh install_salt.sh -M
      

      Now we need to open the firewall ports needed by the Salt Master, which are TCP 4505 and 4506.

      Here are the commands for CentOS, RHEL or Fedora.

      firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
      firewall-cmd --reload
      

      And here are the commands for Ubuntu.

      sudo ufw allow 4505
      sudo ufw allow 4506
      

      The bootstrap script used above detects our operating system, adds the proper repos and packages, and gets everything installed for us.

      Lastly, for this example, we will change the default hostname of the box to salt which is what the Salt Minions expect.

      echo 'salt' > /etc/hostname
      reboot
      

      That's it, our Salt Master should be all up and running. Now our Minions can connect to it.
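
      As an optional sanity check (a minimal sketch using the standard Salt CLI tools), once a minion has been pointed at the master we can list and accept its key and confirm connectivity:

      # List pending, accepted and rejected minion keys
      salt-key -L
      # Accept all pending minion keys (use -a <id> to accept one at a time)
      salt-key -A
      # Confirm that every accepted minion responds
      salt '*' test.ping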

      posted in IT Discussion salt saltstack devops linux sam salt open
    • Public Cloud vs. Hosted Hyperconvergence Costing Project

      So, doing cost comparisons seems to be very popular and I'm going to try to do a bit more of them. I thought that it might be useful if we had some real world workloads to use to compare the two approaches. Coming up with contrived examples is useful, but only so useful. Getting examples of what people actually need to compare would be far more interesting.

      What are we comparing?

      In the first corner: public cloud. Services like Amazon AWS and Vultr. An average mainline Windows server there is about $96/mo and Linux is about $40.

      In the second corner: hosted hyperconvergence. We can play around with different options, but Scale and Colocation America are the easiest and very comparable, as it is enterprise gear, full support, a single price, and Tier IV datacenters with Amazon-like full time support.

      Comparing these two is very useful because both are off-premises approaches that overlap in what they provide. Two different approaches to essentially identical needs for SMB customers.

      Let me know some workloads and let's get to comparing!

      @colocationamerica
      @scale
      @ScaleLegion

      posted in IT Discussion hyperconvergence hyperconverged scale scale hc3 colocation america amazon aws vultr
    • What Avaya Has to Teach Us About Closed Source

      Recent events have led Avaya, the giant PBX vendor, to file for Chapter 11 bankruptcy protection. This, in turn, has led current and potential customers to ask how this affects them, and this is an excellent question. Current customers have existing technical debt and any risk that they will incur has already happened; outside of being prepared for a sudden lack of support, there is little for them to do.

      New customers or potential customers need to rethink their telephony strategy. Avaya sells closed source appliances (black boxes), and the appliance is not what your money is spent on; it is being spent on the support that comes from Avaya. While the appliance itself isn't bad, it's just an appliance and in many ways is inferior to other options (such as a virtual machine.)

      Because of the closed nature, Avaya is a necessary component of the viability of the appliance. Should Avaya fail completely, the device's value literally drops to zero. That doesn't mean that it will stop functioning immediately, but it means that should anything go wrong with it, there is no official support so any security breaches, patching issues, instabilities, hardware failures or similar have no remediation - the singular thing that so much money would be paid for would simply never be delivered.

      This is the risk of closed source software and black box hardware. They are a perfectly fine product category but are completely dependent on two things:

      • Support from the rights-holding vendor, and...
      • Confidence in the vendor.

      The issue now with Avaya is that we can no longer have confidence that they will be able to survive to continue to support their products. An investment like this in something like storage often requires a five to ten year confidence window; with a PBX this is generally more like ten to twenty years.

      This is simply an anecdotal example of the real world risks from the use of closed source and appliances playing out. It is often difficult or impossible, and certainly impractical, for customers to assess the financial health of a major corporation, and customers buying Avaya a few months ago would have had little or no warning that their investment was to be at so much risk. Their only warning was that the product was closed to them and that third party support would essentially end along with primary vendor support, as one is just an extension of the other.

      Open source and open hardware products protect against this scenario. We do not have the same concerns about financial viability because support can be transferred to other vendors, products can be maintained without a vendor, and products are open for us to maintain ourselves without the resources, permission or tools of a rights-holding vendor.

      posted in IT Discussion avaya open source
    • Linux: BtrFS

      BtrFS, the "Better File System" pronounced "Butter F S", is a Linux-native, very modern addition to the Linux ecosystem meant primarily to provide Linux with a solid competitive product to FreeBSD's ZFS (BtrFS was introduced before ZFS was widely available on Linux.) BtrFS was first introduced by Oracle, who also makes ZFS. Both of these are classified as Copy-on-Write (COW) filesystems.

      Before we delve into BtrFS, we need to clarify what exactly it is. Like the elder ZFS, BtrFS is far more than a filesystem. It is actually three very distinct components:

      • File System: BtrFS contains a full POSIX filesystem and allows it to operate anywhere that EXT3, EXT4, XFS or ZFS would be used, for example. It is not a clustered filesystem, which is a common misconception. It is a traditional filesystem.
      • Logical Volume Manager: BtrFS, like ZFS, contains a complete logical volume manager (LVM) within itself and has no need to rely on Linux' LVM2 for this functionality but, like any filesystem or LVM, could be used on top of another LVM like LVM2 if desired.
      • Software RAID: Also like ZFS, BtrFS contains its own software RAID implementation so can function in a software RAID capacity without needing to use the older MD RAID system.

      With these three components BtrFS, like ZFS, is really a complete storage subsystem rather than strictly a filesystem. BtrFS is a "stack" of RAID / LVM / FS and would be more comparable to another stack such as MD / LVM2 / XFS than to any single component of another stack. This adds complication, as discussing BtrFS requires that we understand which layers are being considered at any given time. For example we could easily use these stacks at different times: MD / LVM2 / BtrFS; or MD / BtrFS / BtrFS; or BtrFS / BtrFS / BtrFS!
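
      To make a pure BtrFS stack concrete, here is a minimal, hedged sketch (device names and mount point are examples only, and mkfs will destroy any data on those devices) in which BtrFS alone provides the RAID, volume management and filesystem layers:

      # RAID 1 for both data and metadata across two example devices
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
      mkdir -p /mnt/pool
      mount /dev/sdb /mnt/pool
      # Subvolumes take the place of LVM-style logical volumes
      btrfs subvolume create /mnt/pool/home
      btrfs subvolume create /mnt/pool/backups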

      BtrFS is considered the "path forward" by the EXT team and was started by an engineer from the Reiser team. By 2017, with XFS having widely displaced the EXT and Reiser families of filesystems, the future of Linux looks primarily to be shared by XFS and BtrFS. In 2017, at the time of this writing, Ubuntu is still on EXT4 as the default filesystem, Red Hat has moved to XFS, and SUSE has moved to BtrFS as defaults.

      BtrFS is an extremely fast moving and elusive target to describe as development is ongoing and rapid and many core features are planned but not yet implemented. At this time the following are the key features of the BtrFS storage stack:

      • Mostly self-healing in some configurations due to the nature of copy-on-write
      • Online defragmentation and an autodefrag mount option
      • Online volume growth and shrinking
      • Online block device addition and removal
      • Online balancing (movement of objects between block devices to balance load)
      • Offline filesystem check
      • Online data scrubbing for finding errors and automatically fixing them for files with redundant copies
      • RAID 0, RAID 1, and RAID 10
      • Subvolumes (one or more separately mountable filesystem roots within each disk partition)
      • Transparent compression (zlib and LZO), configurable per file or volume
      • Snapshots (read-only or copy-on-write clones of subvolumes)
      • File cloning (copy-on-write on individual files, or byte ranges thereof)
      • Checksums on data and metadata
      • Union mounting of read-only storage, known as file system seeding (read-only storage used as a copy-on-write backing for a writable Btrfs)
      • Block discard (reclaims space on some virtualized setups and improves wear leveling on SSDs with TRIM)
      • Send/receive (saving diffs between snapshots to a binary stream)
      • Hierarchical per-subvolume quotas
      • Out-of-band data deduplication (requires userspace tools)

      A great number of advanced features are planned, but are not yet available. Key among these are parity RAID features including RAID 5 and 6, but also potentially the first implementations of RAID 5.4, 5.5 and 5.6 and the only other RAID 5.3 (aka RAID 7) implementation outside of ZFS. Object based mirrored RAID, inline deduplication and encryption are also planned for the near future.

      Already BtrFS is an advanced and powerful choice for Linux admins when suitable. Many of its features are designed for use when used as the physical layer storage system and not for use in a VM. For most purposes, we will expect to find BtrFS used in dedicated storage devices and other choices more commonly used in virtual machine instances.
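
      To illustrate two of the features listed above, snapshots and online scrubbing, a minimal sketch continuing the hypothetical pool from the earlier example might look like this:

      # Read-only, copy-on-write snapshot of the home subvolume
      btrfs subvolume snapshot -r /mnt/pool/home /mnt/pool/home-$(date +%F)
      # Kick off an online scrub to verify checksums and repair from redundant copies
      btrfs scrub start /mnt/pool
      btrfs scrub status /mnt/pool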

      Competition in other operating systems: Microsoft's answer to ZFS and BtrFS is their ReFS COW filesystem. Mac OSX abandoned ZFS in favour of their upcoming APFS COW system. Dragonfly has its own advanced COW filesystem named Hammer.


      Part of a series on Linux Systems Administration by Scott Alan Miller

      posted in IT Discussion linux btrfs filesystems filesystem sam linux administration raid lvm logical volume managers
    • RE: Vultr & abusive neighbors

      So that's why my VM got shut off.

      posted in IT Discussion
    • Hyper-V Architecture 2016 and How To Install

      Hyper-V, due to loads of misinformation repeated in some online communities, can be very confusing, but should not be. Hyper-V is, without exception, a free Type 1 (bare metal) hypervisor from Microsoft. It uses the Xen architecture model (also used by VMware ESX) where the hypervisor sits on the metal and a small controller environment, sometimes called a Dom0, runs in a privileged virtual machine and provides an interface and often drivers and other support to the hypervisor.

      No matter how Hyper-V is installed, there is always a bare metal hypervisor and a "Dom0" component. There is no and can be no exception to this.

      Hyper-V itself and its included Dom0 environment are completely free without exception. However, you can optionally put things into or modify the Dom0 to make it require a license. The most common way that this happens is by installing Hyper-V via the Windows Server role functionality. This causes a full version of Windows, which requires licensing, to be placed in the Dom0 instead of the lean, free Dom0 that comes with Hyper-V itself.

      Installing Hyper-V should always be done using the completely free Hyper-V installer and never via an installed Windows Server system that needs to be packaged up and moved into the Dom0.

      Download Hyper-V 2016 here: https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-server-2016

      Do not be confused by the term evaluate. Hyper-V is not an evaluation, it is the evaluation center that handles the download link because it is free. The term is the name of the site, not a reference to the status of the product.

      posted in IT Discussion hyper-v virtualization hyper-v 2016
    • How Many Windows Server VMs Can You Run on Hyper-V SAMIT Video

      Youtube Video

      @scottalanmiller helping to explain how to think about and figure out Windows Server licensing when running on Hyper-V (or any virtualization) and why Hyper-V seems confusing.

      posted in IT Discussion hyper-v windows server virtualization licensing scott alan miller samit youtube windows server 2016 microsoft licensing
    • Examining unRAID Storage

      Some of you may have heard of the hobby storage system called unRAID, which promotes itself as a RAID alternative. It, however, is not actually a RAID alternative, but just low end software RAID 4 mixed with built in hybrid RAID options.

      unRAID is a purely software product, with no hardware components, and uses a single parity data protection scheme with a dedicated parity disk (aka RAID 4.) The hybrid nature comes from the fact that the implementation uses the largest disk in the array as the parity disk so that any other disks, of any size, can be used. This means that unRAID is not as reliable, nor as performant, as normal RAID 4, as it cannot evenly use all drives in the array for operations. RAID 4 is already commonly not as fast as RAID 5, due to the dedicated parity spindle, and RAID 4 suffers from uneven wear and tear, making it increasingly risky compared to RAID 5, even before considering the additional risks added by the hybrid "feature."

      unRAID is, without a doubt, a "never use" technology. It lacks any technical benefits to make it viable, and adds on problems such as being consumer only without any enterprise support, being closed source, and having no major vendor behind it.

      posted in IT Discussion raid storage unraid
    • Staggering Cost of Azure and Windows on Cloud

      We just shut down some Azure systems that a client had left running because they forgot to tell us that they were not in use for more than a year. Oops. We've been bugging them to decom it but they thought that they were using it.

      While shutting it down, we just figured out the cost that was involved here. Two ridiculously tiny Windows VMs, one running a .NET application and one running Spiceworks.

      Annual cost? Roughly $5,000 USD. (Was actually $4,998.27) Holy cow. That's $2,500 per year, per workload, for systems that proved to not be very stable. That's enough money to have easily bought physical servers for each workload, and paid for colocation for each! These were tiny VMs, just enough to run their workloads.

      This is why both Azure and Windows are just so ridiculously expensive to have in this era. Had these been Linux systems on Vultr, we easily could have done all of this same hosting for under $500/year. Possibly way less than that, that's being exceptionally generous.

      posted in IT Discussion azure windows server windows cloud computing vps
    • The Myth of RDP Insecurity

      I know on the 🌶 site there is this persistent myth that RDP is insecure and that the solution to its insecurity is to wrap it in a VPN. This seems very silly, as RDP is natively VPN'd already. If a VPN provided security, surely the very well researched and secured VPN that is integrated with RDP would be the best choice. That VPN is already completely "shrunk" to expose nothing but RDP, which of course you can do manually with some other VPN solution, but that requires much more work and is a "one off" rather than a well known, battle tested configuration.

      Azure itself exposes RDP directly because it is considered extremely secure. It's roughly identical to SSH in security. Of course, exposing "nothing" is better than exposing "something", but the option there is to close down the services completely, not to wrap either in "yet another VPN." Things like port locking, port knocking, only opening when needed, and so forth add some serious security on top of the existing security mechanisms, but a VPN wrapper is just redundant.

      Where does this idea come from? This feels, to me, to be one of those "Windows Admins who distrust Windows" myths where there is an irrational distrust of Microsoft and the product is just believed to be insecure. We hear stories, on the spicy site of course, of people constantly getting hacked through RDP... of course they are, if you expose it with common usernames and weak passwords, any VPN will get hacked. But, as people usually do, they accept no blame and no one is willing to point out the obvious faults in the setup as professional review is discouraged there, and people point fingers at an innocent vendor who could only defend themselves by throwing customers under the proverbial bus.

      But honestly, has anyone ever heard of an actual hack of RDP? One where the end users didn't leave it wide open in a way that would have compromised any service? VPNs are only as secure as the passwords that you put in front of them.

      RDP is very secure and there is no need for additional security around it. Just standard good security practice and it is as secure as anything is reasonably going to get. If you need something more secure, you can't be exposing anything like this to the outside world.

      posted in IT Discussion rdp vpn security