
    Posts

    • Mounting an NFS Home Share on CentOS 7 Clients

      Home directories present both a unique challenge and a unique opportunity for utilizing remote shares in UNIX. As an opportunity, they allow us to keep servers lean and share home directories broadly, giving end users consistent files and environments with minimal effort, reducing storage needs and speeding system deployment. The challenge is that we want to avoid the overhead and risks of having home directories continuously mounted when they are not needed. Home directories are probably the most common use of NFS shares on normal servers (outside of unique uses such as backup targets and virtualization shared storage).

      To tackle the problems associated with persistent mounts (such as delays or failures at boot time) on Linux, we look to automounting - that is, a daemon that watches for a filesystem request and initiates the NFS mount at the time of use rather than proactively.

      Note: In my examples, nfs is the /etc/hosts entry for my NFS server. You will need to use the name of your server wherever you see me referencing nfs as a server name.

      The Linux daemon that handles this is autofs. Autofs is not installed in a CentOS 7 minimal install so we need to add it.

      yum -y install autofs nfs-utils
      

      Now to configure autofs to look for home directories to mount:

      echo "/home /etc/auto.home" >> /etc/auto.master
      echo "* nfs:/home/&" >> /etc/auto.home
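      For clarity, the same two lines can be sketched with the server name and export pulled out into variables (NFS_SERVER and NFS_EXPORT are placeholders, not part of the original commands). Note that an autofs map entry may also carry NFS mount options between the key and the location:

```shell
# Placeholders - set these to match your environment.
NFS_SERVER="nfs"
NFS_EXPORT="/home"

echo "/home /etc/auto.home"              # line appended to /etc/auto.master
echo "* ${NFS_SERVER}:${NFS_EXPORT}/&"   # line appended to /etc/auto.home
# With explicit mount options the map line would look like:
#   * -rw,hard,intr ${NFS_SERVER}:${NFS_EXPORT}/&
```

The `&` in the map is expanded by autofs to the key that was looked up, so `/home/username` maps to `nfs:/home/username`.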
      

      Now to move the old /home out of the way in case something is there already.

      mv /home /tmp/home.old; mkdir /home
      

      And we can start up AutoFS:

      systemctl enable autofs.service
      systemctl restart autofs.service
      reboot
      

      Now we have two convenient ways to test the automounter. By default the "net" map is enabled, so we can simply navigate to...

      cd /net/nfs/home
      

      And our files should be visible there. You will often need to navigate directly into a subfolder of the mount to see them.

      If home directories already exist on the share, then we can test mounting this way:

      sudo su - username
      

      This should, if all is working, take you right into the newly mounted home directory. You can test with these commands:

      pwd
      df .
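      The same check can be scripted. This is a sketch assuming GNU coreutils (for the `df --output` flag) and that you run it from inside the automounted home directory:

```shell
# Print the filesystem type and source backing the current directory.
# On an automounted home this should report nfs or nfs4 with a source
# like nfs:/home/username; on a local disk it will show something like
# xfs or ext4 with a device path instead.
df --output=fstype,source . | tail -n 1
```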
      
      posted in IT Discussion nfs nfs 3 centos centos 7 file server home ntg lab scale scale hc3 rhel rhel 7 linux
      scottalanmiller
    • Happy Third Birthday to MangoLassi!

      Actually missed the announcement by three days. February 13th is the anniversary date. Three years, can you believe it!

      posted in Announcements birthday anniversary
    • Building ELK on CentOS 7

      Okay, after much work, we finally have a working ELK install process for CentOS 7. This took a bit of work thanks to all of the configuration files that need to be created or modified. This is a long one; hopefully it will be useful.

      Here is a basic VM being created on a Scale HC3. You are going to want to start with at least two vCPUs and at least four GB of RAM; I'd recommend at least six GB, and eight GB is a good starting point if you have the resources and will use this for more than a lab. Half a terabyte is a good starting point for disk space. XFS is heavily recommended for the filesystem.

      ELK on Scale

      #!/bin/bash
      
      cd /tmp
      yum -y install wget firewalld epel-release
      yum -y install nginx httpd-tools unzip
      systemctl start firewalld
      systemctl enable firewalld
      wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
      yum -y install jdk-8u65-linux-x64.rpm
      rm jdk-8u65-linux-x64.rpm
      rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
      
      cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
      [elasticsearch-2.x]
      name=Elasticsearch repository for 2.x packages
      baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
      gpgcheck=1
      gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      EOF
      
      yum -y install elasticsearch
      mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.old
      echo 'network.host: localhost' > /etc/elasticsearch/elasticsearch.yml
      systemctl start elasticsearch
      systemctl enable elasticsearch
      
      cat > /etc/yum.repos.d/kibana.repo <<EOF
      [kibana-4.4]
      name=Kibana repository for 4.4.x packages
      baseurl=http://packages.elastic.co/kibana/4.4/centos
      gpgcheck=1
      gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      EOF
      
      yum -y install kibana
      mv /opt/kibana/config/kibana.yml /opt/kibana/config/kibana.yml.old
      echo 'server.host: "localhost"' > /opt/kibana/config/kibana.yml
      systemctl start kibana
      systemctl enable kibana.service
      htpasswd -c /etc/nginx/htpasswd.users kibanauser
      setsebool -P httpd_can_network_connect 1
      mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.old
      
      cat > /etc/nginx/nginx.conf <<EOF
      user nginx;
      worker_processes auto;
      error_log /var/log/nginx/error.log;
      pid /run/nginx.pid;
      
      events {
          worker_connections 1024;
      }
      
      http {
          log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
      
          access_log  /var/log/nginx/access.log  main;
      
          sendfile            on;
          tcp_nopush          on;
          tcp_nodelay         on;
          keepalive_timeout   65;
          types_hash_max_size 2048;
      
          include             /etc/nginx/mime.types;
          default_type        application/octet-stream;
      
          include /etc/nginx/conf.d/*.conf;
      }
      EOF
      
      cat > /etc/nginx/conf.d/kibana.conf <<EOF
      server {
          listen 80;
      
          server_name example.com;
      
          auth_basic "Restricted Access";
          auth_basic_user_file /etc/nginx/htpasswd.users;
      
          location / {
              proxy_pass http://localhost:5601;
              proxy_http_version 1.1;
              proxy_set_header Upgrade \$http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host \$host;
              proxy_cache_bypass \$http_upgrade;        
          }
      }
      EOF
      
      systemctl start nginx
      systemctl enable nginx
      systemctl start kibana
      systemctl restart nginx
      firewall-cmd --zone=public --add-port=80/tcp --permanent
      firewall-cmd --reload
      
      cat > /etc/yum.repos.d/logstash.repo <<EOF
      [logstash-2.2]
      name=logstash repository for 2.2 packages
      baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
      gpgcheck=1
      gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
      enabled=1
      EOF
      
      yum -y install logstash
      # See below (after the script) for generating the certificate and key files referenced in the beats input
      # cd /etc/pki/tls/
      # openssl req -subj '/CN=elk.lab.ntg.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
      
      cat > /etc/logstash/conf.d/02-beats-input.conf <<EOF
      input {
        beats {
          port => 5044
          ssl => true
          ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
          ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
        }
      }
      EOF
      
      cat > /etc/logstash/conf.d/10-syslog-filter.conf <<EOF
      filter {
        if [type] == "syslog" {
          grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
          }
          syslog_pri { }
          date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
          }
        }
      }
      EOF
      
      cat > /etc/logstash/conf.d/30-elasticsearch-output.conf <<EOF
      output {
        elasticsearch {
          hosts => ["localhost:9200"]
          sniffing => true
          manage_template => false
          index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
          document_type => "%{[@metadata][type]}"
        }
      }
      EOF
      
      service logstash configtest
      systemctl restart logstash
      systemctl enable logstash
      cd /tmp
      curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
      unzip beats-dashboards-*.zip
      cd beats-dashboards-1.1.0
      ./load.sh
      cd /tmp
      curl -O https://raw.githubusercontent.com/elastic/filebeat/master/etc/filebeat.template.json
      curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' [email protected]
      firewall-cmd --zone=public --add-port=5044/tcp --permanent
      firewall-cmd --reload
      systemctl restart logstash
      

      You will likely want to generate a server-side certificate for use with Logstash. Whether this is necessary depends on how you intend to use ELK, but for most common usages today you will want to include this step:

      cd /etc/pki/tls/
      openssl req -subj '/CN=your.elk.fqdn.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
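      Before distributing the certificate to client machines, it is worth confirming what was generated. This sketch creates the same kind of certificate in a scratch directory (/tmp/elk-cert-check is used here only so nothing under /etc/pki is touched) and prints its subject:

```shell
# Generate a throwaway copy of the certificate in a scratch directory
# (the real files belong under /etc/pki/tls as shown above).
mkdir -p /tmp/elk-cert-check && cd /tmp/elk-cert-check
openssl req -subj '/CN=your.elk.fqdn.com/' -x509 -days 3650 -batch -nodes \
    -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt 2>/dev/null

# The subject CN must match the name your Beats shippers will use to
# reach the Logstash server, or TLS verification will fail.
openssl x509 -in logstash-forwarder.crt -noout -subject
```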
      

      This will generate the logstash-forwarder.crt file that we will see in another post.

      posted in IT Discussion scale ntg lab scale hc3 centos centos 7 elk logging log management how to linux elasticsearch kibana logstash kibana 4
    • Risk: Single Server versus the Smallest Inverted Pyramid Design

      This comes up so often that it is worth having a risk analysis for this one scenario.

      Scenario: Two servers and a single NAS or SAN as shared storage. High Availability solution applied to the servers so that if one server dies the other can immediately spin up the failed VM. (This is a 2+1 Inverted Pyramid design.)

      Assumptions: We assume the following...

      • Server means an enterprise class commodity server, a standard server like the HPE DL380 or Dell R730. Anything in that general category of reliability.
      • NAS or SAN is a comparable price range unit which would include unified storage offerings from companies like Synology or ReadyNAS as well as dedicated SAN from Dell and HPE.
      • The two plus one IPOD design is being considered because the "high availability" checkmark was required.
      • That the baseline for standard availability is defined as the level of availability associated with a standard enterprise server.

      Now in this example we can represent the standard risk presented by a single server solution as X. That would be if we had a single server with no second server and no external storage. This is our baseline and is "standard availability." There is only a single failure domain to consider so this is very simple.

      Now let's determine the risk of the 2+1 IPOD solution. We have two failure domains, the server tier and the storage tier. We have three devices, each is roughly equivalent with a risk of roughly X. (See below.)

      So our storage tier has a risk of X, that's simple. There is no mitigation for the risk there, it is what it is.

      The server tier has two servers that each have a risk of X but we mitigate this with hypervisor or application layer high availability technologies. These are not perfect but we assume that they are effective. There are many possible risks here, that the HA layer will fail, that the workloads will not be consistent, that the applications will behave badly and, the big one, that the second host will fail while the first one is down. Even with all of these factors together the risk at this layer is a tiny fraction of X. We will call this risk Y where Y is a positive risk number greater than zero but far closer to zero than to X. What is important is that Y is less than X but not zero.

      Now the risk of the two failure domains must be combined because each tier must be fully functioning or the entire system has failed. If the server tier fails the storage is useless. If the storage tier fails the servers are useless. So we have a dependency chain of risks.

      So the risk here is X + Y. We don't know what Y is, but what is important is that the risk of the resulting system is a number greater than X. It doesn't matter how risky X is, it doesn't matter how small Y is, the resulting risk figure is riskier than X by a tiny or potentially somewhat large amount.
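      The arithmetic can be made concrete with toy numbers (the 3% and 0.1% figures below are assumptions purely for illustration, not measured failure rates):

```shell
# If X, the annual outage risk of one standard server (and of the
# storage unit), were 3%, and Y, the residual risk of the HA-mitigated
# server tier, were 0.1%, the dependency chain gives:
awk 'BEGIN {
    X = 0.03    # storage tier risk (single, unmitigated failure domain)
    Y = 0.001   # mitigated server tier risk (Y > 0, but Y << X)
    printf "single server: %.1f%%\n", X * 100
    printf "2+1 IPOD:      %.1f%%\n", (X + Y) * 100
}'
```

However small Y gets, X + Y stays above X: the unmitigated shared storage keeps the whole system at least as risky as the single server.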

      If risk were our only factor, this would be not that far from a break even. The single server design would still win by being less risky, easier to manage, more performant and better in a host of other ways. But one factor that we are never without is cost. Cost is a form of risk itself that can never be ignored. If cost is no object then by extension risk is no object either (risk is measured in financial terms, after all).

      In our example, one system has a single server. The other has three comparable devices: two servers plus the storage unit. It is plausible that the storage unit could cost less than one of the server nodes; far more likely its cost would be higher. Even if the cost were zero the theory would remain strongly true, but for the sake of the example we will assume the average and say that it costs the same as one of the servers. The inverted pyramid design then comes in at an average of 300% of the cost of the single server solution while being more risky. This is huge. In the real world this would vary from something like 250% on the low end to around 600% on the high end.

      So at the end of the day, we spend 3x what we should just to have more work to maintain the system and to take on extra risk without benefit.

      Many people like to think or claim - and vendors will happily feed into this belief - that storage devices, especially SANs, are "magic" and not subject to the same risks as normal servers, even though they are just servers themselves. A typical unified storage device is built on SuperMicro servers or similar, which makes its risk profile nearly identical to that of a standard enterprise server. It is, indeed, a standard server in every way.

      It is very common to see slightly more expensive dedicated SANs treated as infallible, sometimes simply because of the name SAN (which refers only to the network protocols used) and sometimes because of the assumption that dual controllers will break the rules of risk and make the single device effectively riskless. Dual controllers are not used in standard enterprise servers for a reason - they generally add no value and often create additional risk. Indeed, in non-active/active configurations (the only ones that can be remotely considered in this price range and scale) dual controllers routinely create disasters far more often than standard servers fail.

      Most storage devices in this range also lack the support options that enterprise servers have. This problem is not inherent to the solution type but comes from common choices at this layer, so it should not be treated as a universal rule. But it is very common to see people say that they must avoid five minutes of downtime at the server layer while selecting a storage device with a two week repair SLA - one that does not even guarantee the unit will come back with the data intact!

      Because of the combinations of lower production rates, less testing, dual controller induced failure, special case software and more the storage layer is actually normally quite a bit more risky than the server layer even with comparable hardware. So we are actually overly generous to the IPOD solution approach by calling this risk X, when in fact it is generally quite a bit higher, possibly 2X or more!

      It should be noted that it is possible to move up to very high end, very expensive storage devices that will mitigate a large portion, but never all, of this risk. But in a 2+1 design this would normally at least double the entire project cost, and it is effectively unthinkable since far better risk mitigation strategies exist that are both less risky and far less costly in other ways.

      posted in IT Discussion inverted pyramid best practice risk risk analysis scottalanmiller san nas storage
    • The Emperor's New Storage

      Original Article: The Emperor's New Storage on SMBITJournal

      We all know the story of the Emperor’s New Clothes. In Hans Christian Andersen’s telling of the classic tale we have some unscrupulous cloth vendors who convince the emperor that they have clothes made from a fabric with the magical property of only being visible to people who are fit for their positions. The emperor, not being able to see the clothes, decides to buy them because he fears people finding out that he cannot see them. Everyone in the kingdom pretends to see them as well – all sharing the same fear. It is a brilliant sales tactic because it puts everyone on the same team: the cloth sellers, the emperor, the people in the street all share a common goal that requires them to all maintain the same lie. Only when a little boy who cares naught about his status in society but only about the truth points out that the emperor is naked is everyone free to admit that they don’t see the clothes either.

      And this brings us to the storage market today. Today we have storage vendors desperate to sell solutions of dubious value and buyers who often lack the confidence in their own storage knowledge to dare to question the vendors in front of management or who simply have turned to vendors to make their IT decisions on their behalf. This has created a scenario where the vendor confidence and industry uncertainty has engendered market momentum causing the entire situation to snowball. The effect is that using big, monolithic and expensive storage systems is so accepted today that often systems are purchased without any thought at all. They are essentially a foregone conclusion!

      It is time for someone to point at the storage buying process and declare that the emperor is, in fact, naked.

      Don’t get me wrong. I certainly do not mean to imply that modern storage solutions do not have value. Most certainly they do. Large SAN and NAS shared storage systems have driven much technological development and have excellent use cases. They were not designed without value, but they do not apply to every scenario.

      The idea of the inverted pyramid design, the overuse of SANs where they do not apply, came about because they are high profit margin approaches. Manufacturers have a huge incentive to push these products and designs because they do much to generate profits. SANs are one of the most profit-bearing products on the market. This, in turn, incentivizes resellers to push SANs as well, both to generate profits directly through their sales but also to keep their vendors happy. This creates a large amount of market pressure by which everyone on the “sales” side of the buyer / seller equation has massive pressure to convince you, the buyer, that a SAN is absolutely necessary. This is so strong of a pressure, the incentives so large, that even losing the majority of potential customers in the process is worth it because the margins on the one customer that goes with the approach is generally worth losing many others.

      Resellers are not the only “in between” players with incentive to see large, complex storage architectures get deployed. Even non-reseller consultants have an incentive to promote this approach because it is big, complex and requires, on average, far more consulting and support than do simpler system designs. This is unlikely to be a trivial number. Instead of a ten hour engagement, they may win a hundred hours, for example, and for consultants those hours are bread and butter.

      Of course, the media has incentive to promote this, too. The vendors provide the financial support for most media in the industry and much of the content. Media outlets want to promote the design because it promotes their sponsors and they also want to talk about the things that people are interested in and simple designs do not generate a lot of readership. The same problems that exist with sensationalist news: the most important or relevant news is often skipped so that news that will gather viewership is shown instead.

      This combination of factors is very forceful. Companies that look to consultants, resellers and VARs, and vendors for guidance will get a unanimous push for expensive, complex and high margin storage systems. Everyone, even the consultants who are supposed to be representing the client have a pretty big incentive to let these complex designs get approved because there is just so much money potentially sitting on the table. You might get paid one hour of consulting time to recommend against overspending, but might be paid hundreds of hours for implementing and supporting the final system. That’s likely tens of thousands of dollars difference, a lot of incentive, even for the smallest deployments.

      This unification of the sales channel and even the front line of “protection” has an extreme effect. Our only real hope, the only significant one, for someone who is not incentivized to participate in this system is the internal IT staff themselves. And yet we find very rarely that internal staff will stand up to the vendors on these recommendations or even produce them themselves.

      There are many reasons why well intentioned internal IT staff (and even external ones) may fail to properly assess needs such as these. There are a great many factors involved and I will highlight some of them.

      • Little information in the market. Because no company makes money by selling you less, there is almost no market literature, discussions or material to assist in evaluating decisions. Without direct access to another business that has made the same decision or to any consultants or vendors promoting an alternative approach, IT professionals are often left all alone. This lack of supporting experience is enough to cause adequate doubt to squash dissenting voices.
      • Management often prefers flashy advertising and the word of sales people over the opinions of internal staff. This is a hard fact, but one that is often true. IT professionals often face the fact that management may make buying decisions without any technical input whatsoever.
      • Any bid process immediately short circuits good design. A bid would have to include “storage” and SAN vendors can easily bid on supplying storage while there is no meaningful way for “nothing” to bid on it. Because there is no vendor for good design, good design has no voice in a bidding or quote based approach.
      • Lack of knowledge. Often dealing with system architecture and storage concerns are one off activities only handled a few times over an entire career. Making these decisions is not just uncommon, it is often the very first time that it has ever been done. Even if the knowledge is there, the confidence to buck the trend easily is not.
      • Inexperience in assessing risk and cost profiles. While these things may seem like bread and butter to IT management, often the person tasked with dealing with system design in these cases will have no training and no experience in determining comparative cost and risk in complex systems such as these. It is common that risk goes unidentified.
      • Internal staff often see this big and costly purchase as a badge of honour or a means to bragging rights, excited to show off how much they were able to spend and how big their new systems are. Everyone loves gadgets, and these are often the biggest, most expensive toys that we ever touch in our industry.
      • Internal staff often have no access to work with equipment of this type, especially SANs. Getting a large storage solution in house may allow them to improve their resume and even leverage the experience into a raise or, more likely, a new job.
      • Turning to other IT professionals who have tackled similar situations often results in the same advice as from sales people. This is for several reasons. All of the reasons above, of course, would have applied to them, plus one very strong one – self preservation. Any IT professional that has implemented a very costly system unnecessarily has a lot of incentive to state that they believe the purchase was a good one. This may be "reverse rationalization" – the human tendency to apply reason, after the fact, to a decision that lacked it – driven by fear that their job may be in jeopardy if what they had done were found out; it may be that they never assessed the value of the system after implementation; or it may simply be that their factors were not the same as yours and the design genuinely was applicable to their needs.

      The bottom line is that basically everyone, no matter what role they play, from vendors to sales people to those that do implementation and support to even your friends in similar job roles to strangers on Internet forums, all have big incentives to promote costly and risky storage architectures in the small and medium business space. There is, for all intents and purposes, no one with a clear benefit for providing a counter point to this marketing and sales momentum. And, of course, as momentum has grown the situation becomes more and more entrenched with people even citing the questioning of the status quo and asking critical questions as irrational or reckless.

      As with any decision in IT, however, we have to ask “does this provide the appropriate value to meet the needs of the organization?” Storage and system architectural design is one of the most critical and expensive decisions that we will make in a typical IT shop. Of all of the things that we do, treating this decision as a knee-jerk, foregone conclusion without doing due diligence and not looking to address our company’s specific goals could be one of the most damaging that we make.

      Bad decisions in this area are not readily apparent. The same factors that lead to the initial bad decisions will also hide the fact that a bad decision was made much of the time. If the issue is that the solution carries too much risk, there is no means to determine that better after implementation than before – thus is the nature of risk. If the system never fails we don’t know if that is normal or if we got lucky. If it fails we don’t know if this is common or if we were one in a million. So observation of risk from within a single implementation, or even hundreds of implementations, gives us no statistically meaningful insight. Likewise when evaluating wasteful expenditures we would have caught a financial waste before the purchase just as easily as after it. So we are left without any ability for a business to do a post mortem on their decision, nor is there an incentive as no one involved in the process would want to risk exposing a bad decision making process. Even companies that want to know if they have done well will almost never have a good way of determining this.

      What makes this determination even harder is that the same architectures that are foolish and reckless for one company may be completely sensible for another. The use of a SAN based storage system and a large number of attached hosts is a common and sensible approach to controlling costs of storage in extremely large environments. Nearly every enterprise will utilize this design and it normally makes sense, but is used for very different reasons and goals than apply to nearly any small or medium business. It is also, generally, implemented somewhat differently. It is not that SANs or similar storage are bad. What is bad is allowing market pressure, sales people and those with strong incentives to “sell” a costly solution to drive technical decision making instead of evaluating business needs, risk and cost analysis and implementing the right solution for the organization’s specific goals.

      It is time that we, as an industry, recognize that the emperor is not wearing any clothes. We need to be the innocent children who point, laugh and question why no one else has been saying anything when it is so obvious that he is naked. The storage and architectural solutions so broadly accepted benefit far too many people and the only ones who are truly hurt by them (business owners and investors) are not in a position to understand if they do or do not meet their needs. We need to break past the comfort provided by socially accepted plausible deniability or understanding, or culpability for not evaluating. We must take responsibility for protecting our organizations and provide solutions that address their needs rather than the needs of the sales people.

      [I actually started this post over five years ago but it was just completed today. This one has been in my "to write" pile part way written for a full half of a decade!]

      posted in IT Discussion
    • Who is the Real IT Manager?

      In a normal department, roles are generally relatively clear cut as to who is and who is not the decision maker and manager of that department. The head of HR has final say in HR decision making, the head of finance has final say in matters of accounting and budgeting, the head of operations has final say as to equipment chosen, locations and such, the head of legal has final say on legal matters, the head of marketing has final say on marketing matters, and so on and so forth. Sure, all of these functions report up to the final executives like the President and the CEO who oversee the business as a whole, and sometimes those executives do get their "hands dirty" by moving down into the trenches to assist with direct decision making within a single department. But by and large, we know that the CFO is going to have the final say, oversight and responsibility for financial matters and accounting, and that decisions for accounting and finance are either made by the CFO directly or by financial department personnel under their supervision. It's clear, in these cases, who is in charge of and responsible for the departments.

      IT departments, however, often do not operate this way, at least not in the SMB. The head of IT, regardless of title, often has little to no decision making power - not only not setting direction, but rarely making decisions at all, often not even trivial ones. Decision making often falls to those higher in management, sometimes going so far as not consulting the IT department at all, disregarding its proposals, or injecting proposals of their own!

      This brings up a very, very critical question - who really is the IT manager? If the person who is listed as the IT manager is not really managing IT in many cases, who is? And why are they hiding this fact?

      This is an important bit of terminology that has a lot of ramifications. It appears, from observation, that many companies label one person as the IT manager but give the job to someone else. In some ways, this is likely an extension of the existing problem of IT staff getting false or inflated titles. But it goes further than that. This is also management hiding their true function as heading IT within an organization.

      Why would someone hide their responsibilities in this manner? Perhaps it is just really poor thought processes and no one pointing out the obvious. But that feels unlikely. Maybe it is because an organization is attempting to "play politics" and set up non-decision-making staff as scapegoats for bad IT decisions. Maybe it is to hide the fact that trained, skilled IT decision makers are lacking in the organization.

      For whatever reason, both true IT management (those making the IT decisions) and IT staff (those doing the majority of IT work) often silently agree to ignore the fact that the person making IT decisions is not the person labeled as and promoted as being IT management. Whether this is inexperience, pressure, confusion or something else, the problem remains.

      This can cause many problems in businesses. For one, the people making critical IT decisions, and therefore critical business decisions, are often untrained, unvetted, ill-suited, unchecked, secret decision makers who have no oversight, training or likely mandate for the tasks that they are taking on. Likewise, training, vetting and more are often spent on people with the title, but not the responsibility or role, of IT management. This, in turn, causes yet more problems.

      Partially this is the natural extension of the Dilbert Principle in practice - those least capable are put into the assumed role of least damage, which is seen as management. In any case, it hinders the organization's ability to determine where training and resources need to be spent, where decisions and mistakes will be made, and who is responsible and accountable, and it leaves the organization poorly able to function. And it greatly hampers the ability to hire and retain good staff, or even to identify when good staff are in place.

      Every role in an organization can benefit from understanding and identifying the truth. If you don't know who the decision makers are, you are at an extreme disadvantage. If you can identify them, you can intelligently choose when it is prudent to expose this to the organization as a whole.

      posted in IT Discussion it business
      scottalanmillerS
      scottalanmiller
    • Nearly Every Technical Conversation

      truth hurts

      I feel this applies well this week, but really, always.

      posted in IT Discussion
      scottalanmillerS
      scottalanmiller
    • IT is Complex

      We all know that IT is complex; it's a big field with a lot of moving parts. But too few people really stop and understand why it is so complex. Why is it so difficult to know all of it? Why are things so hard to repeat? Why can't companies find people with experience in exactly what they do? And do we really need so many specialists for each little thing?

      Companies need to understand this because believing that IT "isn't that hard" or not understanding the scope and complexity of the field does much to undermine good management of IT workers and departments.

      IT is not defined. This is a bigger problem than people often realize. IT is not well defined. Talk to a dozen people who think that they work in IT and likely you will get at least ten totally different definitions of IT, and at least half the people won't feel that one or two of the people in that dozen even work in IT and are using the term for working in a different field! As people with IT labels, we struggle to even know if we ourselves are working in IT! That's how complex things are. What the average consumer thinks of as IT is actually what we call "bench" work, a related but non-IT field; and many others confuse IT with electricians (even businesses often make this mistake.)

      IT is enormous. Few fields have as many aspects as IT does. IT is one of the largest fields that there is for work and there are so many vastly different jobs. The roles of a network engineer, a systems administrator, a help desk technician, a database administrator, an automation engineer or an application manager are all so different as to reasonably be completely different fields; each requiring a lifetime of study to really know well, and each with its own foci so extreme that there are "lifetime studies" within each focus within each IT discipline! And those are just the well defined roles; roles that overlap some or all of these are common and can include just about anything.

      IT roles are undefined. Both within the field itself and outside of it, almost nothing done in IT is well defined. What one person calls a system administrator is likely to be nothing at all like what another person calls it. The same title at ten jobs will likely involve ten nearly unrelated roles. Someone who is the best network engineer at one job could be the worst at another without changing anything that they are doing. Everything from tools, techniques, tasks, goals, expectations, resources, technologies and desired outcomes can be polar opposites or even unrelated between jobs of the same level, pay, region, market and title! There is no way to predict if your experience or skills that have been a perfect fit in the past will even be useful or applicable in a new role regardless of the amount of interviewing and research done.

      IT exists in every market. IT exists not as its own thing but as an addendum to every other business category. That means that while you might be experienced as a DBA for healthcare, you might see the same role but with a different twist in finance, manufacturing, government, military, research, aerospace, tourism, construction, etc. Knowing everything about IT alone is only part of the picture. Each industry is highly unique in how it uses IT, expects IT to behave, and what IT will do. There are many axes to working in IT; no one can cover them all.

      Every company is unique. This can't be overstated. Outside of the complexities of IT and the complexities of different industries, each individual business is totally unique from an IT context. IT actually sees this just as dramatically or even more dramatically than any other business function. Not only does IT need to deeply understand all of the unique aspects of their business, they also then have to combine that with all of the unique aspects of the IT of that business and the two effectively multiply against each other. No other role really experiences the geometric increase in complexity from these two angles in the way that IT does.

      IT is cross discipline. IT is not a single thing and it is certainly not primarily a technical field. IT is primarily business, but a business support role that manages the infrastructure of the business. IT professionals, especially those in decision making roles, need a deep understanding of both business in general and the specifics of the industry, locality and individual business itself. Then, on top of a business scope that rivals that of positions like CEO, COO and CFO, IT must also know and understand technology and how it is used, could be used, should not be used and so forth in relationship to those factors. IT pros don't get to just be experts in one thing; they have to be experts in many. The IT department in any company will normally need an understanding of general business, the operations of the business in question, accounting and finance, legal, human resources, infrastructure, regulations, and then IT itself.

      IT doesn't control the majority of the "pieces". IT is not in the business of making the technology that it uses, it uses parts from other vendors. But it has to assemble a business infrastructure from the pieces that vendors provide. So like a car mechanic, IT is at the mercy of the manufacturers and vendors to provide working systems; but unlike a mechanic IT has to take a multitude of different parts and assemble a business infrastructure out of them that, itself, is very complex. IT has no control over the individual parts that they must work with and is at the mercy of many other organizations to keep pieces working individually and, in some cases, together.

      The tools of IT are vastly complex and always changing. IT doesn't exist in a vacuum; it uses tools from vendors (or vendor-like entities.) Each product is vastly complex, from a hard drive or CPU to servers, switches and software. A single company might run billions of lines of code, none of which it controls or creates, most of which has never been tested together, all of which needs to be updated constantly and exists in a multitude of states at any given time, running on an incredible array of highly complex hardware moving at incredible speeds. Nearly everything in IT is a miracle of engineering. It can take decades of training and experience to really know some products on their own, let alone in combination with other products. But IT faces products that are constantly updating and potentially changing, so there is no way to accumulate experience on a single product in a highly useful way without creating huge danger from a lack of updating. The idea that any IT professional can really know everything about any product is far fetched. Even simple products require constantly learning new things about them, keeping up with changes, and learning how they behave in the real world when interacting with other components.

      IT cannot be ripped and replaced. Unlike a car which has a set of known components which can be replaced to fix "any" problem, there is no reliable approach like this in IT. There is no "pay to fix it" or just "stop trying to fix it and replace it" magic fixes. There is no known machine that we can look up all the tested parts of and just make sure that each is there. Every business is a unique "machine" and is in constant motion. The machine that works reliably today may not tomorrow. IT cannot know what will happen, only predict risk and cost and manage change as best as it can.

      IT is not isolated. More so than other fields, IT does not exist in a vacuum. The infrastructure that was fast ten years ago is slow today. The security that was tight last month is wide open today. The threats that we plan for are relevant one day and totally different the next. A car built to drive on a street in 1930 is still useful today; cars used benzine in 1900 and still use it today. Few areas of study have the world change around them, making knowledge and techniques obsolete even when the business or technology internally does not change, but IT does. IT doesn't have the choice to stand still; many factors, especially risk and security, change constantly.

      IT is young and still evolving. IT has only existed theoretically since the 1940s and only practically since the late 1980s. Very little research has gone into the field and the market is still driven by vendors and products rather than research and the industry promoting its own experts. This is slowly changing but has a long way to go.

      IT has no unified training process. Other fields have options such as university programs in which to learn about their disciplines. IT, however, lacks this. No university programs exist (or are known to exist) that teach IT as a whole. Some teach ancillary topics, some teach technical bits or do sales on behalf of vendors but no solid university programs teaching IT as a discipline exist and no university has the resources currently to do this. The field requires that common sense and self education provide what is needed and will continue to do so until the field is saturated and the university system has an opportunity to afford and attract research level IT practitioners to its ranks. As long as IT resources remain overly scarce in the field, the field will continue to compensate far beyond the reach of the university system. There is no concept of IT certification (for IT as a whole) and not even the possibility of such a concept at this time. We are decades away from being able to reasonably entertain such an idea.

      IT has high expectations. IT professionals are often expected to work long hours with few breaks or little time to study, and are often expected to bear all of the worst aspects of management (giving all time and energy to the firm), professionals (extensive personal training, personal responsibilities and guarantees) and blue collar factory workers (often forced to sit idly by, even when impractical, just to not be seen as "different" by other employees.) The expectations put on IT are often unreasonable even by the worst standards. The field is often mistreated on such a scale that it has impacted the field as a whole in its ability to progress.

      IT is rarely understood by outsiders. Those that hire, manage and utilize IT resources cannot, for the understandable reasons listed above, reasonably understand what IT does, should do or would do if allowed to. IT is often encumbered with assumptions from those both above it and below it, and this sense of confusion leads to distrust, mistreatment and distancing that increases the complexity and decreases the quality of IT work and potential. IT is often forced to play politics unnecessarily simply because it is not treated equally with other departments and has to work around organizational problems that are unique to IT.

      posted in IT Discussion it business
      scottalanmillerS
      scottalanmiller
    • Being an IT Manager: Joys and Headaches

      I had a late night meeting recently with a software development intern who recently came to work for us. We met at a coffee shop near where he lives, at eleven at night. In most professions I imagine that going to a meeting with a new staffer in the late evening would seem odd but somehow, in IT, it seems almost normal.

      We talked for hours. We talked about project ideas, career goals, academic pursuits and ended with a two hour architectural discussion about the new project he’ll be working on. Leaving our meeting at four in the morning, it was invigorating to see him excited about software development and working in IT in general.

      Moments like this one remind me of just how much I love what I do and how awful it would be to do anything else. After twenty years in the field, I still love the excitement and variety of working in IT and especially of being a technical IT manager.

      Seeing that passion in someone just entering the field is exciting too. Hopefully, twenty years from now, that young intern will be meeting a new intern of his own, late at night and reminiscing about his first time getting coffee with his boss – way back when.

      Driving home, watching to make sure that the sun wasn't about to peek over the horizon, I was contemplating just what makes IT and specifically IT management the best job in the world. I've been a manager for a decade now.

      A decade. Time sure has flown.

      Being an IT manager means managing change. It means staying technical, very technical, while also working with people. I am not a traditional manager, simply keeping the employees in line and seeing that everyone keeps working. My job is to guide, to mentor and ultimately, I believe, to inspire.

      To inspire? Really? Absolutely. If I was to describe my job in as few words as possible I would say that my job description is to "Inspire and instill passion." When pressed I would probably add "while providing gentle guidance and course correction."

      I remove roadblocks. I provide assurance. I sign off on and accept responsibility for potentially risky decisions. I shield from the outside world. I facilitate creative brilliance.

      My staff are professionals. Brilliant, quirky, hard working, self-motivated IT professionals – each carefully selected in the hopes of finding someone who brings both technical skill and unmitigated drive to the organization. These are not people whom I need to manage in the traditional sense. These are people who need me to keep the path open so that amazing productivity and cool, unique solutions can just happen naturally.

      This isn't the HR department. This isn't accounting. This is IT. If you need me to watch over your shoulder all day long to make sure that you’re still working then you’re in the wrong field. I’m not here to convince you to do your job. I’m here to make sure that you have the resources necessary to do your job to the best of your ability.

      From time to time I am called upon to engage in technical pursuits. I may help with system engineering tasks, respond to an emergency when a server is down, help with a software architecture discussion, perform a peer review, give my opinion about the merits of one technology over another.

      I am tasked with leading by example. I read books, magazines, websites, e-zines, blogs and listen to podcasts from IT Conversations. I do this everyday. When my junior staff see how much time I spend educating myself after having been in this industry for so long it helps them to see how much there really is to learn and how exciting that process can be.

      In IT we can never stop learning. Our industry is one of constant change and keeping up with it is possibly the most critical skill that we can develop. I strive to create a culture where we learn from and support one another – where we grow together as a team.

      Challenges (My Blackberry…)

      While I love being an IT manager, I am also aware of the dark side of working in the IT field and of being an IT manager in particular.

      The hours can be, and often are, long. Brutally long. The very nature of working in IT means that your work tends to become integrated with your life, making it difficult to separate the two.

      IT management is not a "leave the office at the office" kind of career and stress follows you home. Vacations can easily become nothing more than working from an alternative location involving a hotel and your family enjoying Disney World – while you work at a laptop and have room service bring your lunch.

      IT management has a natural level of stress and urgency that exists in few other fields. When HR or Accounting have an emergency it is seldom a "right now" kind of emergency. IT, on the other hand, is almost always involved in situations involving a need for immediate attention.

      It is not uncommon for IT-related problems to impact large segments of the business, making them almost completely unable to work and, in many cases – like losing a website, database or other business-critical application – may result in a situation that can be measured in dollars lost per minute.

      As an IT Manager I am tied to my BlackBerry. I sleep with it beside my bed and check it on every occasion upon which I might awaken. I then check my mail thoroughly first thing every morning and check it as I go to sleep at night. Sometimes I even keep my BlackBerry under my pillow.

      Email is only the first tie between the office and myself. There is also Twitter and RSS. With staff and clients around the world, IT never sleeps. It doesn't take very long of being continuously connected before any amount of disconnection begins to introduce its own stress and worry. Even the idea of downtime can be stressful in and of itself, making it very difficult to relieve the pressure.

      IT is demanding in other ways as well. Working in IT means always staying on top of the latest technologies, trends, policies and techniques. While this is exhilarating it can easily become overwhelming. Being an IT manager is not something that you “possess” but something that you maintain. And you must maintain it. Every day you have to work to keep up with the general pace.

      The IT field rewards those who keep pace with change but punishes harshly those who fall behind. Staying on top of such a large industry is difficult at best and doing so while avoiding burnout is practically an art form. One which few ultimately master.

      Of course, I can hardly talk about the dark side of IT without taking into consideration the current economic condition. Like many fields, IT is often hit excessively hard by the whims of business and the natural changes in economic climate.

      One year IT professionals are overwhelmed with the number of job offers and opportunities only to be left on the side of the road holding a cardboard sign, "Will Code for Food" the next. Not that I think that IT has it worse than most professions. The media loves to glorify IT when times are good and scold it when times are bad. We are the whipping boy and the poster child, heroes and villains.

      It is difficult to ascertain the exact state of the discipline at any particular moment as the field is so large and poorly defined and statistics so misleading. Even as an industry insider it can be impossible to reach a consensus about whether we are currently in rich times or fallow. Either way the opportunities tend to be there for the making – IT provides a potential for moving against the market in general.

      No career will ever be perfect and for someone like me choosing the path of passion and creativity over one of stability and safety is an obvious one. I wouldn't have it any other way. Balancing life at work and life at home is, and will continue to be, challenging.

      It is not only a challenge because of my workload or hours of availability but also one of constant social upheaval.

      When I first began working in IT email was very new and just beginning to take its place in business and in academia. Within a few years email had become ubiquitous and the landscape changed forever. Now we have instant search, massive online documentation repositories, social networking, instant messaging within and without the corporate environment. Portable always-on connectivity keeps us connected to this new social communications structure that blends our jobs into our lives as easily as our meals and our sleep.

      Working in IT continues to be both exciting as well as scary. If it wasn't scary I don't believe that it would be so exciting. No matter how much I work or how much pressure is applied to me, I am fortunate to wake up each day feeling excited that what awaits me will allow me to grow, learn, explore, create and (I hope) enable others to do what on their own would be impossible. IT is, above all else, a field whose purpose is to promote the potential in others.

      I have no idea what tomorrow will bring to IT, but I am confident that it’ll surprise me and challenge me. That might be what I look forward to the most.

      Originally published in Datamation on May 18, 2009.

      I did not remember this article at all and stumbled on it this evening. Blast from the past. In the article I was amazed that a decade had flown by. Now nearly another decade has passed!

      posted in IT Discussion datamation
      scottalanmillerS
      scottalanmiller
    • Was It the Last IT Guy's Fault

      Common story: working in IT in the SMB almost always means taking over the reins from an IT guy or MSP that was there and is no more. Often there is a gap of a day or a year. It's the weird nature of IT in the SMB that there is almost never a handover period, no transition process.

      This makes for an interesting and problematic culture in IT. Invariably we spend our careers managing systems that we did not design, did not choose, have to answer for but had no influence in selecting and we often lack documentation, tribal knowledge and even insight into what decisions led to where the business is when we take over. Often systems are only half implemented or some degree of "in process changes" exist and we have to determine if they were halted because someone left the company or because they were completed or they were abandoned!

      This process of constantly taking over for someone that was fired, quit, or just disappeared (happens more than you think) creates a culture of all problems instantly being blamed on "the old IT guy / firm." Why? Because, why not? They were responsible for making the mess, lacking the documentation, not sticking around or whatever. This was "their network" and it has problems; who else is there to blame? The constantly changing IT position(s) in companies makes it feel natural to just expect that the last guy did a bad job and we have to fix it. So much so that it is almost expected, and pointing fingers at the last guy has become cultural for everyone. It's the excuse for the technical debt that stops the current IT people from doing a good job, and it keeps everyone from having to assign blame to the management that is still around. No need for management to accept any blame as there can be no proof of their involvement and the new IT guy or firm has no basis for suspecting them. Even if management is to blame, the new guy won't take the bold step of assigning blame to their new bosses, and the bosses have no reason not to save face when there is a nameless person no longer around as the perfect scapegoat.

      Having worked in SMB IT in both an on staff and a consulting role, and many years as a service provider, I've seen this countless times, and nearly every first day conversation with a new job or customer is going to be about how bad the last guy or firm was, what they screwed up, how they didn't finish anything and so forth. There is always much complaining. The customer or employer always has so many issues to expose and so much blame to lay. And often, from the IT person's side, looking at the system there are so many unanswerable questions: "Why are there no backups?", "What do you mean that the RAID array has been degraded for two years?", "How do you not have any passwords to these systems?", "Why are there three email servers here but email is hosted with Google?"

      But in the majority of cases, as the weeks or months or maybe years pass, it starts to become apparent that budgets aren't approved, management is taking over IT decision roles, projects get canceled halfway through and so forth. Pretty soon you start to realize: "I'm building the same awful problems that they blamed on the last guy!" It's not you, and it wasn't him... it's management. This is how the company runs! The last guy wasn't fired, he quit. The last MSP wasn't replaced; they stopped service because the client was unable to pay! The truth starts to become obvious.

      Are there exceptions? Of course. And we know pretty reliably that the average IT Pro isn't very good and the average MSP isn't very good and there really are terrible situations being left behind all over the place and good reasons to place some or a lot of blame on people building houses of cards for others to deal with. But this situation is nearly impossible if management is hiring, managing and auditing well. Management is always at the core of IT, even when they are hands off to IT management (as they should be.) They are the core of the business and influence who is IT, and how they are treated.

      Is there a simple answer here? No, but understanding this culture in IT, how it happens and that it does happen, is important. Knowing that management has a "nothing to lose" stance shows us why this will just keep happening as long as SMB IT has high turnover. There is just always someone to blame. It's too convenient.

      As IT Pros, we can work to be a little less apt to throw someone under the bus when we don't know why they did what they did. Maybe they were following orders. Maybe they got caught by changing demands or requirements. Maybe they were forced to work on something that they didn't understand. Maybe they were not approved to update needed systems. We just don't know. But by looking at the forensics of old problems and looking at current management practice, perhaps we can start to piece together a coherent story that suggests that the management making problems today is the common thread with the problems of yesterday.

      posted in IT Discussion it management management business
      scottalanmillerS
      scottalanmiller
    • How to Download Hyper-V 2016

      For some reason, Microsoft makes it very confusing as to how to acquire Hyper-V 2016. They make it feel like it might be okay to install it via the role in Windows Server (hint: don't do that.) They don't have a big Hyper-V website with flashing lights telling you how to acquire it. Their high profile, free enterprise virtualization product is actually quite hidden and only available for download through TechNet, probably to make it uniform with other free MS products. Confusingly, Hyper-V is delivered as a trial, just one without limits, but fear not, this is the real deal.

      So without further ado, here is the link that you need:

      Download Hyper-V 2016

      Yes it is free, yes you install this directly on your servers, no it is not a trial in the normal sense.

      posted in IT Discussion virtualization hyper-v microsoft technet hypervisor hyper-v 2016
      scottalanmillerS
      scottalanmiller
    • RE: What is the Latest With SodiumSuite?

      @fuznutz04 said in What is the Latest With SodiumSuite?:

      OK, thanks for the update. I'll hold off until I hear more.

      A new full timer has been added to the team. Dozens of hours of work into SS has gone on this past week already!

      posted in SodiumSuite
      scottalanmillerS
      scottalanmiller
    • RE: Ashley Madison hackers publish compromised records

      It's just a few of them. Vatican, Web.de and IBM addresses primarily. Tons of Vatican users, though. But that's not surprising in the least, other than the fact that the site is for adults and not children.

      Too soon?

      posted in News
      scottalanmillerS
      scottalanmiller
    • If You Thought Spice Levels Were Extreme...

      We have been dealing with gamification and studying it for years. @Nic and I have discussed it at length, as I did with Tabrez as well. When implementing ML we specifically wanted to avoid going down the path of unnecessary gamification. Yes, it is awesome for engaging people and getting them to participate on the site, but the downside is that instead of putting time into creating content, answering questions, being helpful or whatever, people look for ways to get points and "win", which is not what a community is about. So we strove to keep gamification to a minimum. There is very little info about people and, without going to the "people" lists, you see nothing about them. New users and veterans appear equally in discussions. We did not want people getting "street cred" purely for having been around. We want the community to decide who the experts are and who are not, naturally, not with an automated platform making decisions - often based solely on activity.

      Anyway, that was a long lead up to wanting to show you what people are excited about for gamification features for the platform. They are doing some really interesting things and I'm sure it is great for other forums. But just wanted people to know what we have at our disposal and are resisting implementing.

      https://community.nodebb.org/topic/6589/gamification

      posted in Announcements
      scottalanmillerS
      scottalanmiller
    • Double Fisting at Touche

      Here I am at Touche on 6th Street in Austin, Texas at SpiceWorld 2015 at the Scott Alan Miller Afterparty on Thursday evening, double fisting it. One drink was a "surprise me" - I have no idea what was in it - and the other was just a giant glass of gin from @Rob.

      scott_double_fist_touche.jpg

      posted in Water Closet spiceworld spiceworld 2015 touche
      scottalanmillerS
      scottalanmiller
    • Smooth Integrated Terminal for SSH: The Missing Killer Feature for Windows

      After using many different operating systems and talking to many people about why they prefer Mac or Linux over Windows, one thing that I consistently hear is how important it is that they have an integrated terminal that can be used to access other systems (normally via SSH.) Every UNIX machine has a great terminal included with it, and not only are these used to work on the local system but they are perfect for connecting to other systems. Developers and IT people use this constantly and it is a huge feature.

      Now Windows has cmd.exe and PowerShell, and PS is an amazing tool for working on Windows but is almost useless for working on non-Windows systems. And the built-in terminal programs for interacting with these tools leave a lot to be desired. They are complicated and very limited compared to most UNIX terminal applications from a graphical and interaction standpoint (not complaining about language or scripting functionality, purely about the terminal itself.)

      Now everyone adds PuTTY to Windows, and PuTTY is great, but it is not nearly as smooth as an integrated terminal and it is not very useful for working on the local system. I've used tools like ConEmu and integrated them with PuTTY and PowerShell, and that makes things better, but it is complicated and anything but integrated with the system.

      So many people state that they choose Mac over Windows and their only reason is the terminal program! It seems to me that Microsoft is missing a huge opportunity here. Add a new application called WinTerm or something to that effect (NTerm, NTterm, Terminator, Terminal Velocity, Termination) that handles PowerShell and SSH, is completely integrated, is loaded with graphical goodies, has no width limitations, looks gorgeous by default, has opacity options, etc. This is a minor coding thing for Microsoft; they can do it in their sleep.

      I think many UNIX admins and developers would be completely happy to stay with or switch back to Windows with that one, little thing! Add vi and emacs to the system included in the OS install and even more people will be happy.

      posted in IT Discussion microsoft windows
      scottalanmillerS
      scottalanmiller
    • Linux Bonding Modes

      Another one from January, 2008: Linux Bonding Modes

      When bonding Ethernet channels in Linux there are several modes that can be chosen that affect the way in which the bonding will occur. These modes are enumerated from zero to six. Let’s look briefly at each and see how they differ. Remember that when looking at these modes that bonding can include two or more Ethernet channels. It is not limited to just two.

      The mode is set via the modprobe command or, more commonly, is simply inserted into the /etc/modprobe.conf (or /etc/modules.conf) configuration file so that it is configured every time that the Linux Bonding Driver is initialized.
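As a sketch, a bonding configuration in /etc/modprobe.conf might look like the following. The interface name bond0, the choice of mode=1 (Active-Backup) and the miimon link-monitoring interval are illustrative values, not something prescribed above:

```
# Example /etc/modprobe.conf entries for the Linux bonding driver.
# bond0, mode=1 (Active-Backup) and miimon=100 (check link every 100 ms)
# are illustrative choices - adjust to your environment.
alias bond0 bonding
options bond0 mode=1 miimon=100
```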

      Mode 0: Round Robin. Transmissions are load balanced by sending from the available interfaces sequentially, packet by packet. Only transmissions are load balanced; receiving is not. Provides load balancing and fault tolerance.

      Mode 1: Active-Backup. This is the simplest mode of operation for bonding. Only one Ethernet slave is active at any one time. When the active connection fails another slave is chosen to take over as the active slave and the MAC address is transferred to that connection. The switch will effectively view this the same as if the host was disconnected from one port and then connected to another port. This mode provides fault tolerance but does not provide any increase in performance.

      Mode 2: Balanced XOR. This is a simple form of load balancing using the XOR of the MAC addresses of the host and the destination. It works fairly well in general but always sends packets to a given destination through the same channel. This means that it is relatively effective when communicating with a large number of different remote hosts but loses effectiveness as that number decreases, becoming worthless when it reaches one. This mode provides fault tolerance and some load balancing.

      Mode 3: Broadcast. This mode simply uses all channels to mirror all transmissions. It provides no load balancing; it is for fault tolerance only.

      Mode 4: IEEE 802.3ad Dynamic Link Aggregation. This mode provides fault tolerance as well as load balancing. It is highly effective but requires configuration changes on the switch, and the switch must support 802.3ad Link Aggregation.

      Mode 5: Adaptive Transmit Load Balancing. This mode provides fault tolerance and transmit (outgoing) load balancing. It provides no receive load balancing. This mode does not require any configuration on the switch. Ethtool support is required in the network adapter (NIC) driver.

      Mode 6: Adaptive Load Balancing. Like mode five, but provides fault tolerance and bidirectional load balancing. The transmit load balancing is identical, but receive load balancing is accomplished by ARP trickery.
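To summarize the seven modes, here is a small shell sketch mapping each mode number to the bonding driver's own string identifier for it. Note that the string names (balance-rr, balance-alb, etc.) come from the kernel driver itself and are not terms used in the list above:

```shell
# Map a Linux bonding mode number to the kernel driver's identifier.
# The string names are the driver's own (e.g. as shown in
# /proc/net/bonding/bond0), not terms from the descriptions above.
bond_mode_name() {
  case "$1" in
    0) echo "balance-rr" ;;      # Round Robin
    1) echo "active-backup" ;;   # Active-Backup
    2) echo "balance-xor" ;;     # Balanced XOR
    3) echo "broadcast" ;;       # Broadcast
    4) echo "802.3ad" ;;         # IEEE 802.3ad Dynamic Link Aggregation
    5) echo "balance-tlb" ;;     # Adaptive Transmit Load Balancing
    6) echo "balance-alb" ;;     # Adaptive Load Balancing
    *) echo "unknown"; return 1 ;;
  esac
}

bond_mode_name 1   # prints "active-backup"
```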

      posted in Self Promotion linux nic bonding nic teaming scottalanmiller
      scottalanmillerS
      scottalanmiller
    • Vendor Tags in MangoLassi

      If you are a vendor (or an MSP or some other kind of organization that would benefit from being identified or tagged), there is a simple way to handle this. We recommend the following process - of course you can modify it to your needs - but this is the simplest means that we have found to accommodate the normal needs of an organizational tag.

      Let's say that you work for AlpacaVaca, the ultimate Alpacan Vacation consultancy, and you want easy representation here on MangoLassi. You have three of your team here in the community: Roger, Maureen and Reginald475. Each has their own account, but it is hard to remember exact screen names and who works for or with which vendor. You can always tag the individuals, but that does not meet the needs well.

      What we recommend is, on your own email back end, making an email account, likely an alias or distribution group, that includes all of the people in your organization that should be notified that they have been tagged in the community. This might include all, some or none of the people active in the community. It could include other people as well.

      Then, use this new email alias or distribution group as the email address for a new MangoLassi account. The account should be, in this example, @alpacavaca. Then, when anyone wants to tag Alpacan Vacations as a company, they can do so with nearly zero effort and instead of tagging an individual who might be on vacation, may switch companies or whatever - they can reliably reach the right group of people at the company who can ensure a proper response.

      I hope that this is helpful in improving the utility of the MangoLassi community.

      posted in Announcements mangolassi vendors
      scottalanmillerS
      scottalanmiller
    • Why I Don't Answer My Phone

      All of these reasons I have pointed out to people time and again, I'm a bit upset that someone else made an awesome blog post about it. These are things that every single person who uses a phone should have to understand before receiving a license to use the telephone. Phones interrupt people, they are not persistent, they lack opportunity to establish situational context. Everyone should read this before making another phone call.

      posted in Water Closet communications best practices
      scottalanmillerS
      scottalanmiller
    • Never Let the Vendor Set Up a Server

      When I order a car, I don't expect the dealer to have my mirrors adjusted or the seat in the right place. When I get a new laptop or desktop, my first task is to install a fresh copy of the OS. You don't want your vendors making decisions for you about these things; you need them to make a good product, deliver it to you and provide support when things fail or provide updates and patches. Actually setting up and operating the devices are things that you do yourself, and they require a certain amount of knowledge and effort to do well.

      When it comes to servers, making sure that all of the parts are understood and ensuring that the setup process is correct and repeatable is very important, far more important than with something like a desktop.

      Because we need it to be repeatable. Should we need to rebuild the server, build another to match it, repurpose it or whatever - knowing that the process of setting up the server can be done reliably is very important. If you always have your vendor set up the server for you, you have no means of ensuring that you have access to the necessary tools, installation media, drivers, knowledge of what to do and an understanding of what has been done, or that you have produced appropriate documentation.

      Because it needs to be right. Even if the vendor does the work for you, which is not very much work anyway, you need to double check everything. It is literally less work, in nearly all cases, to do the work yourself rather than to attempt to double check everything that someone else has done.

      Because it requires less faith. Some big vendors have already been caught using "pre build" processes to put spying software, bloatware and malware onto customer systems. Skipping the "vendor do my work for me" step reduces the risk that this can happen and increases your opportunities to catch it if it does.

      Because vendors don't offer every option. Commonly only limited RAID options are presented for pre-configuration, and more detailed selections may be needed for the real-world setup.

      Because it creates complicated licensing situations. Your vendor does not have knowledge of or access to your licenses, so having the vendor do anything that involves licensed products can make things more complicated.

      Because vendors are not experts. Server vendors are manufacturers, not IT companies. They lack the skills, knowledge, mandate or business reason to understand what is needed by the customer. This is not their wheelhouse and there is no reason to think that they should know what configuration is good for a customer.

      Because vendors are not knowledgeable of your business. Server vendors simply don't know your needs and factors, and they cannot possibly know what would be right for you without you providing them so much detail that there would be no value in having them do this work anyway.

      A server is a big investment and an important piece of infrastructure for any company. Hoping to skip a few minutes of very simple, basic work like setting BIOS settings, setting up the RAID and installing the hypervisor buys you essentially nothing but adds a lot of risk that is very unlikely to rear its ugly head until months or years down the road.

      Take the small effort to do things right at the beginning and protect yourself and your business from surprises down the road.

      posted in IT Discussion server best practice
      scottalanmillerS
      scottalanmiller