scottalanmiller

    Posts
    • Building a First Active Directory Domain Controller on Windows 2012 R2 Core

      Assuming that we have just manually installed a Windows Server 2012 R2 machine and have configured its networking, we are ready to begin building a working Active Directory Domain Controller from the command line. There is no need to have any GUI access, or even a GUI installed, in order to run a Domain Controller.

      First we need to rename our machine, or ensure that it has a good, usable name already.

      Rename-Computer prd-win-ad1
      Restart-Computer -Force
      

      I like the naming convention style here. We have "prd" to denote production, "win" to denote a Windows server and "ad1" to tell us that this is our first Active Directory Domain Controller. You can use any convention that you like, of course. As always, reboot after a rename.

      Now that our machine is renamed and rebooted (and we assume that you have assigned it a static IP; AD DCs need static IP addresses to work properly), we can simply run a few PowerShell commands to promote our machine to be an Active Directory Domain Controller.
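
      If the machine is still on DHCP, assigning a static address only takes a moment. A minimal sketch, assuming an interface alias of "Ethernet" and example addresses that you would replace with your own:

      # Interface name and addresses here are examples, not from the original post
      New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
      # Point DNS at an existing resolver for now; after promotion the DC typically points at itself
      Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.1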

      We should note here that the dcpromo command still exists for the moment, but has been deprecated in Server 2012 R2 and will not be available in future Windows Server releases, so we want to discontinue using it.

      First we just install the proper components:

      Install-WindowsFeature -Name AD-Domain-Services
      

      [screenshot]

      Now we will create a password variable to use in our domain creation command. Replace mypassword with a secure password of your own...

      $Password = ConvertTo-SecureString -AsPlainText -String mypassword -Force
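
      If you would rather not have the password sitting in your shell history at all, prompting for it works too. A small optional sketch, not from the original post:

      # Prompts interactively and returns a SecureString directly
      $Password = Read-Host -AsSecureString -Prompt "Safe Mode Administrator password"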
      

      And now we can do the actual promotion process.

      Install-ADDSForest -DomainName ad.mydomain.com -SafeModeAdministratorPassword $Password -DomainNetbiosName mydomain -DomainMode Win2012R2 -ForestMode Win2012R2 -DatabasePath "$env:SYSTEMROOT\NTDS" -LogPath "$env:SYSTEMROOT\NTDS" -SysvolPath "$env:SYSTEMROOT\SYSVOL" -NoRebootOnCompletion -InstallDns -Force
      

      If this completes successfully you should get a message like this one:

      [screenshot]

      Once this has completed, we simply need to reboot again.

      Restart-Computer -Force
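
      After the reboot, a quick sanity check can confirm that the promotion took. A sketch, assuming the AD DS role installed its PowerShell module as usual:

      # Should return the new domain's details and show the core DC services running
      Get-ADDomain
      Get-Service ADWS, DNS, KDC, Netlogon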
      

      References:

      Technet Guide to DCPromo

      A legacy dcpromo approach would be:

      dcpromo.exe /unattend /NewDomain:forest /ReplicaOrNewDomain:Domain /NewDomainDNSName:domain.tld /DomainLevel:4 /ForestLevel:4 /SafeModeAdminPassword:"mypassword"

      posted in IT Discussion active directory server core windows windows server windows server 2012 r2 windows server core powershell command line sam windows administration domain controller install-addsforest install-windowsfeature
    • RE: What Are You Doing Right Now

      @Tim_G said in What Are You Doing Right Now:

      Wondering why I got muted on SW... o-well.

      Oh, you posted something useful and factual. F U a-hole.

      posted in Water Closet
    • Building a Simple Windows Server 2012 R2 RDS Terminal Server

      One of the many mostly standard Windows Server applications is RDS, or Remote Desktop Services, the name that Microsoft applies to its own brand of terminal server (previously known simply as Terminal Server). RDS is a combination of technologies: simultaneous multi-user support in Windows Server, Microsoft's RDP protocol (RDP means Remote Desktop Protocol, but "protocol" is part of the name, so you have to say protocol twice while capitalizing it only once), and the Microsoft licensing that makes this all possible. RDS is probably the most popular, most common and best known terminal server product in the world and is used extensively by companies of all sizes. Licensing is relatively straightforward, unlike VDI licensing, which aids in its popularity.

      In building our own simple, stand-alone Windows RDS server we need to start with a fresh install of Windows Server 2012 R2. In most cases I would encourage you to install the GUI-less "Server Core" option, but not when intending to deploy RDS, as RDS requires the full GUI and set of Windows desktop bells and whistles in order to do its job.

      We also want some mostly obvious basics addressed before we begin doing any RDS-specific setup work: be sure to update Windows, join the server to Active Directory, and ensure that networking is working correctly.

      Once we are ready to move beyond our base Windows Server deployment, we can begin by using the Server Manager utility to deploy the RDS role. This can be done locally on the server that we are deploying for RDS, or remotely using the same tools, thanks to Microsoft's efficient location-agnostic administration tools. Choose Add roles and features.

      [screenshot]

      RDS is a special case within the Microsoft Windows Server world and so has its own entry at the very top level of the selection tree here. Be sure not to miss it at this first level. Select Remote Desktop Services installation.

      [screenshot]

      RDS is a powerful suite of tools that can be deployed across many Windows Server instances (individual virtual machines) or entirely on a single host. Choose Quick Start, which makes this easy for us by deploying everything automatically to a single host. This is perfect for testing, for supporting a small set of users, or for use in a normal small / medium business (SMB). In the enterprise space we would typically build a far more complex system with load balancing and heavy separation of duties. But not in this example.

      [screenshot]

      On our next option screen we are presented with two choices: Virtual machine-based desktop deployment and Session-based desktop deployment. The former is VDI (individual VMs for each user, one user per system at a time) and the latter is terminal services, with multiple user sessions on a single shared system. Session-based is what we want here; pick the latter.

      [screenshot]

      Because we opted for the Quick Start option earlier our selection screen here is very simple. Windows just needs to know the one server from our management pool of servers on which we want to deploy RDS. Hopefully, like me, you have given your RDS server a super clear name. Just select it and we are ready to continue.

      [screenshot]

      Now we just get a warning: this machine is going to reboot during this process.

      [screenshot]

      Because they don't want the system to surprise you by rebooting after you failed to pay attention to the warning, you are forced to accept this decision via a checkbox before the Deploy button will become available to you.

      [screenshot]

      Now you can sit back and relax for a few minutes. In my testing in the lab on a Scale HC3 cluster this process took over five minutes, partially because of a reboot partway through. Good time to grab that coffee while we wait for this large deployment.

      [screenshot]

      Once that completes we will get this screen telling us that everything has finished. It gives us an internal link to use to connect to the server through its newly deployed web portal, and a popup (which you cannot see in the screen capture here) tells us that we have 119 days to figure out the licensing for our newly deployed system before it will shut down on us.

      [screenshot]

      That's it. We are all done! We can log in and test it out already. (I'll show the system working connected remotely via Firefox running on Linux Mint 17.3 just to show the versatility of this solution.)

      [screenshot]

      By default there is not a lot to see, yet:

      [screenshot]

      Back in our Server Manager we can see that the management tools for RDS have been added for us:

      [screenshot]

      We now have a working Windows RDS 2012 R2 deployment, if just a simple one. In the next lesson we will delve into configuring RDS to supply more of the remote desktop services that we are expecting.
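
      As an aside, a comparable session-based deployment can be scripted with the RemoteDesktop PowerShell module. This is a rough sketch only, with a placeholder server name, and Quick Start does a bit more for you than these two commands:

      # All three RDS roles land on a single host (replace the FQDN with your server)
      New-RDSessionDeployment -ConnectionBroker "rds1.ad.mydomain.com" -WebAccessServer "rds1.ad.mydomain.com" -SessionHost "rds1.ad.mydomain.com"
      # Quick Start also publishes a session collection; by hand it looks something like this
      New-RDSessionCollection -CollectionName "QuickSessionCollection" -SessionHost "rds1.ad.mydomain.com" -ConnectionBroker "rds1.ad.mydomain.com"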

      This style of simple, stand-alone RDS deployment is very popular not only in very small shops where additional resources are simply not needed, but also, increasingly, on systems such as hyperconverged platforms, where the platform underneath RDS supplies the necessary scaling and redundancy to handle many scenarios, allowing for a simpler deployment design while maintaining high reliability.

      posted in IT Discussion windows windows server windows server 2012 r2 remote desktop services rds rdp terminal server sam windows administration scale scale hc3
    • RE: Random Thread - Anything Goes

      [image]

      posted in Water Closet
    • RE: We live in an era where...

      Because that's easier and faster than saying "Hey Uncle Scott, look at this."

      posted in IT Discussion
    • RE: Random Thread - Anything Goes

      [image]

      posted in Water Closet
    • The SMB Two Server Dilemma, What to Do

      So often in discussing systems we talk about what not to do, but we sometimes forget to cover what to do instead. This is probably the more important topic; knowing what not to do isn't very useful without knowing what actually works. So let's tackle that.

      In the SMB market we often face the situation where we feel that we need to move past having a single server or a single container for our business functions. Deciding when this happens, or how to determine that more than a single server is needed, is a great topic for another thread.

      Assuming that we know that two servers are needed, but that we only need the capacity of a single server (e.g. we have two servers for the primary purpose of protecting against failure rather than because we have moved past the working capacity of a single box) we have a few standard approaches that work for essentially all cases.

      Approach 1: Two Stand Alone Servers

      This might seem like a silly idea, but it is generally far more practical than people give it credit for being. Individual servers are very simple to set up and maintain, easy to understand, have no unnecessary dependencies or complexities, are very fast and efficient, and are far more reliable than people generally assume. When well treated, five nines of uptime is not impossible from quality servers that are well maintained. That is as high as, or higher than, nearly any NAS or SAN in the same category.

      With two servers there is no automatic failover in case one node fails. However, in most cases restoring from backup is quick and easy. If the restore time of critical workloads is acceptable, and in many cases the outage would go almost unnoticed, then this approach is very cost effective. You simply rely on the rapid restore capabilities of the platform to get systems back up and running on the remaining host in a timely manner. Not all workloads would normally need to be restored during an interim disaster recovery process, making this procedure more efficient than it might first seem.

      By having two servers we start by splitting the workload (in most cases) between the two, separating our failure domain so that if one fails, only half of our workloads go down immediately. This alone is a small benefit, and not always a truly useful one if all of the services are tightly coupled, which has to be considered. But with split workloads it means that, at most, 50% of systems need to be restored to the temporary location from backup, and in nearly all cases it would be even less than that. Sometimes only one or two critical, non-redundant workloads would need manual restoration while awaiting the repair or replacement of the failed host. And with good service contracts that can be handled by a vendor the same day, in most cases, making even that relatively trivial.

      This approach is the farthest from HA (high availability) but is quite a bit more resilient than having just a single server while being the simplest, easiest and lowest cost to implement.

      Approach 2: Two Server HA Clustering

      Once spending the money to acquire two servers, most companies are going to want to go the extra mile and implement a full high availability cluster with them. This generally makes sense in today's market where the cost of HA solutions has fallen dramatically and is potentially even free.

      In this approach the storage of the two servers is clustered together to make a single storage pool. This requires, generally, an increase in the total amount of storage purchased and potentially an increase in storage performance to offset the overhead of storage replication. This, however, is necessary in any HA solution as storage replication is the underpinning of any high availability design. So even when growing past this stage these requirements effectively remain even if moving to external storage. SAN or NAS of the same class need the same storage replication to approach the same level of reliability at that layer, so this is never an extra cost of this approach, it is simply an inherent cost of all HA. There are many storage technologies that can do this such as DRBD, HAST, StarWind, Gluster, CEPH, OpenIO and more. Many of these are free, especially in a two node situation.
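
      To make this concrete, a DRBD resource definition for a two node pair is quite small. This is a sketch only; the hostnames, backing disks and addresses are placeholders for your own:

      # /etc/drbd.d/r0.res - synchronous replication (protocol C) between two nodes
      resource r0 {
          protocol C;
          on prd-node1 {
              device    /dev/drbd0;
              disk      /dev/sdb;
              address   10.0.0.1:7788;
              meta-disk internal;
          }
          on prd-node2 {
              device    /dev/drbd0;
              disk      /dev/sdb;
              address   10.0.0.2:7788;
              meta-disk internal;
          }
      }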

      Our choice of storage replication technology will depend on our choice of platform. Some example solutions would be:

      • XenServer with DRBD. DRBD is fully baked into the platform itself, completely free, and used in many other scenarios such as HA Linux servers and NAS devices. It's a very standard and battle-tested component. It runs in XenServer's Dom0 and is included, not an add-on. This approach is 100% free, top to bottom.
      • Hyper-V with Hyper-V Replica. This approach uses nothing but Microsoft's native Hyper-V capabilities and does rapid, automated, asynchronous replication from one node to the other (see the sketch after this list). Not as robust as using DRBD or StarWind, but included and simple.
      • Hyper-V with StarWind. This is the Hyper-V equivalent to XenServer with DRBD. StarWind is a third party component, but it is enterprise class and totally free for two nodes in this manner.
      • KVM with DRBD. Same as with XS, totally free and totally inclusive in the base product, no third party products needed.
      • ESXi with StarWind. Doing this on ESXi requires at a minimum the Essentials Plus license, but StarWind will do a two node scale system for free, so there is no additional cost for the replicated local storage here.
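
      For the Hyper-V Replica option above, enabling replication for a VM is a two cmdlet affair. A sketch with example names, assuming Kerberos authentication between domain-joined hosts and that the partner has already been configured as a replica server (Set-VMReplicationServer):

      # Replicate "web01" from this host to the partner node over HTTP/Kerberos
      Enable-VMReplication -VMName "web01" -ReplicaServerName "hv2.ad.mydomain.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
      # Seed the initial copy; subsequent changes replicate automatically
      Start-VMInitialReplication -VMName "web01"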

      Those solutions cover essentially all common use cases for a two node HA cluster. And these approaches together cover essentially all realistic, real world two node scenarios. Local storage is the only viable approach for two servers (and often for many more) and nothing more should be considered until there is growth in nodes, and often vertical growth will trump scaling out here for the same reasons of simplicity.

      posted in IT Discussion xen xenserver hyper-v starwind drbd ha-lizard kvm vmware esxi vsan hpe vsa storage virtualization platform servers
    • RE: What Are You Doing Right Now

      @momurda said in What Are You Doing Right Now:

      Wow you can feel the Freedom oozing from every orifice in America these days. So much so that people who have been married for years to Americans still need to spend lots of money and do loads of paperwork to become citizens. Then they need to pass a test that 3/4 of American born Americans cant pass.
      I helped my friends wife quiz for her test a couple years ago.

      To be fair to the system, I'd like the 75% that can't pass the test kicked out, too 😉

      posted in Water Closet
    • Installing Guacamole on CentOS 7
      yum -y update
      yum -y install cairo-devel libjpeg-devel libpng-devel uuid-devel freerdp-devel pango-devel libssh2-devel tomcat tomcat-admin-webapps tomcat-webapps libvncserver-devel wget gcc
      cd /tmp
      wget http://sourceforge.net/projects/guacamole/files/current/source/guacamole-server-0.9.9.tar.gz
      tar -xzvf guacamole-server-0.9.9.tar.gz
      cd guacamole-server-0.9.9 
      ./configure
      make; make install; ldconfig
      cd /var/lib/tomcat/webapps
      wget http://sourceforge.net/projects/guacamole/files/current/binary/guacamole-0.9.9.war
      mv guacamole-0.9.9.war guacamole.war
      mkdir /etc/guacamole
      mkdir /usr/share/tomcat/.guacamole
      
      cat > /etc/guacamole/guacamole.properties <<EOF
      guacd-hostname: localhost
      guacd-port:    4822
      user-mapping:    /etc/guacamole/user-mapping.xml
      auth-provider:    net.sourceforge.guacamole.net.basic.BasicFileAuthenticationProvider
      basic-user-mapping:    /etc/guacamole/user-mapping.xml
      EOF
      
      ln -s /etc/guacamole/guacamole.properties /usr/share/tomcat/.guacamole/
      

      Use the md5sum tool to hash a password. I'll give an example here; you'll need to make your own.

      printf '%s' "mysecretpassword" | md5sum
      4cab2a2db6a3c31b01d804def28276e6  -
      

      Now we need to configure our XML user mapping file. This file maps users to passwords and assigns them a list of systems to which they can connect. In a more complex deployment we could replace this file with a MariaDB or similar database, but XML is so easy to deal with that, at least for now, we will stick with that. For just a couple of users this is very easy to manage, and the file should be self explanatory.

      cat > /etc/guacamole/user-mapping.xml <<EOF
      <user-mapping>
          <authorize
              username="mangolassi"
              password="4cab2a2db6a3c31b01d804def28276e6"
              encoding="md5">
              <connection name="CentOS 7 GitLab">
                  <protocol>ssh</protocol>
                  <param name="hostname">192.168.1.59</param>
                  <param name="port">22</param>
                  <param name="username">scott</param>
              </connection>
              <connection name="Windows 8.1 Lab 1">
                  <protocol>rdp</protocol>
                  <param name="hostname">192.168.1.194</param>
                  <param name="port">3389</param>
              </connection>
              <connection name="Windows 8.1 Lab 2">
                  <protocol>rdp</protocol>
                  <param name="hostname">192.168.1.195</param>
                  <param name="port">3389</param>
              </connection>
          </authorize>
      </user-mapping>
      EOF
      
      chmod 600 /etc/guacamole/user-mapping.xml
      chown tomcat:tomcat /etc/guacamole/user-mapping.xml
      systemctl enable tomcat
      systemctl start tomcat
      /usr/local/sbin/guacd &
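
      If firewalld is active on the host, the Tomcat port will also need to be opened. This is an assumption about your configuration; skip it if you front Tomcat with a reverse proxy instead:

      firewall-cmd --permanent --add-port=8080/tcp
      firewall-cmd --reload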
      

      [screenshot]

      [screenshot]

      posted in IT Discussion fedora linux fedora 24 korora 24 guacamole
    • RE: Random Thread - Anything Goes

      [image]

      posted in Water Closet
    • RE: Struggling to Understand Kernel and OS Separation

      Also wondering how the new HP DRM is affecting your printer programming career.

      posted in IT Discussion
    • RE: What Are You Doing Right Now

      Hanging out with the wife tonight, tomorrow is our fifteenth wedding anniversary.

      posted in Water Closet
    • The Commoditization of Architecture

      http://www.smbitjournal.com/2016/10/the-commoditization-of-architecture/

      I often talk about the moving "commodity line", a line that affects essentially all technology, including designs. Essentially, when any new technology comes out it starts highly proprietary, complex and expensive. Over time the technology moves towards openness and simplicity, and becomes inexpensive. At some point any given technology goes so far in that direction that it falls over the "commodity" line, where it moves from being unique and a differentiator to being a commodity, accessible to essentially everyone.

      Systems architecture is no different from other technologies in this manner, it is simply a larger, less easily defined topic. But if we look at systems architecture, especially over the last few decades, we can easily see servers, storage and complete systems moving from the highly proprietary towards the commodity. Systems were complex and are becoming simple, they were expensive and are becoming inexpensive, they were proprietary and they are becoming open.

      Traditionally we dealt with systems that were physical: operating systems on bare metal hardware. But virtualization came along and abstracted this. Virtualization gave us many of the building blocks for systems commoditization. Virtualization itself commoditized very quickly, and today we have a market flush with free, open, highly capable enterprise hypervisors and toolsets, making virtualization totally commoditized even several years ago.

      Storage moved in a similar manner. First there was independent local storage. Then the SAN revolution of the 1990s brought us power through storage abstraction and consolidation. Then the replicated local storage movement moved that complex and expensive abstraction to a more reliable, more open and more simple state.

      Now we are witnessing this same movement in the orchestration and management layers of virtualization and storage. Hyperconvergence is currently taking the majority of systems architectural components and merging them into a cohesive, intelligent singularity that allows for a reduction in human understanding and labour while improving system reliability, durability and performance. The entirety of the systems architecture space is moving, quite rapidly, toward commoditization. It is not fully commoditized yet, but the shift is very much in motion.

      As in any space, it takes a long time for commoditization to permeate the market. Just because systems have become commoditized does not mean that non-commodity remnants will not remain in use for a long time to come or that niche proprietary (non-commodity) aspects will not linger on. Today, for example, systems architecture commoditization is highly limited to the SMB market space as there are effective upper bound limits to hyperconvergence growth that have yet to be tackled, but over time they will be tackled.

      What we are witnessing today is a movement from complex to simple within the overall architecture space and we will continue to witness this for several years as the commodity technologies mature, expand, prove themselves, become well known, etc. The emergence of what we can tell will be commodity technologies has happened but the space has not yet commoditized. It is an interesting moment where we have what appears to be a very clear vision of the future, some scope in which we can realize its benefits today, a majority of systems and thinking that reside in the legacy proprietary realm and a mostly clear path forward as an industry both in technology focus as well as in education, that will allow us to commoditize more quickly.

      Many feel that systems are becoming overly complex, but the opposite is true. Virtualization, modern storage systems, cloud and hyperconverged orchestration layers are all coming together to commoditize first individual architectural components and then architectural design as a whole. The move towards simplicity, openness and effectiveness is happening, is visible and is moving at a very healthy pace. The future of systems architecture is one that clearly is going to free IT professionals from spending so much time thinking about systems design and more time thinking about how to drive competitive advantage to their individual organizations.

      posted in IT Discussion smbitjournal architecture commoditization
    • RE: What Are You Doing Right Now

      Found a secret picture taken at Spiceworks of a board meeting...

      [image]

      posted in Water Closet
    • TraceRoute: Better Results with TCP SYN

      There has been some talk this week about doing "advanced traceroutes" using TCP SYN instead of ICMP requests in order to get better, faster results. I think that many people feel that this is new or advanced, or somehow not something that we have always had. However, the standard traceroute tool has long been able to do this; we need only be aware of how to use it, which is as simple as the -T flag on the standard traceroute command on Linux (check your man page for your specific version if this does not work, and please report back so that we can document it).

      Here is an example of a standard ICMP-based traceroute, the default action:

      $ traceroute yahoo.com
      traceroute to yahoo.com (98.138.253.109), 30 hops max, 60 byte packets
       1  FIOS_Quantum_Gateway.fios-router.home (192.168.1.1)  1.127 ms  1.190 ms  1.319 ms
       2  47.186.128.1 (47.186.128.1)  2.496 ms  4.023 ms  4.021 ms
       3  172.102.51.152 (172.102.51.152)  7.062 ms  7.647 ms  7.634 ms
       4  ae7---0.scr01.dlls.tx.frontiernet.net (74.40.3.17)  7.607 ms  7.605 ms  7.590 ms
       5  ae0---0.cbr01.dlls.tx.frontiernet.net (74.40.4.14)  7.572 ms  7.555 ms  7.538 ms
       6  exchange-cust2.da1.equinix.net (206.223.118.2)  7.521 ms  3.433 ms  3.231 ms
       7  ae-3.pat2.dnx.yahoo.com (216.115.96.58)  34.136 ms ae-4.pat2.bfz.yahoo.com (216.115.97.207)  36.672 ms ae-3.pat2.dnx.yahoo.com (216.115.96.58)  35.558 ms
       8  ae-6.pat2.nez.yahoo.com (216.115.104.116)  35.975 ms ae-6.pat1.nez.yahoo.com (216.115.104.118)  36.616 ms  36.612 ms
       9  et-0-0-0.msr2.ne1.yahoo.com (216.115.105.179)  35.106 ms et-18-1-0.msr2.ne1.yahoo.com (216.115.105.185)  38.772 ms et-19-1-0.msr1.ne1.yahoo.com (216.115.105.27)  35.495 ms
      10  et-1-0-0.clr2-a-gdc.ne1.yahoo.com (98.138.97.73)  37.470 ms et-19-1-0.clr2-a-gdc.ne1.yahoo.com (98.138.97.75)  37.845 ms et-1-0-0.clr1-a-gdc.ne1.yahoo.com (98.138.97.69)  36.016 ms
      11  et-18-25.fab2-1-gdc.ne1.yahoo.com (98.138.0.93)  36.524 ms et-17-1.fab6-1-gdc.ne1.yahoo.com (98.138.93.5)  41.100 ms et-17-1.fab5-1-gdc.ne1.yahoo.com (98.138.93.1)  36.971 ms
      12  po-17.bas1-7-prd.ne1.yahoo.com (98.138.240.20)  35.994 ms po-12.bas2-7-prd.ne1.yahoo.com (98.138.240.26)  33.088 ms po-16.bas2-7-prd.ne1.yahoo.com (98.138.240.34)  33.267 ms
      13  * * *
      14  * * *
      15  * * *
      16  * * *
      17  * * *
      18  * * *
      19  * * *
      20  * * *
      21  * * *
      22  * * *
      23  * * *
      24  * * *
      25  * * *
      26  * * *
      27  * * *
      28  * * *
      29  * * *
      30  * * *
      

      And here is the same command using the -T flag to switch from the default to TCP SYN packets instead, bypassing the commonly blocked ICMP protocol. Notice that this also requires elevated privileges to run, because it could easily be used for a DoS attack.

      $ sudo traceroute -T yahoo.com
      traceroute to yahoo.com (98.139.183.24), 30 hops max, 60 byte packets
       1  FIOS_Quantum_Gateway.fios-router.home (192.168.1.1)  0.960 ms  1.096 ms  1.245 ms
       2  47.186.128.1 (47.186.128.1)  4.399 ms  4.893 ms  4.893 ms
       3  172.102.51.82 (172.102.51.82)  6.160 ms  6.548 ms  6.546 ms
       4  ae7---0.scr01.dlls.tx.frontiernet.net (74.40.3.17)  4.878 ms  4.875 ms  4.873 ms
       5  ae0---0.cbr01.dlls.tx.frontiernet.net (74.40.4.14)  5.650 ms  51.704 ms  52.083 ms
       6  exchange-cust2.da1.equinix.net (206.223.118.2)  8.926 ms  5.519 ms  3.123 ms
       7  xe-2-0-2.pat1.dce.yahoo.com (216.115.96.93)  29.173 ms  29.616 ms  29.617 ms
       8  ae-8.pat1.bfz.yahoo.com (216.115.101.231)  42.673 ms ae-9.pat2.bfz.yahoo.com (216.115.101.199)  49.814 ms ae-0.pat1.bfy.yahoo.com (216.115.97.196)  42.649 ms
       9  et-0-0-0.msr1.bf1.yahoo.com (74.6.227.129)  42.643 ms et-19-0-0.pat2.bfz.yahoo.com (216.115.97.105)  42.613 ms et-0-0-0.msr2.bf1.yahoo.com (74.6.227.137)  45.617 ms
      10  et-0-1-1.clr2-a-gdc.bf1.yahoo.com (74.6.122.19)  42.619 ms et-19-1-0.msr1.bf1.yahoo.com (74.6.227.133)  42.044 ms et-19-0-1.clr1-a-gdc.bf1.yahoo.com (74.6.122.35)  40.177 ms
      11  po8.fab4-1-gdc.bf1.yahoo.com (72.30.22.39)  41.775 ms UNKNOWN-74-6-122-X.yahoo.com (74.6.122.91)  41.214 ms po7.fab6-1-gdc.bf1.yahoo.com (72.30.22.11)  41.764 ms
      12  po-13.bas2-7-prd.bf1.yahoo.com (98.139.129.211)  40.512 ms po-11.bas1-7-prd.bf1.yahoo.com (98.139.129.177)  40.907 ms po7.fab3-1-gdc.bf1.yahoo.com (72.30.22.5)  38.354 ms
      13  ir2.fp.vip.bf1.yahoo.com (98.139.183.24)  37.887 ms  41.289 ms po-10.bas1-7-prd.bf1.yahoo.com (98.139.129.161)  39.821 ms
      

      That's all that there is to it. Better results, faster.
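
      As a side note, the destination TCP port can be chosen with -p, which is handy for probing the path toward a specific service. For example, against HTTPS:

      sudo traceroute -T -p 443 yahoo.com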

      posted in IT Discussion traceroute unix icmp ping tcp syn linux networking
    • RE: Spiceworks Just Got Acquired by Publisher Ziff-Davis

      Just a note that I want to remind everyone of.... it is easy at a time like this to revel in the potential schadenfreude of the fall, or near fall, of what we often see as a "competitor" in the industry. Spiceworks as a company has always been intimately tied to MangoLassi from the sheer fact that ML was formed when SW decided, in early 2014, that all of us with ties to the MSP / ITSP space were no longer going to be allowed (without paying) on the SW platform, so even from the onset, ML felt a bit like a refugee camp for those that SW no longer wanted.

      This "refugee" association created a natural feeling of being rejected, jilted, unwanted. Of course there was a feeling of "us vs them" because there was a "them" that had rejected "us". And now after what often feels like a hard fought battle, it is easy to want to rejoice in what feels like a victory, one that seems like it was hard won. But there are several factors to keep in mind.

      First, what to us here feels like a long battle for survival that resulted in a close victory, simply isn't. ML has been on a solid climb for half a decade and any "battle" was over long ago. There might not have been some definitive moment, but achieving permanent viability is something ML did a long time ago and something that almost no other community ever manages. Being overly excited now will feel a bit like gloating - to us we think this is a recent event and "wow, we won", but to the SW community and company it feels like they "lost" years ago and have just been slowly fading away. To them, it was inevitable, and just a matter of time.

      It is also incredibly important to remember that most people in the community, even most employees, have no idea that ML is made up of people mostly asked to leave SW long ago, or of how the relationship was before or after. To most everyone it is just "another community" that treads much of the same ground. We want to be warm and welcoming as people likely come to discover what has been taking place in ML over these last five years. And many who were involved were directed by management and had to participate in order to secure their livelihoods. Many feared for their income and investments.

      Because of the nature of communities, it is easy for so much of what has transpired over the years to seem personal. And I don't want to downplay that at times it must have been. But by and large decisions made and actions taken were purely, or nearly, for business reasons. That doesn't make all decisions wise, but the nature of the business beast is that all of these things are highly complex - risky decisions made with only a limited view of the situation, gambling on human reactions, guessing at market futures, and so forth.

      I lived through the downfall of Eastman Kodak, and for decades people have mocked them for "not seeing digital imaging" coming, but they did and were preparing for it for decades before it came, but it didn't change the fact that Kodak wasn't capable of weathering that storm, their business model simply didn't allow for it. Sometimes great decisions aren't enough to adapt. That's not to say that no mistakes could have been made, but hindsight can easily make good decisions made at the time feel foolish knowing how they would turn out.

      What I really want people to see is that yes, we have great reasons to be excited about MangoLassi and its future. Not because another community appears to be going away, but because we are a great community of welcoming professionals who are here to help one another grow, to learn, to adapt. We are here for each other, even those of us that tend to be pretty grumpy. We don't do it for points or badges, we do it for each other; and we are open to all. These things make us special.

      We had our moment to feel the schadenfreude, but it has passed. Now it is time to invite, to welcome, to show off how ecumenical, how professional, how helpful, how welcoming we can be. A new era for ML is about to begin. The time is here to focus on how we can provide the best experience to each other, and to those who will be joining us on this journey.

      It is our inclusivity that has made us strong, and will continue to keep this community vibrant and unique.

      posted in Water Closet
    • ZFS is Perfectly Safe on Hardware RAID

      This is sadly one of those articles that is needed not because there is something to be learned, but to dispel an unfounded and destructive myth that has entered the collective consciousness of the IT industry, at least in the isolated SMB sector of it. At some point someone began saying that the FreeNAS team claimed that ZFS was unsafe, or ill advised, to run on hardware RAID. But it is not true that FreeNAS / iX said this; nor is it true at all.

      Let's break down all of the problems with this and then look at the source of this misinformation.

      1. If ZFS was unsafe on hardware RAID, this would mean that ZFS was unsafe, period. The claim is nonsensical as it is always made by someone desperately promoting ZFS as a filesystem but trying to do so by claiming that ZFS is unreliable and could never be counted on. Clearly someone is confused.
      2. Storage abstraction works in such a way that this should never be a concern. Hardware RAID, software RAID, LVM, and the like all present a "drive appearance" to the upper layers. This abstraction and interface system is universal and total. Any working filesystem will work on any of these, by definition (see the sketch after this list).
      3. ZFS is used on top of hardware and software RAID in most cases; this is its standard deployment outside of massive SPARC architecture mini computers, because anything else that runs ZFS (FreeBSD, Ubuntu, or Solaris on AMD64) would be expected to run as a VM and not on bare metal, so ZFS' primary use and role is on top of other RAID.
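
      As a quick illustration of the "drive appearance" point, here is what consuming a hardware RAID array from ZFS looks like. A sketch; the device name is an example and will vary:

      # The RAID controller presents the whole array as one virtual drive, e.g. /dev/sda
      # ZFS consumes it like any other block device
      zpool create tank /dev/sda
      zfs create tank/data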

      So where does this myth come from? Well, from what we can tell, it comes from one true, but marketing style, statement from the FreeNAS folks that was worded carefully, but in such a way that people made false assumptions and added their own implications to the statement, taking something true and making it very, very untrue. This was then combined with the telephone game effect of people in an insular community repeating this misinformation second or third hand until it became lore, and was eventually believed even though obvious information and common sense would tell us it is not possibly true.

      Here is one of the key references used: http://www.freenas.org/blog/freenas-worst-practices/

      [screenshot]

      In its latest version, the older statements have gone from true to quite misleading, and are clearly an open attempt at marketing. But let's break this down to be sure we understand why this is a vendor trying to make a sale, and not engineers giving you valid information.

      1. This is about FreeNAS, not about ZFS. The information here makes some assumptions that make sense, given that this is a FreeNAS resource, but carrying this implication to other ZFS scenarios makes no sense.
      2. That FreeNAS is "designed to use its own volume manager" is totally fine, but FreeNAS is just FreeBSD with a web GUI and FreeBSD was also designed to be used on hardware RAID and do whatever you need it to do. It's equally designed for both, this is really just marketing fluff.
      3. That ZFS won't be able to "balance reads and writes" and such is, again, marketing. Of course it can't, because we are asking the RAID hardware to do that. This isn't a warning, it's just restating the original decision again. We could compress all that to "If you choose hardware RAID over ZFS RAID, you'll be using hardware RAID instead of ZFS RAID." It's a redundant point that is simply worded in such a way as to make it sound as if we obviously want one thing and not the other, but doesn't actually say that.
      4. Every time we hear that hardware RAID "might" or "could" do something, this is also marketing. Sure, you might buy bad hardware RAID that does a bad job, so they are using the "you might get it wrong if you don't do what we say" threat to make things sound scary when they are not. We might as well say "if you don't take a taxi to the store, you might walk off a cliff instead of going to the store"... okay, but let's just assume that I know how to walk and will actually walk to the store as the alternative.
      5. RAID cards mask SMART. Right, of course they do, so does ZFS. This is point #3. Just repeating the original point. The RAID card handles the SMART monitoring, handles the alerting, etc. There is nothing bad here, this is exactly what we presumably wanted in the first place. So all that is being said here is the good things about the hardware RAID carefully worded to make them sound bad. Marketing at its finest.
      6. Pass through or RAID 0 mode warnings are over the top and actively lying. They are using the assumption that you will use ZFS RAID even though you chose a hardware RAID card instead and use that insane assumption to state that hardware RAID is therefore bad. This isn't logical and is outright incorrect. They are right that using passthrough or RAID 0 mode on hardware RAID is a bad idea; but that's not what we were discussing doing so this is a warning for someone else about something else.
      7. Their summary is that using something other than ZFS "can" lead to problems. Of course it can. Just like "not taking the taxi" "can" lead to you walking off of a cliff. They are not able to produce any problems with the alternatives; instead they are just stating that, in the pool of "all possible alternatives", some of them are bad. Very cheesy marketing BS, way past the point of insulting to anyone that works in IT. We should all be offended by the way that this portion of the document is handled.
      8. The warnings about data loss from hardware RAID are based on risks from incorrectly configured hardware RAID. The same warning applies to incorrectly configured ZFS. So this has no purpose other than to be misleading.
      9. The assumptions are often made that if you are using ZFS for one feature, you must want it for all features. This is ridiculous. ZFS is a combination of three discrete products rolled into one: a filesystem, a logical volume manager and a software RAID system. The desire to use ZFS for one or two of these components does not suggest any need or desire to use it for the other one or two. That is a false and illogical assumption.

      Clearly, this document has an agenda to "sell" ZFS and a community has sprung up around FreeNAS and ZFS that has carried this banner and has drunk the koolaid and is repeating this mantra recklessly and incorrectly - often far less correctly than this document which mostly veils the marketing and explains, with leading words, why ZFS on hardware RAID is fine.

      Summary: There is absolutely zero cause for concern when using ZFS on hardware RAID. It is a filesystem like any other, it works exactly as expected. Everything here is marketing.

      Resources:

      http://www.smbitjournal.com/2014/05/the-cult-of-zfs/
      http://www.smbitjournal.com/2016/06/what-is-drive-appearance/
      https://mangolassi.it/topic/76/open-storage-solutions-at-spiceworld-2010
      https://mangolassi.it/topic/11276/scott-alan-miller-storage-101
      https://mangolassi.it/topic/12043/why-the-smb-still-needs-hardware-raid
      http://www.smbitjournal.com/2015/07/the-jurassic-park-effect/

      posted in IT Discussion raid zfs solaris openzfs bsd freebsd ubuntu unix linux filesystems hardware raid freenas trueos truenas storage
    • RE: Random Thread - Anything Goes

      [image]

      posted in Water Closet
    • New Linode Pricing

      [screenshot]

      I don't know of anyone that really compares. Linode has the best performance we've ever seen, and their pricing just improved significantly. There is no cutting of corners with them; they have crazy awesome performance, and now their pricing puts them well below anyone in their category that I have seen. And they offer load balancers as well.

      posted in IT Discussion linode vps iaas cloud computing hosting
    • RE: What products has Symantec bought and killed?

      All of them?

      posted in IT Discussion