    scottalanmiller

    Posts
    • SMBs Must Stop Looking to BackBlaze for Guidance

      http://www.smbitjournal.com/2016/10/smbs-must-stop-looking-to-backblaze-for-guidance/

      I have to preface this article, because people often take these things out of context and react strongly to things that were never said, with the disclaimer that I think BackBlaze does a great job, has brilliant people working for it, and has done an awesome job of designing and leveraging technology that is absolutely applicable and appropriate for its needs. Nothing, and I truly mean nothing, in this article is to be taken out of context and stated as a negative about BackBlaze. If anything in this article appears or feels to state otherwise, please reread and confirm that such was actually said and, if so, inform me so that I may correct it. There is no intention in this article to imply, in any way whatsoever, that BackBlaze is not doing what is smart for them, their business and their customers. Now on to the article:

      I have found over the past many years that many small and medium business IT professionals have become enamored by what they see as a miracle of low cost, high capacity storage in what is known as the BackBlaze POD design. Essentially, the BackBlaze POD is a low cost, high capacity, low performance, nearly-whitebox storage server built from a custom chassis and consumer parts to make a disposable storage node used in large storage RAIN arrays leveraging erasure encoding. BackBlaze custom designed the original POD, and released its design to the public, for exclusive use in its large scale hosted backup datacenters, where the PODs function as individual nodes within a massive array of nodes with replication between them. Over the years, BackBlaze has updated its POD design as technology has changed and issues have been addressed, but the fundamental use case has remained the same.

      I have to compare this to the SAM-SD approach to storage, which follows a similar tack but does so using enterprise grade, supported hardware. These differences sometimes come off as trivial, but they are anything but; they are the key underpinnings of what makes the different solutions appropriate in different situations. The idea behind the SAM-SD is that storage needs to be reliable, designed from the hardware up to be as reliable as possible, and well supported for when things fail. The POD takes the opposite approach, making the individual server unreliable and ephemeral in nature, designed to be disposed of rather than repaired at all. The SAM-SD design assumes that the individual server is important, even critical; anything but disposable.

      The SAM-SD concept, which is literally nothing more than an approach to building open storage, is designed with the SMB storage market in mind. The BackBlaze POD is designed with an extremely niche, large scale, special case consumer backup market in mind. The SAM-SD is meant to be run by small businesses, even those without internal IT. The POD is designed to be deployed and managed by a full time, dedicated storage engineering team.

      Because the BackBlaze POD is designed by experts, for experts, in the very largest of storage situations, it can be confusing and easily misunderstood by non-storage experts in the small business space. In fact, it is so often misunderstood that objections to it are often met with “I think BackBlaze knows what they are doing” comments, which demonstrates the extreme misunderstanding that surrounds the approach. Of course BackBlaze knows what they are doing, but they are not doing what any SMB is doing.

      The release of the POD design to the public causes much confusion because it is only one part of a greater whole. The design of the full datacenter, and the means and mechanisms of redundancy between the PODs, is not public but proprietary. So the POD itself represents only a single node of a cluster (or Vault) and does not reflect the clustering itself, which is where the most important work takes place. In fact, the POD design itself is nothing more than the work done by the Sun Thumper and SAM-SD projects of the past decade, minus the constraints of reliability. The POD should not be a novel design but an obvious one; one that has for decades been avoided in the SMB storage space because it is so dramatically non-applicable.

      Because the clustering and replication aspects are ignored when talking about the BackBlaze POD, huge assumptions tend to be made about the capacity of a POD, assuming far lower overhead than BackBlaze itself carries for the POD infrastructure, even at scale. In RAID terms, for example, this would be similar to assuming that the POD is RAID 6 (with only 5% overhead) because that is the RAID of an individual component when, in fact, RAID 61 (55% overhead) is used! Indeed, many SMB IT professionals looking to use a POD design actually consider simply using RAID 6 in addition to only using a single POD. The degree to which this does not follow BackBlaze's model is staggering.

      BackBlaze: “Backblaze Vaults combine twenty physical Storage Pods into one virtual chassis. The Vault software implements our own Reed-Solomon encoding to spread data shards across all twenty pods in the Vault simultaneously, dramatically improving durability.”
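
      To put rough numbers on that quote (a sketch; the 17 data + 3 parity shard split is the layout Backblaze has described publicly for Vaults, so treat the exact figures as illustrative):

      # With 17 data shards and 3 parity shards per file, any 3 of the
      # 20 PODs in a Vault can fail outright with no data loss.
      data=17; parity=3
      echo "shards per file: $((data + parity))"
      echo "whole-POD failures tolerated: $parity"
      echo "minimum PODs needed to read data back: $data"

      The durability, in other words, lives at the Vault layer, spanning twenty machines; a single POD deployed on its own has none of it.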

      To make the POD a consideration for the SMB market, the entire concept of the POD has to be taken completely out of context, both its intended use case and its implementation. What makes BackBlaze special is totally removed, and only the most trivial, cursory aspects are taken and turned into something that in no way resembles the BackBlaze vision or purpose.

      Digging into where the BackBlaze POD differs in design from the standard needs of a normal business, we find these problems:

      • The POD is designed to be unreliable, to rely upon a reliability and replication layer at the super-POD level that requires a large number of PODs to be deployed and for data to be redundant between them by means of custom replication or clustering. Without this layer, the POD is completely out of context. The super-POD level is known internally as the BackBlaze Vault.
      • The POD is designed to be deployed in an enterprise datacenter with careful vibration dampening, power conditioning and environmental systems. It is less resilient to these issues than standard enterprise hardware is.
      • The POD is designed to typically be replaced as a complete unit rather than repairing a node in situ. This is the opposite of standard enterprise hardware with hot swap components designed to be repaired without interruption, let alone without full replacement. We call this a disposable or ephemeral use case.
      • The POD is designed to be incredibly low cost for very slow, cold storage needs. While this can exist in an SMB, typically it does not.
      • The POD is designed to be a single, high capacity storage node in a pool of insanely large capacity. Few SMBs can leverage even the storage potential of a single POD let alone a pool large enough to justify the POD design.
      • The BackBlaze POD is designed to use custom erasure encoding, not RAID. RAID is not effective at this scale even at the single POD level.
      • An individual POD is designed for 180TB of capacity and a Vault scale of 3.6PB.

      Current reference for the BackBlaze POD 5: https://www.backblaze.com/blog/cloud-storage-hardware/

      In short, the BackBlaze POD is a brilliant underpinning to a brilliant service that meets a very specific need that is as far removed from the needs of the SMB storage market as one can reasonably be. Respect BackBlaze for their great work, but do not try to emulate it.

      posted in IT Discussion storage backblaze nas file server erasure encoding
    • Installing URBackup on CentOS 7

      URBackup is a free, open source agent-based backup system that can easily be installed on a number of operating systems, including many flavours of Linux. As CentOS 7 is my "go to" Linux distribution, I am building the URBackup server there.

      First I will clone my base CentOS 7 image:

      (Screenshot: cloning the base CentOS 7 image.)

      Once that is done, we can do a basic and very simple install:

      cd /etc/yum.repos.d/
      yum -y install wget    # wget is not installed by default
      wget http://download.opensuse.org/repositories/home:uroni/CentOS_7/home:uroni.repo    # official URBackup repo file
      yum -y install urbackup-server
      

      That's it. We have added the official URBackup repo for CentOS 7, as hosted by the openSUSE project, so that our package will be self-maintaining on our system, and installed URBackup from the repo. Nice and simple. The wget command is included for convenience in case it is not installed on your system (it is not by default).

      Now we need to start the service and enable it to start on its own automatically:

      systemctl start urbackup-server
      systemctl enable urbackup-server
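
      To confirm that the service came up and is registered to start at boot, the standard systemd checks apply (nothing URBackup-specific here):

      systemctl status urbackup-server --no-pager    # should show active (running)
      systemctl is-enabled urbackup-server           # should print: enabled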
      

      Now we should be up and running just fine, but we will need to open the firewall port in order to access the web interface from another machine. In this example we are going to open it wide up; this is not generally recommended, but as this is just testing, it is fine.

      firewall-cmd --zone=public --add-port=55414/tcp --permanent    # 55414 is the web interface port
      firewall-cmd --reload    # apply the new rule immediately
      

      That's it. If all is well, we can now navigate to our URBackup system from a web browser on our LAN.
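
      If the browser cannot reach it, a quick check from another machine will tell you whether the port is answering. The address here is a placeholder for your server's IP; 55414 is the web interface port we opened above.

      # -I requests headers only; any HTTP response means the port is reachable.
      curl -I http://192.168.1.50:55414/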

      (Screenshot: the URBackup web interface.)

      Now, as you can see, the default storage location is inaccessible; for some reason, the default setting is a Windows path. Fixing this is quick:

      mkdir /data    # backup storage location
      chown urbackup:urbackup /data    # the urbackup service account needs ownership
      

      Then go into the Settings tab and set /data as the backup location.

      (Screenshot: the Settings tab with /data set as the backup location.)

      More configuration details to follow. Of course, if you were building this for production, you would add additional storage space to hold the backups.

      posted in IT Discussion scale scale hc3 linux centos centos 7 urbackup backup
    • Announcing the Death of RAID

      Friends, IT Pros, countrymen, lend me your ears;
      I come to bury RAID, not to praise it.
      The evil that tech does lives after them;
      The good is oft interred with its bones;
      So let it be with RAID. The noble S.A.M.
      Hath told you RAID was ambitious:
      If it were so, it was a grievous fault,
      And grievously hath RAID answer’d it.

      Few people have written as extensively on RAID as I have; likely none have written so much and for so long. But we must address the fact that no matter how attached to it we are, no matter how much we talk about it or have studied it... RAID was never meant to last forever, and sometime in the last few years, while none of us were keeping watch, RAID slipped quietly away in its sleep. There were no death throes, no loud bangs, no sudden heart attack moment. Just peacefully, at home alone with the lights off, RAID left us to go on to the big technology bin in the sky.

      So what exactly happened to our beloved storage technology? The world changed around it.

      When RAID was new, nearly everything in IT was expensive, really expensive. RAID was built around a need to accommodate large numbers of small disks in single enclosures. These were the realities of the age; even networking was far from ubiquitous at the time, and the idea that storage could pass over a network was nascent.

      Today so much is different. Drives are huge, prices have changed dramatically, we rarely work from single enclosure systems and storage networking is very common and works great.

      So first the basics: scale. RAID works great for eight disks used locally in an array big enough for local applications to run; it's perfect for that. But start pushing 40TB of storage or huge spindle counts and RAID starts to have all kinds of complications. The biggest problem is that the parity RAID options (RAID 4, 5, 6 and 7) do not scale well. Two of them are so pointless at scale that they have been effectively retired completely (RAID 4 and 5), RAID 6 only works well at smaller scales, and RAID 7 is not widely implemented and has major rebuild problems, because parity simply does not scale that well.

      In order to combat the scaling issues of parity RAID, we need to turn to mirrored RAID, which lacks the capacity efficiency of parity RAID (in most cases) and leaves RAID open to "attack" at a business level from other approaches.
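
      To put rough numbers on that capacity trade-off (illustrative only; real arrays vary):

      # Usable capacity as a fraction of raw, by array size.
      # Parity RAID 6 yields (n-2)/n; mirrored RAID 10 is always 1/2.
      for n in 4 8 16 32; do
        awk -v n="$n" 'BEGIN {
          printf "%2d drives: RAID 6 %.0f%% usable, RAID 10 50%% usable\n",
                 n, 100 * (n - 2) / n }'
      done

      Parity looks better and better on paper as arrays grow, which is exactly why its rebuild and reliability problems at scale hurt so much: the capacity argument pulls in one direction while the risk argument pulls in the other.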

      So scale has simply worked against RAID, to the point where RAID rarely makes sense outside of small, special case deployments or limited uses such as local boot disks.

      Next, the move from single enclosures to clusters. Outside of storage itself, businesses have broadly moved from running single enclosure systems (one server, on its own) to clusters of servers, often in a virtualization stack. This move has made it critical that our storage be accessible by multiple systems at once over the network, and it presents an opportunity for data protection between nodes (internodal reliability rather than purely intranodal reliability).

      We can address this to some degree with networked RAID (products like DRBD), but this does not scale well (or often at all) past two nodes and presents large problems, such as the need to consider node reliability and rebuilds both individually and together, and how reads and writes will be impacted by the network. RAID quickly becomes very complex to implement.


      Because systems where RAID fails to offer a straightforward, elegant solution have moved from niche to the norm, RAID itself now faces challenges that appear to be more than it will be able to withstand.

      Enter RAIN. RAIN is really a catch-all for many approaches to dealing with RAID's deficiencies, but common approaches are beginning to emerge that are giving rise to many advantageous RAIN implementations.

      RAIN refers to a "Redundant Array of Independent Nodes" and is a play on the RAID term. The term itself is too loose to be very useful; the name alone implies nothing more than network RAID, but that is not how the term is actually used.

      For a system to be generally accepted as RAIN, rather than network RAID, it should be both node and drive aware and provide data protection (unlike technologies such as RAID 0). This awareness allows RAIN approaches to place data on disk in such a way as to protect against the failure of any one disk and/or any one node.

      The most common and popular implementations of RAIN rely on disk mirroring, much akin to RAID 10, but at a more granular level such as the block, rather than at the full disk or logical disk level. Mirroring has become the standard due to the low cost of disks today, combined with the general advantages of mirroring, mostly around reliability and speed.
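
      As a toy sketch of what node-aware, block-level mirroring means in practice (illustrative round-robin placement, not any vendor's actual algorithm):

      # Place two mirrored copies of each block on two distinct nodes so
      # that no single node holds both copies of anything.
      nodes=(node1 node2 node3 node4)
      count=${#nodes[@]}
      for block in 0 1 2 3 4 5 6 7; do
        primary=$(( block % count ))
        mirror=$(( (block + 1) % count ))
        echo "block $block -> ${nodes[$primary]} and ${nodes[$mirror]}"
      done

      Because both copies of a block never share a machine, losing any one disk or any one node still leaves a complete copy of the data available.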

      RAIN addresses the large scale reliability issues of RAID by making failure domains much smaller (individual blocks commonly) and leveraging mirroring which scales very well. RAIN additionally addresses scalability by allowing a single storage pool to scale far past the size of a single enclosure.

      RAIN addresses the networking problem by having the entire storage stack be network aware, rather than simply tunneling existing RAID implementations over a network connection. RAIN can make choices such as reading data only from the local disk rather than waiting for part of the data to be transferred over the network, or not waiting for remote writes before confirming a write. RAIN provides more flexibility and more choices.

      I will explore RAIN technology, products and the future in another article. The future is not bleak; it is very good. The post-RAID world is not a bad one, and RAID will, most likely, become a mostly anecdotal piece of IT history outside of a lingering use of RAID 1 for special cases. RAID is already niche today in larger businesses, and this trend is pushing more and more into the SMB space. We can't expect everything that we do in IT to stay the same, and RAID has had a good, long run as the top dog in server storage. But the times they are a-changing, and RAID should no longer be the assumed storage answer that it has been for so long.

      posted in IT Discussion raid rain storage
    • Did AMD Just Stage a Comeback to the Server Market?

      Ryzen is out and, rumour has it, it is amazing. At eight cores and sixteen threads, this is AMD's answer to both Intel and Microsoft's Windows Server 2016 licensing. AMD has placed its bets on a series of super low cost, super high performance eight core processors, which is the sweet spot in Microsoft's Windows Server 2016 licensing scheme (one that many believed was designed to bolster Intel). The new processors are also AMD's first foray into hyper-threading, a technology that AMD had felt was less important than core count, a position that did not line up with Microsoft's "cores cost money, but threads are free" licensing policy.

      In early tests the AMD processors are in line with Intel's for single-threaded operations but faster for multi-threaded ones (which is key in virtualization), and at much lower cost.

      https://arstechnica.com/information-technology/2017/02/amd-ryzen-arrives-march-2-8-cores-16-threads-from-just-329/

      Not just servers here; desktop processors are coming, too. AMD is definitely hoping for a return to the Opteron glory days of the late 2000s, when Intel was in a panic to make a competitive processor. Intel still has an amazing lineup this time, though. But indications are that AMD might have the upper hand and Intel might be rushing to compete.

      The pendulum swings, but how far? Hopefully we will see for ourselves very soon.

      posted in IT Discussion amd amd zen processors amd64 ryzen
    • Failed Drives on Our Scale HC3 Cluster at Colocation America

      Many of you know that we have a large Scale HC3 cluster located in Los Angeles with Colocation America. We recently had two drives fail (it's a large cluster with a lot of drives, and these were in different hosts). They didn't fail at the same time, but within a relatively short span of time, so I thought that I would share my experiences.

      First of all, the cluster rebalanced automatically, which I knew it would do but had never seen happen in a loss-of-disk scenario before. The missing disks were marked and removed from the RAIN system, and the storage redundancy moved to other disks in the cluster. So from the workload perspective, nothing had happened. The system even rebalanced workloads to properly situate them on the modified nodes.

      The cluster alerted us that disks had failed, and I opened a ticket with Scale; they logged in, identified the disks and models that needed to be replaced, and shipped new drives directly to Colocation America's processing facility. So the drives never had to pass through our hands at all.

      Colocation America got the drives the next day, then got on the phone with us to verify the procedure as they swapped out the drives. The lights on the cluster made it easy to point them to the drives that had failed. Upon replacement, the cluster immediately changed status to show that the new drives had come online, and the SCRIBE RAIN system started rebalancing the on-disk storage to make use of them.

      We confirmed with Colocation America that the swap had gone correctly, and once everything was done they shipped the failed drives back to Scale Computing.

      All happened as expected, but it's nice to see these things in action. The whole process was incredibly smooth and hands-off. The combination of the high availability cluster, the RAIN-based SCRIBE storage system, and an enterprise colocation facility made the process not just painless and quick but actually effortless; it could easily have been coordinated by someone outside of IT.

      @scale @colocationamerica

      posted in IT Discussion scale scale hc3 rain colocation america colocation
    • Windows NT Release History

      Just because it is handy to see it in a reference. Every major and minor release version shown.

      Name | NT Kernel | Date | Release Number
      Windows 10 2004 | 10.8 | 2020 | 20
      Windows 10 1909 | 10.7 | 2019 | 19
      Windows 10 1903 | 10.6 | 2019 | 18
      Windows 10 1809 / Server 2019 | 10.5 | 2018 | 17
      Windows 10 1803 | 10.4 | 4/2018 | 16
      Windows 10 1709 | 10.3 | 10/2017 | 15
      Windows 10 1703 | 10.2 | 4/2017 | 14
      Windows 10 1607 / Server 2016 | 10.1 | 8/2016 | 13
      Windows 10 1507 | 10.0 | 7/2015 | 12
      Windows 8.1 / Server 2012 R2 | 6.3 | 2013 | 11
      Windows 8 / Server 2012 | 6.2 | 2012 | 10
      Windows 7 / Server 2008 R2 | 6.1 | 2009 | 9
      Windows Vista / Server 2008 | 6.0 | 2006 | 8
      Windows XP 64-bit / Server 2003 / Server 2003 R2 | 5.2 | 2003 | 7
      Windows XP | 5.1 | 2001 | 6
      Windows 2000 | 5.0 | 2000 | 5
      Windows NT 4 | 4.0 | 1996 | 4
      Windows NT 3.51 | 3.51 | 1995 | 3
      Windows NT 3.5 | 3.5 | 1994 | 2
      Windows NT 3.1 | 3.1 | 1993 | 1
      posted in IT Discussion windows windows nt windows 10 windows versions windows vista windows 7 windows 8 windows 8.1 windows 2000 windows xp
    • Congratulations to Mr. & Mrs. Plates

      The latest little saucer has been added to the Plate family today. Baby John was born just now.

      @stacksofplates

      posted in News
    • Introducing SAM on IT on YouTube

      (Embedded YouTube video.)

      I think that really sums it up.

      posted in Self Promotion sam on it scott alan miller youtube
    • What Are You Doing Right Now

      No cheating.....

      posted in Water Closet time waster
    • GreenShot Free OpenSource Screenshot Utility for Windows

      @JaredBusch turned me on to GreenShot, which is a free, open source screenshot tool for Windows users. One of the cool features is that it has direct integration with tools like Imgur, Flickr, DropBox and more. You can have it save, go to the clipboard, go directly into MS Paint or whatever tool you prefer for editing, go directly to your hosted tool of choice and more. It hooks into the "Print Screen" button so that you can get screenshots in a wink. Cool tool.

      If you are a Chocolatey user, you can install and update as easily as these commands:

      choco install greenshot    # initial install
      choco upgrade greenshot    # update to the latest version later
      
      posted in IT Discussion greenshot chocolatey windows desktop utilities foss
    • RE: Bitcoin Takes Another 10% Hit on SEC Warning

      @dustinb3403 said in Bitcoin Takes Another 10% Hit on SEC Warning:

      Of course any country (particularly the US) would say that unregulated currency is dangerous and should not be trusted.

      The interest is in keeping their currency "the regulated one" on top so as to ensure control over all markets.

      Yes, a bit skeptical of the "competitor" saying that we should be worried about it.

      In other news, Microsoft recommends Windows.

      posted in News
    • Best Job Title: LANlord

      We always need a title for that IT generalist, right? We used to call them LAN Admins. Today they might be called just about anything. Well, this one was a typo (I think) in an offline chat, but I thought that it was awesome. I would love to see an SMB generalist with LANlord as their title. It gets the point across: focused on the LAN and overseeing it.

      posted in Water Closet
    • Is This a Single Point of Failure or SPOF

      @Garyw said this one today and it is so simple and important that it has to be documented for reuse....

      How can I determine if something is a single point of failure (SPOF)?

      Answer: Turn it off; if anything is impacted, it was a single point of failure.

      It's literally that simple.
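
      And if you want to be a bit more rigorous than eyeballing it, the same test scripts easily. Run something like this while the suspect device is powered off; the hostnames are placeholders for whatever you actually care about.

      # Anything that stops answering depended on the device you just turned off.
      for host in app1 db1 fileserver1; do
        ping -c 1 -W 2 "$host" > /dev/null 2>&1 && echo "$host: ok" || echo "$host: IMPACTED"
      done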

      posted in IT Discussion single point of failure risk
    • Hardware Design for SAM-DR Small Rackmount Backup Device

      I'm putting together the final design of the first SAM-DR device that will be receiving a SKU and be available for order. Of course, being a SAM-SD family device, the specs are open and you are always welcome to build your own (and always welcome to buy support separately, too). This first device design is a four-bay, 6TB - 16TB usable capacity, 1U rackmount model aimed at the SMB. Only looking at the hardware in this thread.


      Proposed Specs:

      • Dell PowerEdge R320 1U LFF Server
      • Intel Xeon E5-2403 1.8GHz Quad Core
      • PERC H710 Controller 512MB NV Cache
      • Dual 550W Power Supply
      • Broadcom 5720 Quad Port GigE On Board NIC
      • 16GB RAM

      Drive Specs:

      All four drive options use four identical drives in a RAID 10 configuration. In theory, this could be configured as a two-drive RAID 1, but I'd assume that would almost never happen. All configurations use NL-SAS rather than SATA, as there is really no cost difference, so the performance of SAS is a bonus.

      • 4x 3TB NL-SAS RAID 10 for 6TB Usable
      • 4x 4TB NL-SAS RAID 10 for 8TB Usable
      • 4x 6TB NL-SAS RAID 10 for 12TB Usable
      • 4x 8TB NL-SAS RAID 10 for 16TB Usable

      For a 1U rackmount server, this is a really effective model for use in backups. The RAID 10 is needed, along with NL-SAS over SATA, for ingress write performance: there is a 2x write factor to consider, and write speeds will be limited largely by drive performance. Controller cache is not a big factor, as writes are nearly always streaming rather than random, so continuous flushing to disk is what matters.
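
      Rough numbers behind that reasoning (a sketch; the per-drive streaming figure is an assumption for 7200 RPM NL-SAS, not a measured value):

      # RAID 10 usable capacity and rough ingress ceiling, 4x 8TB option.
      drives=4; drive_tb=8; drive_mbs=115    # ~115 MB/s streaming is an assumption
      echo "usable capacity: $(( drives * drive_tb / 2 )) TB"
      # The 2x write factor: every write is committed to two drives.
      echo "streaming write ceiling: ~$(( drives * drive_mbs / 2 )) MB/s"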

      posted in SAM-SD sam-sd sam-dr backup disaster recovery raid storage dell poweredge r320 dell poweredge server perc h710
    • PHP Best Practices Guide

      PHP Best Practices

      posted in Developer Discussion phpstorm
    • February 13th (Friday) - MangoLassi Day!

      It is completely coincidental that this is going to happen on a Friday the 13th, but we are expecting a huge surge in traffic to hit MangoLassi on this day and it will represent an amazing opportunity for ML to get new users checking out the community and, we hope, joining in on the conversation. On the morning of the 13th, @Minion-Queen is going to be speaking to a large online conference and will be discussing social media and MangoLassi specifically, which, we hope, will drive a lot of people from the Microsoft and MSP communities to come check us out.

      So what are we going to do here on the community? We are calling everyone out to put the day on their calendars and to put forth some effort to get on early, watch carefully and post often! We want to see the conversation explode on that day. If you have friends or coworkers that you have been considering inviting, this is the day to get them to come check it out. If ML isn't something that you do every day, please try to do it that day. Our goal is to have, by far, the busiest posting day in ML history.

      So mark your calendars, have some fresh posts ready to go (news, questions, ideas, conversation topics, images, funny stories, self promotion, you name it) and participate.

      posted in Announcements
    • RE: Samsung Offices Raided

      Bomb squad called in. Had to clear the place of phones first.

      posted in Water Closet
    • Windows Server 2016 TP4 on Scale HC3 HC2000 Test

      Testing the installation of Windows Server 2016 TP4 on the latest Scale HC3 HC2000 cluster build. This isn't officially supported yet, nor is Windows 10, but we wanted to see if it "can" do it rather than just going by what is officially sanctioned. So nothing too exciting here, but we wanted to show the installation process going smoothly. As with most Windows operating systems, you need a special driver to make use of the optimized storage. That is shown in the screenshots.

      (Ten installation screenshots: the standard Windows Server 2016 setup steps, including loading the storage driver.)

      posted in IT Discussion scale hc3 windows server 2016 ntg lab windows server
    • Tonight's Platform Update

      Some major under-the-hood changes to MangoLassi today that we are very excited about. After two years on Rackspace we were hitting capacity issues and had to find a way to continue to grow the community, as site responsiveness was holding things back and regular users were beginning to feel some lag when the site got busy.

      To address these issues we have done a few things today:

      • We moved from Ubuntu 15.10 to CentOS 7.2
      • We moved from Node 0.10.25 to 5.9.0 (a sizeable jump)
      • We moved from MongoDB 2.6 to MongoDB 3.2
      • We Increased Per Thread CPU Performance by about 15%
      • We Increased Thread Processing (CPU Count) by 300%
      • We Increased Memory by 400%
      • We more than tripled storage write IOPS!!
      • We migrated from Rackspace to Linode
      • We updated to NodeBB 1.0.1

      Rather an incredible amount of movement. We are very excited, as this provides massively more headroom to tackle the ever-growing performance demands of the community. As we grow, it gets harder to keep the site fast and responsive, and we feel that this move will keep the site how we want it for the next year or two, while also providing an easy, fast upgrade path for short term performance gains when needed.

      posted in Announcements mangolassi
    • Challenging Is Respect

      (Image: aRKvEM7_460s.jpg)

      posted in Water Closet