Video Review: SolarWinds Virtualization Manager

SolarWinds Virtualization Manager ensures your IT department is agile enough to address problems before they escalate, provides the insight needed to accurately gauge the status of your infrastructure, ensures accurate provisioning of resources, and much more – all under a single pane of glass. Don’t forget to check out our written review here: “Information Overload? There’s an app for that”...

Information Overload? There’s an app for that.

Computers are the most awful way to do things, except for all the other ways we’ve tried. It’s easy to blame computers; they don’t fight back. What’s much more difficult, yet distressingly important, is figuring out why computers have done something unappreciated and remedying the situation. One important tool in a systems administrator’s arsenal is SolarWinds’ Virtualization Manager.

Humans have a natural tendency to anthropomorphize inanimate objects. Many of us give our cars names, ascribe personalities to them, talk to them, and sometimes treat them like members of the family. We similarly ascribe personalities and motivations to individual computers, or even entire networks of them, often despite being perfectly aware of the irrationality of this. We can’t help it: anthropomorphizing is part of being human.

Computers, however, aren’t human. They don’t have motives and they don’t act without input. They do exactly what they are told, and that’s usually the problem. The people telling the computers what to do – be they end users or systems administrators – are fallible. The weakest link is always the one between keyboard and chair.

Our likelihood of making an error increases the more stress we’re put under. Whether due to unreasonable demands, impossible deadlines, or networks which have simply grown too large to keep all the moving parts in memory at any given time, we fallible humans need the right tools to do the job well.

You wouldn’t ask a builder to build you a home using slivers of metal and a rock to hammer them in. So why is it that we so frequently expect systems administrators to maintain increasingly complex networks with the digital equivalent of two rocks to bash together? It’s a terrible prejudice that leads many organizations to digital ruin. A tragedy that, in...

Beyond the traditional storage gateway

Bringing together all storage into a single point of management, wherever it resides, has long been a dream for storage administrators. Many attempts have been made to make this possible, but each has in some way fallen short. Part of the problem is that previous vendors just haven’t dreamed big enough.

A traditional storage gateway is designed to take multiple existing storage devices and present them as a single whole. Given that storage gateways are a (relatively) new concept, all too often the only storage devices supported would be enterprise favourites such as EMC and NetApp. Startups love to target only the enterprise.

Around the same time as storage gateways started emerging, however, the storage landscape exploded. EMC, NetApp, Dell, HP, and IBM slowly lost market share to the ominous “other” category. Facebook and other hyperscale giants popularized the idea of whitebox storage, and shortly thereafter Supermicro became a $2 billion per year company. As hyperscale talent and experience spread, public cloud storage providers started showing up. Suddenly, a storage gateway that served only as a means to move workloads between EMC and NetApp, or to bypass the exorbitant prices those companies charged for certain features, just wasn’t good enough.

Trying to force a marriage between different storage devices on a case-by-case basis isn’t sustainable. Even after the current proliferation of storage startups comes to an end, and the inevitable contraction occurs, there will still be dozens of viable midmarket and higher storage players with hundreds of products between them. No company – startup or not – can possibly hope to code specifically for all of them. Even if, by some miracle, a storage gateway taking this boil-the-ocean approach did manage to bring all relevant storage into their offering one product at a...
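To make the gateway idea concrete, here’s a minimal sketch in Python of the core abstraction a gateway has to provide: one namespace over several dissimilar backends. The class names and the two toy backends are purely illustrative assumptions, not any vendor’s actual API – and the sketch also shows why the per-vendor approach doesn’t scale: every new device family means writing another adapter.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One adapter per device family (hypothetical names, not vendor APIs)."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class ArrayBackend(StorageBackend):
    """Stand-in for a traditional enterprise array."""
    def __init__(self):
        self._blocks = {}
    def read(self, key):
        return self._blocks[key]
    def write(self, key, data):
        self._blocks[key] = data

class ObjectBackend(StorageBackend):
    """Stand-in for whitebox or public-cloud object storage."""
    def __init__(self):
        self._objects = {}
    def read(self, key):
        return self._objects[key]
    def write(self, key, data):
        self._objects[key] = data

class Gateway:
    """Presents many backends as a single namespace -- the gateway's core job."""
    def __init__(self, backends: dict):
        self._backends = backends
    def write(self, path: str, data: bytes) -> None:
        pool, _, key = path.partition("/")   # route by pool prefix
        self._backends[pool].write(key, data)
    def read(self, path: str) -> bytes:
        pool, _, key = path.partition("/")
        return self._backends[pool].read(key)

gw = Gateway({"array": ArrayBackend(), "cloud": ObjectBackend()})
gw.write("cloud/backups/db.bak", b"backup bytes")
print(gw.read("cloud/backups/db.bak"))
```

Two toy backends already mean two adapters; hundreds of real products would mean hundreds of them, which is exactly the boil-the-ocean problem described above.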

Data residency made easy

Where does your data live? For most organizations, data locality is as simple as making sure that the data is close to where it is being used. For some, data locality must also factor in geographical location and/or legal jurisdiction. Regardless of the scope, data locality is a real-world concern for an increasing number of organizations.

If this is the first time you are encountering the term data locality, you are likely asking yourself – quite reasonably – “do I care?” and “should I care?” For a lot of smaller organizations that don’t use public cloud computing or storage, the answer is likely no. Smaller organizations using on-premises IT likely aren’t using storage at scale or doing anything particularly fancy. By default, the storage small IT shops use lives with the workload that uses it.

So far so good… except the number of organizations that fall into this category is shrinking rapidly. It is increasingly hard to find someone not using public cloud computing to some extent. Individuals as well as organizations use it for everything from hosted email to Dropbox-like storage to line-of-business applications and more. For the individual, cost and convenience are usually the only considerations. Things get murkier for organizations.

The murky legal stuff

Whether you run a business, government department, not-for-profit, or other type of organization, at some point you take in customer data. This data can be as simple as a name, phone number, and address. It could also be confidential information shared between business partners, credit card information, or other sensitive data. Most organizations in most jurisdictions have a legal duty of care towards customers that requires their data be protected. Some jurisdictions are laxer than others, and some are more stringent with their requirements based on the...
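To show what “caring” can look like in practice, here’s a minimal sketch in Python of a residency check an organization might run before writing data anywhere. The policy table, classifications, and region names are invented for illustration; real rules come from your lawyers, not a dict.

```python
# Fail-closed residency check: block a write unless the target region is
# explicitly allowed for the data's classification. All entries are
# illustrative assumptions, not real legal mappings.

RESIDENCY_POLICY = {
    "customer_pii_eu": {"eu-west", "eu-central"},  # must stay in the EU
    "cardholder_data": {"on-premises"},            # never leaves the building
    "marketing_copy":  {"eu-west", "us-east", "ap-south"},
}

def placement_allowed(classification: str, target_region: str) -> bool:
    """True if data of this classification may live in target_region."""
    allowed = RESIDENCY_POLICY.get(classification)
    if allowed is None:
        return False  # unknown classification: fail closed
    return target_region in allowed

def store(classification: str, target_region: str, payload: bytes) -> None:
    if not placement_allowed(classification, target_region):
        raise PermissionError(
            f"{classification} may not be stored in {target_region}")
    # ... hand off to the actual storage layer here ...

store("customer_pii_eu", "eu-west", b"name, phone, address")  # fine
try:
    store("customer_pii_eu", "us-east", b"name, phone, address")
except PermissionError as err:
    print("blocked:", err)
</ a residency violation stopped before the data moved
```

The design choice worth copying is the fail-closed default: data with no known classification is blocked rather than quietly stored wherever is cheapest.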

HP ML310e Review

Being a VMware guy, I needed a VMware cluster to train and learn on. The old cluster of a DL385 and an NL40 wasn’t really cutting it and was LOUD. Therefore, I decided it was time to upgrade. I was interested to play with the concept of nested VMs and build a virtual cluster inside my HP ML310e.

So, in essence, what’s it like? It’s sexy, very quiet, and power efficient (as denoted by the “e”, whereas “p” means performance, in terms of more powerful CPUs, etc.). The model I have is the base model, with a Xeon E3-1220 v2 and a single 7,200 RPM SATA disk, maxed out to 32 GB of RAM. This is fine, as it is just a test lab. It would also make an excellent small office server if an additional disk were added. The standard 2 GB of RAM is a bit miserable, but you do have another three slots for your RAM – unless, like me, you’re a VMware nerd who crams in as much RAM as possible.

The hardware is ideal for a nested cluster. The Xeon E3-1220 v2 has both VT-d and EPT (Extended Page Tables, Intel’s hardware-assisted memory virtualization) onboard. This means it has all it needs to deal with nested VMs in hardware rather than software, so the performance is still excellent.

So, let’s start with the cosmetics and the outside. The G8 has had a facelift and sports a cool metallic grill. One of the things I didn’t like about it is that to actually get into the box you now have to use a key on the side of the unit. That in itself is not so bad, but once you have the lock in the unlock position you can’t remove the key,...
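As an aside on the nested-virtualization point: before buying hardware for a lab like this, it’s worth confirming the CPU actually exposes VT-x and EPT. Below is a minimal sketch, assuming a Linux host where /proc/cpuinfo is readable; note that VT-d is a chipset/IOMMU feature and does not appear in the CPU flags, so the sketch only checks for an active IOMMU as a rough hint.

```python
# Check /proc/cpuinfo for the Intel features nested VMs lean on:
# vmx (VT-x) and ept. Assumes a Linux host; paths are standard Linux sysfs.

from pathlib import Path

def cpu_flags() -> set:
    """Collect flag names from the first 'flags' line of /proc/cpuinfo."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("VT-x (vmx):", "vmx" in flags)
print("EPT (ept): ", "ept" in flags)

# VT-d itself: look for an active IOMMU rather than a CPU flag.
iommu = Path("/sys/class/iommu")
print("IOMMU present:", any(iommu.iterdir()) if iommu.exists() else "n/a")
```

On a box like this one, vmx and ept both showing True is what makes nested ESXi workable; on an AMD host you’d look for the svm and npt flags instead.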