Archival cloud storage can be an affordable backup layer

Welcome to the first blog in our new column. This column is dedicated to hardware, software and services for the Small and Medium Business (SMB) and commercial midmarket spaces. In most countries, companies too small to be generally called enterprises make up over 99% of employer businesses, yet they are often the poorest served by technology vendors and technology journalism. This is Tech for the 99%.

Backups are the bane of many an IT department. Big or small, nobody escapes the need for backups, and backups can get quite expensive. Smaller organizations trying to back up more than a terabyte of data usually have the hardest time finding appropriate solutions. This is slowly changing.

For many years I have avoided advocating backing up data to the public cloud. Even the cheapest backup service, Amazon's Glacier, was too expensive to be realistically useful, and the backup software that talked to it was pretty borderline. Times changed. Software got better. Cloud storage got cheaper.

A new competitor emerged on the scene in the form of Backblaze with their B2 product. B2 offers archival cloud storage for $0.005 USD (half a cent) per gigabyte. That's notably less expensive than Glacier's $0.007 per gigabyte, and it makes Backblaze B2 $51.20 USD/month to store 10TB. That's an inflection point in affordability.

B2 and Glacier both charge to retrieve data, and they're slow. They aren't the sort of thing any organization should be using as a primary backup layer. It doesn't make a lot of sense to use them to regularly restore files that staff accidentally delete, or to cover other day-to-day backup needs. Similarly, B2 and Glacier aren't immediate-use disaster recovery solutions. You can't back up your VMs to either solution and then push a button and turn them on...
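For those who like to check the math, the monthly storage bill scales linearly with capacity. Here's a quick back-of-the-envelope sketch using only the per-gigabyte prices quoted above; real invoices also include retrieval and transaction fees, which vary:

```python
# Back-of-the-envelope monthly storage cost, using the archival
# per-gigabyte prices quoted above (retrieval and API fees excluded).

PRICES_PER_GB_USD = {
    "Backblaze B2": 0.005,    # half a cent per GB
    "Amazon Glacier": 0.007,
}

def monthly_cost(terabytes: float, price_per_gb: float) -> float:
    """Cost in USD to store the given number of (binary) terabytes for a month."""
    gigabytes = terabytes * 1024  # 1 TB = 1,024 GB
    return gigabytes * price_per_gb

for provider, price in PRICES_PER_GB_USD.items():
    print(f"{provider}: 10 TB = ${monthly_cost(10, price):,.2f}/month")

# Backblaze B2: 10 TB = $51.20/month
# Amazon Glacier: 10 TB = $71.68/month
```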

Beyond the traditional storage gateway

Bringing together all storage into a single point of management, wherever it resides, has long been a dream for storage administrators. Many attempts have been made to make this possible, but each has in some way fallen short. Part of the problem is that previous vendors just haven't dreamed big enough.

A traditional storage gateway is designed to take multiple existing storage devices and present them as a single whole. Given that storage gateways are a (relatively) new concept, all too often the only storage devices supported would be enterprise favourites, such as EMC and NetApp. Startups love to target only the enterprise.

Around the same time as storage gateways started emerging, however, the storage landscape exploded. EMC, NetApp, Dell, HP, and IBM slowly lost market share to the ominous "other" category. Facebook and other hyperscale giants popularized the idea of whitebox storage, and shortly thereafter Supermicro became a $2 billion per year company. As hyperscale talent and experience expanded, more and more public cloud storage providers started showing up.

Suddenly a storage gateway that served as a means to move workloads between EMC and NetApp, or to bypass the exorbitant prices those companies charged for certain features, just wasn't good enough. Trying to force a marriage between different storage devices on a case-by-case basis isn't sustainable. Even after the current proliferation of storage startups comes to an end, and the inevitable contraction occurs, there will still be dozens of viable midmarket and higher storage players with hundreds of products between them. No company, startup or not, can possibly hope to code specifically for all of them. Even if, by some miracle, a storage gateway taking this boil-the-ocean approach did manage to bring all relevant storage into their offering one product at a...
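To make the core idea concrete, here is a minimal sketch of the gateway pattern itself: one namespace in front, dissimilar backends behind, routing policy in the middle. Every name here is hypothetical, and real gateways route on far richer policy than a key prefix:

```python
# Minimal sketch of the storage gateway pattern: one namespace in front,
# dissimilar backends behind. All names here are hypothetical.

class MemoryBackend:
    """Stand-in for any real backend (enterprise array, whitebox, cloud)."""
    def __init__(self, label: str):
        self.label = label
        self._store: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def get(self, key: str) -> bytes:
        return self._store[key]

class Gateway:
    """Presents multiple backends as a single namespace.

    Routing here is a trivial key-prefix rule; real gateways route on
    policy such as cost, performance tier, or data residency.
    """
    def __init__(self, routes: dict[str, MemoryBackend]):
        self._routes = routes

    def _backend_for(self, key: str) -> MemoryBackend:
        for prefix, backend in self._routes.items():
            if key.startswith(prefix):
                return backend
        raise KeyError(f"no route for {key!r}")

    def put(self, key: str, data: bytes) -> None:
        self._backend_for(key).put(key, data)

    def get(self, key: str) -> bytes:
        return self._backend_for(key).get(key)

# One namespace, two very different "devices" behind it.
gw = Gateway({
    "archive/": MemoryBackend("cloud object store"),
    "vm/": MemoryBackend("enterprise array"),
})
gw.put("archive/2015-backup.tar", b"...")
gw.put("vm/db01.vmdk", b"...")
```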

Data residency made easy

Where does your data live? For most organizations, data locality is as simple as making sure that the data is close to where it is being used. For some, data locality must also factor in geographical location and/or legal jurisdiction. Regardless of the scope, data locality is a real-world concern for an increasing number of organizations.

If this is the first time you are encountering the term data locality, you are likely asking yourself, quite reasonably, "do I care?" and "should I care?" For a lot of smaller organizations that don't use public cloud computing or storage, the answer is likely no. Smaller organizations using on-premises IT likely aren't using storage at scale or doing anything particularly fancy. By default, the storage small IT shops use lives with the workload that uses it.

So far so good… except the number of organizations that fall into this category is shrinking rapidly. It is increasingly hard to find someone not using public cloud computing to some extent. Individuals as well as organizations use it for everything from hosted email to Dropbox-like storage to line-of-business applications and more. For the individual, cost and convenience are usually the only considerations. Things get murkier for organizations.

The murky legal stuff

Whether you run a business, government department, not-for-profit, or other type of organization, at some point you take in customer data. This data can be as simple as name, phone number, and address. It could also be confidential information shared between business partners, credit card information, or other sensitive data. Most organizations in most jurisdictions have a legal duty of care towards customers that requires their data be protected. Some jurisdictions are laxer than others, and some are more stringent with their requirements based on the...
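At the heart of any data residency control is a policy check that runs before data lands anywhere. A minimal sketch of the idea, with an entirely hypothetical policy table and region names:

```python
# Sketch of a data residency policy check: before data is written,
# verify the target region satisfies the rules for that class of data.
# The policy table and region names are hypothetical examples.

RESIDENCY_POLICY = {
    # data classification -> regions where it may legally reside
    "customer-pii-ca": {"ca-central"},             # e.g. Canadian PII stays in Canada
    "customer-pii-eu": {"eu-west", "eu-central"},  # e.g. EU PII stays in the EU
    "public-marketing": {"*"},                     # no restriction
}

def may_store(classification: str, target_region: str) -> bool:
    """Return True if data of this classification may live in target_region."""
    allowed = RESIDENCY_POLICY.get(classification)
    if allowed is None:
        return False  # unknown data classification: fail closed
    return "*" in allowed or target_region in allowed

assert may_store("customer-pii-eu", "eu-west")
assert not may_store("customer-pii-ca", "us-east")
```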

Review: ioSafe 1513+

My introduction to ioSafe came the same way I imagine a lot of people's introductions came: watching Trevor Pott, Josh Folland and co torch one for The Register. While I could see the point of them, I couldn't imagine what I would possibly use one for… well, until I was asked to review one, anyway.

So where exactly does one start when reviewing an ioSafe? Personally, I would have loved to start by drowning it and then maybe playing with a blast chiller. I'd have loved to see how long it could perform while immersed and nearing freezing. Except this isn't that type of review. Also, the unit I'm reviewing is located on the other side of the world, which makes that more than a little difficult. I guess we'll shelve that idea for now and take a quick look at the "boring" bits that are just as important as destroying a NAS with ice (or so I'm told).

The Interface

When first logging in to the ioSafe you could be excused for thinking that they've pulled a QNAP and blatantly ripped off the look and feel of the Synology DSM. Well, you'd be almost right. The reason the web interface looks and feels like the DSM is because it IS the DSM. The ioSafe I was tasked to review is the 1513+, and it's powered by the Synology DSM. Finding the Synology DSM on the ioSafe 1513+ is like being reunited with an old friend – if your old friends are well supported, intensely tested and extremely user friendly. To test the user friendliness of the DSM, I tasked my distinctly non-technical office manager with discovering how much space was available on the drives. It took her...

Quaddra provides Storage Insight

Knowledge is power, and Quaddra Software aims to empower customers by giving them knowledge of their unstructured storage. Quaddra was founded by storage and computer science experts Rupak Majumdar, Jeffrey Fischer, and John Howarth, and their first product, Storage Insight, breaks new ground in the storage management and analytics market.

So what is Storage Insight, and how does it differ from other market offerings? First and foremost, Storage Insight isn't a cloud storage solution or a cloud storage gateway. It is high-performance, highly scalable file analytics software designed to give storage and systems administrators the information they need to manage unstructured data files. As you can see from figure 1 above, by providing your teams with the information they need, Storage Insight can reduce the time and money involved in making decisions about your storage assets. One of the ways Storage Insight does this is by providing a clear and concise breakdown of the files in your existing unstructured storage. Storage Insight can assist your team in eliminating cold files from your expensive 'hot' storage. The category and filetype breakdowns also help you quickly and easily identify inappropriate use of storage assets.

So how does Storage Insight work? Unlike traditional storage management, analytics and archival products, Storage Insight doesn't need to ingest your data to work with it. Instead, it comes as a VMware + Ubuntu virtual appliance that integrates into your current environment. Storage Insight has pre-built modules that allow it to work with standard file storage architectures like NFS and CIFS, and plug-in modules can be added for other types of storage. Even better, Storage Insight doesn't just integrate with physical, hardware-based storage. In a first for storage management and...
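Quaddra hasn't published the internals of its scanners, but the kind of breakdown described above is easy to picture. A naive, single-threaded sketch that walks a mounted share and buckets files by extension and by last-access age; illustrative only, and nothing like a production scanner:

```python
# Naive sketch of the kind of breakdown described above: walk a mounted
# share (e.g. an NFS/CIFS mount point) and bucket files by extension and
# by last-access age. Illustrative only; not how Storage Insight works.

import os
import time
from collections import Counter

COLD_AFTER_DAYS = 365  # hypothetical threshold for "cold" data

def scan(mount_point: str):
    by_type = Counter()  # bytes per file extension
    cold_bytes = 0
    now = time.time()
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or permission denied
            ext = os.path.splitext(name)[1].lower() or "(none)"
            by_type[ext] += st.st_size
            # note: atime can be unreliable on noatime/relatime mounts
            if (now - st.st_atime) > COLD_AFTER_DAYS * 86400:
                cold_bytes += st.st_size
    return by_type, cold_bytes

types, cold = scan("/mnt/fileshare")  # hypothetical mount point
print(f"{cold / 2**30:.1f} GiB not accessed in over a year")
for ext, size in types.most_common(10):
    print(f"{ext}: {size / 2**30:.1f} GiB")
```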

Storage wars: leveling the playing field.

The storage industry is going through its first truly major upheaval since the introduction of centralised storage. Enterprises have years – and millions of dollars – worth of investment in existing fibre channel infrastructure, most of which is underutilised. Novel storage paradigms are being introduced into markets of all sizes. The storage industry is in flux, and buying a little time to correctly pick winners could save enterprises millions.

Information technology is always changing. Calling any influx of novelty a "major upheaval" is easy to dismiss as overstatement or hype. Deduplication and compression were reasonably big deals that came out long after centralised storage, so what makes the current brouhaha so special?

The difference is one of "product" versus "feature". Deduplication and compression were never going to be products in and of themselves for particularly long; they were always destined to evolve into features that everyone offered. Today's storage shakeup is different. Server SANs can do away with the need for centralised storage altogether, threatening to turn enterprise-class storage itself into a feature, not a product. Host-based caching companies are emerging with offerings that range from creating an entirely new, additional layer of storage in your datacenter to injecting themselves into your existing storage fabric without requiring disruption of your network design.

We're at about the halfway point in the storage wars now; the big "new ideas" have all been run up the flagpole, and there are a dozen startups fighting the majors to be the best at each of the various idea categories. The "hearts and minds" portion of the war is well underway, and that gives us a few years until there's some major shake-up or consolidation. The big fish will eat the small fish. Some companies will rise, others...