Mediasonic Probox HF2-SU3S2

3 replies
eire1274
Joined: 09/12/2003
Posts: 1190

As I have moved my serving needs over to FreeBSD, à la FreeNAS, ZFS has become more attractive than conventional RAID disk arrangements as my storage needs have grown into larger capacities. ZFS, a combined filesystem and volume manager originally developed by Sun and now mature under FreeBSD, supports hot-swapping as well as automatic pool growth, which makes running a drive cluster much simpler, similar to Drobo NAS devices, while still supplying all of the extended protocols and services of a full server.

Initially, I was using drives inside the tower (I have re-purposed a retired eMachines Athlon 4800+ based unit), but drive changes were a long and difficult process when using JBOD clusters, so I added a HighPoint RocketRAID 622 eSATA card and the Probox 4-bay enclosure, with an initial load of four 2TB Western Digital Green drives.

The Mediasonic Probox devices come in a wide range of sizes and capabilities, up to 8 bays, with USB 2.0 or 3.0 and eSATA, and with either port multiplication or on-board RAID. The HF2-SU3S2 is a 4-bay (up to 4TB per bay) SATA I/II/III enclosure with USB 3.0 and eSATA ports, offering USB transfer speeds up to 5Gb/s and eSATA speeds up to 3Gb/s (SATA II equivalent, though the device is SATA III compliant). Since I wanted ZFS to handle the "RAID" in software, no hardware RAID was needed.

Initially, the machine impressed me with how small it is (pic below), and how outright heavy it is as well. Drives slip in traylessly, though a small handle needs to be affixed to each one before insertion so it can easily be removed later. The drive bay cover is actually two layers: one just a cosmetic cover and dust filter, the other a strong clip that locks the drives into position. With the drives installed, a simple button press set the controller to eSATA mode, and the FreeNAS server immediately detected all of the drives.
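If you want to confirm detection from the FreeBSD shell rather than the FreeNAS web UI, a couple of stock commands will list what the system sees behind the eSATA link (device names will differ depending on your controller; this is just a quick sketch):

    # List every disk the kernel has attached, including those in the Probox
    camcontrol devlist
    # Show each disk's device name and raw size
    geom disk list | grep -E 'Name|Mediasize'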

With the 4 drives set up as a RAID-Z1 pool, which tolerates a single drive failure, about 5.2TB were available (as with RAID 5, one drive's worth of space, here a quarter of the total, is reserved for parity, and the filesystem structures consume a portion as well). I am seeing fairly good write speeds. Larger files hit 100MB/s (that's MB/s, not Mb/s), though smaller files or sequences of files can slow it down to half that speed. The drop from the roughly 125MB/s ceiling of a clean gigabit connection comes from the parity that has to be computed and written across multiple drives for every block. As my processor is an older dual-core with only 4GB of memory, I may need to double the memory and replace the CPU to improve these speeds, but in truth I was only peaking at 110MB/s on a single drive connected directly to the SATA controller, so the loss of speed isn't significant and is still well within what I need for photo and media backup.
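For reference, here is roughly what that pool setup looks like from the command line. FreeNAS does all of this through its GUI, and the pool name "tank" and the adaX device names below are placeholders, not my actual configuration:

    # Build a 4-disk RAID-Z1 pool: one disk's worth of space goes to parity
    zpool create tank raidz1 ada1 ada2 ada3 ada4
    # Compare raw pool size against usable filesystem space
    zpool list tank
    zfs list tank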

For drive replacement, RAID-Z (ZFS's parity arrangement) only uses as much space per drive as the smallest member, so the drives in a pool should match in capacity, but expansion is very simple. Remove one drive from the pool, physically pull it, install a larger disk, add it to the pool, and let it resilver; then follow the same steps with the remaining drive bays. When the final drive has been added and the pool has completed the resilver process, the pool capacity will generally adjust automatically (unless the administrator has disabled this feature) to reflect the larger capacity. So when I run out of space, four 4TB drives can easily bump me up to over 10TB of usable protected space.
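A rough sketch of that replacement cycle from the shell, assuming a pool named "tank" and hypothetical device names (FreeNAS wraps the same steps in its GUI):

    # Allow the pool to grow on its own once every member has been upsized
    zpool set autoexpand=on tank
    # Swap one disk at a time: replace the old device with the new, larger one
    zpool replace tank ada1 ada5
    # Watch the resilver finish before touching the next bay
    zpool status tank
    # Repeat for the remaining bays; capacity expands after the last resilver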

Mediasonic has gone overboard in the engineering of this series of drive enclosures. I do recommend the later versions (easily identifiable by the USB 3.0 capability) as they seem to be much more robust and run quicker than the older models. I will also add that I am not yet a fan of the USB 3.0 feature. USB 3.0 is still a very young technology, and I have seen a lot of issues with port hangups that can cripple enclosures like this with ridiculous slowdowns.

So, from me, at least, this is a kick-ass product! Any questions, please ask!

Nick McDermott

eire1274
Joined: 09/12/2003
Posts: 1190

The box:

IMG_20140223_162915.jpg

Nick McDermott

eire1274
Joined: 09/12/2003
Posts: 1190

The Probox in action:

IMG_20140223_154825.jpg

Nick McDermott

eire1274
Joined: 09/12/2003
Posts: 1190

It occurred to me that I forgot to mention the cooling fan. It can be set to 1/3, 2/3, or full airflow, and it even has an automatic setting that manages the airflow rate on its own. It draws air in through the front door, across all the drives, and exhausts it out the back.

I'm using the automatic setting, and so far, even though the server tower and the hard drive enclosure are locked in a cabinet (which does have passive air circulation via wire holes I have cut at the bottom, the top, and all shelves), the fan has yet to move beyond the lowest setting.

The Green series hard drives run cooler than the Blue or Black 7200RPM drives, and they do have a quick automatic spin-down sleep function. However, ZFS and its regular scrubbing and checksum-verification routines (which guarantee that a bad sector will never silently ruin your data, even on cheap drives) keep activity high enough that the drives don't often sleep. Even so, the total heat output is still well within what the Probox was engineered to handle.
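For anyone curious, those scrubs can also be run or checked by hand; assuming a pool named "tank", the standard commands are:

    # Start a scrub, which reads and verifies every block against its checksum
    zpool scrub tank
    # Check scrub progress and the result of the last completed scrub
    zpool status tank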

Nick McDermott