
Iomega ix4-300d StorCenter Network Storage Reviewed

StorCenter ix4-300d Network Storage 4-bay (8 TB)

At a glance
Product: Iomega 70B89001NA StorCenter ix4-300d Network Storage 4-bay (8 TB) [Website]
Summary: Four-bay Marvell Armada-based NAS with cloud backup and sharing features
Pros:
• Supports backup to cloud, rsync and SMB targets
• Higher throughput than previous Marvell-based models
• USB 3.0
• Three-year warranty
Cons:
• No IPv6 support
• Single volume with no RAID 1, expansion or migration options
• Sparse collection of add-ins
• Cloud feature relies on router port forwarding/UPnP

Typical Price: $557


The last time we looked at an Iomega NAS was the 1.8 GHz Atom D525-based two-bay px2-300d, back in November. This time, we are looking at a refresh of Iomega's lower-priced ix product line, the four-bay ix4-300d. The 300d comes in 4, 8 and 12 TB configurations. There is also a BYOD version, but it's not sold in the U.S. We'll be reviewing the 8 TB version (model 35566).

The ix4-300d, which replaces the ix4-200d, is based on a 1.3 GHz Marvell Armada XP CPU. The NAS comes with four Seagate 2 TB hard drives. It performs well compared to other Marvell NASes, but you'll be disappointed if you compare it to its Intel-based siblings.

The figure below is an angle view of the ix4 with its metal cover removed. The ix4-300d chassis is all metal except for the black plastic front panel bezel. The lone USB 3.0 port is on the front, but at least it's not behind the drive door like on other NASes we've reviewed, such as the Thecus N5550.

It also shows the ix4-300d's side-loading drive configuration. Although you could remove the metal cover, which is secured by two thumbscrews, with the NAS powered up, Iomega says that the ix4-300d's drives are not hot-swappable.

ix4-300d Front Panel

StorCenter ix4-300d Front Panel and Drive Configuration

The next figure shows the rear panel, where you can see two USB 2.0 ports, the power connector, dual Gigabit Ethernet ports with support for failover and 802.3ad aggregation, and a nice, quiet fan. Unfortunately, there is no eSATA port to attach external drives for speedier storage expansion or attached backup.

ix4-300d Back Panel

StorCenter ix4-300d Back Panel


The figure below shows the ix4's main board, which is based on a Marvell Armada XP clocked at 1.3 GHz, supported by 512 MB of soldered-on DDR3 RAM and 512 MB of Samsung flash. Dual Marvell 88E1318 Alaska chips provide the dual Gigabit Ethernet ports and an NEC D720200F1 provides the USB 3.0 port. The SATA II interfaces for the four drives are handled by a Marvell 88SX7042 controller.

The 8 TB version that Iomega shipped for review came with four Seagate Barracuda 7200.14 2 TB (ST2000DM001) drives that are formatted with the EXT4 file system, a change from the XFS file system previously used by Iomega.

ix4-300d board

ix4-300d board

Table 1 summarizes the key components of the ix4-300d compared with its ix4-200d predecessor and its Atom-based px2-300d sibling.

Component | ix4-300d                    | ix4-200d                           | px2-300d
CPU       | Marvell Armada XP @ 1.3 GHz | Marvell 88F6281 Kirkwood @ 1.2 GHz | Intel Atom D525 @ 1.8 GHz
Ethernet  | Marvell 88E1318 Alaska (x2) | Marvell 88E81116R (x2)             | Realtek RTL8111F (x2)
Flash     | 512 MB                      | 64 MB                              | 1 MB
SATA      | Marvell 88SX7042            | Marvell 88E6121                    | Intel 82801IB
USB 3.0   | NEC D720200F1               | N/A                                | NEC D720200F1
Table 1: ix4-300d component comparison

Power consumption measured 44 W with the four drives spun up and 17 W with them spun down. Fan and drive noise could be classified as medium, with both audible in a quiet home office.
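For rough context, the 44 W active figure translates into annual energy use as follows; the electricity rate used here is a hypothetical assumption, not from the review:

```python
# Annual energy estimate from the measured 44 W active draw,
# assuming the NAS runs 24/7 with drives never spun down (worst case).
watts_active = 44
hours_per_year = 24 * 365                      # 8760 hours

kwh_per_year = watts_active * hours_per_year / 1000   # ~385 kWh/year
cost_per_year = kwh_per_year * 0.12                   # hypothetical $0.12/kWh rate

print(f"{kwh_per_year:.1f} kWh/year, about ${cost_per_year:.0f}/year")
```

With drive spin-down (17 W idle) enabled for part of the day, the real figure would land somewhere between the two bounds.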

Related Items:

Iomega Adds Intel, Marvell NASes
Iomega Releases New Intel-Based NASes
Iomega NASes Get Approved for Backup Exec
Iomega StorCenter px2-300d Reviewed
Iomega Expands Into Video Surveillance

User reviews

Average user rating from 2 user(s) (higher is better):
Features: 4.3
Performance: 5.0
Reliability: 4.0

NOTE! Please post product reviews from actual experience only. Questions, review comments and opinions about products not based on actual use will not be published.

Good for bare distro install

Reviewed by Benoitm974
July 22, 2014

Just to mention that the device is very easily hackable: I installed bare Debian with a mainline kernel (3.16), with everything working except CESA. I get very good performance for AFP Mac file sharing (90-100 MB/s write / 105 MB/s read) in RAID 0 on 2 disks. Impressive for a $200 device!


Iomega IX4-300D RAID5 disk expansion from 4 TB to 8 TB - duration

Reviewed by Eric
April 07, 2013

On 5 April 2013 I installed my brand new Iomega Storcenter IX4-300D.
The unit came with 4 TB populated (2 x 2 TB Seagate Barracuda).
As I purchased the Storcenter with the intention to use it in a RAID 5 config, I opened the casing and added 2 new Barracuda ST2000DM001 2 TB drives. No problem there: Iomega supplied 2 plastic drive slides which you fold around the drives; then you slide the units into their slots and ensure you push both connectors into their receptacles. Upon initial startup, the Iomega Storage Manager web-based console reported I had 7.18 TB available in total, 157 MB used by the system itself, with no RAID yet.
So far so good.....

So I went into the Drive Management feature, and selected the RAID 5 protection option, using all drives.
Subsequently the Storcenter started its process of disk expansion so that all 8 TB are utilized in RAID 5 mode with parity.
However, this is a veeeerrry slooooow process. It took more than 24 hours to do the first 35 % of expansion (both the web console and the blue LED screen on the Storcenter itself show the percentage progress). So I had already mentally prepared myself for a 72-hour duration for the whole expansion job. Strangely enough though, the disk expansion job then sped up gradually as it worked its way through the disks - maybe because it started on the outward tracks first?
Anyway, miraculously the process ended after 30 hours in total, giving me just over 5 TB of net capacity due to RAID 5 overhead.

Anyone else with a different duration outcome for this process ?

I've rated reliability neutral as my judgement would be premature.....
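Editor's note: the "just over 5 TB" net figure in the review above is consistent with RAID 5's one-disk parity overhead combined with the decimal-TB vs. binary-TiB difference; a minimal sketch of the arithmetic:

```python
def raid5_usable_tb(n_disks, disk_tb):
    """Usable capacity of an n-disk RAID 5 array in decimal TB.

    RAID 5 stripes data with distributed parity, so one disk's
    worth of capacity is consumed by parity regardless of array size.
    """
    if n_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (n_disks - 1) * disk_tb

raw_tb = raid5_usable_tb(4, 2.0)     # 4 x 2 TB drives -> 6.0 decimal TB usable
tib = raw_tb * 1e12 / 2**40          # ~5.46 TiB, which the OS reports as "5.4 TB"
print(f"{raw_tb} TB usable, shown as {tib:.2f} TiB")
```

The ~5.46 TiB result (minus filesystem overhead) matches the "just over 5 TB" the reviewer observed.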