I’ve been thinking about home networking a lot lately. Specifically, network attached storage (NAS): basically a shared hard drive on the network.
No, it’s not because I love technology or gadgets or because I’m geeky like that. Actually, it’s mostly because I’m not living by myself anymore. All the problems associated with computer files (e.g. backup, drive crash, computer crash, blue screens, no screen, defrag, crash without saving, power fail without saving, backup, out of space, too slow, too messy, security, mobility, restore, backup, sync, transfers, sharing, duplication, backup) that I normally have are now multiplied by two. And that translates to bad things happening twice as often.
The ideal solution
Put all files on a shared network drive that is fast and backed up, and keep nothing on the local computer except applications. This has a lot of advantages and a couple of disadvantages.

Advantages:

- Automatically backed up for anyone on the network.
- Hard drive space efficiency. Some people use a lot of space and some use a little. Pooling all free drive space together allows it to be shared, so in the long run you probably save money on hard drives.
- Share files without needing to email them or copy them to a flash drive.
- Centralized maintenance and enhancement. An improvement to the network share is automatically realized by everyone using it.

Disadvantages:

- If you travel a lot, synchronizing the files to your mobile drive can be a pain.
- Potentially slower. With the exact same hardware, a file loaded from a network drive cannot be faster than a file loaded from the local drive.
Since I do not travel a lot, the only disadvantage for me is the performance. Which brings me back to how I started… I’ve been thinking about home networking a lot lately, mostly about what kind of speed I can expect from a NAS.
To get a rough idea of how fast I can transfer a file from a NAS to a computer, I have to understand the performance of every component on the transfer path. Each person may have a different computer and different usage, so the transfer path will be different for each situation. The fastest possible transfer speed is determined by the slowest component (the bottleneck) on the path. I found numbers for each component, normalized them all to megabytes (MB) per second, and plotted them from slowest to fastest.
The chart above gives a pretty good idea of what really matters during a file transfer. For example, if you were trying to copy something to a usb 1.1 device, it doesn’t matter how fast your network or network drive is; the usb device is going to be too slow anyway.
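The bottleneck rule can be sketched in a few lines of Python. The helper name and the per-component speed figures are my own rough assumptions, not measurements:

```python
# The end-to-end transfer speed is set by the slowest component on the path.
def bottleneck(path):
    """Return the (component, speed) pair with the lowest MB/s figure."""
    return min(path.items(), key=lambda item: item[1])

# Rough component speeds in MB/s for a NAS reached over gigabit ethernet.
single_drive_path = {"7200 rpm drive": 70, "sata link": 150, "gigabit network": 125}
raid0_path = {"raid 0 drives": 140, "sata link": 150, "gigabit network": 125}

print(bottleneck(single_drive_path))  # ('7200 rpm drive', 70)
print(bottleneck(raid0_path))         # ('gigabit network', 125)
```

With a single drive the disk is the limit; pair the drives in raid 0 and the gigabit network becomes the slowest piece instead.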
Ideally, if I can make my gigabit wired network the slowest component in my transfer path, that would be pretty darn good. A single hard drive only achieves roughly 70 MB/s, which is not enough to keep up with a 125 MB/s gigabit network. To saturate the network, it looks like hard drives in a raid 0 or raid 5 configuration may be warranted.
- 7200 rpm hard drive [~70 MB/s]
- 7200 rpm hard drives with raid 0 [~140 MB/s]
- pata UDMA/100 (ide) [~100 MB/s]
- pata UDMA/133 (ide) [~133 MB/s]
- 1.5 Gb/s sata connection [~150 MB/s]
- 3 Gb/s sata connection [~300 MB/s]
- wired 100 Mb network / 100 Mb switch [~12.5 MB/s]
- wired gigabit network/ gigabit switch [~125 MB/s]
- wireless 802.11a [~6.75 MB/s]
- wireless 802.11b [~1.4 MB/s]
- wireless 802.11g [~6.75 MB/s]
- wireless 802.11n [~37.5 MB/s]
- usb 2.0 [~60 MB/s]
- usb 3.0 [~625 MB/s]
- firewire 800 [~98.3 MB/s]
MB = megabyte, Mb = megabit; 8 megabits (Mb) = 1 megabyte (MB)
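The megabit-to-megabyte conversion is where most of these numbers come from, so here is the arithmetic spelled out (function names are mine, and this is the best case, ignoring protocol overhead):

```python
BITS_PER_BYTE = 8

def megabits_to_megabytes(rate_mb_per_s):
    """Convert a link rate quoted in Mb/s to MB/s."""
    return rate_mb_per_s / BITS_PER_BYTE

def transfer_seconds(file_size_mb, link_mb_per_s):
    """Best-case time to move a file (in MB) over a link quoted in Mb/s."""
    return file_size_mb / megabits_to_megabytes(link_mb_per_s)

print(megabits_to_megabytes(1000))  # gigabit ethernet: 125.0 MB/s
print(transfer_seconds(700, 100))   # 700 MB file over a 100 Mb network: 56.0 seconds
```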
Increase total throughput with link aggregation
Some motherboards and routers support IEEE 802.3ad, also known as link aggregation, dual lan, or teaming. The theory is that two lan ports can be paired up under a single IP address to accept connections, and therefore handle more throughput than a single ethernet cable. Think of it like a load balancer for web servers: it won’t really increase the speed of a single request, but it allows the network to handle more requests, increasing total throughput.
I have read a few forums now, and it is still unclear to me whether link aggregation works in practice. An increased transfer rate for a single connection seems unlikely unless there is a way to overcome the overhead of dealing with out-of-order packets. But even without a faster single connection, load balancing multiple connections should help in a multi-user environment. For now, the only thing people appear to agree on is that it provides network redundancy, which is something I’m not interested in.
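A toy model makes the single-connection limitation concrete. The link count, the pin-each-flow-to-one-link behavior, and all the names below are simplifying assumptions (real 802.3ad hashing depends on the switch), but the shape of the result matches what the forums describe:

```python
LINK_SPEED_MB_S = 125  # one gigabit port
NUM_LINKS = 2          # two teamed ports

def aggregate_throughput(num_flows):
    """Each flow is pinned to one physical link, so total throughput scales
    with the number of links actually carrying traffic, not the team size."""
    links_in_use = min(num_flows, NUM_LINKS)
    return links_in_use * LINK_SPEED_MB_S

print(aggregate_throughput(1))  # 125 -- a single transfer is no faster than one cable
print(aggregate_throughput(4))  # 250 -- several simultaneous users can fill both links
```

This is why aggregation looks attractive for a shared NAS serving multiple people, even if any one person’s copy runs at single-cable speed.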