Storage Interfaces

Well, now that we have established that SATA drives are usually the ideal choice, we have to deal with the next logical question: how should we go about connecting a whole bunch of these drives to our editing system?  The primary considerations I will be examining are cost, throughput, reliability, and shared access.  The most popular solutions, offered by multiple vendors, are SCSI, Fibre Channel, Ethernet, iSCSI, eSATA, and the recently introduced External PCIe.  There are a few other proprietary options out there, but those are the ones that are widely available.

Let’s start with SCSI, since it is the easiest to dismiss.  While we are discussing the connection of SATA drives, many of the first-generation SATA arrays had integrated controllers and RAID hardware, and then needed a fast connection to the host.  These arrays were designed to replace much more expensive SCSI-drive-based arrays, so the target customers trusted the SCSI interface and already had high-end SCSI controllers in their systems.  That made SCSI the optimal connection solution for early SATA arrays.  The SATA RAID controller presents the entire array to the host as a single SCSI disk, allowing connection through existing SCSI cards.  With up to 320MB/s of bandwidth, a single SCSI channel can efficiently support 5-7 SATA disks without much impact on performance.  The biggest reason to dismiss SCSI as a serious possibility is that eSATA is a better option for most users, and the rest will be much better served by a Fibre Channel interface, which allows economical upgrading to a full SAN in the future.
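To see where that 5-7 disk figure comes from, here is a quick back-of-the-envelope sketch in Python; the per-disk sustained rate is an assumed typical figure for SATA drives of this generation, not a measurement.

```python
# Rough illustration: how many SATA disks can share one Ultra320 SCSI channel
# before the channel itself becomes the bottleneck. The per-disk figure is an
# assumed typical sustained rate, not a benchmark.
SCSI_CHANNEL_MBPS = 320           # Ultra320 SCSI bus bandwidth, MB/s
ASSUMED_DISK_SUSTAINED_MBPS = 55  # assumed sustained throughput of one SATA drive, MB/s

max_disks = SCSI_CHANNEL_MBPS // ASSUMED_DISK_SUSTAINED_MBPS
print(f"One Ultra320 channel saturates at roughly {max_disks} full-speed SATA disks")
# -> roughly 5 disks at full speed; 6-7 still work in practice because the
#    drives rarely all stream at their maximum rate at the same moment.
```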

The next step for high-end SATA arrays was to replace the SCSI emulation with a much more flexible interface: Fibre Channel.  With up to 400MB/s of bandwidth, Fibre Channel has few disadvantages compared to SCSI, and one major benefit.  SATA disk arrays with Fibre Channel interfaces can usually be connected to switches and shared between multiple systems in a SAN.  All connected systems get direct block-level access to the disks, which will almost always be faster and more responsive than sharing through an Ethernet network.  With the proper shared-SAN software, these systems can also share the data down to the level of individual files.  For facilities where multiple users do collaborative work based on the same source data, Fibre Channel is probably worth the added initial investment, even if a SAN is not immediately implemented with the purchased hardware.  The possibility of extending an array’s use beyond a single workstation should be well worth the increase in price, and as an added benefit, cable lengths can easily be increased enough to keep the noisy array out of what should be a peaceful creative environment.

There are many products available that share storage directly over an Ethernet network connection.  The consumer variants hardly have the performance to support DV editing, let alone anything more demanding.  The higher-end options, with prices similar to SCSI and Fibre Channel, do offer some interesting possibilities, but will rarely be the optimal choice for a given situation.  Any gigabit Ethernet connection is limited to 125MB/s, and in reality, the achievable performance is usually about half of that.  Gigabit network solutions will not work for uncompressed material at HD or higher resolutions.  10Gb Ethernet would offer the desired performance, but is not currently an economical solution.  If compressed files are used, regular gigabit Ethernet can transport the data in real time, but I would still argue that arrays interfacing directly to Ethernet are not the most efficient solution.  Any similar array directly connected to a workstation through a different interface will give much better performance to that system, and can still be shared on an Ethernet network via that workstation.  There will be a performance hit on that station when sharing data with other systems, but a network card with a TCP/IP Offload Engine (TOE) can minimize that effect, and the increased performance that station gains from its directly attached high-speed storage should more than offset whatever is left.  This would involve using an array with one of the other interfaces we are examining.
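To put some numbers behind those Ethernet limits, the data rate of an uncompressed video stream is simple arithmetic; the sketch below assumes 10-bit 4:2:2 sampling (20 bits per pixel on average) and a rough real-world gigabit figure of 60MB/s, both of which are assumptions you can adjust.

```python
# Data rate of an uncompressed video stream vs. what gigabit Ethernet can carry.
# Assumes 10-bit 4:2:2 sampling (20 bits per pixel on average); adjust to taste.
def stream_mb_per_sec(width, height, fps, bits_per_pixel=20):
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1_000_000  # decimal MB/s

GIGE_THEORETICAL = 125  # MB/s: 1000 Mb/s divided by 8
GIGE_REALISTIC   = 60   # MB/s: rough real-world figure after protocol overhead

for name, w, h, fps in [("SD 4:2:2",       720, 486, 29.97),
                        ("HD 1080i 4:2:2", 1920, 1080, 29.97)]:
    rate = stream_mb_per_sec(w, h, fps)
    fits = "fits" if rate < GIGE_REALISTIC else "does NOT fit"
    print(f"{name}: ~{rate:.0f} MB/s -> {fits} on realistic gigabit Ethernet")
# SD comes out around 26 MB/s and squeaks through; uncompressed HD lands around
# 155 MB/s, beyond even gigabit's theoretical 125 MB/s ceiling.
```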

A recent technology that uses Ethernet to transfer data is iSCSI.  Promoted as having many of the advantages of Fibre Channel SANs, iSCSI gives initiator devices (workstations) block-level access to their target devices (arrays).  This allows the target device on the network to emulate a local device on the initiator’s system.  The downsides are that maintaining data integrity on shared target drives requires most of the same expensive software infrastructure that a Fibre Channel SAN does, and the inefficiencies of the TCP/IP protocol are still present to limit the realistically achievable maximum transfer rate.  If you have to deliver identical data to a large number of systems, and don’t want to spend money on the performance that Fibre Channel hardware can deliver, then iSCSI might be of benefit to you.  These products are targeted at large corporations, and they don’t scale down in size without losing performance while keeping their deployment complexity.  I don’t see this being the solution of choice for most post-production professionals working on desktop workstations.

The next solution is offered in a staggering variety of forms: eSATA.  This can be fairly confusing due to the number of variations of the technology on the market.  eSATA is a very flexible standard, but not all implementations will deliver optimal results.  For example, some products use port multipliers to increase the number of drives without increasing the complexity of the interface cables or the RAID controller.  This is good for high-capacity solutions, but will not deliver the same level of performance as direct-connection designs.  The simplest professional-level eSATA array will be an external drive enclosure that passes each drive’s data interface directly back to the controller, which will usually be some variant of PCI card inside the workstation.  This gives the card direct full-speed access to each disk drive, and all RAID processing is done on the controller card inside the workstation.  This will be the fastest and most efficient solution for the cheapest price, and I highly recommend it.  The limitations are the cables, which usually have a 6-foot maximum length, and the fact that Fibre Channel is easier to share.  But for the independent, budget-conscious, single-workstation user, this is the way to go.  Eight disks give you enough storage for almost any conceivable independent project; eight drives should support uncompressed HD if desired, and may even work for 2K with an efficient RAID controller, as the rough numbers below suggest.  Solutions that use port multipliers to connect more drives will increase storage but not performance, and usually require more expensive SAS-compatible controller cards to support the port multipliers.  If you need more than 8TB of storage on your system, those might work well for you.
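Here is the same sort of rough arithmetic for the eight-drive case; the per-disk rate and the RAID efficiency factor are assumptions, and 2K is taken as a 2048x1556 10-bit RGB film scan stored 4 bytes per pixel.

```python
# Rough aggregate throughput of an 8-drive eSATA array vs. uncompressed workloads.
# Per-disk rate and RAID efficiency factor are assumptions, not measurements.
DISKS = 8
ASSUMED_DISK_MBPS = 55         # assumed sustained MB/s per SATA drive
ASSUMED_RAID_EFFICIENCY = 0.8  # rough allowance for parity and controller overhead

array_mbps = DISKS * ASSUMED_DISK_MBPS * ASSUMED_RAID_EFFICIENCY

def rgb_mb_per_sec(width, height, fps, bytes_per_pixel=4):
    # 10-bit RGB frames are commonly stored packed into 4 bytes per pixel
    return width * height * bytes_per_pixel * fps / 1_000_000

hd_rate = rgb_mb_per_sec(1920, 1080, 24)  # ~199 MB/s
two_k   = rgb_mb_per_sec(2048, 1556, 24)  # ~306 MB/s for a 2K film scan

print(f"8-disk array: ~{array_mbps:.0f} MB/s")
print(f"Uncompressed 10-bit HD: ~{hd_rate:.0f} MB/s, 2K: ~{two_k:.0f} MB/s")
# ~350 MB/s of array throughput clears HD comfortably and only just covers 2K,
# which is why an efficient RAID controller matters at that resolution.
```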

The most recent development in this area is the use of External PCI Express as an array interface.  A small PCIe pass-through card is all that is required in the host system.  An x4 slot can transmit and receive 10Gb/s of data in each direction, which works out to roughly 1GB/s of usable bandwidth, with much less overhead than most other interfaces.  An x8 slot is capable of twice as much throughput for an insignificant increase in cost.  With External PCIe, the drive controller and RAID processing electronics are contained within the drive enclosure, and the controller has direct access to the disks.  As a result, the array can easily be moved to another system without having to bring a separate controller card along from inside the workstation.  Each system would need an External PCIe bracket, but those only cost about ten dollars.  Due to the nature of the External PCIe interface, the computer has the same level of access to the controller and its data that it would have if those electronics resided on a board inside the workstation.  Another benefit of PCIe is that the new ExpressCard slot for notebooks is based on the same interface.  This allows a simple adapter to connect an External PCIe device to a notebook at x1 speed (up to 250MB/s, which is fast enough for uncompressed HD).  Currently I am only aware of two vendors offering solutions using this technology, CalDigit and Ciprico.  It will be interesting to watch as this technology continues to develop.
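The per-lane math scales simply; a minimal sketch, assuming first-generation PCIe signaling (2.5Gb/s per lane, with 8b/10b encoding leaving 250MB/s of usable bandwidth per lane in each direction):

```python
# Usable PCIe bandwidth per direction, assuming first-generation signaling:
# 2.5 Gb/s per lane, with 8b/10b encoding leaving 2 Gb/s = 250 MB/s of payload.
MBPS_PER_LANE = 250

for lanes in (1, 4, 8):
    print(f"x{lanes}: ~{lanes * MBPS_PER_LANE} MB/s per direction")
# x1 (~250 MB/s) covers uncompressed HD from a notebook ExpressCard adapter;
# x4 (~1000 MB/s) and x8 (~2000 MB/s) leave headroom for 2K and beyond.
```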

So my recommendation is that high-end eSATA solutions are the most economical direct-attached storage option, and can support uncompressed HD if needed.  Larger operations that are considering upgrading to a full shared SAN in the future will probably find the increased initial investment in Fibre Channel arrays to be well worth it when they re-use the same hardware in their eventual SAN implementation.
