By Mike Cobb, Director of Engineering
Thinking about using RAID configurations in your data storage strategy but not sure what the alphabet soup acronym means, and which type to use? Start here and learn the basic language.
RAID is shorthand for redundant array of independent disks, and refers to drives that work together to store data across two or more disks. The term dates back to 1987, when University of California, Berkeley researchers noted that mainframe computers could benefit from the “inexpensive disks” then flooding the personal computer market. Indeed, the original paper referred to “redundant array of inexpensive disks.” This method allows data to be mapped across disks to provide redundancy. The technology was first applied to storage media for mainframes, with the idea being that if any one of the drives failed, the data would be preserved. While RAID initially applied to hard disks, SSD storage can be configured in this format as well, either with other SSDs or with hard disks.
At its most basic, RAID describes how data can be distributed across different storage devices. How data is distributed or accessed will depend upon which level you choose.
HOW RAID WORKS
In nearly all levels of RAID, the technology relies on “parity” in service of “fault tolerance.” Parity is a form of error protection: extra information computed from the data that can be used to detect errors and reconstruct lost blocks. Fault tolerance allows a system to keep operating in the event of a component failure. The levels that use parity stripe blocks of data, along with the parity information, across two or more drives.
There are six core levels: 0, 1, 5, 6, 10 and 50. Levels 2, 3 and 4 are either obsolete or rarely used. In all of these, multiple storage devices are presented as a single drive volume. Total volume capacity grows with the number of disks, but usable capacity is less than the raw capacity whenever mirroring or parity is in play: RAID 1 and 10 give up half the raw space to mirroring, while RAID 5 and 6 give up the equivalent of one and two drives, respectively, to parity. Only RAID 0 makes all of the raw space usable.
SOFTWARE AND HARDWARE RAID
RAID requires either a disk controller or software that governs how data is written to disk. The hardware option uses a physical RAID controller, which includes a dedicated CPU. Software RAID is built into the operating system, and the levels and nestings supported vary by operating system. A given solution may be hardware, software or a combination of the two.
WHY USE RAID?
The reasons for choosing RAID over other data storage approaches come back to the technology’s use of parity and mirroring to provide fault tolerance. With RAID, you have a level of data redundancy designed to keep your data safe in the event of a hardware (storage media) failure. This type of system doesn’t negate the need for a separate backup, but it does offer an extra safeguard. It’s a good hedge against hardware failure, as opposed to data corruption caused by the operating system or another software glitch.
RAID’S ROLE IN A BACKUP STRATEGY
It cannot be stressed enough that RAID is not a backup strategy unto itself. Think of it as a data survival strategy rather than a backup strategy. Depending upon the level you use, RAID can provide a performance boost or an additional layer of protection against a hardware failure; however, this technology should not be equated with backup. It’s just one component within a backup strategy that should also include cloud backup and a second hardware backup. Particularly for home- and small-office implementations, these redundant systems are now easier to back up than they used to be, thanks to ever-increasing hard drive capacity and the fact that some NAS boxes include a USB port for backing up the array to another device.
PARITY EXPLAINED
A parity bit is binary code that acts as a checksum for tracking errors in data transmission. A bit (a 1 or a 0) is added to each block of data so that the total count of 1s comes out either odd or even; in RAID, parity computed across the data blocks enables a damaged disk to be reconstructed without any loss of data. For a RAID to function properly, it runs error-checking routines that verify data integrity and detect errors using logical (XOR) operations.
In RAID, fault tolerance comes from the fact that parity data is spread across two or more drives, or held on one dedicated “parity drive,” depending upon the RAID level. Should a data drive fail, the parity data can be used to rebuild the lost data and, with it, the drive array.
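The rebuild trick described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s implementation: RAID-style parity is simply the bitwise XOR of the data blocks, and XOR-ing the surviving blocks with the parity block reproduces a lost block.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks striped across two drives, with parity on a third.
block_a = b"\x0f\xf0\xaa"
block_b = b"\x01\x10\x55"
parity = xor_blocks(block_a, block_b)

# Simulate losing the drive holding block_b: rebuild it from what survives.
rebuilt_b = xor_blocks(block_a, parity)
assert rebuilt_b == block_b  # the lost block is recovered exactly
```

The same property scales to any number of drives, which is why a RAID 5 array can lose any single disk and reconstruct its contents from the rest.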
In levels 1 and higher, there is less usable storage than the drives’ raw capacities add up to. By a related principle, RAID arrays work from the lowest common denominator, i.e., the smallest drive capacity in play. That means if, for some reason, you mix a 1TB drive with three 2TB drives, each drive will contribute only 1TB of space. This is less of an issue than it used to be, given the ever-dropping cost of storage, but it remains worth noting: you’ll maximize your storage capacity by using drives of matching capacity.
RAID LEVELS EXPLAINED
RAID has historically been one of the more complex and confusingly described technologies in storage. While RAID is generally recognized to include levels 0, 1, 5, 6, 10 and 50, levels 0, 1 and 10 do not provide the same parity-based redundancy and fault tolerance as 5, 6 and 50: RAID 0 provides no redundancy at all, while 1 and 10 rely on mirroring rather than parity. So let’s consider the different levels and which ones are no longer relevant.
RAID 0
Often referred to as “data striping,” this level makes the RAID name something of a misnomer, since it inherently lacks redundancy and parity. The total capacity of a RAID 0 volume equals the combined capacity of the drives striped together. For example, two 2TB drives striped together in RAID 0 become a single 4TB volume. If one drive fails, the entire volume fails, since the “striped” drives are seen as a single volume. Data is written evenly across both drives, which boosts both read and write throughput. This setup is often used in performance-driven systems, such as gaming PCs, as well as for live streaming and video-on-demand applications in which data reliability takes a backseat to throughput. Works with a minimum of two disks.
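Striping can be pictured as dealing blocks round-robin across the drives. A minimal sketch (block names are hypothetical) shows why throughput improves and why one failure kills the volume:

```python
# RAID 0 striping sketch: blocks alternate round-robin across drives,
# so reads and writes are spread over all spindles, but every file
# ends up with pieces on every drive.
def stripe(blocks, n_drives):
    drives = [[] for _ in range(n_drives)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)
    return drives

layout = stripe(["b0", "b1", "b2", "b3"], 2)
# drive 0 holds b0 and b2; drive 1 holds b1 and b3 --
# lose either drive and half of every file is gone.
```

Because each drive holds only alternating blocks, losing any single drive destroys part of every file on the volume.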
RAID 1
Known as “data mirroring,” this level lacks striping and parity but provides redundancy through duplication: data is written identically to a pair of drives, hence “mirroring.” You don’t look to this type of setup for performance enhancements; rather, you can expect slower write performance, since data has to be written to both drives simultaneously. If one drive fails, the data survives on the second drive and can be rebuilt to a new disk. However, if one drive is damaged by software corruption, that corruption is mirrored to the second disk, rendering the failover RAID 1 provides moot — one reason mirroring is not well-suited as a backup. This setup works with a minimum of two disks (one mirrored pair).
RAID 2
As of 2014, this level was no longer being used commercially.
RAID 3
This level is another relic of the past; it does exist in the wild but is not a commonly used setup. RAID 3 is byte-level striping with parity written to a dedicated parity drive, per the SNIA definition of the term. Disks are rotationally synchronized, and the data stripe size scales up to the size of the exported block size.
RAID 4
RAID 4 is another varietal that is no longer in vogue. It is described as block-level striping with a dedicated parity drive. This is the first level where multiple input/output read operations can happen in parallel, as opposed to having one read operation span all drives. This results in better performance than levels 2 and 3, especially when dealing with small files. Since every write operation requires a write to the parity drive, however, that drive can experience more wear and tear than the data drives and becomes a write bottleneck.
RAID 5
This RAID level supports both fault tolerance and parity, and was designed as an alternative to RAID 4’s dedicated parity drive. With this setup, the array stripes parity data across all drives, reserving the equivalent of one drive’s capacity for parity so lost data can be rebuilt. Distributing the parity blocks reduces wear and tear on any one drive, since parity rotates evenly among them, and improves read performance, since data is accessed from multiple disks (although, with servers and large data sets, the parity calculation can slow write performance). The data stripe size is at least the size of the exported block size, but may exceed it.
RAID 5 reserves the equivalent of one disk for rebuilding lost data, which means the minimum configuration is three drives. This protects against a single drive failure; however, there is one gotcha: after you replace a failed drive, this setup puts an extreme load on the remaining drives as it rebuilds the array, which in turn could lead to a second drive failure — particularly if the reconstruction encounters an unrecoverable read error on a drive (at which point reconstruction fails). For this reason, RAID 5 is not recommended for enterprise storage, though it is well-suited to file storage servers, including home and small-office NAS systems.
RAID 6
This could be considered the most fault-tolerant varietal, but it also provides the lowest usable capacity among the parity levels. Like RAID 5, RAID 6 uses block-level data striping; however, it differs from RAID 5 by having what’s called “double parity,” an extra parity block per stripe. Double parity provides fault tolerance for two drive failures, as opposed to the one drive in RAID 5. This guards against the combination of a drive failure and an unrecoverable read error during a rebuild, which is why RAID 6 is especially appropriate for larger disk capacities, file storage servers and application servers. Four disks is the minimum number for this setup.
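The capacity trade-off between the two parity levels is easy to work out. A quick sketch, assuming equal-size drives (the 2TB figure is just an example): RAID 5 reserves one drive’s worth of space for parity, RAID 6 two.

```python
# Usable capacity for parity RAID levels with equal-size drives.
def usable_tb(n_drives, drive_tb, parity_drives):
    return (n_drives - parity_drives) * drive_tb

raid5 = usable_tb(4, 2, parity_drives=1)  # 6 TB usable from 8 TB raw
raid6 = usable_tb(4, 2, parity_drives=2)  # 4 TB usable from 8 TB raw
```

With four 2TB drives, RAID 6’s second parity block costs you the equivalent of one more drive in exchange for surviving a second failure.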
RAID 10
Some would define RAID 10 as the first of the “nested” RAID levels; it is often alternately referred to as 1+0 or 0+1, depending on how it is implemented. This setup offers a strong combination of security and performance by combining the striping of RAID 0 with the data mirroring of RAID 1.
If RAID 10 is used to stripe mirrored pairs, the mirroring nomenclature comes first, and it is referred to as 1+0. If it is used to accomplish the reverse, mirroring striped sets, the striping nomenclature dominates and it is called 0+1. As with striping in RAID 0, RAID 10 exhibits performance improvements over straightforward mirroring. Either nested variation requires a minimum of four disk drives, and usable capacity will be half the drives’ combined space — the equivalent of two drives in a four-drive array.
RAID 50
Like RAID 10, RAID 50 is a nested level; it may also be referred to as 5+0. This setup combines striping with distributed parity by striping data across two or more RAID 5 arrays. The combination tolerates one failed drive per RAID 5 sub-array — up to two failures in the minimum two-array configuration — provided each failed drive is in a different array. If multiple drives fail within the same sub-array, you may not be so lucky. This setup requires a minimum of six disks.