A Case for Redundant Arrays of Inexpensive Disks (RAID)
by David A. Patterson, Garth Gibson, Randy H. Katz
publisher: ACM Press
pages: 109--116
volume: 17
editor: Hai Jin and Toni Cortes and Rajkumar Buyya
number: CSD-87-391
month: jun
chapter: 1
abstract: As processor and memory speeds increase at an exponential rate while single-disk access times remain relatively constant, it is apparent that I/O bandwidth is likely to become a bottleneck in system performance. One way to address this problem is to use disk arrays, i.e., sets of relatively inexpensive disks that can improve I/O bandwidth via parallel access. The problem with this approach is that simply using disk arrays can drastically reduce reliability. The RAID approach is to use redundant disks of check data to bring reliability up to acceptable levels (i.e., failure rates better than the expected useful life of the disks). Five levels of the RAID design are presented to address the issues of overhead cost (in terms of number of disks), usable storage capacity
institution: University of California, Berkeley
address: Chicago, IL
booktitle: High Performance Mass Storage and Parallel I/O: Technologies and Applications
type: misc
journal: SIGMOD Record
year: 1988
annote: to read
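The abstract's "redundant disks of check data" can be illustrated with bitwise parity, the mechanism behind several of the paper's RAID levels: one parity block per stripe lets any single failed disk be rebuilt by XOR-ing the survivors. A minimal sketch (not code from the paper; block contents and disk count are made up for illustration):

```python
def parity(blocks):
    """XOR equal-length data blocks into one parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving_blocks, parity_block):
    """Recover the single missing block: XOR survivors with parity."""
    return parity(list(surviving_blocks) + [parity_block])

# Example: three data "disks" plus one parity disk.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)
# Disk 1 fails; rebuild its contents from the other disks plus parity.
rebuilt = reconstruct([data[0], data[2]], p)
assert rebuilt == data[1]
```

Because XOR is its own inverse, reconstruction is the same operation as parity generation, which is why a single check disk suffices against any one disk failure.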