 
Last updated: March 18, 2024
In this tutorial, we'll discuss the various types of data storage used in professional computing. In computing systems, we can identify some basic resources: processor units (CPUs) handle data processing, volatile memory (RAM) holds in-processing data, and persistent storage (among other data input/output devices) retains data over time. Over the years, many technologies and approaches have been used. We're going to name the most common ones, explain how they differ, and point out their best use-case scenarios.
The first mass-storage approach used for computing that comes to mind is direct-attached storage (DAS). In this approach, the disks, whatever technology they use, are connected to I/O boards installed on the servers. The more advanced boards have features like disk hot swapping, non-volatile caching, integrated mirroring, and error correction. The non-volatile caching minimizes the risk of data loss during power failures. For mirroring and error correction, they use a technique known as Redundant Array of Inexpensive Disks (RAID), which can be hardware-assisted or software-based. RAID arrangements are categorized into levels; the most common are RAID 0 (striping), RAID 1 (mirroring), RAID 5 and 6 (striping with distributed parity), and RAID 10 (striping combined with mirroring).
RAID levels 2, 3, and 4 are rarely, if ever, used. RAID arrangements make much more sense as performance improvements for conventional magnetic hard drives, where the mechanical movement of the magnetic heads adds a lot of latency to random-access operations. However, even modern solid-state drives (SSDs) can benefit from RAID arrangements.
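To make the parity idea behind levels like RAID 5 more concrete, here is a minimal Python sketch (not tied to any real controller or library) that computes an XOR parity block over data blocks and uses it to reconstruct a lost block:

```python
# Minimal illustration of XOR parity, the idea behind RAID 5/6 redundancy.
# Purely didactic: real controllers work on striped disk sectors, not byte strings.

def xor_blocks(blocks):
    """XOR a list of equally sized byte blocks into a single block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three "disks" holding data blocks of the same size
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity_block = xor_blocks(data_blocks)

# Simulate losing disk 1: XORing the survivors with the parity restores it
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity_block])
assert recovered == data_blocks[1]
print("Recovered block:", recovered)
```

A hardware RAID board performs this kind of computation in the controller itself, while software RAID does it on the host CPU.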
In larger environments, we can have the disks in external chassis holding hundreds or even thousands of individual disks. In such cases, it's hard for a single server to use and manage that many disk devices efficiently. Also, in such scenarios, it's useful to have multiple servers seeing the same disks. This is needed for several distributed computing architectures, such as clusters.

Even though the same disk enclosures can serve multiple servers, this alone doesn't scale well. For instance, if a server isn't transacting at a given moment, its I/O channel to the disks sits idle. Consequently, we lose part of the performance gains of using massive RAID arrangements. Instead, we can use storage network switches that multiplex access to the disks among a large number of servers. That is exactly what we call a Storage Area Network (SAN). It has a lot of similarities with standard local area networks but operates with storage-specific interfaces and protocols. It can also add storage virtualization functionality, hiding from the servers the distribution and specifics of each storage chassis used.

Another nice thing is that those switches can connect various types of storage (flash-only storage, conventional hard drives, and even backup tape drives) to multiple types of servers, such as x86 or RISC servers and mainframes. This technology is useful whenever multiple servers need access to a large number of disks with high throughput and low latency. Also, the servers see the RAID arrangements as disks, much as they would with direct-attached storage. That means the servers manage the data on them using their native filesystems.
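Because a SAN volume (LUN) shows up on the server as an ordinary block device, the server formats and reads it just like a local disk. The short Python sketch below reads the first sector of a hypothetical device path (/dev/sdb here stands in for a SAN-attached LUN; the actual name depends on the operating system and multipathing setup):

```python
# Read the first 512-byte sector of a block device to show that a SAN LUN
# behaves like any locally attached disk. Requires root privileges, and
# /dev/sdb is a hypothetical path standing in for a SAN-attached LUN.
import os

DEVICE = "/dev/sdb"  # assumption: the SAN LUN appears under this name

fd = os.open(DEVICE, os.O_RDONLY)
try:
    first_sector = os.read(fd, 512)
    # An MBR-partitioned disk ends its first sector with the 0x55AA signature
    print("Read", len(first_sector), "bytes")
    print("MBR signature present:", first_sector[510:512] == b"\x55\xaa")
finally:
    os.close(fd)
```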
While a SAN gives servers access that is as close as possible to direct-attached storage, sometimes it's best to share files among servers. That is exactly why we've been using distributed file systems, such as Microsoft's SMB/CIFS or the POSIX-oriented NFS (versions 3 and 4), for a long time now. They are examples of what we call Network-Attached Storage (NAS). In this scenario, one or more servers share files that they manage (using their own storage, directly attached or reached through a storage area network) with other servers or client equipment. The main advantage is that the filesystems are managed on fewer servers, simplifying access and management. The performance, however, is lower than with the previous alternatives since it adds some overhead: the distributed filesystem has to deal with multiple clients trying to access the same files, for instance.
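Once a share is mounted, clients use it through regular file APIs, and concurrent access is exactly the concern the protocol has to manage. As a sketch, assuming an NFS or SMB share is already mounted at the hypothetical path /mnt/shared, a client can serialize writers to a common file with an advisory lock:

```python
# Append an entry to a file on an already mounted NFS/SMB share.
# /mnt/shared is a hypothetical mount point; the advisory lock illustrates
# how clients coordinate when several of them write to the same file.
import fcntl
import os

SHARE_PATH = "/mnt/shared/activity.log"  # assumption: share mounted here

with open(SHARE_PATH, "a") as log_file:
    fcntl.flock(log_file, fcntl.LOCK_EX)   # block until we hold the lock
    try:
        log_file.write(f"entry from host {os.uname().nodename}\n")
        log_file.flush()
    finally:
        fcntl.flock(log_file, fcntl.LOCK_UN)
```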
A newer trend that we must know about is Software-Defined Storage (SDS). This architecture's premise is to achieve an even higher abstraction from the storage hardware used. In this case, we take storage virtualization to an even higher level. It uses inexpensive direct-attached disks without any special-purpose controller (such as RAID boards) and lets the software-defined storage layer deal with redundancy, data distribution, and access.

It can add even more advanced virtualization features at lower costs. For instance, it can do data deduplication, compression, additional data replicas, and easier provisioning and decommissioning of storage nodes. It can even spread the storage clusters across globally distributed data centers, providing location-aware data access, i.e., clients access data on the closest storage nodes. Of course, this level of abstraction also imposes some overhead. However, adding more nodes and optimizing data distribution can mitigate it, and most software-defined storage products optimize data distribution with minimal user intervention.

Finally, the virtual storage can expose multiple access methods, such as block-oriented, key/value, or file-oriented interfaces. That way, various types of clients can use the same storage cluster. This architecture is progressively becoming the standard for large distributed containerized deployments. Storage nodes may even be containers themselves.
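To illustrate one of those features, here is a minimal Python sketch (not based on any particular SDS product) of content-based deduplication: data is split into fixed-size chunks, each chunk is addressed by its SHA-256 hash, and identical chunks are stored only once:

```python
# Toy content-addressed store illustrating the deduplication idea used by
# many software-defined storage systems. Not a real SDS implementation.
import hashlib

CHUNK_SIZE = 4  # unrealistically small, just to make the example visible

chunk_store = {}   # hash -> chunk bytes (stored once)
file_recipes = {}  # file name -> ordered list of chunk hashes

def put_file(name, data):
    """Split data into chunks and store each unique chunk only once."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # dedup: keep a single copy
        recipe.append(digest)
    file_recipes[name] = recipe

def get_file(name):
    """Reassemble a file from its chunk recipe."""
    return b"".join(chunk_store[digest] for digest in file_recipes[name])

put_file("a.txt", b"ABCDABCDABCD")   # three identical chunks
put_file("b.txt", b"ABCDEFGH")       # shares the "ABCD" chunk with a.txt

print("unique chunks stored:", len(chunk_store))  # 2 instead of 5
assert get_file("a.txt") == b"ABCDABCDABCD"
```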
In this tutorial, we introduced the main storage architectures available for data centers today. There is no silver bullet: depending on what we want or need, we may have to use a different architecture. In fact, it's common to find all of them running side by side in current data centers. Choosing the right one for each system deployment goes a long way toward meeting redundancy, performance, and functionality requirements.