Have you heard anyone say the amount of data they need to store is going down? No? Me neither. In fact, the data available for business use is growing exponentially. To remain competitive, businesses must evaluate differentiated storage strategies that ensure collected data can be cost-effectively stored and analyzed.
In a traditional centralized storage system, a single controller head in a frame provides access to tens or hundreds of drives. When that controller becomes a bottleneck, or the frame reaches its maximum drive count, upgrading is both costly and disruptive.
Software-defined storage is another way to architect a storage system. This model uses a software layer to aggregate distributed direct-attached storage (DAS) that would otherwise be captive to each server. Examples of this at the OS layer are Microsoft Windows Server Storage Spaces and VMware vSphere with vSAN. Benefits of distributed DAS architectures include lower acquisition costs through the purchase of standardized hardware and pay-as-you-grow scalability.
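The aggregation idea can be illustrated with a minimal sketch. This is not a real SDS API; the class and attribute names (`StorageNode`, `SoftwareDefinedPool`, `das_capacity_tb`) are hypothetical, chosen only to show how a software layer pools each server's captive DAS and grows capacity node by node:

```python
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    """A server contributing its direct-attached storage to the pool."""
    name: str
    das_capacity_tb: float  # raw DAS capacity in this server, in TB

@dataclass
class SoftwareDefinedPool:
    """Software layer that aggregates DAS across many servers."""
    nodes: list = field(default_factory=list)

    def add_node(self, node: StorageNode) -> None:
        # Pay-as-you-grow: total capacity scales by adding another
        # standardized server rather than upgrading a controller frame.
        self.nodes.append(node)

    @property
    def total_capacity_tb(self) -> float:
        # The pool presents the sum of all members' DAS as one resource.
        return sum(n.das_capacity_tb for n in self.nodes)

pool = SoftwareDefinedPool()
pool.add_node(StorageNode("server-1", 24.0))
pool.add_node(StorageNode("server-2", 24.0))
print(pool.total_capacity_tb)  # 48.0
```

Real products layer replication, erasure coding, and tiering on top of this basic pooling, so usable capacity is lower than the raw sum shown here.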
Server hardware is an important consideration in distributed data solutions. Key attributes of a server used in these solutions are storage density with varied performance options, plus robust network capability.