ThinkServer News

IT planners and product designers are looking at products and data centers in new ways. They are exploring ideas such as a rack containing separate chassis for CPU, memory and storage in hyper-scale computing, or choosing best-of-breed solutions for each tier of an enterprise network. The reasons for exploring these ideas revolve around increasing infrastructure flexibility and reducing costs. Generically, this is called disaggregation. Does it make sense to disaggregate your mainstream server? The rate of technology change continues to accelerate, which means adding new capabilities to IT infrastructures to support new strategic directions. However, integrating these new capabilities into the existing infrastructure can present challenges or missed opportunities. Often, existing servers cannot take full advantage of new technologies: embedded NICs can't utilize the maximum throughput of a new switch, for instance, or fast new drives aren't compatible with the drive bays. Even when it is possible to upgrade using PCIe cards or a similar option, you may still give up manageability or strand the onboard components. When choosing servers today, it makes sense to find servers that can evolve with your infrastructure without wasting resources or requiring you to buy extra capacity that may never be utilized. Disaggregation of key components may offer a better solution. For more...

Continue reading “Server Disaggregation”

Almost every organization is either considering a private cloud or has already begun implementation. The host server architecture is a key component of the private cloud virtual infrastructure. The cost-performance sweet spot for virtualization hosting is typically an x86 server featuring two-socket multi-core processors. Historically, virtualization scalability has been constrained by processor and memory limitations. However, advances in processor technology yield impressive virtualization ratios. With the virtual-to-physical ratios enabled by multi-core processor advances and supporting memory, companies of any size can benefit from a private cloud. Before you dismiss this as cloud washing, consider: a two-socket, twelve-core physical server based on the current Intel Xeon processor platform can theoretically support up to 192 virtual machines in a private cloud environment. Depending on the rack configuration (including servers, storage, networking and backup power), that's the equivalent of four or five racks of servers supporting single applications now consolidated on a single server. Factor in the cost of power, cooling and physical maintenance, and we're considering serious savings that no one can afford to overlook. However, server processing and memory capacity are no longer the sole criteria for...
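The consolidation math above can be sketched in a few lines. This is an illustrative estimate only: the 12 cores per socket and 8 VMs per physical core are assumptions chosen to match the 192-VM figure in the text, and the 42-server rack is a hypothetical single-application rack, not a vendor specification.

```python
# Rough VM-consolidation estimate for a two-socket virtualization host.
# Assumed planning ratios (not vendor figures): 12 cores per socket,
# 8 virtual machines per physical core.

def max_vms(sockets: int, cores_per_socket: int, vms_per_core: int) -> int:
    """Theoretical VM ceiling for one physical host."""
    return sockets * cores_per_socket * vms_per_core

def racks_replaced(total_vms: int, servers_per_rack: int) -> float:
    """Racks of single-application servers one host could displace."""
    return total_vms / servers_per_rack

host_vms = max_vms(sockets=2, cores_per_socket=12, vms_per_core=8)
print(host_vms)                       # 192
print(racks_replaced(host_vms, 42))   # ~4.57, i.e. four to five racks
```

Real sizing would also account for memory per VM, storage IOPS and failover headroom, which is exactly why the text notes that processing and memory capacity are no longer the sole criteria.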

Continue reading “Private Cloud Server Sweet Spot”

Have you heard anyone say the amount of data they need to store is going down? No? Me neither. In fact, data available for business use is growing exponentially. To remain competitive, businesses must evaluate differentiated storage strategies to ensure collected data can be cost-effectively stored and analyzed. Traditional centralized storage systems use a single controller head in a frame that provides access to tens or hundreds of drives. When the single controller becomes a bottleneck, or the maximum number of drives in the frame has been reached, it's both costly and disruptive to upgrade. Software-defined storage is another way to design a storage system. This trending model uses a software layer to aggregate distributed direct-attached storage (DAS), which is normally captive to the server. Examples at the OS layer are Microsoft Windows Server Storage Spaces and VMware vSphere with vSAN. Benefits of distributed DAS architectures include lower acquisition costs through purchase of standardized hardware and pay-as-you-grow scalability. Server hardware is an important consideration in distributed data solutions. Key attributes of a server used in these solutions are storage density with varied performance choices, plus robust network capability. For more information, see the
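A minimal capacity model makes the pay-as-you-grow point concrete. Everything here is a sketch under stated assumptions: the `Node` shape, the drive counts, and the two-copy replication overhead are hypothetical, not the behavior of Storage Spaces or vSAN specifically (both support several resiliency schemes with different overheads).

```python
# Illustrative capacity model for a software-defined storage pool that
# aggregates direct-attached storage (DAS) across server nodes.
# Assumption: data is kept in two copies (mirror-style resiliency),
# so usable capacity is raw capacity divided by the copy count.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    drives: int
    tb_per_drive: float

    @property
    def raw_tb(self) -> float:
        return self.drives * self.tb_per_drive

def usable_tb(nodes, copies: int = 2) -> float:
    """Pool capacity after replication overhead."""
    return sum(n.raw_tb for n in nodes) / copies

# Pay-as-you-grow: start with three nodes, then add a fourth.
pool = [Node(drives=12, tb_per_drive=4.0)] * 3
print(usable_tb(pool))   # 72.0 TB usable from 144 TB raw
pool.append(Node(drives=12, tb_per_drive=4.0))
print(usable_tb(pool))   # 96.0 TB after scaling out one node
```

Scaling out adds controller CPU and network bandwidth along with the drives, which is why the single-controller bottleneck of the centralized design does not recur.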

Continue reading “Too Much Information?”

In this edition of the Lenovo ThinkServer blog, we wanted to shed some light on how Lenovo servers and workstations are being deployed in the high-performance computing (HPC) industry. Joining me in this discussion is Vertical Marketing Manager, Chris McCoy, from the Lenovo ThinkStation team.

Edgar: Hi Chris, can you tell us a bit about your work experience and how you see Lenovo ThinkStation fitting into the key verticals you manage? Also, do you see synergies with Lenovo ThinkServer?

Chris: Hey Edgar, it's a pleasure to be here with you. To answer the first part of your question, I joined the Lenovo ThinkStation product team about 18 months ago. Prior to coming to Lenovo, I spent more than 15 years working for global IT solution providers in both the private and public sectors. Lenovo has made tremendous investments in ThinkStation product engineering and critical technology partner alliances to devise, test and implement various hardware and software technologies in an effort to improve workflow and system performance as well as platform efficiencies. We do this to ensure that we're designing and building platforms that can be tuned and configured to support very specific workflows across a very wide spectrum of customers, in industries such as banking, finance, manufacturing, architecture, medical and life sciences, media and entertainment, and the energy sector. To answer the second part of your question, at Lenovo, servers and...

Continue reading “Lenovo ThinkServer & ThinkStation Solutions For High-Performance Computing”

Studies have shown that data center cooling costs can equal or exceed the cost of powering the IT equipment itself. One way to reduce these cooling costs is by operating the data center at higher temperatures, which allows innovative data-center cooling strategies, such as fresh-air and chiller-less cooling, to be employed. In fact, many new data-center facilities are being built in areas where the local climate lends itself to these technologies. Existing facilities can also benefit from cold-aisle containment strategies segregating high-temperature-capable equipment from less-capable equipment. But can servers take the heat? Even though most servers can operate at temperatures much higher than are found in the controlled environments of most data centers, some server vendors recommend staying within the current recommended operating temperature range at all times. Others allow higher-temperature operation, but only for brief periods. Still others worry that prolonged higher ambient temperatures may affect long-term server reliability and induce failures, even though the components that make up the equipment are specified and tested to operate well above nominal operating temperatures. It's clear that the trend toward higher data-center operating temperatures will continue because of the pressure to reduce operating costs. When considering servers...
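The cost argument above is essentially a statement about Power Usage Effectiveness (PUE): total facility power divided by IT power. A quick sketch shows why cutting cooling power matters so much; the kilowatt figures and the 80% cooling reduction for chiller-less operation are hypothetical illustrations, not measured results.

```python
# Back-of-the-envelope PUE model. If cooling draws as much power as the
# IT equipment (the situation the text describes), PUE is at least 2.0.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Cooling equal to the IT load:
print(pue(it_kw=500.0, cooling_kw=500.0))   # 2.0

# Running hotter enables fresh-air / chiller-less cooling; assume it
# cuts cooling power by 80%:
print(pue(it_kw=500.0, cooling_kw=100.0))   # 1.2
```

In this hypothetical, every kilowatt-hour of useful IT work costs 2.0 kWh at the meter before the change and 1.2 kWh after, a 40% reduction in total facility energy for the same workload.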

Continue reading “Lowering Data Center Costs”