Willie Sutton, the famous bank robber, said that he robbed banks specifically because "that's where the money is." When we wondered about Windows server performance, we went to where the expertise is: Dell Computer Corp.
This year, Dell said that an independent survey of the server market put the Round Rock, Texas, computer maker in first place in server sales. That must mean the company is doing something right. So we asked the company how it sets up servers for best performance. The answers we got -- while not too surprising -- can't be overstated if you want your computers to run as quickly as they can, with the best throughput and the best possible reliability.
So what do you do? It's a no-brainer that making the hardware faster will make the overall system faster. You also undoubtedly know that when running Windows, more memory will often do more than a faster processor to speed up your applications. But there's more to it than that: memory latency and I/O bottlenecks, to name just two factors. The bottom line: Tuning a computer for added performance is a multifaceted affair.
System design is key
The first thing to consider is the design of the system. Let's take the case of an Intel-based server running Exchange. How do you know that the server will run as required for the user load that you will have on it?
Richard Hou, a systems engineer/consultant for Dell, starts with the user load itself. For a given number of users, Hou said you have to assume a certain amount of per-user storage -- and be prepared to provide that storage plus a safety factor. If the message store runs out of room, you can have a server stoppage, and it will take time to get the server back up and running again.
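Hou's sizing rule -- per-user storage plus a safety margin so the message store never fills up -- can be sketched as a quick calculation. The function name, the 50 MB quota and the 25% margin below are illustrative assumptions, not Dell's figures:

```python
def message_store_gb(users, per_user_mb, safety_factor=0.25):
    """Estimate message-store capacity: per-user quota times user count,
    padded by a safety factor so the store does not run out of room.
    (Hypothetical helper; quota and margin are assumed values.)"""
    raw_mb = users * per_user_mb
    return raw_mb * (1 + safety_factor) / 1024  # convert MB to GB

# 1,000 users at 50 MB each, plus a 25% cushion
print(round(message_store_gb(1000, 50), 1))  # 61.0 GB
```

The exact quota and margin depend on your users and SLAs; the point is simply to provision for the load plus headroom, since recovering a stopped server takes time.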
"We use I/O patterns and response times," he said, to get an idea of the kind of load that an Exchange server would experience. Then he gives consideration to the service level agreements (SLAs) that are required for given classes of users for backup and restore operations. Obviously, the kind of SLA you have for a 24/7 sales operation would be different from that for a normal 9-to-5 operation.
"From the user I/O," Hou said, "you can determine the number of spindles that you'll need." It's a simple calculation, the user I/Os per second times the number of users, divided by the I/O handling rate of the drive. So if you had one user I/O per second, and 1000 users that you expect, and a drive that can handle 100 user I/Os per second, you'd need 10 spindles.
But that's just for basic data handling. If you were running in a RAID 5 configuration, you'd need those 10 plus one, he said, and for RAID 10, you'd need 20 drives. So you configure the storage pool you need based on the number of users, the capability of your storage devices and the level of data assurance that you consider applicable given your SLAs.
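Hou's spindle arithmetic, including the RAID overhead, can be expressed directly. This is a minimal sketch of the calculation the article describes (the function name and RAID labels are my own):

```python
import math

def spindles_needed(users, ios_per_user_per_sec, drive_ios_per_sec, raid=None):
    """Estimate drive count from aggregate user I/O.

    Base formula from the article: (users * I/Os per user per second)
    divided by the I/O rate one drive can sustain, rounded up.
    RAID 5 adds one parity drive; RAID 10 mirrors, doubling the count.
    """
    base = math.ceil(users * ios_per_user_per_sec / drive_ios_per_sec)
    if raid == "raid5":
        return base + 1      # one extra drive for parity
    if raid == "raid10":
        return base * 2      # mirrored pairs double the spindle count
    return base

# The article's example: 1 I/O per user per second, 1,000 users,
# drives rated at 100 I/Os per second.
print(spindles_needed(1000, 1, 100))            # 10
print(spindles_needed(1000, 1, 100, "raid5"))   # 11
print(spindles_needed(1000, 1, 100, "raid10"))  # 20
```

Real sizing would also weigh the RAID level's write penalty and rebuild times against your SLAs, but the arithmetic above matches the article's worked numbers.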
Dell does something of this itself in its testing program. Like other server manufacturers, Dell has to compete on the basis of performance. It does this by measuring its servers' performance on standardized tests that other manufacturers generally subscribe to and accept as indicating a server's capability in a given set of circumstances. Perhaps the best known of these is the TPC series of tests, sponsored by the Transaction Processing Performance Council, a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry. Its membership includes all the major server manufacturers, as well as many database vendors.
Companies that are members of TPC, such as Dell, run their own tests that conform to the benchmarks as defined by TPC. The organization audits the testing by sending auditors to company labs to ensure that the server configuration is as claimed and that the tests were run in accordance with the promulgated standards.
Follow the leader
So it is clear that when a member company is running a TPC benchmark using a Windows-based server and Windows software, it wants to run that combination to the max. At Dell, that's the responsibility of Mike Molloy, a senior manager of the System Performance Team. He said that the company runs the TPC-C and TPC-W benchmarks from TPC and the SPECweb and SPEC CPU benchmarks from the Standard Performance Evaluation Corp., as well as a number of other tests, including Intel's Iometer, which measures how well server I/O is running in a variety of scenarios.
Running these tests, he said, "gives us the ability to compare our servers with competitors' boxes. When we set up for a test, we push the system to the limit. So with our results, if there's any CPU left, it's not our intention."
Customers can review the tuning information that's available on the TPC Web site to see exactly what Dell (or any of its competitors) did to achieve the results they did. "We learn a lot on how to optimize our server performance by running these tests," Molloy noted. Dell also offers white papers on its Web site that discuss how the tuning is done and how servers are set up for specific performance environments. While these papers cover only Dell products, there's sufficient generality in them that they can be useful for those running other servers as well.
David Gabel is executive technology editor for TechTarget.
>> Featured Topic: Nab those performance killers
Network performance killers are subtle. This Featured Topic offers expert advice about and real-life examples of ways to pinpoint the causes of poor performance.
>> Featured Topic: Meeting the demand for high availability
What are the best techniques for reducing system downtime? SearchWindowsManageability answers that question in these articles about clustering, load balancing, fault tolerance, disaster preparedness and high availability.
>> Webcast: Maximizing Windows 2000 performance
Are your Windows 2000 servers performing at their peak? Learn how to control your environment and substantially improve system performance from the authors of "Windows 2000 Performance Tuning & Optimization."
This was first published in September 2002