I had already devised a contingency plan, should things slow down to a point I deemed unacceptable. By limiting the number of machines connected to each switch, I decreased the likelihood of traffic getting jammed on a high-demand node.
Each 5324 has 24 gigabit Ethernet ports, so you could theoretically connect 24 machines to each switch. That doesn't account, however, for the high-volume scenario which, combined with backups, would likely slow network responsiveness enough to make me look bad. (And you never get a second chance to make a first impression.)
Figure 4: The Contingency Plan.
So by connecting fewer (in this case, five) machines to each switch and terminating each switch into its own separate server NIC, I reduced the chance of network congestion when things got heavy. (Figure 4 shows how ten production clients with two workstations each were connected to this network. Note that each fileserver also had a built-in NIC up front to connect to the domain controller switch, bringing the total to five NICs.)
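To put rough numbers on why that mattered, here's a quick back-of-envelope sketch (my own illustration, assuming every link is full-duplex gigabit Ethernet and that, in the worst case, every client could saturate its port at once):

```python
# Back-of-envelope oversubscription check. Assumes 1 Gbps per port on the 5324
# and worst-case simultaneous demand from every connected client.

LINK_GBPS = 1  # gigabit Ethernet, per the 5324's ports


def oversubscription(clients_per_switch: int, uplinks_to_server: int = 1) -> float:
    """Worst-case ratio of client demand to server-side uplink capacity."""
    return (clients_per_switch * LINK_GBPS) / (uplinks_to_server * LINK_GBPS)


# Naive layout: all 24 ports filled, everything funneling into one server NIC.
print(oversubscription(24))  # 24.0 -> backups and production traffic fight over one link

# Contingency plan: five machines per switch, each switch on its own server NIC.
print(oversubscription(5))   # 5.0 -> far more headroom when things get heavy
```

It's crude (it ignores protocol overhead and the fact that real traffic is bursty), but it shows why five clients per switch leaves so much more breathing room than a fully loaded switch would.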
But just how heavy would things get? It is entirely possible that I could've gotten away with ten or more clients per switch and spread out the switches a bit more once I connected the office personnel. What kind of load should I have been anticipating in this particular scenario? What exactly was the breaking point and would I ever even come close to reaching it? And finally, what was the most effective way to route traffic to minimize delays and maximize network stability?
The answers to these questions, and more, in Part Two.