This has been a journey of epic proportions, so to speak, spanning at least a year and a half of testing. Here at Pure Storage we have an appliance called the FlashBlade. This is a scale-out NAS device capable of throughput from 6Gbps up to our largest today at 75Gbps! This thing screams! Its massively parallel performance characteristics make it a great appliance for running many different workloads, and running them in parallel.
In my lab I run VMware NFS datastores, Oracle DB, Vdbench performance tests, and analytics, just to name a few! The challenge, however, comes into play when you want to push large throughput through VMware. I am talking more than 5Gbps per host, more than what you can push through a single 40Gbps card.
Now there are about a million ways to manage networking on ESXi: standard vSwitches, Distributed vSwitches, NSX, the Cisco Nexus 1000V, etc. We needed to run some performance testing on a single host, or a few hosts, using large pipes to achieve the performance numbers the box is capable of with as few compute nodes as possible. This required some large bonded pipes. Now, if you have ever looked at VMware LAGs, there are about 20 different ways you can configure the LAG load balancing: route based on IP hash, source MAC hash, originating virtual port ID, etc. This gets even more complex with the introduction of Distributed vSwitches.
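When you are iterating on LAG configs like this, it helps to verify from the host itself what LACP actually negotiated rather than trusting the vCenter UI. A minimal sketch using the ESXi CLI (this assumes a Distributed vSwitch with an active LACP LAG already configured on the host; these commands only apply to vDS-based LAGs, not standard vSwitch NIC teaming):

```shell
# List the Distributed vSwitches this host participates in
esxcli network vswitch dvs vmware list

# Show LACP negotiation status for the LAG uplinks
# (mode, partner system info, and per-uplink flags)
esxcli network vswitch dvs vmware lacp status get

# Per-uplink LACPDU counters, handy for spotting a dead leg in the bond
esxcli network vswitch dvs vmware lacp stats get
```

If one uplink shows no partner information or its LACPDU counters stop incrementing, that leg of the bond is not carrying traffic, which will quietly cap your throughput at a fraction of what the LAG should deliver.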