HP Apollo 6000 (2014)
After deploying the HP Apollo 6000, I would like to highlight the limitations below, as the HP sales team did not mention any of them beforehand:
1. Putting 4 or 5 switches in the middle of the rack is not a good idea: hot air comes out of the middle of the rack and gets sucked into the top-level chassis, creating hot spots. We recommend putting the switches at the top of the rack. [Updated 12/Dec/2014: after reading the HP documentation ourselves, we found that the HP network team had sent us the wrong fan model, which pushed hot air into the front cold aisle even though we had set the fan direction to "power to port". After replacing the fans with the correct model, the issue went away.]
2. APM reports node power incorrectly. At maximum load it reports that every compute node draws 170 W or more, which would put the rack at 140 nodes x 170 W = 23.8 kW; but the actual draw of the whole rack of 140 nodes is only 13 kW, roughly 93 W per node (a quick cross-check is sketched after this list).
3. Power shelf limitation: the PSUs are N+N redundant, but if the AC-to-DC board fails, every chassis fed by that power shelf is powered off. Each power shelf can feed 6 chassis, so the impact is huge (see the failure-domain arithmetic after this list).
4. Each chassis has 5 fan modules, and each module has 2 fans. If one module goes down, the whole chassis goes down. If one of the 10 fans fails, the engineer replacing the fan module must insert the new module within 60 seconds of removing the faulty one; otherwise the whole chassis might go down (a fan-monitoring sketch follows below).
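
To illustrate point 2, here is a minimal Python sketch that compares APM's per-node figure against the rack-level reading. The numbers are the ones we observed above; the variable names are mine, and nothing here calls a real HP API.

NODES_IN_RACK = 140          # compute nodes in the rack
APM_WATTS_PER_NODE = 170     # what APM reported per node at max load
RACK_KILOWATTS = 13.0        # what the whole rack actually drew

apm_rack_kw = NODES_IN_RACK * APM_WATTS_PER_NODE / 1000        # 23.8 kW
actual_watts_per_node = RACK_KILOWATTS * 1000 / NODES_IN_RACK  # ~92.9 W

print(f"APM implies rack draw:   {apm_rack_kw:.1f} kW")
print(f"Measured rack draw:      {RACK_KILOWATTS:.1f} kW")
print(f"Implied actual per node: {actual_watts_per_node:.1f} W")
print(f"APM over-reports by:     {APM_WATTS_PER_NODE / actual_watts_per_node:.1f}x")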
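
The blast radius in point 3 is easy to quantify. Below is a minimal sketch assuming a fully populated a6000 chassis of 10 server trays (our rack held 140 nodes, i.e. 14 chassis); the nodes-per-chassis figure is an assumption about our configuration, not a value from HP documentation.

CHASSIS_PER_POWER_SHELF = 6   # per point 3 above
NODES_PER_CHASSIS = 10        # assumption: fully populated a6000 chassis
RACK_NODES = 140

nodes_lost = CHASSIS_PER_POWER_SHELF * NODES_PER_CHASSIS
print(f"One AC-to-DC board failure can power off up to {nodes_lost} nodes")
print(f"That is {nodes_lost / RACK_NODES:.0%} of a {RACK_NODES}-node rack")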
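
Because of the 60-second hot-swap window in point 4, it pays to know a fan has failed before an engineer pulls the module. Here is a rough Python polling sketch that shells out to ipmitool; the "Fan" sensor-name filter and the status values are assumptions that vary by firmware, so treat it as a starting point, not a supported HP tool.

import subprocess
import time

def failed_fans():
    # "ipmitool sensor" prints pipe-separated columns:
    # name | value | unit | status | thresholds...
    out = subprocess.run(["ipmitool", "sensor"],
                         capture_output=True, text=True, check=True).stdout
    bad = []
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 4 and "Fan" in fields[0]:
            if fields[3].lower() not in ("ok", "ns"):   # "ns" = no reading
                bad.append((fields[0], fields[3]))
    return bad

while True:
    for name, status in failed_fans():
        print(f"ALERT: {name} status is {status} - have the spare module ready")
    time.sleep(30)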