Hey Pete, interesting topic, here's my position on this...
We have tried and set up numerous monitoring systems over the years. Many look very similar to what you have. It's useful to an extent: we get notified by SMS when things move outside acceptable bands, etc. We also know which things in our environment to check when things peak (e.g. the Execution Context, database monitors for long-running queries, etc.).
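For what it's worth, the database side of that checking doesn't need to be fancy. Here's a minimal sketch of a long-running-query check in Python, assuming a MySQL back end and the pymysql driver; the connection details and the 30-second threshold are placeholders, not anything Aware-specific:

```python
# Minimal sketch: list queries that have been running longer than a threshold.
# Assumes a MySQL back end and the pymysql driver; connection details are placeholders.
import pymysql

THRESHOLD_SECONDS = 30  # flag anything running longer than this

conn = pymysql.connect(host="localhost", user="monitor",
                       password="secret", database="information_schema")
try:
    with conn.cursor() as cur:
        # PROCESSLIST shows every running statement and its age in seconds
        cur.execute(
            "SELECT id, user, db, time, info FROM PROCESSLIST "
            "WHERE command = 'Query' AND time > %s",
            (THRESHOLD_SECONDS,),
        )
        for proc_id, user, db, secs, sql in cur.fetchall():
            print(f"[long query] id={proc_id} db={db} user={user} {secs}s: {(sql or '')[:120]}")
finally:
    conn.close()
```

Wire something like that into a cron job or your alerting tool and you get the "long-running query" side of the picture.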
BUT...
The question is: what do you do with the info?
The Aware logs lack the specific info you need (in a usable format) to tie an episode of degraded server performance to an app/BSV event, especially if you are running multiple BSVs on the one server. I've banged on before about the logging improvements required to do this (search the forum for logging).
So, in my opinion you can get insight into things like:
"my server is peaking at this time of day"
"it's Aware server process (or Tomcat process) that is peaking"
BUT you can't answer the obvious next question "what in Aware is causing this?"
So, is it useful? Yes - you must have some server monitoring. But without improved logging it isn't THAT useful.
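To make the gap concrete, here's roughly the correlation you end up attempting: take the window where monitoring flagged a peak and pull the Aware log lines from that window. The log path, timestamp format and peak window below are assumptions for the sake of the sketch, not what your install necessarily uses:

```python
# Rough sketch: filter Aware log lines to the window where the server peaked.
# LOG_FILE, TS_FORMAT and the peak window are placeholders - adjust to your setup.
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("/opt/AwareIM/logs/aware.log")   # placeholder path
TS_FORMAT = "%Y-%m-%d %H:%M:%S"                   # placeholder timestamp format
PEAK_START = datetime(2023, 5, 10, 14, 5)         # whatever window monitoring flagged
PEAK_END = datetime(2023, 5, 10, 14, 20)

for line in LOG_FILE.read_text(errors="ignore").splitlines():
    # Assume each line starts with a timestamp; skip lines that don't parse
    try:
        ts = datetime.strptime(line[:19], TS_FORMAT)
    except ValueError:
        continue
    if PEAK_START <= ts <= PEAK_END:
        print(line)
```

Even after you've filtered to the peak window like this, the lines don't tell you which BSV or which process was responsible for the load - which is exactly why the logging improvements matter.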
Happy to be proven wrong if someone can show me a better way to do this at the conference. In fact it would make a great conference topic in Portugal, because as you scale this becomes a major drain on resources: we end up scrambling to figure out why a server is running slow or peaking.
btw - we have tried services like DataDog to line up server peaks with the logs, but it isn't useful because of the way Aware logs.