This has been a common practice in Windows Forms development for years. All my Windows Forms apps had this feature for menus and buttons.
In the web world, however, the browser is itself a host application (a kind of Windows Form) that intercepts keyboard input and has its own menus and buttons, and inside it runs JS code that creates its own menus and buttons.
Now let's add another layer of complexity to the mix.
You create an Aware app with buttons. The Aware app can show another Form in a Tab. This Form also has its own menus and buttons.
So by now you have a stack three levels deep that needs to figure out who gets, let's say, the "CTRL-I" keystroke. Does the browser intercept it and do something, does the Aware main menu catch it, or does the Form in the tab section?
Very likely, the browser will catch it, report a "handled" result back to the OS, and neither Aware nor your Form will ever see that keystroke.
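For what it's worth, a page *can* ask the browser to give up some shortcuts. Here's a minimal sketch (the function name `appWantsShortcut` is just illustrative): a capture-phase `keydown` handler that calls `preventDefault()` to claim CTRL-I before the browser's default action runs. Note this only works for shortcuts the browser is willing to hand over; reserved ones like CTRL-W or CTRL-T typically never reach your page at all, which is exactly the interception problem described above.

```javascript
// Pure helper: decide whether our app wants this key combination.
// Works on any object shaped like a KeyboardEvent, so it can be
// exercised outside a browser too.
function appWantsShortcut(e) {
  return e.ctrlKey && !e.altKey && !e.shiftKey && e.key.toLowerCase() === "i";
}

// In a real page you would register it roughly like this (commented out
// so the sketch stays self-contained outside a browser):
//
// document.addEventListener("keydown", (e) => {
//   if (appWantsShortcut(e)) {
//     e.preventDefault(); // ask the browser NOT to run its own CTRL-I action
//     // ...open our app's menu/action instead...
//   }
// }, true); // capture phase: runs before handlers deeper in the DOM
```

Even with this in place, whether the app actually receives the keystroke is still up to the browser and the OS, which is the point of the layering problem above.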
And that's why this notion has disappeared in web development, where an app runs in a sandbox.
This notion belonged to the "Windows" world; browsers now run on Android, iOS, Mac, etc., and each platform handles keyboard interception differently.
Hope this helps!