Sadly, I am not a developer fluent enough to say what specifically would need to be present. But for those using text-to-speech, the main tools are Apple's VoiceOver, which is built into the OS, while Windows users are generally working with either NVDA or JAWS. EXS and Logic are developed by Apple, so they are already built to be compatible with VoiceOver.
What would need to be figured out, if possible, is one of two things: how to make the UI elements readable by screen-reader technology, or how to add some level of screen-reader access into the engine itself that can be initialized by an element an outside reader can detect.
A good place to start may be to reach out to an accessibility developer, or even to an instructor in accessible technology like my friend Tony, who can at least point you toward some people who might be able to help. Tony's info is in the video description. I know he has one friend who has been trying to work with Native Instruments on accessibility and may be able to connect you. Thank you for your comment, by the way.