Welcome to Pete Brown's 10rem.net

ViewModel Pattern and Speech as UI. Help me Obi Wan, where’s the View?

Pete Brown - 16 December 2009

When working through the Speech Synthesis post last night, I kept getting a nagging feeling that the speech code I was putting in my ViewModel just didn’t really belong there. The speech was the UI: it was the thing the user actually interacted with, so it smelled wrong to have it in the ViewModel. It felt like I was putting textbox- and button-emitting code inside VM functions.

That said, speech isn’t exactly a traditional visual View; it’s a… uh… sound :)

I’m looking at building some more samples that have deeper speech integration, including recognition, but I don’t think that code should be in the ViewModel. At the same time, it represents a significant amount of functionality that in a real app would need to be tested, and like a View, it doesn’t test overly well programmatically.

So, MVVM / ViewModel pattern gurus and moderately interested others: where would you put this code?

Would you create a separate class that interacts with the viewmodel just like the View does, parsing input and then calling VM functions, and call it an “Audible View” or something? What would own it, and how would you instantiate it? Would you tie it to a visible View?

Would you just put the code in the ViewModel and say the “real” UI is the view, and the speech is just ancillary? That could make for a pretty hefty viewmodel, with lots of things that aren’t really testable.

Would you do something else?
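For concreteness, here is a minimal sketch of the first option above: a separate "audible view" class that subscribes to the ViewModel exactly the way a visual View would, and speaks changes instead of rendering them. All names (`StatusViewModel`, `AudibleView`) are hypothetical illustrations, not code from the post, and the speech call itself is passed in as a plain function so the sketch stays self-contained:

```typescript
type Listener = (value: string) => void;

// Hypothetical minimal ViewModel with one observable property.
class StatusViewModel {
  private listeners: Listener[] = [];
  private status = "";

  // Any view, visual or audible, subscribes to property changes.
  onStatusChanged(listener: Listener): void {
    this.listeners.push(listener);
  }

  setStatus(value: string): void {
    this.status = value;
    this.listeners.forEach((l) => l(value));
  }

  get current(): string {
    return this.status;
  }
}

// The "audible view": it owns the speech plumbing and observes the VM,
// so the VM never knows speech exists.
class AudibleView {
  constructor(vm: StatusViewModel, private speak: (text: string) => void) {
    vm.onStatusChanged((value) => this.speak(value));
  }
}

// Usage: in production, speak would wrap the real speech API;
// here a test double just records what would have been spoken.
const vm = new StatusViewModel();
const spoken: string[] = [];
new AudibleView(vm, (text) => spoken.push(text));
vm.setStatus("Download complete");
// spoken now contains "Download complete"
```

The appeal of this shape is that the speech code is testable in isolation (swap in a recording function, as above) and the ViewModel stays completely speech-agnostic.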

Help me readers, you’re my only hope.


5 comments for “ViewModel Pattern and Speech as UI. Help me Obi Wan, where’s the View?”

  1. Corrado Cavalli says:
    Functionality should be in a separate class that gets injected into the viewmodel (maybe via IoC or just via a ViewModelLocator). The class should also implement an interface to facilitate mocking/testing; the VM then just interacts with the provided interface.
  2. Ryeol says:
    Speech can have a View.
    For example, the TextToSpeak TextBox and the Speak Button can be composed to create a UserControl for Speech.
    You can put the speech code in the UserControl.

    Another approach is to create an Attached Behavior such as SpeakButtonBehavior.
  3. Younes Ouhbi says:
    I think it would be great to create a more general SoundCommand that would go into the ViewModel but would be activated by sound.

    This way it would be a lot more coherent since, as you said, sound is just another form of interaction.
  4. Glenn Block says:
    I go with Corrado's suggestion. Speech is a separate concern; move it off into its own service, which can be injected into the VM. That makes the VM more testable and easier to maintain, and better encapsulates the speech functionality.
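The service-injection approach Corrado and Glenn describe might look something like the sketch below. The names (`ISpeechService`, `GreetingViewModel`, `MockSpeechService`) are hypothetical, and the original context was C#/Silverlight, but the shape of the pattern is the same in TypeScript:

```typescript
// Hypothetical abstraction: the ViewModel depends only on this interface,
// never on a concrete speech API, so it stays unit-testable.
interface ISpeechService {
  speak(text: string): void;
}

// A stand-in production implementation; a real one would wrap the
// platform speech synthesizer.
class ConsoleSpeechService implements ISpeechService {
  speak(text: string): void {
    console.log(`(speaking) ${text}`);
  }
}

// The ViewModel receives the service via constructor injection
// (an IoC container or a ViewModelLocator could supply it).
class GreetingViewModel {
  constructor(private speech: ISpeechService) {}

  greet(name: string): string {
    const message = `Hello, ${name}`;
    this.speech.speak(message); // speech is a side effect behind the interface
    return message;
  }
}

// In a test, a mock implementation captures what was "spoken".
class MockSpeechService implements ISpeechService {
  spoken: string[] = [];
  speak(text: string): void {
    this.spoken.push(text);
  }
}

const mock = new MockSpeechService();
const viewModel = new GreetingViewModel(mock);
viewModel.greet("Obi Wan");
// mock.spoken can now be asserted on in a unit test
```

The trade-off versus the "audible view" idea is that here the VM actively drives speech output, which fits speech synthesis well; speech *recognition* as input probably still wants something view-like sitting outside the VM.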
