I'm currently running into some unexpected/unwanted behavior with an async method I'm trying to use. The async method is RecognizeAsync. I can't await this method because it returns void. What happens is that ProcessAudio gets called first and appears to run to completion, but the web page never returns my "Contact" view as it should, nor does it error out. Only after the method has run to completion do the breakpoints in my handlers start getting hit. If I let it finish, no redirect happens afterwards: in the Network tab of the Chrome debugger, the status stays marked as "pending" and just hangs. I believe my problem is caused by an asynchrony issue, but I can't pin down exactly what it is.

All help is appreciated.
[HttpPost]
public async Task<ActionResult> ProcessAudio()
{
    SpeechRecognitionEngine speechEngine = new SpeechRecognitionEngine();
    speechEngine.SetInputToWaveFile(Server.MapPath("~/Content/AudioAssets/speechSample.wav"));
    var grammar = new DictationGrammar();
    speechEngine.LoadGrammar(grammar);
    speechEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(SpeechRecognizedHandler);
    speechEngine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(SpeechHypothesizedHandler);
    speechEngine.RecognizeAsync(RecognizeMode.Multiple);
    return View("Contact", vm); // first breakpoint hit occurs on this line
                                // but it doesn't seem to be executed?
}

private void SpeechRecognizedHandler(object sender, EventArgs e)
{
    // do some work
    // 3rd breakpoint is hit here
}

private void SpeechHypothesizedHandler(object sender, EventArgs e)
{
    // do some different work
    // 2nd breakpoint is hit here
}
UPDATE: As suggested, I have changed my code to the following (inside ProcessAudio):
using (speechEngine)
{
    speechEngine.SetInputToWaveFile(Server.MapPath("~/Content/AudioAssets/speechSample.wav"));
    var grammar = new DictationGrammar();
    speechEngine.LoadGrammar(grammar);
    speechEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(SpeechRecognizedHandler);
    speechEngine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(SpeechHypothesizedHandler);
    var tcsRecognized = new TaskCompletionSource<RecognizeCompletedEventArgs>();
    speechEngine.RecognizeCompleted += (sender, eventArgs) => tcsRecognized.SetResult(eventArgs);
    speechEngine.RecognizeAsync(RecognizeMode.Multiple);
    try
    {
        var eventArgsRecognized = await tcsRecognized.Task;
    }
    catch (Exception e)
    {
        throw;
    }
}
This produced some new misbehavior: the breakpoint on return View("Contact", vm) is now hit after the handlers finish firing, but still no redirect happens. I am never directed to my Contact page; I sit on the original page indefinitely, just like before.
You're returning too early. By the time you hit the return View line, the speech engine probably hasn't even started yet.

You need to wait until the speech engine fires its final event. The best way to do that is to convert from event-based asynchrony to TAP-based (Task-based Asynchronous Pattern) asynchrony. This can be achieved with a TaskCompletionSource.

Let's handle what (I believe) should be the last event raised after speechEngine.RecognizeAsync is called, namely SpeechRecognized. I'm assuming this is the event that fires when the speech engine has computed its final result.
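The event-to-task conversion described above can be demonstrated with a plain .NET event, independent of System.Speech. This is a minimal sketch: the Worker class and its Finished event are invented stand-ins for the speech engine and its events, not real framework types.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical event-based class standing in for SpeechRecognitionEngine.
class Worker
{
    public event EventHandler<string> Finished;
    public void Start() => Finished?.Invoke(this, "done");
}

class Program
{
    static async Task Main()
    {
        var worker = new Worker();
        var tcs = new TaskCompletionSource<string>();

        // Bridge the event to a Task: the lambda completes the source
        // when the event fires. TrySetResult is safe even if the event
        // were to fire more than once.
        worker.Finished += (sender, result) => tcs.TrySetResult(result);

        worker.Start();

        // Now the event-based operation can be awaited like any Task.
        string outcome = await tcs.Task;
        Console.WriteLine(outcome); // prints "done"
    }
}
```

The same shape applies to SpeechRecognized: the lambda captures the event args and completes the TaskCompletionSource, and the controller action awaits its Task.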
So, first:
var tcs = new TaskCompletionSource<SpeechRecognizedEventArgs>();
Now let's hook up SpeechRecognized, using an inline lambda-style method declaration, to complete the source when it fires:
speechEngine.SpeechRecognized += (sender, eventArgs) => tcs.SetResult(eventArgs);
(...wait... what happens if no speech is recognized at all? We also need to wire up the SpeechRecognitionRejected event, and define a custom Exception subclass for that kind of outcome... here I'll just call it RecognitionFailedException. Now we're capturing every possible outcome of the recognition process, so the TaskCompletionSource will complete whatever the result.)
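The custom exception subclass is only named above, never defined. A minimal definition could look like the following; the RecognitionFailedException name is carried over from the answer, and the constructors and message text are assumptions, not part of System.Speech.

```csharp
using System;

// Hypothetical exception type used to signal that the engine
// rejected the audio, i.e. nothing was recognized.
public class RecognitionFailedException : Exception
{
    public RecognitionFailedException()
        : base("Speech recognition did not produce a result.")
    {
    }

    public RecognitionFailedException(string message)
        : base(message)
    {
    }
}
```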
speechEngine.SpeechRecognitionRejected += (sender, eventArgs) => tcs.SetException(new RecognitionFailedException());
Then:
speechEngine.RecognizeAsync(RecognizeMode.Multiple);
Now we can await the Task property of the TaskCompletionSource:
try
{
    var eventArgs = await tcs.Task;
}
catch (RecognitionFailedException ex)
{
    // this would signal that nothing was recognized
}
Do some processing with the EventArgs that come back as the Task's result, and return a viable result to the client.
In the course of doing this, you are creating IDisposable instances that need to be disposed of properly. So:
using (SpeechRecognitionEngine speechEngine = new SpeechRecognitionEngine())
{
    // use the speechEngine with the TaskCompletionSource
    // wait until it's finished
    try
    {
        var eventArgs = await tcs.Task;
    }
    catch (RecognitionFailedException ex)
    {
        // this would signal that nothing was recognized
    }
} // dispose