• Android iFlytek (科大讯飞) speech recognition: a detailed code walkthrough


    The iFlytek speech recognition feature is used here in Android code. I wrapped the recognition in a Service and call that service directly from a Fragment. The recognizer used is the one with the built-in dialog, which calls iFlytek's speech API directly; the code collects the text from the list of recognition results that the API returns. 
    Using this speech recognition requires applying for an APPID on the official iFlytek website.

    This post is from: http://blog.csdn.net/zhaocundang (小波Linux, QQ 463431476)

    Test run:

    (screenshots omitted)

    My project uses the iFlytek speech recognition service, and the project report explains it as follows:

    Design of the voice Service code

    (1) To write the Service code well, you must first understand the Service lifecycle.

    (2) Starting the Service: 
    getActivity().startService(new Intent(getActivity(),VoiceService.class)); 
    Stopping the Service: 
    getActivity().stopService(new Intent(getActivity(),VoiceService.class)); 
    (3) Make the class extend Service: 
    public class VoiceService extends Service{ 
    } 
    Override the onBind() method; its return value is how the Service instance is handed back to a caller that binds to the Service (see the sketch after this list). 
    (4) Calling the iFlytek speech API. 
    First set the microphone recording sample rate: 
    rd.setSampleRate(RATE.rate16k); 
    Then show the recognizer dialog from the speech API package; it sends the recording to the server, receives the server's results as a list of RecognizerResult items, and the text is collected from that list: 
    final StringBuilder sb = new StringBuilder(); 
    rd.setListener(new RecognizerDialogListener() { 
    public void onResults(ArrayList<RecognizerResult> result, boolean isLast) { 
    for (RecognizerResult recognizerResult : result) { 
    sb.append(recognizerResult.text); 
    } 
    } 
    public void onEnd(SpeechError error) { 
    } 
    }); 
    (5) Text-to-speech playback. 
    First declare the player object: 
    private static SynthesizerPlayer player ; 
    Here I wrap playback in a speak() function; appid is the application ID obtained when registering the app: 
    public void speak(String words){ 
    player = SynthesizerPlayer.createSynthesizerPlayer(getActivity(),"appid=57527406"); 
    player.playText(words, null, null); // play the text 
    }
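
    Since this project starts the Service with startService(), the onBind() in the full code below simply returns null. If a caller instead binds with bindService(), onBind() is where the Service instance is handed back; a minimal sketch of that pattern (the LocalBinder class is illustrative and not part of the original project):

    import android.app.Service;
    import android.content.Intent;
    import android.os.Binder;
    import android.os.IBinder;

    public class VoiceService extends Service {
        // hands the running Service instance to a caller that used bindService()
        public class LocalBinder extends Binder {
            VoiceService getService() {
                return VoiceService.this;
            }
        }

        private final IBinder binder = new LocalBinder();

        @Override
        public IBinder onBind(Intent intent) {
            return binder;
        }
    }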

    The main code:

    Starting and stopping the service (in the Fragment):

    
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.button1:
                // start the speech recognition service
                getActivity().startService(new Intent(getActivity(), VoiceService.class));
                break;
            case R.id.button2:
                // stop the speech recognition service
                getActivity().stopService(new Intent(getActivity(), VoiceService.class));
                break;
        }
    }
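
    The onClick() above assumes the Fragment implements View.OnClickListener and registers itself on both buttons. A minimal sketch of that wiring in the Fragment (the layout name fragment_voice and the button ids are assumptions, not taken from the original project):

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // inflate the Fragment layout that contains the two buttons
        View root = inflater.inflate(R.layout.fragment_voice, container, false);
        // route both button clicks to the onClick(View) method shown above
        root.findViewById(R.id.button1).setOnClickListener(this);
        root.findViewById(R.id.button2).setOnClickListener(this);
        return root;
    }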

    Inside the Service (note that VoiceService must also be declared in AndroidManifest.xml):

    package zcd.voice;
    
    import java.util.ArrayList;
    
    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;
    import android.view.WindowManager;
    import android.widget.Toast;
    
    import com.iflytek.speech.RecognizerResult;
    import com.iflytek.speech.SpeechConfig.RATE;
    import com.iflytek.speech.SpeechError;
    import com.iflytek.ui.RecognizerDialog;
    import com.iflytek.ui.RecognizerDialogListener;
    
    public class VoiceService extends Service{
        private RecognizerDialog rd;
        private String text;

        @Override
        public IBinder onBind(Intent intent) {
            // not used here: the service is started with startService(), not bound
            return null;
        }

        @Override
        public void onCreate() {
            super.onCreate();

            // Toast.makeText(this, "Service onCreated", Toast.LENGTH_LONG).show();
            // create the recognizer dialog with the APPID applied for on the iFlytek site
            rd = new RecognizerDialog(this, "appid=57627d9c");
        }

        public void onStart(Intent intent, int startId) {
            // Toast.makeText(this, " Service onStart", Toast.LENGTH_LONG).show();
            showRecognizerDialog();
        }
    
        private void showRecognizerDialog() {
            // "sms" selects the plain dictation (speech-to-text) engine
            rd.setEngine("sms", null, null);
            // set the microphone sampling rate
            rd.setSampleRate(RATE.rate16k);
            final StringBuilder re = new StringBuilder();
            // set the callback that receives the recognition results
            rd.setListener(new RecognizerDialogListener() {
                @Override
                public void onResults(ArrayList<RecognizerResult> result, boolean isLast) {
                    for (RecognizerResult recognizerResult : result) {
                        re.append(recognizerResult.text);
                    }
                }
                @Override
                public void onEnd(SpeechError error) {
                    // recognition finished: keep the text, show it, and broadcast it
                    text = re.toString();
                    Toast.makeText(VoiceService.this, re.toString(), Toast.LENGTH_LONG).show();
                    sendmsg();
                }

            });

            // in a Service the dialog can only be shown if its window is raised to system-alert priority
            rd.getWindow().setType(WindowManager.LayoutParams.TYPE_SYSTEM_ALERT);
            rd.show();
        }
    
    
        public void sendmsg()
        {
            // the Service broadcasts the recognition result back to the Voice Fragment
            Intent intent = new Intent();
            intent.putExtra("message", text);
            intent.setAction("zcd.voice");
            sendBroadcast(intent);
        }
    
    
    }
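
    On the Fragment side, the result broadcast by sendmsg() can be received with a BroadcastReceiver registered for the "zcd.voice" action. A minimal sketch (the txt_result TextView and the onResume/onPause placement are assumptions, not taken from the original project):

    private final BroadcastReceiver voiceReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // read the recognition result sent by VoiceService.sendmsg()
            String message = intent.getStringExtra("message");
            txt_result.setText(message); // txt_result is an assumed TextView in the Fragment
        }
    };

    @Override
    public void onResume() {
        super.onResume();
        // listen for recognition results while the Fragment is visible
        getActivity().registerReceiver(voiceReceiver, new IntentFilter("zcd.voice"));
    }

    @Override
    public void onPause() {
        super.onPause();
        getActivity().unregisterReceiver(voiceReceiver);
    }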
    

    Normally a dialog cannot be shown from a Service; the way around this is to call getWindow() on the dialog and raise its window type to the system-alert level, which also requires the android.permission.SYSTEM_ALERT_WINDOW permission in the manifest.
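
    On Android 6.0 and above the user must additionally grant the "draw over other apps" permission at runtime before a system-alert window can be shown. A minimal sketch of that check, run from the Fragment before starting the Service (the request code is arbitrary):

    // send the user to the system screen where the overlay permission is granted
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
            && !Settings.canDrawOverlays(getActivity())) {
        Intent intent = new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                Uri.parse("package:" + getActivity().getPackageName()));
        startActivityForResult(intent, 100);
    }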

  • Original article: https://www.cnblogs.com/zhaocundang/p/5606737.html