Android 4.4 KitKat AudioTrack Flow Analysis


    The main components of the Android audio system:

    • AudioManager: manages the audio system as a whole. It handles system-wide sound policy, for example the ringtone for incoming calls, the SMS notification sound, and so on; it is mainly about policy.
    • AudioTrack: used to play audio.
    • AudioRecord: used to record audio.

    There are already plenty of articles that analyze AudioTrack, so we start the analysis with AudioTrack as the example.

    The Java-layer AudioTrack class lives in frameworks/base/media/java/android/media/AudioTrack.java.

    An example of how AudioTrack is used:

    // Compute the minimum buffer size from the sample rate, sample format and channel configuration.
    int bufsize = AudioTrack.getMinBufferSize(8000,      // 8000 samples per second
            AudioFormat.CHANNEL_CONFIGURATION_STEREO,    // two channels
            AudioFormat.ENCODING_PCM_16BIT);             // 16 bits (2 bytes) per sample
    // Note: this is the minimum buffer size required to create the AudioTrack successfully,
    // not a full second of audio.
    // Create the AudioTrack.
    AudioTrack trackplayer = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
            AudioFormat.CHANNEL_CONFIGURATION_STEREO,
            AudioFormat.ENCODING_PCM_16BIT, bufsize, AudioTrack.MODE_STREAM);
    trackplayer.play();                                  // start playback
    trackplayer.write(bytes_pkg, 0, bytes_pkg.length);   // write data into the track
    ...
    trackplayer.stop();                                  // stop playback
    trackplayer.release();                               // release the underlying resources

      AudioTrack.MODE_STREAM: AudioTrack has two data-loading modes, MODE_STATIC and MODE_STREAM. STREAM means the application pushes data into the AudioTrack one write() at a time, much like sending data over a socket: the application obtains PCM data from somewhere, for example from a decoder, and then write()s it to the AudioTrack. The drawback is the constant crossing between the Java layer and the native layer, which costs efficiency. STATIC means the audio data is placed into a fixed buffer when the track is created and handed to the AudioTrack in one go; no further write() calls are needed and the AudioTrack plays the buffer by itself. This mode works well for sounds such as ringtones that occupy little memory and need low latency.
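      For contrast with the streaming example above, here is a minimal MODE_STATIC sketch. It assumes a hypothetical helper loadRingtonePcm() that returns a short clip of 16-bit stereo PCM at 8 kHz (a whole number of frames); in static mode the entire clip is written once before play():

    // Hypothetical pre-decoded PCM clip (16-bit stereo, 8 kHz, whole frames).
    byte[] soundData = loadRingtonePcm();

    AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_NOTIFICATION, 8000,
            AudioFormat.CHANNEL_CONFIGURATION_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            soundData.length,                            // static mode: the buffer holds the whole clip
            AudioTrack.MODE_STATIC);

    staticTrack.write(soundData, 0, soundData.length);   // one-time copy into the track's buffer
    staticTrack.play();                                  // AudioTrack plays the buffer by itself
    // ...
    staticTrack.release();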

      StreamType: this is the first parameter of the AudioTrack constructor. It is tied to Android's AudioManager and to the phone's audio-management policy. Android divides system sounds into several categories; the common ones (not a complete list) are:

    • STREAM_ALARM: alarm sounds
    • STREAM_MUSIC: music, e.g. media playback
    • STREAM_RING: ringtones
    • STREAM_SYSTEM: system sounds
    • STREAM_VOICE_CALL: in-call audio

      Why so many categories? On a desktop you rarely see this many sound types, but a little thought shows the design makes sense. For example, if a call comes in while you are listening to music, music playback stops and you hear only the call; if you adjust the volume at that point, only the call volume changes. When the call ends and music resumes, you do not need to adjust the volume again. The system manages the data for these sound categories separately, so for AudioTrack this parameter simply tells the system which kind of sound you want to play, and the system can then manage it accordingly.
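      A small illustration of how stream types show up in the AudioManager API (a sketch, assuming the code runs inside an Activity or other Context; the volume levels are arbitrary):

    AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

    // Adjusting STREAM_MUSIC does not touch the in-call volume, and vice versa.
    int maxMusic = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
    am.setStreamVolume(AudioManager.STREAM_MUSIC, maxMusic / 2, 0 /* no flags */);

    int maxCall = am.getStreamMaxVolume(AudioManager.STREAM_VOICE_CALL);
    am.setStreamVolume(AudioManager.STREAM_VOICE_CALL, maxCall, 0);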

      Let's walk through the methods used in the example one by one, starting with getMinBufferSize:

      getMinBufferSize

    /**
         * Returns the minimum buffer size required for the successful creation of an AudioTrack
         * object to be created in the {@link #MODE_STREAM} mode. Note that this size doesn't
         * guarantee a smooth playback under load, and higher values should be chosen according to
         * the expected frequency at which the buffer will be refilled with additional data to play.
         * For example, if you intend to dynamically set the source sample rate of an AudioTrack
         * to a higher value than the initial source sample rate, be sure to configure the buffer size
         * based on the highest planned sample rate.
         * @param sampleRateInHz the source sample rate expressed in Hz.
         * @param channelConfig describes the configuration of the audio channels.
         *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
         *   {@link AudioFormat#CHANNEL_OUT_STEREO}
         * @param audioFormat the format in which the audio data is represented.
         *   See {@link AudioFormat#ENCODING_PCM_16BIT} and
         *   {@link AudioFormat#ENCODING_PCM_8BIT}
         * @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
         *   or {@link #ERROR} if unable to query for output properties,
         *   or the minimum buffer size expressed in bytes.
         */
        static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
            int channelCount = 0;
            switch(channelConfig) {
            case AudioFormat.CHANNEL_OUT_MONO:
            case AudioFormat.CHANNEL_CONFIGURATION_MONO:
                channelCount = 1;
                break;
            case AudioFormat.CHANNEL_OUT_STEREO:
            case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
                channelCount = 2;
                break;
            default:
                if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {
                    // input channel configuration features unsupported channels
                    loge("getMinBufferSize(): Invalid channel configuration.");
                    return ERROR_BAD_VALUE;
                } else {
                    channelCount = Integer.bitCount(channelConfig);
                }
            }
        // Only PCM 8-bit and PCM 16-bit audio formats are supported here.
            if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
                && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
                loge("getMinBufferSize(): Invalid audio format.");
                return ERROR_BAD_VALUE;
            }
        // The sample rate is also constrained: values below SAMPLE_RATE_HZ_MIN or above SAMPLE_RATE_HZ_MAX are rejected.
            // sample rate, note these values are subject to change
            if ( (sampleRateInHz < SAMPLE_RATE_HZ_MIN) || (sampleRateInHz > SAMPLE_RATE_HZ_MAX) ) {
                loge("getMinBufferSize(): " + sampleRateInHz + " Hz is not a supported sample rate.");
                return ERROR_BAD_VALUE;
            }
        // Delegate to the native function.
            int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
            if (size <= 0) {
                loge("getMinBufferSize(): error querying hardware");
                return ERROR;
            }
            else {
                return size;
            }
        }

      native_get_min_buff_size leads to android_media_AudioTrack_get_min_buff_size in frameworks/base/core/jni/android_media_AudioTrack.cpp:

    // returns the minimum required size for the successful creation of a streaming AudioTrack
    // returns -1 if there was an error querying the hardware.
    static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env,  jobject thiz,
        jint sampleRateInHertz, jint nbChannels, jint audioFormat) {
        size_t frameCount = 0;
        if (AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_DEFAULT,sampleRateInHertz) != NO_ERROR) {
            return -1;
        }
        return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
    }

       The minimum buffer size is computed from the minimum frame count. The frame is the most common unit in audio: one frame is the size of one sample point in bytes multiplied by the number of channels. Why introduce frames at all? Because for multi-channel audio the byte size of a single sample point does not tell the whole story; on playback the data for every channel has to be output together. So it is more convenient to speak of how many frames there are per second, which abstracts away the channel count while still capturing the full amount of data. Once getMinBufferSize returns, we have a buffer size that satisfies the minimum requirement, which gives the caller a basis for allocating its buffers. The next step is to create the AudioTrack object.
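      Spelling the arithmetic out as a small sketch (8 kHz / stereo / 16-bit matches the example at the top; the minimum frame count returned by the native layer is a hypothetical value):

    int sampleRate     = 8000;   // samples per second, per channel
    int channelCount   = 2;      // stereo
    int bytesPerSample = 2;      // ENCODING_PCM_16BIT

    int bytesPerFrame  = channelCount * bytesPerSample;   // 4 bytes per frame
    int bytesPerSecond = sampleRate * bytesPerFrame;      // 32000 bytes for one second of audio

    // What the JNI layer computes: minBufSize = minFrameCount * channels * bytesPerSample.
    int minFrameCount  = 1600;   // hypothetical result of AudioTrack::getMinFrameCount()
    int minBufSize     = minFrameCount * channelCount * bytesPerSample;   // 6400 bytes in this hypothetical case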

     Creating the AudioTrack object

    Let's look at the constructor in AudioTrack.java first:

    /**
         * Class constructor with audio session. Use this constructor when the AudioTrack must be
         * attached to a particular audio session. The primary use of the audio session ID is to
         * associate audio effects to a particular instance of AudioTrack: if an audio session ID
         * is provided when creating an AudioEffect, this effect will be applied only to audio tracks
         * and media players in the same session and not to the output mix.
         * When an AudioTrack is created without specifying a session, it will create its own session
         * which can be retrieved by calling the {@link #getAudioSessionId()} method.
         * If a non-zero session ID is provided, this AudioTrack will share effects attached to this
         * session
         * with all other media players or audio tracks in the same session, otherwise a new session
         * will be created for this track if none is supplied.
         * @param streamType the type of the audio stream. See
         *   {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
         *   {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
         *   {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
         * @param sampleRateInHz the initial source sample rate expressed in Hz.
         * @param channelConfig describes the configuration of the audio channels.
         *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
         *   {@link AudioFormat#CHANNEL_OUT_STEREO}
         * @param audioFormat the format in which the audio data is represented.
         *   See {@link AudioFormat#ENCODING_PCM_16BIT} and
         *   {@link AudioFormat#ENCODING_PCM_8BIT}
         * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
         *   from for playback. If using the AudioTrack in streaming mode, you can write data into
         *   this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
         *   this is the maximum size of the sound that will be played for this instance.
         *   See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
         *   for the successful creation of an AudioTrack instance in streaming mode. Using values
         *   smaller than getMinBufferSize() will result in an initialization failure.
         * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
         * @param sessionId Id of audio session the AudioTrack must be attached to
         * @throws java.lang.IllegalArgumentException
         */
        public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
                int bufferSizeInBytes, int mode, int sessionId) throws IllegalArgumentException {
            // mState already == STATE_UNINITIALIZED
            // remember which looper is associated with the AudioTrack instantiation
            Looper looper;
            // Get the Looper of the calling thread, falling back to the main Looper
            // (the same technique is discussed in the MediaScanner analysis).
            if ((looper = Looper.myLooper()) == null) {
                looper = Looper.getMainLooper();
            }
            mInitializationLooper = looper;

            // Validate the parameters (stream type, sample rate, channel config, format, mode, buffer size).
            audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
            audioBuffSizeCheck(bufferSizeInBytes);

            if (sessionId < 0) {
                throw new IllegalArgumentException("Invalid audio session ID: " + sessionId);
            }
            int[] session = new int[1];
            session[0] = sessionId;

            // native initialization
            // Call native_setup in the JNI layer, passing in a WeakReference to this Java object.
            int initResult = native_setup(new WeakReference<AudioTrack>(this),
                    mStreamType, mSampleRate, mChannels, mAudioFormat,
                    mNativeBufferSizeInBytes, mDataLoadMode, session);
            if (initResult != SUCCESS) {
                loge("Error code " + initResult + " when initializing AudioTrack.");
                return; // with mState == STATE_UNINITIALIZED
            }

            mSessionId = session[0];

            if (mDataLoadMode == MODE_STATIC) {
                mState = STATE_NO_STATIC_DATA;
            } else {
                mState = STATE_INITIALIZED;
            }
        }
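      The Javadoc above is mostly about audio sessions. A hedged sketch of how a session id is typically used, attaching an android.media.audiofx.Equalizer to one specific track rather than to the output mix:

    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);            // no session passed in, so a new session is created

    int sessionId = track.getAudioSessionId();  // retrieve the session the framework created for us

    // The effect is scoped to this session, i.e. to this track, not to the global output mix.
    Equalizer eq = new Equalizer(0 /* priority */, sessionId);
    eq.setEnabled(true);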

     native_setup leads to android_media_AudioTrack_native_setup in frameworks/base/core/jni/android_media_AudioTrack.cpp:

    static int android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
            jint streamType, jint sampleRateInHertz, jint javaChannelMask,
            jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
    {
        ALOGV("sampleRate=%d, audioFormat(from Java)=%d, channel mask=%x, buffSize=%d",
            sampleRateInHertz, audioFormat, javaChannelMask, buffSizeInBytes);
        uint32_t afSampleRate;
        size_t afFrameCount;
    
        if (AudioSystem::getOutputFrameCount(&afFrameCount, (audio_stream_type_t) streamType) != NO_ERROR) {
            ALOGE("Error creating AudioTrack: Could not get AudioSystem frame count.");
            return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
        }
        if (AudioSystem::getOutputSamplingRate(&afSampleRate, (audio_stream_type_t) streamType) != NO_ERROR) {
            ALOGE("Error creating AudioTrack: Could not get AudioSystem sampling rate.");
            return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
        }
    
        // Java channel masks don't map directly to the native definition, but it's a simple shift
        // to skip the two deprecated channel configurations "default" and "mono".
        uint32_t nativeChannelMask = ((uint32_t)javaChannelMask) >> 2;
    
        if (!audio_is_output_channel(nativeChannelMask)) {
            ALOGE("Error creating AudioTrack: invalid channel mask %#x.", javaChannelMask);
            return AUDIOTRACK_ERROR_SETUP_INVALIDCHANNELMASK;
        }
        // popcount counts how many bits of an integer are set to 1.
        int nbChannels = popcount(nativeChannelMask);
    
        // check the stream type
        audio_stream_type_t atStreamType;
        switch (streamType) {
        case AUDIO_STREAM_VOICE_CALL:
        case AUDIO_STREAM_SYSTEM:
        case AUDIO_STREAM_RING:
        case AUDIO_STREAM_MUSIC:
        case AUDIO_STREAM_ALARM:
        case AUDIO_STREAM_NOTIFICATION:
        case AUDIO_STREAM_BLUETOOTH_SCO:
        case AUDIO_STREAM_DTMF:
            atStreamType = (audio_stream_type_t) streamType;
            break;
        default:
            ALOGE("Error creating AudioTrack: unknown stream type.");
            return AUDIOTRACK_ERROR_SETUP_INVALIDSTREAMTYPE;
        }
    
        // check the format.
        // This function was called from Java, so we compare the format against the Java constants
        if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
            ALOGE("Error creating AudioTrack: unsupported audio format.");
            return AUDIOTRACK_ERROR_SETUP_INVALIDFORMAT;
        }
        // for the moment 8bitPCM in MODE_STATIC is not supported natively in the AudioTrack C++ class
        // so we declare everything as 16bitPCM, the 8->16bit conversion for MODE_STATIC will be handled
        // in android_media_AudioTrack_native_write_byte()
        if ((audioFormat == ENCODING_PCM_8BIT)&& (memoryMode == MODE_STATIC)) {
            ALOGV("android_media_AudioTrack_native_setup(): requesting MODE_STATIC for 8bit \
                buff size of %dbytes, switching to 16bit, buff size of %dbytes",
                buffSizeInBytes, 2*buffSizeInBytes);
            audioFormat = ENCODING_PCM_16BIT;
            // we will need twice the memory to store the data
            buffSizeInBytes *= 2;
        }
    
        // compute the frame count
        int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
        audio_format_t format = audioFormat == ENCODING_PCM_16BIT ? AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;
        // Compute the frame count from the buffer size and the size of one frame.
        int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);
        jclass clazz = env->GetObjectClass(thiz);
        if (clazz == NULL) {
            ALOGE("Can't find %s when setting up callback.", kClassPathName);
            return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
        }
        if (jSession == NULL) {
            ALOGE("Error creating AudioTrack: invalid session ID pointer");
            return AUDIOTRACK_ERROR;
        }
        jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
        if (nSession == NULL) {
            ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
            return AUDIOTRACK_ERROR;
        }
        int sessionId = nSession[0];
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
        nSession = NULL;
    
        // create the native AudioTrack object (the real C++ AudioTrack)
        sp<AudioTrack> lpTrack = new AudioTrack();
    
        // initialize the callback information:
        // this data will be passed with every AudioTrack callback
        // AudioTrackJniStorage is just a place to keep some data; it contains a few useful details that are explained further below.
        AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
        lpJniStorage->mStreamType = atStreamType;
        lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioTrack object can be garbage collected.
        lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
        lpJniStorage->mCallbackData.busy = false;
    
        // initialize the native AudioTrack object
        switch (memoryMode) {
        case MODE_STREAM:
            lpTrack->set(
                atStreamType,// stream type
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem
                true,// thread can call Java
                sessionId);// audio session ID
            break;
        case MODE_STATIC:
            // AudioTrack is using shared memory
            // In static mode the user writes all of the data up front and the AudioTrack then reads it
            // back by itself, so shared memory is needed. "Shared" here means shared between the C++
            // AudioTrack and AudioFlinger, because the actual playback work is done by AudioFlinger.
            if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
                ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
                goto native_init_failure;
            }
            lpTrack->set(
                atStreamType,// stream type
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user));
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId);// audio session ID
            break;
        default:
            ALOGE("Unknown mode %d", memoryMode);
            goto native_init_failure;
        }
        if (lpTrack->initCheck() != NO_ERROR) {
            ALOGE("Error initializing AudioTrack");
            goto native_init_failure;
        }
        nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
        if (nSession == NULL) {
            ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
            goto native_init_failure;
        }
        // read the audio session ID back from AudioTrack in case we create a new session
        nSession[0] = lpTrack->getSessionId();
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
        nSession = NULL;
        {   // scope for the lock
            Mutex::Autolock l(sLock);
            sAudioTrackCallBackCookies.add(&lpJniStorage->mCallbackData);
        }
        // save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
        // of the Java object (in mNativeTrackInJavaObj)
        setAudioTrack(env, thiz, lpTrack);
        // Save the C++ AudioTrack pointer into a field of the Java object; this is what ties the
        // native-layer AudioTrack to the Java-layer AudioTrack.
        // save the JNI resources so we can free them later
        //ALOGV("storing lpJniStorage: %x\n", (int)lpJniStorage);
        env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);
        return AUDIOTRACK_SUCCESS;
        // failures:
     native_init_failure:
        if (nSession != NULL) {
            env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
        }
        env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_class);
        env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_ref);
        delete lpJniStorage;
        env->SetIntField(thiz, javaAudioTrackFields.jniData, 0);
    
        return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
    }
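      A quick check of the ">> 2" channel-mask conversion in the code above, using the constant values from AudioFormat.java and the native audio.h as I recall them (treat the exact hex values as assumptions):

    // Java: CHANNEL_OUT_FRONT_LEFT = 0x4, CHANNEL_OUT_FRONT_RIGHT = 0x8, so CHANNEL_OUT_STEREO = 0xC.
    int javaMask = AudioFormat.CHANNEL_OUT_STEREO;   // 0xC

    // Skipping the two deprecated Java bits ("default" and "mono") lines it up with the native mask:
    int nativeMask = javaMask >> 2;                  // 0x3 == AUDIO_CHANNEL_OUT_STEREO (front left | front right)

    int nbChannels = Integer.bitCount(nativeMask);   // 2, the same thing popcount() computes in the JNI code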

     AudioTrackJniStorage

      This class is really just a helper, but it touches on some important machinery, in particular Android's shared-memory mechanism. Once that is understood, copying memory between two processes becomes easy.

    class AudioTrackJniStorage {
        public:
            sp<MemoryHeapBase>         mMemHeap;
            sp<MemoryBase>             mMemBase;
            audiotrack_callback_cookie mCallbackData;
            audio_stream_type_t        mStreamType;
    
        AudioTrackJniStorage() {
            mCallbackData.audioTrack_class = 0;
            mCallbackData.audioTrack_ref = 0;
            mStreamType = AUDIO_STREAM_DEFAULT;
        }
    
        ~AudioTrackJniStorage() {
            mMemBase.clear();
            mMemHeap.clear();
        }
    
        bool allocSharedMem(int sizeInBytes) {
            mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
            if (mMemHeap->getHeapID() < 0) {
                return false;
            }
            mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
            // Note the usage: first create a MemoryHeapBase, then pass the MemoryHeapBase into a MemoryBase.
            return true;
        }
    };

      MemoryHeapBase and MemoryBase are a pair of Binder-based classes Android provides for working with memory. Since Binder is involved, there is necessarily a server side (BnXXX) and a proxy side (BpXXX).

    The rough usage pattern of the MemoryXXX classes is:

      the BnXXX side first allocates a BnMemoryHeapBase and a BnMemoryBase,

      then passes the BnMemoryBase over to the BpXXX side;

      the BpXXX side can then use its BpMemoryBase to access the shared memory allocated on the BnXXX side.

    Note that since this is memory shared across processes, the Bp side will inevitably operate on it with functions like memcpy, which have no synchronization, and Android cannot add synchronization inside the system for this kind of raw shared memory either. So there must be some cross-process synchronization mechanism on top when the shared memory is actually used; we will meet it later when we look at actual playback.

      Also, the SharedBuffer here is ultimately used on the Bp side, that is, in AudioFlinger.

     play and write

    At this point the Java layer just calls play and write. These two Java methods contain very little themselves; they hand the work straight to the native layer. First the JNI function behind play:

    static void
    android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
    {
        // Fetch the C++ AudioTrack pointer that was stored in the Java AudioTrack object.
        // It is cast straight from an int back to a pointer; it will be interesting to see how
        // Google changes this once ARM moves to 64-bit!
        sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
        if (lpTrack == NULL) {
            jniThrowException(env, "java/lang/IllegalStateException",
                "Unable to retrieve AudioTrack pointer for start()");
            return;
        }
        lpTrack->start();
    }

    Now look at write. In our example we write a short array:

    static jint android_media_AudioTrack_native_write_short(JNIEnv *env,  jobject thiz,
                                                      jshortArray javaAudioData,
                                                      jint offsetInShorts, jint sizeInShorts,
                                                      jint javaAudioFormat) {
        jint written = android_media_AudioTrack_native_write_byte(env, thiz,
                                                     (jbyteArray) javaAudioData,
                                                     offsetInShorts*2, sizeInShorts*2,
                                                     javaAudioFormat);
        if (written > 0) {
            written /= 2;
        }
        return written;
    }
    static jint android_media_AudioTrack_native_write_byte(JNIEnv *env,  jobject thiz,
                                                      jbyteArray javaAudioData,
                                                      jint offsetInBytes, jint sizeInBytes,
                                                      jint javaAudioFormat) {
        //ALOGV("android_media_AudioTrack_native_write_byte(offset=%d, sizeInBytes=%d) called",
        //    offsetInBytes, sizeInBytes);
        sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
        if (lpTrack == NULL) {
            jniThrowException(env, "java/lang/IllegalStateException",
                "Unable to retrieve AudioTrack pointer for write()");
            return 0;
        }
    
        // get the pointer for the audio data from the java array
        // NOTE: We may use GetPrimitiveArrayCritical() when the JNI implementation changes in such
        // a way that it becomes much more efficient. When doing so, we will have to prevent the
        // AudioSystem callback to be called while in critical section (in case of media server
        // process crash for instance)
        jbyte* cAudioData = NULL;
        if (javaAudioData) {
            cAudioData = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
            if (cAudioData == NULL) {
                ALOGE("Error retrieving source of audio data to play, can't play");
                return 0; // out of memory or no data to load
            }
        } else {
            ALOGE("NULL java array of audio data to play, can't play");
            return 0;
        }
    
        jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes);
    
        env->ReleaseByteArrayElements(javaAudioData, cAudioData, 0);
    
        //ALOGV("write wrote %d (tried %d) bytes in the native AudioTrack with offset %d",
        //     (int)written, (int)(sizeInBytes), (int)offsetInBytes);
        return written;
    }
    jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, jbyte* data,
                      jint offsetInBytes, jint sizeInBytes) {
        // give the data to the native AudioTrack object (the data starts at the offset)
        ssize_t written = 0;
        // regular write() or copy the data to the AudioTrack's shared memory?
        if (track->sharedBuffer() == 0) {
            // The track was created in streaming mode, so there is no shared buffer in the track.
            written = track->write(data + offsetInBytes, sizeInBytes);
            // for compatibility with earlier behavior of write(), return 0 in this case
            if (written == (ssize_t) WOULD_BLOCK) {
                written = 0;
            }
        } else {
            if (audioFormat == ENCODING_PCM_16BIT) {
                // writing to shared memory, check for capacity
                if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                    sizeInBytes = track->sharedBuffer()->size();
                }
                // In STATIC mode the data is copied straight into the shared memory.
                memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
                written = sizeInBytes;
            } else if (audioFormat == ENCODING_PCM_8BIT) {
                // 8-bit PCM has to be expanded to 16-bit first.
                // data contains 8bit data we need to expand to 16bit before copying
                // to the shared memory
                // writing to shared memory, check for capacity,
                // note that input data will occupy 2X the input space due to 8 to 16bit conversion
                if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                    sizeInBytes = track->sharedBuffer()->size() / 2;
                }
                int count = sizeInBytes;
                int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
                const int8_t *src = (const int8_t *)(data + offsetInBytes);
                while (count--) {
                    *dst++ = (int16_t)(*src++^0x80) << 8;
                }
                // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
                // the 8bit mixer restriction from the user of this function
                written = sizeInBytes;
            }
        }
        return written;
    }
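      The 8-to-16-bit expansion in writeToTrack maps unsigned 8-bit PCM onto signed 16-bit PCM. A small Java rendering of the same formula, just to make the bit manipulation explicit (the sample values are arbitrary):

    // Unsigned 8-bit PCM centers silence at 0x80; signed 16-bit PCM centers it at 0.
    // XOR with 0x80 flips the sign convention, and the shift by 8 scales 8-bit amplitude to 16-bit.
    byte[]  pcm8  = { (byte) 0x80, (byte) 0xFF, 0x00 };   // silence, near max, near min
    short[] pcm16 = new short[pcm8.length];

    for (int i = 0; i < pcm8.length; i++) {
        pcm16[i] = (short) ((pcm8[i] ^ 0x80) << 8);
    }
    // pcm16 is now { 0, 0x7F00, -0x8000 }, the same result the C++ loop produces.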

    Up to this point it all looks fairly simple: the Java-layer AudioTrack essentially just calls write, and the actual writing of data is done by the C++ AudioTrack behind the JNI layer.

     To be continued. Tired of reading for now; picking this up again in a few days.

    Reprinted from: http://www.cnblogs.com/innost/archive/2011/01/09/1931457.html
