• (2) Audio Subsystem: new AudioRecord()


    The previous article, "(1) Audio Subsystem: AudioRecord.getMinBufferSize", covered how AudioRecord computes its minimum buffer size. Here we continue with the implementation behind new AudioRecord(). This article is based on Android 5.1; for Android 4.4, see here.

    Function prototype:

       public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,int bufferSizeInBytes) throws IllegalArgumentException

      Purpose:

        Creates an AudioRecord object.

      Parameters:

        audioSource: the recording source. Here we use MediaRecorder.AudioSource.MIC; see MediaRecorder.AudioSource for the other defined sources, e.g. MediaRecorder.AudioSource.FM_TUNER.

        sampleRateInHz: the sampling rate in Hz, here 44100. 44100 Hz is currently the only rate guaranteed to work on all devices.

        channelConfig: describes the audio channel configuration. Here we use AudioFormat.CHANNEL_CONFIGURATION_MONO (deprecated; AudioFormat.CHANNEL_IN_MONO is the current equivalent), which is guaranteed to work on all devices.

        audioFormat: the encoding the audio data is returned in, here AudioFormat.ENCODING_PCM_16BIT.

        bufferSizeInBytes: the total size in bytes of the buffer audio data is written to during recording, i.e. the value obtained from getMinBufferSize().

      Throws:

        IllegalArgumentException is thrown when a parameter is invalid or unsupported.
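
    Putting the prototype and parameters together, a minimal construction sketch in Java (the 44100/mono/16-bit values mirror the choices above; the error handling is illustrative):

        int sampleRate = 44100;
        int minBufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBufSize);

        // Besides the IllegalArgumentException above, native initialization can
        // also fail without throwing; getState() is how the app finds out.
        if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
            // e.g. the capture device is busy
        }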

     

    Now let's walk through the framework implementation.

    frameworks/base/media/java/android/media/AudioRecord.java

        public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
                int bufferSizeInBytes)
        throws IllegalArgumentException {
            this((new AudioAttributes.Builder())
                        .setInternalCapturePreset(audioSource)
                        .build(),
                    (new AudioFormat.Builder())
                        .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,//0x10
                                            true/*allow legacy configurations*/))
                        .setEncoding(audioFormat)
                        .setSampleRate(sampleRateInHz)
                        .build(),
                    bufferSizeInBytes,
                    AudioManager.AUDIO_SESSION_ID_GENERATE);
        }
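
    A quick note on the //0x10 comment above: getChannelMaskFromLegacyConfig() maps the legacy CHANNEL_CONFIGURATION_* constants to the CHANNEL_IN_* masks. A small sketch of the values involved (the constants are real; the arithmetic is just for illustration):

        int mono   = AudioFormat.CHANNEL_IN_MONO;    // 0x10
        int stereo = AudioFormat.CHANNEL_IN_STEREO;  // 0x0c = CHANNEL_IN_LEFT 0x4 | CHANNEL_IN_RIGHT 0x8
        int count  = Integer.bitCount(stereo);       // 2, cf. channelCountFromInChannelMask()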
    

    This calls the corresponding builder methods, which check the parameters for validity and save them, and then delegates to its own constructor this():

        public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
                int sessionId) throws IllegalArgumentException {
            mRecordingState = RECORDSTATE_STOPPED;
    
            if (attributes == null) {
                throw new IllegalArgumentException("Illegal null AudioAttributes");
            }
            if (format == null) {
                throw new IllegalArgumentException("Illegal null AudioFormat");
            }
    
            // remember which looper is associated with the AudioRecord instanciation
            if ((mInitializationLooper = Looper.myLooper()) == null) {
                mInitializationLooper = Looper.getMainLooper();
            }
    
            // is this AudioRecord using REMOTE_SUBMIX at full volume?
            if (attributes.getCapturePreset() == MediaRecorder.AudioSource.REMOTE_SUBMIX) {
                final AudioAttributes.Builder filteredAttr = new AudioAttributes.Builder();
                final Iterator<String> tagsIter = attributes.getTags().iterator();
                while (tagsIter.hasNext()) {
                    final String tag = tagsIter.next();
                    if (tag.equalsIgnoreCase(SUBMIX_FIXED_VOLUME)) {
                        mIsSubmixFullVolume = true;
                        Log.v(TAG, "Will record from REMOTE_SUBMIX at full fixed volume");
                    } else { // SUBMIX_FIXED_VOLUME: is not to be propagated to the native layers
                        filteredAttr.addTag(tag);
                    }
                }
                filteredAttr.setInternalCapturePreset(attributes.getCapturePreset());
                mAudioAttributes = filteredAttr.build();
            } else {
                mAudioAttributes = attributes;
            }
    
            int rate = 0;
            if ((format.getPropertySetMask()
                    & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)
            {
                rate = format.getSampleRate();
            } else {
                rate = AudioSystem.getPrimaryOutputSamplingRate();
                if (rate <= 0) {
                    rate = 44100;
                }
            }
    
            int encoding = AudioFormat.ENCODING_DEFAULT;
            if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0)
            {
                encoding = format.getEncoding();
            }
    
            audioParamCheck(attributes.getCapturePreset(), rate, encoding);
    
            mChannelCount = AudioFormat.channelCountFromInChannelMask(format.getChannelMask());
            mChannelMask = getChannelMaskFromLegacyConfig(format.getChannelMask(), false);
    
            audioBuffSizeCheck(bufferSizeInBytes);
    
            int[] session = new int[1];
            session[0] = sessionId;
            //TODO: update native initialization when information about hardware init failure
            //      due to capture device already open is available.
            int initResult = native_setup( new WeakReference<AudioRecord>(this),
                    mAudioAttributes, mSampleRate, mChannelMask, mAudioFormat, mNativeBufferSizeInBytes,
                    session);
            if (initResult != SUCCESS) {
                loge("Error code "+initResult+" when initializing native AudioRecord object.");
                return; // with mState == STATE_UNINITIALIZED
            }
    
            mSessionId = session[0];
    	
            mState = STATE_INITIALIZED;
        }

    This constructor does the following:

        1. Sets mRecordingState to RECORDSTATE_STOPPED;

        2. Obtains a Looper: the looper of the calling thread, falling back to the main looper;

        3. Checks whether the recording source is REMOTE_SUBMIX; interested readers can dig into that branch;

        4. Re-derives the rate and format parameters. The AUDIO_FORMAT_HAS_PROPERTY_X bits decide where each value comes from, and since the legacy constructor already set those bits when building the AudioFormat, both values are still the ones we passed in;

        5. Calls audioParamCheck to validate the parameters once more;

        6. Derives the channel count and channel mask; the mono mask is 0x10 and the stereo mask is 0x0c;

        7. Calls audioBuffSizeCheck to verify that the minimum buffer size is legal;

        8. Calls the native function native_setup, passing: a weak reference to this object, the audio attributes (recording source), rate, channel mask, format, minBuffSize, and session[];

        9. Sets mState to STATE_INITIALIZED.

            Note on session IDs:
                 A session is a conversation identified by a unique ID; the ID is ultimately managed inside AudioFlinger.
                 One session can be shared by several AudioTrack objects and MediaPlayers.
                 AudioTracks and MediaPlayers that share a session share the same AudioEffect (audio effects).
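
    As a hedged illustration of that note: the session generated for this AudioRecord (AUDIO_SESSION_ID_GENERATE) can later be handed to an audio effect, which then attaches to the same capture path. Reusing the recorder from the construction sketch earlier (effect availability is device-dependent):

        int session = recorder.getAudioSessionId();
        if (NoiseSuppressor.isAvailable()) {
            // android.media.audiofx.NoiseSuppressor shares the record session
            NoiseSuppressor ns = NoiseSuppressor.create(session);
        }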

    We will only analyze the native_setup function here.

    frameworks/base/core/jni/android_media_AudioRecord.cpp

    static jint
    android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
            jobject jaa, jint sampleRateInHertz, jint channelMask,
                    // Java channel masks map directly to the native definition
            jint audioFormat, jint buffSizeInBytes, jintArray jSession)
    {
        if (jaa == 0) {
            ALOGE("Error creating AudioRecord: invalid audio attributes");
            return (jint) AUDIO_JAVA_ERROR;
        }
    
        if (!audio_is_input_channel(channelMask)) {
            ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
            return (jint) AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
        }
        uint32_t channelCount = audio_channel_count_from_in_mask(channelMask);
    
        // compare the format against the Java constants
        audio_format_t format = audioFormatToNative(audioFormat);
        if (format == AUDIO_FORMAT_INVALID) {
            ALOGE("Error creating AudioRecord: unsupported audio format %d.", audioFormat);
            return (jint) AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
        }
    
        size_t bytesPerSample = audio_bytes_per_sample(format);
    
        if (buffSizeInBytes == 0) {
             ALOGE("Error creating AudioRecord: frameCount is 0.");
            return (jint) AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
        }
        size_t frameSize = channelCount * bytesPerSample;
        size_t frameCount = buffSizeInBytes / frameSize;
    
        jclass clazz = env->GetObjectClass(thiz);
        if (clazz == NULL) {
            ALOGE("Can't find %s when setting up callback.", kClassPathName);
            return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
        }
    
        if (jSession == NULL) {
            ALOGE("Error creating AudioRecord: invalid session ID pointer");
            return (jint) AUDIO_JAVA_ERROR;
        }
    
        jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
        if (nSession == NULL) {
            ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
            return (jint) AUDIO_JAVA_ERROR;
        }
        int sessionId = nSession[0];
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
        nSession = NULL;
    
        // create an uninitialized AudioRecord object
        sp<AudioRecord> lpRecorder = new AudioRecord();
    
        audio_attributes_t *paa = NULL;
        // read the AudioAttributes values
        paa = (audio_attributes_t *) calloc(1, sizeof(audio_attributes_t));
        const jstring jtags =
                (jstring) env->GetObjectField(jaa, javaAudioAttrFields.fieldFormattedTags);
        const char* tags = env->GetStringUTFChars(jtags, NULL);
        // copying array size -1, char array for tags was calloc'd, no need to NULL-terminate it
        strncpy(paa->tags, tags, AUDIO_ATTRIBUTES_TAGS_MAX_SIZE - 1);
        env->ReleaseStringUTFChars(jtags, tags);
        paa->source = (audio_source_t) env->GetIntField(jaa, javaAudioAttrFields.fieldRecSource);
        paa->flags = (audio_flags_mask_t)env->GetIntField(jaa, javaAudioAttrFields.fieldFlags);
        ALOGV("AudioRecord_setup for source=%d tags=%s flags=%08x", paa->source, paa->tags, paa->flags);
    
        audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE;
        if (paa->flags & AUDIO_FLAG_HW_HOTWORD) {
            flags = AUDIO_INPUT_FLAG_HW_HOTWORD;
        }
        // create the callback information:
        // this data will be passed with every AudioRecord callback
        audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
        lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioRecord object can be garbage collected.
        lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
        lpCallbackData->busy = false;
    
        const status_t status = lpRecorder->set(paa->source,
            sampleRateInHertz,
            format,        // word length, PCM
            channelMask,
            frameCount,
            recorderCallback,// callback_t
            lpCallbackData,// void* user
            0,             // notificationFrames,
            true,          // threadCanCallJava
            sessionId,
            AudioRecord::TRANSFER_DEFAULT,
            flags,
            paa);
    
        if (status != NO_ERROR) {
            ALOGE("Error creating AudioRecord instance: initialization check failed with status %d.",
                    status);
            goto native_init_failure;
        }
        nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
        if (nSession == NULL) {
            ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
            goto native_init_failure;
        }
        // read the audio session ID back from AudioRecord in case a new session was created during set()
        nSession[0] = lpRecorder->getSessionId();
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
        nSession = NULL;
    
        {   // scope for the lock
            Mutex::Autolock l(sLock);
            sAudioRecordCallBackCookies.add(lpCallbackData);
        }
        // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
        // of the Java object
        setAudioRecord(env, thiz, lpRecorder);
    
        // save our newly created callback information in the "nativeCallbackCookie" field
        // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
        env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);
    
        return (jint) AUDIO_JAVA_SUCCESS;
    
        // failure:
    native_init_failure:
        env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
        env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
        delete lpCallbackData;
        env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);
    
        return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    The main work in this function:

        1. Validate the channel mask, then compute the channel count from it;

        2. The minimum buffer size is frame count × frame size, where the frame size is the number of bytes occupied by one sample across all channels, so frameCount = buffSizeInBytes / frameSize (a worked example follows this list);

        3. Do a series of JNI chores on the recording source, and bind a reference to the Java AudioRecord object into the lpCallbackData callback cookie, so that recorded data can be delivered to the upper layer via callbacks;

        4. Call AudioRecord::set(). Note the flags argument: its type, audio_input_flags_t, is defined in system/core/include/system/audio.h and tags the audio input; here it is AUDIO_INPUT_FLAG_NONE:

    typedef enum {
        AUDIO_INPUT_FLAG_NONE       = 0x0,  // no attributes
        AUDIO_INPUT_FLAG_FAST       = 0x1,  // prefer an input that supports "fast tracks"
        AUDIO_INPUT_FLAG_HW_HOTWORD = 0x2,  // prefer an input that captures from hw hotword source
    } audio_input_flags_t;

        5. Save the lpRecorder object and the lpCallbackData cookie into the corresponding fields (javaAudioRecordFields) of the Java object.
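
    To make step 2 concrete, a worked example with hypothetical numbers for a mono, 16-bit PCM stream:

        int buffSizeInBytes = 4096;  // hypothetical getMinBufferSize() result
        int channelCount    = 1;     // mono, mask 0x10
        int bytesPerSample  = 2;     // 16-bit PCM
        int frameSize  = channelCount * bytesPerSample;  // 2 bytes per frame
        int frameCount = buffSizeInBytes / frameSize;    // 4096 / 2 = 2048 frames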

    Next, the lpRecorder->set() function:

    frameworks/av/media/libmedia/AudioRecord.cpp

    status_t AudioRecord::set(
            audio_source_t inputSource,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            callback_t cbf,
            void* user,
            uint32_t notificationFrames,
            bool threadCanCallJava,
            int sessionId,
            transfer_type transferType,
            audio_input_flags_t flags,
            const audio_attributes_t* pAttributes)
    {
        switch (transferType) {
        case TRANSFER_DEFAULT:
            if (cbf == NULL || threadCanCallJava) {
                transferType = TRANSFER_SYNC;
            } else {
                transferType = TRANSFER_CALLBACK;
            }
            break;
        case TRANSFER_CALLBACK:
            if (cbf == NULL) {
                ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
                return BAD_VALUE;
            }
            break;
        case TRANSFER_OBTAIN:
        case TRANSFER_SYNC:
            break;
        default:
            ALOGE("Invalid transfer type %d", transferType);
            return BAD_VALUE;
        }
        mTransfer = transferType;
    
        AutoMutex lock(mLock);
    
        // invariant that mAudioRecord != 0 is true only after set() returns successfully
        if (mAudioRecord != 0) {
            ALOGE("Track already in use");
            return INVALID_OPERATION;
        }
    
        if (pAttributes == NULL) {
            memset(&mAttributes, 0, sizeof(audio_attributes_t));
            mAttributes.source = inputSource;
        } else {
            // stream type shouldn't be looked at, this track has audio attributes
            memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
            ALOGV("Building AudioRecord with attributes: source=%d flags=0x%x tags=[%s]",
                  mAttributes.source, mAttributes.flags, mAttributes.tags);
        }
    
        if (sampleRate == 0) {
            ALOGE("Invalid sample rate %u", sampleRate);
            return BAD_VALUE;
        }
        mSampleRate = sampleRate;
    
        // these below should probably come from the audioFlinger too...
        if (format == AUDIO_FORMAT_DEFAULT) {
            format = AUDIO_FORMAT_PCM_16_BIT;
        }
    
        // validate parameters
        if (!audio_is_valid_format(format)) {
            ALOGE("Invalid format %#x", format);
            return BAD_VALUE;
        }
        // Temporary restriction: AudioFlinger currently supports 16-bit PCM only
        if (format != AUDIO_FORMAT_PCM_16_BIT) {
            ALOGE("Format %#x is not supported", format);
            return BAD_VALUE;
        }
        mFormat = format;
    
        if (!audio_is_input_channel(channelMask)) {
            ALOGE("Invalid channel mask %#x", channelMask);
            return BAD_VALUE;
        }
        mChannelMask = channelMask;
        uint32_t channelCount = audio_channel_count_from_in_mask(channelMask);
        mChannelCount = channelCount;
    
        if (audio_is_linear_pcm(format)) {
            mFrameSize = channelCount * audio_bytes_per_sample(format);
        } else {
            mFrameSize = sizeof(uint8_t);
        }
    
        // mFrameCount is initialized in openRecord_l
        mReqFrameCount = frameCount;
    
        mNotificationFramesReq = notificationFrames;
        // mNotificationFramesAct is initialized in openRecord_l
    
        if (sessionId == AUDIO_SESSION_ALLOCATE) {
            mSessionId = AudioSystem::newAudioUniqueId();
        } else {
            mSessionId = sessionId;
        }
        ALOGV("set(): mSessionId %d", mSessionId);
    
        mFlags = flags;
        mCbf = cbf;
    
        if (cbf != NULL) {
            mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
            mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
        }
    
        // create the IAudioRecord
        status_t status = openRecord_l(0 /*epoch*/);
    
        if (status != NO_ERROR) {
            if (mAudioRecordThread != 0) {
                mAudioRecordThread->requestExit();   // see comment in AudioRecord.h
                mAudioRecordThread->requestExitAndWait();
                mAudioRecordThread.clear();
            }
            return status;
        }
    
        mStatus = NO_ERROR;
        mActive = false;
        mUserData = user;
        // TODO: add audio hardware input latency here
        mLatency = (1000*mFrameCount) / sampleRate;
        mMarkerPosition = 0;
        mMarkerReached = false;
        mNewPosition = 0;
        mUpdatePeriod = 0;
        AudioSystem::acquireAudioSessionId(mSessionId, -1);
        mSequence = 1;
        mObservedSequence = mSequence;
        mInOverrun = false;
    
        return NO_ERROR;
    }

    The main work in this function:

        1. With the arguments passed in from JNI (transferType == TRANSFER_DEFAULT, cbf != NULL, threadCanCallJava == true), mTransfer becomes TRANSFER_SYNC; it determines how data is transferred out of the AudioRecord and is used later;

        2. Save the remaining parameters: recording source mAttributes.source, sample rate mSampleRate, sample format mFormat, channel mask mChannelMask, channel count mChannelCount, frame size mFrameSize, requested frame count mReqFrameCount, and notification frame count mNotificationFramesReq; mSessionId is updated here, and the input flags mFlags are still AUDIO_INPUT_FLAG_NONE;

        3. Since the callback cbf is not NULL, start an AudioRecordThread recording thread;

        4. Call openRecord_l(0) to create the IAudioRecord object;

        5. If that fails, tear the AudioRecordThread down again; otherwise update the remaining state, including the latency estimate (see the sketch after this list).
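
    For step 5, one of the updated fields is the latency estimate, mLatency = (1000 * mFrameCount) / sampleRate. Continuing the hypothetical numbers from the JNI sketch above:

        int frameCount = 2048;
        int sampleRate = 44100;
        int latencyMs  = (1000 * frameCount) / sampleRate;  // ~46 ms of buffered audio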

    Next, how the IAudioRecord object is created:

    status_t AudioRecord::openRecord_l(size_t epoch)
    {
        status_t status;
        const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
        if (audioFlinger == 0) {
            ALOGE("Could not get audioflinger");
            return NO_INIT;
        }
    
        // Fast tracks must be at the primary _output_ [sic] sampling rate,
        // because there is currently no concept of a primary input sampling rate
        uint32_t afSampleRate = AudioSystem::getPrimaryOutputSamplingRate();
        if (afSampleRate == 0) {
            ALOGW("getPrimaryOutputSamplingRate failed");
        }
    
        // Client can only express a preference for FAST.  Server will perform additional tests.
        if ((mFlags & AUDIO_INPUT_FLAG_FAST) && !(
                // use case: callback transfer mode
                (mTransfer == TRANSFER_CALLBACK) &&
                // matching sample rate
                (mSampleRate == afSampleRate))) {
            ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
        }
    
        IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
    
        pid_t tid = -1;
        if (mFlags & AUDIO_INPUT_FLAG_FAST) {
            trackFlags |= IAudioFlinger::TRACK_FAST;
            if (mAudioRecordThread != 0) {
                tid = mAudioRecordThread->getTid();
            }
        }
    
        audio_io_handle_t input;
        status = AudioSystem::getInputForAttr(&mAttributes, &input, (audio_session_t)mSessionId,
                                            mSampleRate, mFormat, mChannelMask, mFlags);
    
        if (status != NO_ERROR) {
            ALOGE("Could not get audio input for record source %d, sample rate %u, format %#x, "
                  "channel mask %#x, session %d, flags %#x",
                  mAttributes.source, mSampleRate, mFormat, mChannelMask, mSessionId, mFlags);
            return BAD_VALUE;
        }
        {
        // Now that we have a reference to an I/O handle and have not yet handed it off to AudioFlinger,
        // we must release it ourselves if anything goes wrong.
    
        size_t frameCount = mReqFrameCount;
        size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                    // but we will still need the original value also
        int originalSessionId = mSessionId;
    
        // The notification frame count is the period between callbacks, as suggested by the server.
        size_t notificationFrames = mNotificationFramesReq;
    
        sp<IMemory> iMem;           // for cblk
        sp<IMemory> bufferMem;
    
    	//return recordHandle = new RecordHandle(recordTrack);
    	//class RecordHandle : public android::BnAudioRecord
        sp<IAudioRecord> record = audioFlinger->openRecord(input,
                                                           mSampleRate, mFormat,
                                                           mChannelMask,
                                                           &temp,
                                                           &trackFlags,
                                                           tid,
                                                           &mSessionId,
                                                           &notificationFrames,
                                                           iMem,
                                                           bufferMem,
                                                           &status);
        ALOGE_IF(originalSessionId != AUDIO_SESSION_ALLOCATE && mSessionId != originalSessionId,
                "session ID changed from %d to %d", originalSessionId, mSessionId);
    
        if (status != NO_ERROR) {
            ALOGE("AudioFlinger could not create record track, status: %d", status);
            goto release;
        }
        ALOG_ASSERT(record != 0);
    
        // AudioFlinger now owns the reference to the I/O handle,
        // so we are no longer responsible for releasing it.
    
        if (iMem == 0) {
            ALOGE("Could not get control block");
            return NO_INIT;
        }
        void *iMemPointer = iMem->pointer();
        if (iMemPointer == NULL) {
            ALOGE("Could not get control block pointer");
            return NO_INIT;
        }
        audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    
        // Starting address of buffers in shared memory.
        // The buffers are either immediately after the control block,
        // or in a separate area at discretion of server.
        void *buffers;
        if (bufferMem == 0) {
            buffers = cblk + 1;
        } else {
            buffers = bufferMem->pointer();
            if (buffers == NULL) {
                ALOGE("Could not get buffer pointer");
                return NO_INIT;
            }
        }
    
        // invariant that mAudioRecord != 0 is true only after set() returns successfully
        if (mAudioRecord != 0) {
            mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
            mDeathNotifier.clear();
        }
        mAudioRecord = record;
        mCblkMemory = iMem;
        mBufferMemory = bufferMem;
        IPCThreadState::self()->flushCommands();
    
        mCblk = cblk;
        // note that temp is the (possibly revised) value of frameCount
        if (temp < frameCount || (frameCount == 0 && temp == 0)) {
            ALOGW("Requested frameCount %zu but received frameCount %zu", frameCount, temp);
        }
        frameCount = temp;
    
        mAwaitBoost = false;
        if (mFlags & AUDIO_INPUT_FLAG_FAST) {
            if (trackFlags & IAudioFlinger::TRACK_FAST) {
                ALOGV("AUDIO_INPUT_FLAG_FAST successful; frameCount %zu", frameCount);
                mAwaitBoost = true;
            } else {
                ALOGV("AUDIO_INPUT_FLAG_FAST denied by server; frameCount %zu", frameCount);
                // once denied, do not request again if IAudioRecord is re-created
                mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
            }
        }
    
        // Make sure that application is notified with sufficient margin before overrun
        if (notificationFrames == 0 || notificationFrames > frameCount) {
            ALOGW("Received notificationFrames %zu for frameCount %zu", notificationFrames, frameCount);
        }
        mNotificationFramesAct = notificationFrames;
    
        // We retain a copy of the I/O handle, but don't own the reference
        mInput = input;
        mRefreshRemaining = true;
    
        mFrameCount = frameCount;
        // If IAudioRecord is re-created, don't let the requested frameCount
        // decrease.  This can confuse clients that cache frameCount().
        if (frameCount > mReqFrameCount) {
            mReqFrameCount = frameCount;
        }
    
        // update proxy
        mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
        mProxy->setEpoch(epoch);
        mProxy->setMinimum(mNotificationFramesAct);
    
        mDeathNotifier = new DeathNotifier(this);
        mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);
    
        return NO_ERROR;
        }
    
    release:
        AudioSystem::releaseInput(input, (audio_session_t)mSessionId);
        if (status == NO_ERROR) {
            status = NO_INIT;
        }
        return status;
    }

    The main work in this function:

        1. Get the IAudioFlinger proxy, which talks to AudioFlinger over binder, so calling it is effectively calling straight into the AudioFlinger service;

        2. Check the input flags to decide whether AUDIO_INPUT_FLAG_FAST must be cleared; not needed here, since they have stayed AUDIO_INPUT_FLAG_NONE;

        3. Call AudioSystem::getInputForAttr to obtain the input stream handle input;

        4. Call audioFlinger->openRecord to create the IAudioRecord object;

        5. Map the recording data through IMemory shared memory: the control block cblk plus the buffers (a client-side read sketch follows this list);

        6. Create the AudioRecordClientProxy client proxy over that control block and buffer.
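
    Step 5 is what ultimately backs AudioRecord.read() on the Java side: with mTransfer == TRANSFER_SYNC, read() drains frames out of this shared-memory buffer through the client proxy. A minimal client-side sketch, reusing recorder and minBufSize from the construction sketch earlier (the recording flag is hypothetical app state):

        recorder.startRecording();
        byte[] pcm = new byte[minBufSize];
        while (recording) {
            int n = recorder.read(pcm, 0, pcm.length);  // copies out of shared memory
            if (n > 0) {
                // consume n bytes of 16-bit PCM
            }
        }
        recorder.stop();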

    Below we focus on steps 3 and 4.

        First, step 3 of AudioRecord.cpp::openRecord_l(): obtaining the input stream handle input.

    frameworks/av/media/libmedia/AudioSystem.cpp

    status_t AudioSystem::getInputForAttr(const audio_attributes_t *attr,
                                    audio_io_handle_t *input,
                                    audio_session_t session,
                                    uint32_t samplingRate,
                                    audio_format_t format,
                                    audio_channel_mask_t channelMask,
                                    audio_input_flags_t flags)
    {
        const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
        if (aps == 0) return NO_INIT;
        return aps->getInputForAttr(attr, input, session, samplingRate, format, channelMask, flags);
    }

    This fetches the AudioPolicy service and forwards the call to AudioPolicyService:

    frameworks/av/services/audiopolicy/AudioPolicyInterfaceImpl.cpp

    status_t AudioPolicyService::getInputForAttr(const audio_attributes_t *attr,
                                                 audio_io_handle_t *input,
                                                 audio_session_t session,
                                                 uint32_t samplingRate,
                                                 audio_format_t format,
                                                 audio_channel_mask_t channelMask,
                                                 audio_input_flags_t flags)
    {
        if (mAudioPolicyManager == NULL) {
            return NO_INIT;
        }
    
        // already checked by client, but double-check in case the client wrapper is bypassed
        if (attr->source >= AUDIO_SOURCE_CNT && attr->source != AUDIO_SOURCE_HOTWORD &&
            attr->source != AUDIO_SOURCE_FM_TUNER) {
            return BAD_VALUE;
        }
    
        if (((attr->source == AUDIO_SOURCE_HOTWORD) && !captureHotwordAllowed()) ||
            ((attr->source == AUDIO_SOURCE_FM_TUNER) && !captureFmTunerAllowed())) {
            return BAD_VALUE;
        }
        sp<AudioPolicyEffects>audioPolicyEffects;
        status_t status;
        AudioPolicyInterface::input_type_t inputType;
        {
            Mutex::Autolock _l(mLock);
            // the audio_in_acoustics_t parameter is ignored by get_input()
            status = mAudioPolicyManager->getInputForAttr(attr, input, session,
                                                         samplingRate, format, channelMask,
                                                         flags, &inputType);
            audioPolicyEffects = mAudioPolicyEffects;
    
            if (status == NO_ERROR) {
                // enforce permission (if any) required for each type of input
                switch (inputType) {
                case AudioPolicyInterface::API_INPUT_LEGACY:
                    break;
                case AudioPolicyInterface::API_INPUT_MIX_CAPTURE:
                    if (!captureAudioOutputAllowed()) {
                        ALOGE("getInputForAttr() permission denied: capture not allowed");
                        status = PERMISSION_DENIED;
                    }
                    break;
                case AudioPolicyInterface::API_INPUT_MIX_EXT_POLICY_REROUTE:
                    if (!modifyAudioRoutingAllowed()) {
                        ALOGE("getInputForAttr() permission denied: modify audio routing not allowed");
                        status = PERMISSION_DENIED;
                    }
                    break;
                case AudioPolicyInterface::API_INPUT_INVALID:
                default:
                    LOG_ALWAYS_FATAL("getInputForAttr() encountered an invalid input type %d",
                            (int)inputType);
                }
            }
    
            if (status != NO_ERROR) {
                if (status == PERMISSION_DENIED) {
                    mAudioPolicyManager->releaseInput(*input, session);
                }
                return status;
            }
        }
    
        if (audioPolicyEffects != 0) {
            // create audio pre processors according to input source
            status_t status = audioPolicyEffects->addInputEffects(*input, attr->source, session);
            if (status != NO_ERROR && status != ALREADY_EXISTS) {
                ALOGW("Failed to add effects on input %d", *input);
            }
        }
        return NO_ERROR;
    }
    

    The main work in this function:

        1. For the HOTWORD and FM_TUNER sources, check whether the calling process holds the corresponding capture permission;

        2. Call into AudioPolicyManager to obtain the input handle and the inputType;

        3. Check that the app holds the permission required for that inputType;

        4. If pre-processing effects are configured (audioPolicyEffects), attach them to the input with audioPolicyEffects->addInputEffects (a representative configuration sketch follows this list).
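
    The pre-processing effects in step 4 come from the audio_effects.conf policy configuration. As a hedged illustration only (the exact file layout and effect names vary per device), an entry enabling echo cancellation and noise suppression for the mic source might look like:

        pre_processing {
            mic {
                aec {}
                ns {}
            }
        }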

    Continuing with step 2:

    frameworks/av/services/audiopolicy/AudioPolicyManager.cpp

    status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                                 audio_io_handle_t *input,
                                                 audio_session_t session,
                                                 uint32_t samplingRate,
                                                 audio_format_t format,
                                                 audio_channel_mask_t channelMask,
                                                 audio_input_flags_t flags,
                                                 input_type_t *inputType)
    {
        *input = AUDIO_IO_HANDLE_NONE;
        *inputType = API_INPUT_INVALID;
        audio_devices_t device;
        // handle legacy remote submix case where the address was not always specified
        String8 address = String8("");
        bool isSoundTrigger = false;
        audio_source_t inputSource = attr->source;
        audio_source_t halInputSource;
        AudioMix *policyMix = NULL;
    
        if (inputSource == AUDIO_SOURCE_DEFAULT) {
            inputSource = AUDIO_SOURCE_MIC;
        }
        halInputSource = inputSource;
    
        if (inputSource == AUDIO_SOURCE_REMOTE_SUBMIX &&
                strncmp(attr->tags, "addr=", strlen("addr=")) == 0) {
    
            device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
            address = String8(attr->tags + strlen("addr="));
            ssize_t index = mPolicyMixes.indexOfKey(address);
            if (index < 0) {
                ALOGW("getInputForAttr() no policy for address %s", address.string());
                return BAD_VALUE;
            }
            if (mPolicyMixes[index]->mMix.mMixType != MIX_TYPE_PLAYERS) {
                ALOGW("getInputForAttr() bad policy mix type for address %s", address.string());
                return BAD_VALUE;
            }
            policyMix = &mPolicyMixes[index]->mMix;
            *inputType = API_INPUT_MIX_EXT_POLICY_REROUTE;
        } else {
            device = getDeviceAndMixForInputSource(inputSource, &policyMix);
            if (device == AUDIO_DEVICE_NONE) {
                ALOGW("getInputForAttr() could not find device for source %d", inputSource);
                return BAD_VALUE;
            }
            if (policyMix != NULL) {
                address = policyMix->mRegistrationId;
                if (policyMix->mMixType == MIX_TYPE_RECORDERS) {
                    // there is an external policy, but this input is attached to a mix of recorders,
                    // meaning it receives audio injected into the framework, so the recorder doesn't
                    // know about it and is therefore considered "legacy"
                    *inputType = API_INPUT_LEGACY;
                } else {
                    // recording a mix of players defined by an external policy, we're rerouting for
                    // an external policy
                    *inputType = API_INPUT_MIX_EXT_POLICY_REROUTE;
                }
            } else if (audio_is_remote_submix_device(device)) {
                address = String8("0");
                *inputType = API_INPUT_MIX_CAPTURE;
            } else {
                *inputType = API_INPUT_LEGACY;
            }
            // adapt channel selection to input source
            switch (inputSource) {
            case AUDIO_SOURCE_VOICE_UPLINK:
                channelMask = AUDIO_CHANNEL_IN_VOICE_UPLINK;
                break;
            case AUDIO_SOURCE_VOICE_DOWNLINK:
                channelMask = AUDIO_CHANNEL_IN_VOICE_DNLINK;
                break;
            case AUDIO_SOURCE_VOICE_CALL:
                channelMask = AUDIO_CHANNEL_IN_VOICE_UPLINK | AUDIO_CHANNEL_IN_VOICE_DNLINK;
                break;
            default:
                break;
            }
            if (inputSource == AUDIO_SOURCE_HOTWORD) {
                ssize_t index = mSoundTriggerSessions.indexOfKey(session);
                if (index >= 0) {
                    *input = mSoundTriggerSessions.valueFor(session);
                    isSoundTrigger = true;
                    flags = (audio_input_flags_t)(flags | AUDIO_INPUT_FLAG_HW_HOTWORD);
                    ALOGV("SoundTrigger capture on session %d input %d", session, *input);
                } else {
                    halInputSource = AUDIO_SOURCE_VOICE_RECOGNITION;
                }
            }
        }
    
        sp<IOProfile> profile = getInputProfile(device, address,
                                                samplingRate, format, channelMask,
                                                flags);
        if (profile == 0) {
    		PLOGV("profile == 0");
            //retry without flags
            audio_input_flags_t log_flags = flags;
            flags = AUDIO_INPUT_FLAG_NONE;
            profile = getInputProfile(device, address,
                                      samplingRate, format, channelMask,
                                      flags);
            if (profile == 0) {
                ALOGW("getInputForAttr() could not find profile for device 0x%X, samplingRate %u,"
                        "format %#x, channelMask 0x%X, flags %#x",
                        device, samplingRate, format, channelMask, log_flags);
                return BAD_VALUE;
            }
        }
    
        if (profile->mModule->mHandle == 0) {
    		PLOGV("getInputForAttr(): HW module %s not opened", profile->mModule->mName);
            ALOGE("getInputForAttr(): HW module %s not opened", profile->mModule->mName);
            return NO_INIT;
        }
    
        audio_config_t config = AUDIO_CONFIG_INITIALIZER;
        config.sample_rate = samplingRate;
        config.channel_mask = channelMask;
        config.format = format;
    
        status_t status = mpClientInterface->openInput(profile->mModule->mHandle,
                                                       input,
                                                       &config,
                                                       &device,
                                                       address,
                                                       halInputSource,
                                                       flags);
        // only accept input with the exact requested set of parameters
        if (status != NO_ERROR || *input == AUDIO_IO_HANDLE_NONE ||
            (samplingRate != config.sample_rate) ||
            (format != config.format) ||
            (channelMask != config.channel_mask)) {
            ALOGW("getInputForAttr() failed opening input: samplingRate %d, format %d, channelMask %x",
                    samplingRate, format, channelMask);
            if (*input != AUDIO_IO_HANDLE_NONE) {
                mpClientInterface->closeInput(*input);
            }
            return BAD_VALUE;
        }
    
        sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile);
        inputDesc->mInputSource = inputSource;
        inputDesc->mRefCount = 0;
        inputDesc->mOpenRefCount = 1;
        inputDesc->mSamplingRate = samplingRate;
        inputDesc->mFormat = format;
        inputDesc->mChannelMask = channelMask;
        inputDesc->mDevice = device;
        inputDesc->mSessions.add(session);
        inputDesc->mIsSoundTrigger = isSoundTrigger;
        inputDesc->mPolicyMix = policyMix;
    
        ALOGV("getInputForAttr() returns input type = %d", inputType);
    
        addInput(*input, inputDesc);
        mpClientInterface->onAudioPortListUpdate();
        return NO_ERROR;
    }

    The main work in this function:

        1. Call getDeviceAndMixForInputSource to obtain the policyMix and the corresponding audio_device_t device type (device). The device types are defined in system/core/include/system/audio.h; since we record from the built-in MIC here, device is AUDIO_DEVICE_IN_BUILTIN_MIC. This is also where a new audio input device type would have to be added:

    enum {
        AUDIO_DEVICE_NONE                          = 0x0,
        /* reserved bits */
        AUDIO_DEVICE_BIT_IN                        = 0x80000000,
        AUDIO_DEVICE_BIT_DEFAULT                   = 0x40000000,
        /* output devices */
        AUDIO_DEVICE_OUT_EARPIECE                  = 0x1,
        AUDIO_DEVICE_OUT_SPEAKER                   = 0x2,
        AUDIO_DEVICE_OUT_WIRED_HEADSET             = 0x4,
    ...
        /* input devices */
        AUDIO_DEVICE_IN_COMMUNICATION         = AUDIO_DEVICE_BIT_IN | 0x1,
        AUDIO_DEVICE_IN_AMBIENT               = AUDIO_DEVICE_BIT_IN | 0x2,
        AUDIO_DEVICE_IN_BUILTIN_MIC           = AUDIO_DEVICE_BIT_IN | 0x4,
    ...
        AUDIO_DEVICE_IN_ALL     = (AUDIO_DEVICE_IN_COMMUNICATION |
                                   AUDIO_DEVICE_IN_AMBIENT |
                                   AUDIO_DEVICE_IN_BUILTIN_MIC |
                                   AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET |
                                   AUDIO_DEVICE_IN_WIRED_HEADSET |
                                   AUDIO_DEVICE_IN_HDMI |
                                   AUDIO_DEVICE_IN_TELEPHONY_RX |
                                   AUDIO_DEVICE_IN_BACK_MIC |
                                   AUDIO_DEVICE_IN_REMOTE_SUBMIX |
                                   AUDIO_DEVICE_IN_ANLG_DOCK_HEADSET |
                                   AUDIO_DEVICE_IN_DGTL_DOCK_HEADSET |
                                   AUDIO_DEVICE_IN_USB_ACCESSORY |
                                   AUDIO_DEVICE_IN_USB_DEVICE |
                                   AUDIO_DEVICE_IN_FM_TUNER |
                                   AUDIO_DEVICE_IN_TV_TUNER |
                                   AUDIO_DEVICE_IN_LINE |
                                   AUDIO_DEVICE_IN_SPDIF |
                                   AUDIO_DEVICE_IN_BLUETOOTH_A2DP |
                                   AUDIO_DEVICE_IN_LOOPBACK |
    							   AUDIO_DEVICE_IN_AF |
                                   AUDIO_DEVICE_IN_DEFAULT),
        AUDIO_DEVICE_IN_ALL_SCO = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET,
        AUDIO_DEVICE_IN_ALL_USB  = (AUDIO_DEVICE_IN_USB_ACCESSORY |
                                    AUDIO_DEVICE_IN_USB_DEVICE),
    };
    
    typedef uint32_t audio_devices_t;

        2. Determine the inputType:

        typedef enum {
            API_INPUT_INVALID = -1,
            API_INPUT_LEGACY  = 0,// e.g. audio recording from a microphone
            API_INPUT_MIX_CAPTURE,// used for "remote submix", capture of the media to play it remotely
            API_INPUT_MIX_EXT_POLICY_REROUTE,// used for platform audio rerouting, where mixes are
                                             // handled by external and dynamically installed
                                             // policies which reroute audio mixes
        } input_type_t;

        3. Adapt channelMask to the input source (the voice uplink/downlink/call sources get fixed masks);

        4. Call getInputProfile to compare the requested sample rate, format, channel mask, etc. against the input profiles supported by the available devices, and return a matching IOProfile. An IOProfile describes the capabilities of an output or input stream; the policy manager uses it to determine whether an output or input is suitable for a given use case, to open/close it accordingly, and to connect/disconnect audio tracks (a representative profile definition follows this list);

        5. If the lookup fails, retry once with AUDIO_INPUT_FLAG_NONE; if it still fails, return BAD_VALUE;

        6. Call mpClientInterface->openInput to open the input stream;

        7. Build an AudioInputDescriptor from the IOProfile, bind it to the input handle via addInput, and finally update the audio port list.
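
    For step 4, the IOProfiles are parsed from the platform's audio_policy.conf. A representative input profile, shown only as a hedged sketch (actual rates, masks and devices are vendor-specific):

        audio_hw_modules {
          primary {
            inputs {
              primary {
                sampling_rates 8000|11025|16000|22050|32000|44100|48000
                channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
                formats AUDIO_FORMAT_PCM_16_BIT
                devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET
              }
            }
          }
        }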

    We now look more closely at steps 1 and 6.

       First, step 1 of AudioPolicyManager.cpp::getInputForAttr(): obtaining the policyMix and the corresponding audio_device_t device type (device).

    audio_devices_t AudioPolicyManager::getDeviceAndMixForInputSource(audio_source_t inputSource,
                                                                AudioMix **policyMix)
    {
        audio_devices_t availableDeviceTypes = mAvailableInputDevices.types() &
                                                ~AUDIO_DEVICE_BIT_IN;
    
        for (size_t i = 0; i < mPolicyMixes.size(); i++) {
            if (mPolicyMixes[i]->mMix.mMixType != MIX_TYPE_RECORDERS) {
                continue;
            }
            for (size_t j = 0; j < mPolicyMixes[i]->mMix.mCriteria.size(); j++) {
                if ((RULE_MATCH_ATTRIBUTE_CAPTURE_PRESET == mPolicyMixes[i]->mMix.mCriteria[j].mRule &&
                        mPolicyMixes[i]->mMix.mCriteria[j].mAttr.mSource == inputSource) ||
                   (RULE_EXCLUDE_ATTRIBUTE_CAPTURE_PRESET == mPolicyMixes[i]->mMix.mCriteria[j].mRule &&
                        mPolicyMixes[i]->mMix.mCriteria[j].mAttr.mSource != inputSource)) {
                    if (availableDeviceTypes & AUDIO_DEVICE_IN_REMOTE_SUBMIX) {
                        if (policyMix != NULL) {
                            *policyMix = &mPolicyMixes[i]->mMix;
                        }
                        return AUDIO_DEVICE_IN_REMOTE_SUBMIX;
                    }
                    break;
                }
            }
        }
    
        return getDeviceForInputSource(inputSource);
    }
    
    audio_devices_t AudioPolicyManager::getDeviceForInputSource(audio_source_t inputSource)
    {
        uint32_t device = AUDIO_DEVICE_NONE;
        audio_devices_t availableDeviceTypes = mAvailableInputDevices.types() &
                                                ~AUDIO_DEVICE_BIT_IN;
    
        switch (inputSource) {
        case AUDIO_SOURCE_VOICE_UPLINK:
          if (availableDeviceTypes & AUDIO_DEVICE_IN_VOICE_CALL) {
              device = AUDIO_DEVICE_IN_VOICE_CALL;
              break;
          }
          break;
    
        case AUDIO_SOURCE_DEFAULT:
        case AUDIO_SOURCE_MIC:
        if (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_A2DP) {
            device = AUDIO_DEVICE_IN_BLUETOOTH_A2DP;
        } else if ((mForceUse[AUDIO_POLICY_FORCE_FOR_RECORD] == AUDIO_POLICY_FORCE_BT_SCO) &&
            (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET)) {
            device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
            device = AUDIO_DEVICE_IN_WIRED_HEADSET;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
            device = AUDIO_DEVICE_IN_USB_DEVICE;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
            device = AUDIO_DEVICE_IN_BUILTIN_MIC;
        }
        break;
    
        case AUDIO_SOURCE_VOICE_COMMUNICATION:
            // Allow only use of devices on primary input if in call and HAL does not support routing
            // to voice call path.
            if ((mPhoneState == AUDIO_MODE_IN_CALL) &&
                    (mAvailableOutputDevices.types() & AUDIO_DEVICE_OUT_TELEPHONY_TX) == 0) {
                availableDeviceTypes = availablePrimaryInputDevices() & ~AUDIO_DEVICE_BIT_IN;
            }
    
            switch (mForceUse[AUDIO_POLICY_FORCE_FOR_COMMUNICATION]) {
            case AUDIO_POLICY_FORCE_BT_SCO:
                // if SCO device is requested but no SCO device is available, fall back to default case
                if (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET) {
                    device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
                    break;
                }
                // FALL THROUGH
    
            default:    // FORCE_NONE
                if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
                    device = AUDIO_DEVICE_IN_WIRED_HEADSET;
                } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
                    device = AUDIO_DEVICE_IN_USB_DEVICE;
                } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                    device = AUDIO_DEVICE_IN_BUILTIN_MIC;
                }
                break;
    
            case AUDIO_POLICY_FORCE_SPEAKER:
                if (availableDeviceTypes & AUDIO_DEVICE_IN_BACK_MIC) {
                    device = AUDIO_DEVICE_IN_BACK_MIC;
                } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                    device = AUDIO_DEVICE_IN_BUILTIN_MIC;
                }
                break;
            }
            break;
    
        case AUDIO_SOURCE_VOICE_RECOGNITION:
        case AUDIO_SOURCE_HOTWORD:
            if (mForceUse[AUDIO_POLICY_FORCE_FOR_RECORD] == AUDIO_POLICY_FORCE_BT_SCO &&
                    availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET) {
                device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
                device = AUDIO_DEVICE_IN_WIRED_HEADSET;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
                device = AUDIO_DEVICE_IN_USB_DEVICE;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                device = AUDIO_DEVICE_IN_BUILTIN_MIC;
            }
            break;
        case AUDIO_SOURCE_CAMCORDER:
            if (availableDeviceTypes & AUDIO_DEVICE_IN_BACK_MIC) {
                device = AUDIO_DEVICE_IN_BACK_MIC;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                device = AUDIO_DEVICE_IN_BUILTIN_MIC;
            }
            break;
        case AUDIO_SOURCE_VOICE_DOWNLINK:
        case AUDIO_SOURCE_VOICE_CALL:
            if (availableDeviceTypes & AUDIO_DEVICE_IN_VOICE_CALL) {
                device = AUDIO_DEVICE_IN_VOICE_CALL;
            }
            break;
        case AUDIO_SOURCE_REMOTE_SUBMIX:
            if (availableDeviceTypes & AUDIO_DEVICE_IN_REMOTE_SUBMIX) {
                device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
            }
            break;
         case AUDIO_SOURCE_FM_TUNER:
            if (availableDeviceTypes & AUDIO_DEVICE_IN_FM_TUNER) {
                device = AUDIO_DEVICE_IN_FM_TUNER;
            }
            break;
        default:
            ALOGW("getDeviceForInputSource() invalid input source %d", inputSource);
            break;
        }
        ALOGV("getDeviceForInputSource()input source %d, device %08x", inputSource, device);
        return device;
    }

       This is where the policyMix and the audio_device_t device type are chosen from the InputSource; for AUDIO_SOURCE_MIC the priority is Bluetooth A2DP, then forced SCO, then wired headset, then USB, and finally the built-in mic. It also shows how many kinds of audio input devices Android distinguishes.
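
    A small sketch of the bitmask arithmetic both functions rely on: every input device type carries AUDIO_DEVICE_BIT_IN, and the policy code strips that bit before testing availability (values mirror the enum shown earlier):

        int AUDIO_DEVICE_BIT_IN         = 0x80000000;
        int AUDIO_DEVICE_IN_BUILTIN_MIC = AUDIO_DEVICE_BIT_IN | 0x4;  // 0x80000004

        // what getDeviceForInputSource() computes from mAvailableInputDevices.types():
        int availableDeviceTypes = AUDIO_DEVICE_IN_BUILTIN_MIC & ~AUDIO_DEVICE_BIT_IN;  // 0x4
        boolean hasBuiltinMic = (availableDeviceTypes & 0x4) != 0;  // true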

    Next, step 6 of AudioPolicyManager.cpp::getInputForAttr(): how mpClientInterface->openInput opens the input stream.

    frameworks/av/services/audiopolicy/AudioPolicyClientImpl.cpp

    status_t AudioPolicyService::AudioPolicyClient::openInput(audio_module_handle_t module,
                                                              audio_io_handle_t *input,
                                                              audio_config_t *config,
                                                              audio_devices_t *device,
                                                              const String8& address,
                                                              audio_source_t source,
                                                              audio_input_flags_t flags)
    {
        sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
        if (af == 0) {
            ALOGW("%s: could not get AudioFlinger", __func__);
            return PERMISSION_DENIED;
        }
    
        return af->openInput(module, input, config, device, address, source, flags);
    }

    This lands in AudioFlinger's openInput():

    frameworks/av/services/audioflinger/AudioFlinger.cpp

    status_t AudioFlinger::openInput(audio_module_handle_t module,
                                              audio_io_handle_t *input,
                                              audio_config_t *config,
                                              audio_devices_t *device,
                                              const String8& address,
                                              audio_source_t source,
                                              audio_input_flags_t flags)
    {
        Mutex::Autolock _l(mLock);
    
        if (*device == AUDIO_DEVICE_NONE) {
            return BAD_VALUE;
        }
    
        sp<RecordThread> thread = openInput_l(module, input, config, *device, address, source, flags);
    
        if (thread != 0) {
            // notify client processes of the new input creation
            thread->audioConfigChanged(AudioSystem::INPUT_OPENED);
            return NO_ERROR;
        }
        return NO_INIT;
    }
    
    sp<AudioFlinger::RecordThread> AudioFlinger::openInput_l(audio_module_handle_t module,
                                                             audio_io_handle_t *input,
                                                             audio_config_t *config,
                                                             audio_devices_t device,
                                                             const String8& address,
                                                             audio_source_t source,
                                                             audio_input_flags_t flags)
    {
        AudioHwDevice *inHwDev = findSuitableHwDev_l(module, device);
        if (inHwDev == NULL) {
            *input = AUDIO_IO_HANDLE_NONE;
            return 0;
        }
    
        if (*input == AUDIO_IO_HANDLE_NONE) {
            *input = nextUniqueId();
        }
    
        audio_config_t halconfig = *config;
        audio_hw_device_t *inHwHal = inHwDev->hwDevice();
        audio_stream_in_t *inStream = NULL;
        // obtain the inStream object from the HAL
        status_t status = inHwHal->open_input_stream(inHwHal, *input, device, &halconfig,
                                            &inStream, flags, address.string(), source);
    
        // If the input could not be opened with the requested parameters and we can handle the
        // conversion internally, try to open again with the proposed parameters. The AudioFlinger can
        // resample the input and do mono to stereo or stereo to mono conversions on 16 bit PCM inputs.
        if (status == BAD_VALUE &&
                config->format == halconfig.format && halconfig.format == AUDIO_FORMAT_PCM_16_BIT &&
            (halconfig.sample_rate <= 2 * config->sample_rate) &&
            (audio_channel_count_from_in_mask(halconfig.channel_mask) <= FCC_2) &&
            (audio_channel_count_from_in_mask(config->channel_mask) <= FCC_2)) {
            // FIXME describe the change proposed by HAL (save old values so we can log them here)
            ALOGV("openInput_l() reopening with proposed sampling rate and channel mask");
            inStream = NULL;
            status = inHwHal->open_input_stream(inHwHal, *input, device, &halconfig,
                                                &inStream, flags, address.string(), source);
            // FIXME log this new status; HAL should not propose any further changes
        }
    
        if (status == NO_ERROR && inStream != NULL) {
    
    #ifdef TEE_SINK 
            // Try to re-use most recently used Pipe to archive a copy of input for dumpsys,
            // or (re-)create if current Pipe is idle and does not match the new format
            sp<NBAIO_Sink> teeSink;
            enum {
                TEE_SINK_NO,    // don't copy input
                TEE_SINK_NEW,   // copy input using a new pipe
                TEE_SINK_OLD,   // copy input using an existing pipe
            } kind;
            NBAIO_Format format = Format_from_SR_C(halconfig.sample_rate,
                    audio_channel_count_from_in_mask(halconfig.channel_mask), halconfig.format);
            if (!mTeeSinkInputEnabled) {
                kind = TEE_SINK_NO;
            } else if (!Format_isValid(format)) {
                kind = TEE_SINK_NO;
            } else if (mRecordTeeSink == 0) {
                kind = TEE_SINK_NEW;
            } else if (mRecordTeeSink->getStrongCount() != 1) {
                kind = TEE_SINK_NO;
            } else if (Format_isEqual(format, mRecordTeeSink->format())) {
                kind = TEE_SINK_OLD;
            } else {
                kind = TEE_SINK_NEW;
            }
            switch (kind) {
            case TEE_SINK_NEW: {
                Pipe *pipe = new Pipe(mTeeSinkInputFrames, format);
                size_t numCounterOffers = 0;
                const NBAIO_Format offers[1] = {format};
                ssize_t index = pipe->negotiate(offers, 1, NULL, numCounterOffers);
                ALOG_ASSERT(index == 0);
                PipeReader *pipeReader = new PipeReader(*pipe);
                numCounterOffers = 0;
                index = pipeReader->negotiate(offers, 1, NULL, numCounterOffers);
                ALOG_ASSERT(index == 0);
                mRecordTeeSink = pipe;
                mRecordTeeSource = pipeReader;
                teeSink = pipe;
                }
                break;
            case TEE_SINK_OLD:
                teeSink = mRecordTeeSink;
                break;
            case TEE_SINK_NO:
            default:
                break;
            }
    #endif
            AudioStreamIn *inputStream = new AudioStreamIn(inHwDev, inStream);
            // Start record thread
            // RecordThread requires both input and output device indication to forward to audio
            // pre processing modules
            sp<RecordThread> thread = new RecordThread(this,
                                      inputStream,
                                      *input,
                                      primaryOutputDevice_l(),
                                      device
    #ifdef TEE_SINK
                                      , teeSink
    #endif
                                      );
            mRecordThreads.add(*input, thread);
            ALOGV("openInput_l() created record thread: ID %d thread %p", *input, thread.get());
            return thread;
        }
    
        *input = AUDIO_IO_HANDLE_NONE;
        return 0;
    }

    In this function the main work is as follows:

        1. findSuitableHwDev_l() locates the HW module by matching the module.handle in the IOProfile against the audio_devices_t device type;

        2. the HAL's inHwHal->open_input_stream() is called to open the input stream;

        3. if the first attempt fails, the call is retried once with the configuration the HAL wrote back into halconfig (see the sketch after this list);

        4. an AudioStreamIn object is created from inHwDev and inStream, which establishes the input stream; AudioStreamIn is defined in frameworks/av/services/audioflinger/AudioFlinger.h;

        5. a RecordThread is created and added to the mRecordThreads collection; this whole call chain was initiated from AudioRecord.cpp::set().
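    For context, the open_input_stream() call at the top of this listing is already the second attempt: in the 5.1 sources the first call may fail with BAD_VALUE while the HAL writes a configuration it can support back into halconfig, and AudioFlinger retries only when it can bridge the difference itself (resampling and mono/stereo conversion of 16-bit PCM). A condensed, abridged restatement of that guard (the helper name is ours, not AOSP's):

    #include <system/audio.h>   // audio_config_t, audio_channel_count_from_in_mask()
    #include <utils/Errors.h>   // status_t, BAD_VALUE

    // Abridged restatement of the retry condition in AudioFlinger::openInput_l():
    // retry with the HAL's proposed config only if AudioFlinger can convert the
    // proposal internally (16-bit PCM, at most 2x resampling, at most 2 channels).
    static bool shouldRetryWithHalConfig(android::status_t status,
                                         const audio_config_t &req,        // client request
                                         const audio_config_t &halconfig)  // HAL proposal
    {
        return status == android::BAD_VALUE &&
               req.format == AUDIO_FORMAT_PCM_16_BIT &&
               halconfig.sample_rate <= 2 * req.sample_rate &&
               audio_channel_count_from_in_mask(halconfig.channel_mask) <= 2 &&
               audio_channel_count_from_in_mask(req.channel_mask) <= 2;
    }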

    Here we will focus on steps 2 and 5.

    First, step 2 of AudioFlinger.cpp::openInput(): opening the input stream.

    hardware/aw/audio/tulip/audio_hw.c

    static int adev_open_input_stream(struct audio_hw_device *dev,
                                      audio_io_handle_t handle,
                                      audio_devices_t devices,
                                      struct audio_config *config,
                                      struct audio_stream_in **stream_in)
    {
        struct sunxi_audio_device *ladev = (struct sunxi_audio_device *)dev;
        struct sunxi_stream_in *in;
        int ret;
        int channel_count = popcount(config->channel_mask);
    
        *stream_in = NULL;
    
        if (check_input_parameters(config->sample_rate, config->format, channel_count) != 0)
            return -EINVAL;
    
        in = (struct sunxi_stream_in *)calloc(1, sizeof(struct sunxi_stream_in));
        if (!in)
            return -ENOMEM;
    
        in->stream.common.get_sample_rate 	= in_get_sample_rate;
        in->stream.common.set_sample_rate 	= in_set_sample_rate;
        in->stream.common.get_buffer_size 	= in_get_buffer_size;
        in->stream.common.get_channels 		= in_get_channels;
        in->stream.common.get_format 		= in_get_format;
        in->stream.common.set_format 		= in_set_format;
        in->stream.common.standby 			= in_standby;
        in->stream.common.dump 				= in_dump;
        in->stream.common.set_parameters 	= in_set_parameters;
        in->stream.common.get_parameters 	= in_get_parameters;
        in->stream.common.add_audio_effect 	= in_add_audio_effect;
        in->stream.common.remove_audio_effect = in_remove_audio_effect;
        in->stream.set_gain = in_set_gain;
        in->stream.read 	= in_read;
        in->stream.get_input_frames_lost = in_get_input_frames_lost;
    
        in->requested_rate 	= config->sample_rate;
    
        // default config
        memcpy(&in->config, &pcm_config_mm_in, sizeof(pcm_config_mm_in));
        in->config.channels = channel_count;
        //in->config.in_init_channels = channel_count;
    
        in->buffer = malloc(in->config.period_size *
                            audio_stream_frame_size(&in->stream.common) * 8);
    
        if (!in->buffer) {
            ret = -ENOMEM;
            goto err;
        }
        memset(in->buffer, 0, in->config.period_size *
                    audio_stream_frame_size(&in->stream.common) * 8); //mute
    
        ladev->af_capture_flag = false;
        //devices = AUDIO_DEVICE_IN_WIFI_DISPLAY;//for test
    
        if (devices == AUDIO_DEVICE_IN_AF) {
    		ALOGV("to malloc PcmManagerBuffer: Buffer_size: %d", AF_BUFFER_SIZE);
    		ladev->PcmManager.BufStart= (unsigned char *)malloc(AF_BUFFER_SIZE);
    
    		if(!ladev->PcmManager.BufStart) {
    			ret = -ENOMEM;
    			goto err;
       		}
    
    		ladev->PcmManager.BufExist 		= true;
    		ladev->PcmManager.BufTotalLen 	= AF_BUFFER_SIZE;
    		ladev->PcmManager.BufWritPtr 	= ladev->PcmManager.BufStart;
    		ladev->PcmManager.BufReadPtr 	= ladev->PcmManager.BufStart;
    		ladev->PcmManager.BufValideLen	= ladev->PcmManager.BufTotalLen;
    		ladev->PcmManager.DataLen 		= 0;
    		ladev->PcmManager.SampleRate 	= config->sample_rate;
    		ladev->PcmManager.Channel 		= 2;
    		ladev->af_capture_flag 			= true;
    
    		ladev->PcmManager.dev 			= (struct sunxi_audio_device *)ladev;
        }
    
        in->dev 	= ladev;
        in->standby = 1;
        in->device 	= devices & ~AUDIO_DEVICE_BIT_IN;
    
        *stream_in 	= &in->stream;
        return 0;
    
    err:
        if (in->resampler)
            release_resampler(in->resampler);
    
        free(in);
        return ret;
    }

    In this function the main work is as follows:

        1. check_input_parameters() verifies that the rate, format, and channel parameters are supported;

        2. memory is allocated for the sunxi_stream_in input-stream object;

        3. the getter/setter function pointers for the stream parameters are bound;

        4. a buffer is allocated for the input stream: in->config.period_size * audio_stream_frame_size(&in->stream.common) * 8 bytes;

        5. if the device is of type AUDIO_DEVICE_IN_AF, the PcmManager is set up accordingly;

    This input-stream object is then wrapped in AudioFlinger's AudioStreamIn object, so at this point the input stream is fully created.
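    For reference, AudioStreamIn itself is only a thin holder pairing the HW module with the HAL stream, and the C++ framework code drives the stream exclusively through the C function-pointer table bound in step 3 above. A simplified sketch (abridged from the 5.1 AudioFlinger.h; drainOnce() is a hypothetical helper, not AOSP code):

    #include <hardware/audio.h>  // audio_stream_in_t and the HAL ops table

    class AudioHwDevice;  // AudioFlinger-side wrapper for the loaded HAL module

    // Abridged from AudioFlinger.h: just couples the module and the HAL stream.
    struct AudioStreamIn {
        AudioHwDevice*     const audioHwDev;  // HW module the stream was opened on
        audio_stream_in_t* const stream;      // stream returned by open_input_stream()
        AudioStreamIn(AudioHwDevice *dev, audio_stream_in_t *in)
            : audioHwDev(dev), stream(in) {}
    };

    // Hypothetical usage: every call goes through the pointers bound by the HAL.
    static ssize_t drainOnce(AudioStreamIn *in, void *buf, size_t bytes) {
        uint32_t rate = in->stream->common.get_sample_rate(&in->stream->common);
        (void) rate;                                      // e.g. 44100, as negotiated
        return in->stream->read(in->stream, buf, bytes);  // blocks until data captured
    }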

    Next, step 5 of AudioFlinger.cpp::openInput(): creating the RecordThread.

    frameworks/av/services/audioflinger/Threads.cpp

    AudioFlinger::RecordThread::RecordThread(const sp<AudioFlinger>& audioFlinger,
                                             AudioStreamIn *input,
                                             audio_io_handle_t id,
                                             audio_devices_t outDevice,
                                             audio_devices_t inDevice
    #ifdef TEE_SINK
                                             , const sp<NBAIO_Sink>& teeSink
    #endif
                                             ) :
        ThreadBase(audioFlinger, id, outDevice, inDevice, RECORD),
        mInput(input), mActiveTracksGen(0), mRsmpInBuffer(NULL),
        // mRsmpInFrames and mRsmpInFramesP2 are set by readInputParameters_l()
        mRsmpInRear(0)
    #ifdef TEE_SINK
        , mTeeSink(teeSink)
    #endif
        , mReadOnlyHeap(new MemoryDealer(kRecordThreadReadOnlyHeapSize,
                "RecordThreadRO", MemoryHeapBase::READ_ONLY))
        // mFastCapture below
        , mFastCaptureFutex(0)
        // mInputSource
        // mPipeSink
        // mPipeSource
        , mPipeFramesP2(0)
        // mPipeMemory
        // mFastCaptureNBLogWriter
        , mFastTrackAvail(false)
    {
        snprintf(mName, kNameLength, "AudioIn_%X", id);
        mNBLogWriter = audioFlinger->newWriter_l(kLogSize, mName);
    
        readInputParameters_l();
        // create an NBAIO source for the HAL input stream, and negotiate
        mInputSource = new AudioStreamInSource(input->stream);
        size_t numCounterOffers = 0;
        const NBAIO_Format offers[1] = {Format_from_SR_C(mSampleRate, mChannelCount, mFormat)};
        ssize_t index = mInputSource->negotiate(offers, 1, NULL, numCounterOffers);
        ALOG_ASSERT(index == 0);
    
        // initialize fast capture depending on configuration
        bool initFastCapture;
        switch (kUseFastCapture) {
        case FastCapture_Never:
            initFastCapture = false;
            break;
        case FastCapture_Always:
            initFastCapture = true;
            break;
        case FastCapture_Static:
            uint32_t primaryOutputSampleRate;
            {
                AutoMutex _l(audioFlinger->mHardwareLock);
                primaryOutputSampleRate = audioFlinger->mPrimaryOutputSampleRate;
            }
            initFastCapture =
                    // either capture sample rate is same as (a reasonable) primary output sample rate
                    (((primaryOutputSampleRate == 44100 || primaryOutputSampleRate == 48000) &&
                        (mSampleRate == primaryOutputSampleRate)) ||
                    // or primary output sample rate is unknown, and capture sample rate is reasonable
                    ((primaryOutputSampleRate == 0) &&
                        ((mSampleRate == 44100 || mSampleRate == 48000)))) &&
                    // and the buffer size is < 12 ms
                    (mFrameCount * 1000) / mSampleRate < 12;
            break;
        // case FastCapture_Dynamic:
        }
    
        if (initFastCapture) {
            // create a Pipe for FastMixer to write to, and for us and fast tracks to read from
            NBAIO_Format format = mInputSource->format();
            size_t pipeFramesP2 = roundup(mSampleRate / 25);    // double-buffering of 20 ms each
            size_t pipeSize = pipeFramesP2 * Format_frameSize(format);
            void *pipeBuffer;
            const sp<MemoryDealer> roHeap(readOnlyHeap());
            sp<IMemory> pipeMemory;
            if ((roHeap == 0) ||
                    (pipeMemory = roHeap->allocate(pipeSize)) == 0 ||
                    (pipeBuffer = pipeMemory->pointer()) == NULL) {
                ALOGE("not enough memory for pipe buffer size=%zu", pipeSize);
                goto failed;
            }
            // pipe will be shared directly with fast clients, so clear to avoid leaking old information
            memset(pipeBuffer, 0, pipeSize);
            Pipe *pipe = new Pipe(pipeFramesP2, format, pipeBuffer);
            const NBAIO_Format offers[1] = {format};
            size_t numCounterOffers = 0;
            ssize_t index = pipe->negotiate(offers, 1, NULL, numCounterOffers);
            ALOG_ASSERT(index == 0);
            mPipeSink = pipe;
            PipeReader *pipeReader = new PipeReader(*pipe);
            numCounterOffers = 0;
            index = pipeReader->negotiate(offers, 1, NULL, numCounterOffers);
            ALOG_ASSERT(index == 0);
            mPipeSource = pipeReader;
            mPipeFramesP2 = pipeFramesP2;
            mPipeMemory = pipeMemory;
    
            // create fast capture
            mFastCapture = new FastCapture();
            FastCaptureStateQueue *sq = mFastCapture->sq();
    #ifdef STATE_QUEUE_DUMP
            // FIXME
    #endif
            FastCaptureState *state = sq->begin();
            state->mCblk = NULL;
            state->mInputSource = mInputSource.get();
            state->mInputSourceGen++;
            state->mPipeSink = pipe;
            state->mPipeSinkGen++;
            state->mFrameCount = mFrameCount;
            state->mCommand = FastCaptureState::COLD_IDLE;
            // already done in constructor initialization list
            //mFastCaptureFutex = 0;
            state->mColdFutexAddr = &mFastCaptureFutex;
            state->mColdGen++;
            state->mDumpState = &mFastCaptureDumpState;
    #ifdef TEE_SINK
            // FIXME
    #endif
            mFastCaptureNBLogWriter = audioFlinger->newWriter_l(kFastCaptureLogSize, "FastCapture");
            state->mNBLogWriter = mFastCaptureNBLogWriter.get();
            sq->end();
            sq->push(FastCaptureStateQueue::BLOCK_UNTIL_PUSHED);
    
            // start the fast capture
            mFastCapture->run("FastCapture", ANDROID_PRIORITY_URGENT_AUDIO);
            pid_t tid = mFastCapture->getTid();
            int err = requestPriority(getpid_cached, tid, kPriorityFastMixer);
            if (err != 0) {
                ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
                        kPriorityFastCapture, getpid_cached, tid, err);
            }
    
    #ifdef AUDIO_WATCHDOG
            // FIXME
    #endif
    
            mFastTrackAvail = true;
        }
    failed: ;
    
        // FIXME mNormalSource
    }

    The main work in this constructor is as follows:

        1. readInputParameters_l() reads the recording parameters into the thread;

        2. an AudioStreamInSource is created as the input source inside the thread, and its format is negotiated; its implementation is in frameworks/av/media/libnbaio/AudioStreamInSource.cpp;

        3. depending on kUseFastCapture, a FastCapture thread may additionally be set up, fed through a Pipe/PipeReader pair allocated from the read-only heap;

    So we can already anticipate that, once recording starts, RecordThread will pull data through this AudioStreamInSource and keep the data in shared memory up to date in real time. A minimal sketch of that NBAIO read pattern follows.
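    A hedged, minimal sketch of pulling PCM frames out of an NBAIO source such as AudioStreamInSource (readLoopOnce() is a hypothetical helper; the real threadLoop in Threads.cpp additionally handles resampling, overruns, and per-track buffers):

    #include <media/nbaio/NBAIO.h>          // NBAIO_Source
    #include <media/AudioBufferProvider.h>  // kInvalidPTS
    #include <utils/RefBase.h>              // sp<>

    // Hypothetical single read step over an NBAIO source (5.1-era API with readPTS).
    static ssize_t readLoopOnce(const android::sp<android::NBAIO_Source> &source,
                                void *buffer, size_t frames) {
        // read() returns the number of frames actually transferred (possibly fewer
        // than requested), or a negative status such as NEGOTIATE when the format
        // must be (re-)negotiated before reading.
        return source->read(buffer, frames,
                            android::AudioBufferProvider::kInvalidPTS);
    }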

    Now back to step 4 of AudioRecord.cpp::openRecord_l(): creating the IAudioRecord object.

    sp<IAudioRecord> AudioFlinger::openRecord(
            audio_io_handle_t input,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t *frameCount,
            IAudioFlinger::track_flags_t *flags,
            pid_t tid,
            int *sessionId,
            size_t *notificationFrames,
            sp<IMemory>& cblk,
            sp<IMemory>& buffers,
            status_t *status)
    {
        sp<RecordThread::RecordTrack> recordTrack;
        sp<RecordHandle> recordHandle;
        sp<Client> client;
        status_t lStatus;
        int lSessionId;
    
        cblk.clear();	
        buffers.clear();
    
        // check calling permissions
        if (!recordingAllowed()) {
            ALOGE("openRecord() permission denied: recording not allowed");
            lStatus = PERMISSION_DENIED;
            goto Exit;
        }
    
        // further sample rate checks are performed by createRecordTrack_l()
        if (sampleRate == 0) {
            ALOGE("openRecord() invalid sample rate %u", sampleRate);
            lStatus = BAD_VALUE;
            goto Exit;
        }
    
        // we don't yet support anything other than 16-bit PCM
        if (!(audio_is_valid_format(format) &&
                audio_is_linear_pcm(format) && format == AUDIO_FORMAT_PCM_16_BIT)) {
            ALOGE("openRecord() invalid format %#x", format);
            lStatus = BAD_VALUE;
            goto Exit;
        }
    
        // further channel mask checks are performed by createRecordTrack_l()
        if (!audio_is_input_channel(channelMask)) {
            ALOGE("openRecord() invalid channel mask %#x", channelMask);
            lStatus = BAD_VALUE;
            goto Exit;
        }
    
        {
            Mutex::Autolock _l(mLock);
            RecordThread *thread = checkRecordThread_l(input);
            if (thread == NULL) {
                ALOGE("openRecord() checkRecordThread_l failed");
                lStatus = BAD_VALUE;
                goto Exit;
            }
    
            pid_t pid = IPCThreadState::self()->getCallingPid();
            client = registerPid(pid);
    
            if (sessionId != NULL && *sessionId != AUDIO_SESSION_ALLOCATE) {
                lSessionId = *sessionId;
            } else {
                // if no audio session id is provided, create one here
                lSessionId = nextUniqueId();
                if (sessionId != NULL) {
                    *sessionId = lSessionId;
                }
            }
    
            // TODO: the uid should be passed in as a parameter to openRecord
            recordTrack = thread->createRecordTrack_l(client, sampleRate, format, channelMask,
                                                      frameCount, lSessionId, notificationFrames,
                                                      IPCThreadState::self()->getCallingUid(),
                                                      flags, tid, &lStatus);
            LOG_ALWAYS_FATAL_IF((lStatus == NO_ERROR) && (recordTrack == 0));
    
            if (lStatus == NO_ERROR) {
                // Check if one effect chain was awaiting for an AudioRecord to be created on this
                // session and move it to this thread.
                sp<EffectChain> chain = getOrphanEffectChain_l((audio_session_t)lSessionId);
                if (chain != 0) {
                    Mutex::Autolock _l(thread->mLock);
                    thread->addEffectChain_l(chain);
                }
            }
        }
    
        if (lStatus != NO_ERROR) {
            // remove local strong reference to Client before deleting the RecordTrack so that the
            // Client destructor is called by the TrackBase destructor with mClientLock held
            // Don't hold mClientLock when releasing the reference on the track as the
            // destructor will acquire it.
            {
                Mutex::Autolock _cl(mClientLock);
                client.clear();
            }
            recordTrack.clear();
            goto Exit;
        }
    
        cblk = recordTrack->getCblk();
        buffers = recordTrack->getBuffers();
    
        // return handle to client
        recordHandle = new RecordHandle(recordTrack);
    
    Exit:
        *status = lStatus;
        return recordHandle;
    }

    In this function the main work is as follows:

        1. recordingAllowed() is called to check the recording permission;

        2. the parameters (sample rate, format, channel mask) are validated;

        3. checkRecordThread_l() looks up the RecordThread for this input handle in mRecordThreads; from the earlier analysis we know this thread was created in AudioFlinger.cpp::openInput() and added to mRecordThreads there;

        4. createRecordTrack_l() creates a RecordTrack object; a RecordThread::RecordTrack manages the audio data inside the RecordThread;

        5. the session id is used to check whether an orphan effect chain is waiting for this session; if so, it is moved onto the RecordThread;

        6. cblk and buffers are fetched from the RecordTrack; these are the CblkMemory and BufferMemory shared-memory regions;

        7. a RecordHandle is created around the recordTrack (implemented in frameworks/av/services/audioflinger/Tracks.cpp), which completes the creation of the IAudioRecord object; in other words, the IAudioRecord methods are implemented in Tracks.cpp, as sketched below.
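    As a reference for step 7, RecordHandle is essentially a thin Binder wrapper (BnAudioRecord) that forwards the IAudioRecord calls to the underlying RecordTrack; a simplified sketch, abridged from the 5.1 Tracks.cpp (details such as stop_nonvirtual() omitted):

    #include <media/IAudioRecord.h>  // BnAudioRecord
    #include <media/AudioSystem.h>   // AudioSystem::sync_event_t

    // Simplified: the server-side IAudioRecord object returned to the client.
    class RecordHandle : public android::BnAudioRecord {
    public:
        RecordHandle(const android::sp<AudioFlinger::RecordThread::RecordTrack> &recordTrack)
            : mRecordTrack(recordTrack) {}
        virtual android::status_t start(int event, int triggerSession) {
            // crosses from the Binder thread into the track owned by RecordThread
            return mRecordTrack->start((android::AudioSystem::sync_event_t) event,
                                       triggerSession);
        }
        virtual void stop() { mRecordTrack->stop(); }
    private:
        const android::sp<AudioFlinger::RecordThread::RecordTrack> mRecordTrack;
    };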

    Now let's take a closer look at step 4: the createRecordTrack_l() function.

    sp<AudioFlinger::RecordThread::RecordTrack> AudioFlinger::RecordThread::createRecordTrack_l(
            const sp<AudioFlinger::Client>& client,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t *pFrameCount,
            int sessionId,
            size_t *notificationFrames,
            int uid,
            IAudioFlinger::track_flags_t *flags,
            pid_t tid,
            status_t *status)
    {
        size_t frameCount = *pFrameCount;
        sp<RecordTrack> track;
        status_t lStatus;
    
        // client expresses a preference for FAST, but we get the final say
        if (*flags & IAudioFlinger::TRACK_FAST) {
          if (
                // use case: callback handler
                (tid != -1) &&
                // frame count is not specified, or is exactly the pipe depth
                ((frameCount == 0) || (frameCount == mPipeFramesP2)) &&
                // PCM data
                audio_is_linear_pcm(format) &&
                // native format
                (format == mFormat) &&
                // native channel mask
                (channelMask == mChannelMask) &&
                // native hardware sample rate
                (sampleRate == mSampleRate) &&
                // record thread has an associated fast capture
                hasFastCapture() &&
                // there are sufficient fast track slots available
                mFastTrackAvail
            ) {
            ALOGV("AUDIO_INPUT_FLAG_FAST accepted: frameCount=%u mFrameCount=%u",
                    frameCount, mFrameCount);
          } else {
            ALOGV("AUDIO_INPUT_FLAG_FAST denied: frameCount=%u mFrameCount=%u mPipeFramesP2=%u "
                    "format=%#x isLinear=%d channelMask=%#x sampleRate=%u mSampleRate=%u "
                    "hasFastCapture=%d tid=%d mFastTrackAvail=%d",
                    frameCount, mFrameCount, mPipeFramesP2,
                    format, audio_is_linear_pcm(format), channelMask, sampleRate, mSampleRate,
                    hasFastCapture(), tid, mFastTrackAvail);
            *flags &= ~IAudioFlinger::TRACK_FAST;
          }
        }
    
        // compute track buffer size in frames, and suggest the notification frame count
        if (*flags & IAudioFlinger::TRACK_FAST) {
            // fast track: frame count is exactly the pipe depth
            frameCount = mPipeFramesP2;
            // ignore requested notificationFrames, and always notify exactly once every HAL buffer
            *notificationFrames = mFrameCount;
        } else {
            // not fast track: max notification period is resampled equivalent of one HAL buffer time
            //                 or 20 ms if there is a fast capture
            // TODO This could be a roundupRatio inline, and const
            size_t maxNotificationFrames = ((int64_t) (hasFastCapture() ? mSampleRate/50 : mFrameCount)
                    * sampleRate + mSampleRate - 1) / mSampleRate;
            // minimum number of notification periods is at least kMinNotifications,
            // and at least kMinMs rounded up to a whole notification period (minNotificationsByMs)
            static const size_t kMinNotifications = 3;
            static const uint32_t kMinMs = 30;
            // TODO This could be a roundupRatio inline
            const size_t minFramesByMs = (sampleRate * kMinMs + 1000 - 1) / 1000;
            // TODO This could be a roundupRatio inline
            const size_t minNotificationsByMs = (minFramesByMs + maxNotificationFrames - 1) /
                    maxNotificationFrames;
            const size_t minFrameCount = maxNotificationFrames *
                    max(kMinNotifications, minNotificationsByMs);
            frameCount = max(frameCount, minFrameCount);
            if (*notificationFrames == 0 || *notificationFrames > maxNotificationFrames) {
                *notificationFrames = maxNotificationFrames;
            }
        }
        *pFrameCount = frameCount;
    
        lStatus = initCheck();
        if (lStatus != NO_ERROR) {
            ALOGE("createRecordTrack_l() audio driver not initialized");
            goto Exit;
        }
    
        { // scope for mLock
            Mutex::Autolock _l(mLock);
    
            track = new RecordTrack(this, client, sampleRate,
                          format, channelMask, frameCount, NULL, sessionId, uid,
                          *flags, TrackBase::TYPE_DEFAULT);
    
            lStatus = track->initCheck();
            if (lStatus != NO_ERROR) {
                ALOGE("createRecordTrack_l() initCheck failed %d; no control block?", lStatus);
                // track must be cleared from the caller as the caller has the AF lock
                goto Exit;
            }
            mTracks.add(track);
    
            // disable AEC and NS if the device is a BT SCO headset supporting those pre processings
            bool suspend = audio_is_bluetooth_sco_device(mInDevice) &&
                            mAudioFlinger->btNrecIsOff();
            setEffectSuspended_l(FX_IID_AEC, suspend, sessionId);
            setEffectSuspended_l(FX_IID_NS, suspend, sessionId);
    
            if ((*flags & IAudioFlinger::TRACK_FAST) && (tid != -1)) {
                pid_t callingPid = IPCThreadState::self()->getCallingPid();
                // we don't have CAP_SYS_NICE, nor do we want to have it as it's too powerful,
                // so ask activity manager to do this on our behalf
                sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp);
            }
        }
    
        lStatus = NO_ERROR;
    
    Exit:
        *status = lStatus;
        return track;
    }

    The key point of this function is that it recomputes frameCount (and suggests notificationFrames), then creates the RecordTrack object with the adjusted parameters and returns it. A worked example of the non-fast-track sizing follows.
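    To make the buffer math concrete, here is a standalone re-statement of the non-fast-track sizing logic above, with assumed example values (not taken from any particular device): the client requests 44100 Hz while the HAL runs at 48000 Hz with a 1024-frame buffer and no fast capture.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint32_t sampleRate  = 44100;  // client-requested rate
        const uint32_t mSampleRate = 48000;  // HAL rate (assumption)
        const size_t   mFrameCount = 1024;   // HAL buffer in frames (assumption)

        // ceil(mFrameCount * sampleRate / mSampleRate): one HAL buffer, resampled
        const size_t maxNotificationFrames =
                ((int64_t) mFrameCount * sampleRate + mSampleRate - 1) / mSampleRate; // = 941

        const size_t   kMinNotifications = 3;
        const uint32_t kMinMs = 30;
        const size_t minFramesByMs = (sampleRate * kMinMs + 1000 - 1) / 1000;         // = 1323
        const size_t minNotificationsByMs =
                (minFramesByMs + maxNotificationFrames - 1) / maxNotificationFrames;  // = 2
        const size_t minFrameCount = maxNotificationFrames *
                std::max(kMinNotifications, minNotificationsByMs);                    // = 2823

        // so a client asking for e.g. 2048 frames would be bumped up to 2823
        printf("maxNotificationFrames=%zu minFrameCount=%zu\n",
               maxNotificationFrames, minFrameCount);
        return 0;
    }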

    Summary:

        When the application layer calls new AudioRecord, the system establishes the input stream and creates the RecordThread. The preparation for recording is now complete; everything waits for the application layer to start recording.

    The author's skills are limited, so if there are any errors or omissions in this article, corrections from readers would be greatly appreciated!

  • Original article: https://www.cnblogs.com/pngcui/p/10016538.html