
iPhone: Combining Audio Files


I've been searching for an answer to this but can't find one - obviously ;)

Here's what I have: a list of words, each saved as a wav file on the iPhone. In my app, the user will pick words, and I want to string the chosen words together to build a sentence.

What I can't figure out is how to combine multiple wav files in sequence, so that the whole sentence ends up in a single file.

From sample code I've worked out how to play the files together as one - but the samples mix them, and I basically need them appended one after another. I tried appending the raw files and stripping the header information from all but the first, but that didn't work: the resulting file has the correct length but only plays the contents of the first file.

I think the right path is to read the files with AudioFileReadPacketData and write the data out to a new file with AudioFileWritePacketData. This is proving difficult...
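In case it helps frame the question, the skeleton I've been sketching for that approach looks roughly like this - just a sketch, assuming every source file shares the same LPCM format (it uses the AudioFileReadPackets/AudioFileWritePackets variants and skips most error handling):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: copy raw LPCM packets from several same-format wav files into one
// new wav file. Assumes every source matches the first file's data format.
static OSStatus AppendWavPackets(CFURLRef sources[], int count, CFURLRef destURL)
{
    // take the data format from the first file and reuse it for the destination
    AudioFileID first;
    OSStatus err = AudioFileOpenURL(sources[0], kAudioFileReadPermission, 0, &first);
    if (err) return err;

    AudioStreamBasicDescription asbd;
    UInt32 size = sizeof(asbd);
    AudioFileGetProperty(first, kAudioFilePropertyDataFormat, &size, &asbd);
    AudioFileClose(first);

    AudioFileID dest;
    err = AudioFileCreateWithURL(destURL, kAudioFileWAVEType, &asbd,
                                 kAudioFileFlags_EraseFile, &dest);
    if (err) return err;

    SInt64 destPacket = 0;
    for (int i = 0; i < count; ++i) {
        AudioFileID src;
        if (AudioFileOpenURL(sources[i], kAudioFileReadPermission, 0, &src)) continue;

        char buf[0x4000];                               // 16K scratch buffer
        SInt64 srcPacket = 0;
        for (;;) {
            // LPCM: mBytesPerPacket is non-zero and no packet descriptions are needed
            UInt32 ioBytes   = sizeof(buf);
            UInt32 ioPackets = sizeof(buf) / asbd.mBytesPerPacket;
            err = AudioFileReadPackets(src, false, &ioBytes, NULL, srcPacket, &ioPackets, buf);
            if (err || ioPackets == 0) break;           // error or end of file

            AudioFileWritePackets(dest, false, ioBytes, NULL, destPacket, &ioPackets, buf);
            srcPacket  += ioPackets;
            destPacket += ioPackets;
        }
        AudioFileClose(src);
    }

    AudioFileClose(dest);
    return noErr;
}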

Does anyone with experience in the audio APIs have some sample code to share?

OK - more research on the matter... It looks like the right facility is Audio Queue offline rendering. Apple provides sample code for it (AQOfflineRenderTest). The point of offline rendering is that you can attach an output buffer to the render and save it to a file. More to come as the project progresses...

OK - three days with no real progress...

I'm trying to combine three .wav files into a destination .wav file. Right now, when you run this code, only the first file gets saved to the destination.

Any ideas?

This source code uses the iPublicUtility classes provided by Apple - they come bundled with several downloadable sample projects; one such project is aurioTouch.

Here's my code (put it in a .cpp file and reference CombineAudioFiles from a normal Objective-C source file):

// standard includes
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdio>
#include <cstdlib>

// helpers
#include "CAXException.h"
#include "CAStreamBasicDescription.h"

#define kNumberOfBuffers 3
#define kMaxNumberOfFiles 3

// the application specific info we keep track of
struct AQTestInfo
{
    AudioFileID                     mAudioFile[kMaxNumberOfFiles];
    CAStreamBasicDescription        mDataFormat[kMaxNumberOfFiles];
    AudioQueueRef                   mQueue[kMaxNumberOfFiles];
    AudioQueueBufferRef             mBuffer[kNumberOfBuffers];
    UInt32                          mNumberOfAudioFiles;
    UInt32                          mCurrentAudioFile;
    UInt32                          mbufferByteSize;
    SInt64                          mCurrentPacket;
    UInt32                          mNumPacketsToRead;
    AudioStreamPacketDescription    *mPacketDescs;
    bool                            mFlushed;
    bool                            mDone;
};


#pragma mark- Helper Functions
// ***********************
// CalculateBytesForTime Utility Function

// we only use time here as a guideline
// we are really trying to get somewhere between 16K and 64K buffers, but not allocate too much if we don't need it
void CalculateBytesForTime (CAStreamBasicDescription &inDesc, UInt32 inMaxPacketSize, Float64 inSeconds, UInt32 *outBufferSize, UInt32 *outNumPackets)
{
    static const int maxBufferSize = 0x10000;   // limit size to 64K
    static const int minBufferSize = 0x4000;    // limit size to 16K

    if (inDesc.mFramesPerPacket) {
        Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
        *outBufferSize = numPacketsForTime * inMaxPacketSize;
    } else {
        // if frames per packet is zero, then the codec has no predictable packet == time
        // so we can't tailor this (we don't know how many Packets represent a time period
        // we'll just return a default buffer size
        *outBufferSize = maxBufferSize > inMaxPacketSize ? maxBufferSize : inMaxPacketSize;
    }

    // we're going to limit our size to our default
    if (*outBufferSize > maxBufferSize && *outBufferSize > inMaxPacketSize) {
        *outBufferSize = maxBufferSize;
    } else {
        // also make sure we're not too small - we don't want to go the disk for too small chunks
        if (*outBufferSize < minBufferSize) {
            *outBufferSize = minBufferSize;
        }
    }

    *outNumPackets = *outBufferSize / inMaxPacketSize;
}

#pragma mark- AQOutputCallback
// ***********************
// AudioQueueOutputCallback function used to push data into the audio queue

static void AQTestBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer) 
{
    AQTestInfo * myInfo = (AQTestInfo *)inUserData;
    if (myInfo->mDone) return;

    UInt32 numBytes;
    UInt32 nPackets = myInfo->mNumPacketsToRead;
    OSStatus result = AudioFileReadPackets(myInfo->mAudioFile[myInfo->mCurrentAudioFile],      // The audio file from which packets of audio data are to be read.
                                           false,                   // Set to true to cache the data. Otherwise, set to false.
                                           &numBytes,               // On output, a pointer to the number of bytes actually returned.
                                           myInfo->mPacketDescs,    // A pointer to an array of packet descriptions that have been allocated.
                                           myInfo->mCurrentPacket,  // The packet index of the first packet you want to be returned.
                                           &nPackets,               // On input, a pointer to the number of packets to read. On output, the number of packets actually read.
                                           inCompleteAQBuffer->mAudioData); // A pointer to user-allocated memory.
    if (result) {
        DebugMessageN1 ("Error reading from file: %d\n", (int)result);
        exit(1);
    }

    // we have some data
    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;

        result = AudioQueueEnqueueBuffer(inAQ,                                  // The audio queue that owns the audio queue buffer.
                                         inCompleteAQBuffer,                    // The audio queue buffer to add to the buffer queue.
                                         (myInfo->mPacketDescs ? nPackets : 0), // The number of packets of audio data in the inBuffer parameter. See Docs.
                                         myInfo->mPacketDescs);                 // An array of packet descriptions. Or NULL. See Docs.
        if (result) {
            DebugMessageN1 ("Error enqueuing buffer: %d\n", (int)result);
            exit(1);
        }

        myInfo->mCurrentPacket += nPackets;

    } else {
        // **** This ensures that we flush the queue when done -- ensures you get all the data out ****

        if (!myInfo->mFlushed) {
            result = AudioQueueFlush(myInfo->mQueue[myInfo->mCurrentAudioFile]);

            if (result) {
                DebugMessageN1("AudioQueueFlush failed: %d", (int)result);
                exit(1);
            }

            myInfo->mFlushed = true;
        }

        result = AudioQueueStop(myInfo->mQueue[myInfo->mCurrentAudioFile], false);
        if (result) {
            DebugMessageN1("AudioQueueStop(false) failed: %d", (int)result);
            exit(1);
        }

        // reading nPackets == 0 is our EOF condition
        myInfo->mDone = true;
    }
}


// ***********************
#pragma mark- Main Render Function

#if __cplusplus
extern "C" {
#endif

void CombineAudioFiles(CFURLRef sourceURL1, CFURLRef sourceURL2, CFURLRef sourceURL3, CFURLRef destinationURL) 
{
    // main audio queue code
    try {
        AQTestInfo myInfo;

        myInfo.mDone = false;
        myInfo.mFlushed = false;
        myInfo.mCurrentPacket = 0;
        myInfo.mCurrentAudioFile = 0;
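        // NB: nothing below ever advances mCurrentAudioFile past 0, so the queue,
        // formats and render loop all operate on the first source file only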

        // get the source file
        XThrowIfError(AudioFileOpenURL(sourceURL1, 0x01/*fsRdPerm*/, 0/*inFileTypeHint*/, &myInfo.mAudioFile[0]), "AudioFileOpen failed");
        XThrowIfError(AudioFileOpenURL(sourceURL2, 0x01/*fsRdPerm*/, 0/*inFileTypeHint*/, &myInfo.mAudioFile[1]), "AudioFileOpen failed");
        XThrowIfError(AudioFileOpenURL(sourceURL3, 0x01/*fsRdPerm*/, 0/*inFileTypeHint*/, &myInfo.mAudioFile[2]), "AudioFileOpen failed");

        UInt32 size = sizeof(myInfo.mDataFormat[myInfo.mCurrentAudioFile]);
        XThrowIfError(AudioFileGetProperty(myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyDataFormat, &size, &myInfo.mDataFormat[myInfo.mCurrentAudioFile]), "couldn't get file's data format");

        printf ("File format: "); myInfo.mDataFormat[myInfo.mCurrentAudioFile].Print();

        // create a new audio queue output
        XThrowIfError(AudioQueueNewOutput(&myInfo.mDataFormat[myInfo.mCurrentAudioFile],   // The data format of the audio to play. For linear PCM, only interleaved formats are supported.
                                          AQTestBufferCallback,     // A callback function to use with the playback audio queue.
                                          &myInfo,                  // A custom data structure for use with the callback function.
                                          CFRunLoopGetCurrent(),    // The event loop on which the callback function pointed to by the inCallbackProc parameter is to be called.
                                                                    // If you specify NULL, the callback is invoked on one of the audio queue’s internal threads.
                                          kCFRunLoopCommonModes,    // The run loop mode in which to invoke the callback function specified in the inCallbackProc parameter. 
                                          0,                        // Reserved for future use. Must be 0.
                                          &myInfo.mQueue[myInfo.mCurrentAudioFile]),       // On output, the newly created playback audio queue object.
                                          "AudioQueueNew failed");

        UInt32 bufferByteSize;

        // we need to calculate how many packets we read at a time and how big a buffer we need
        // we base this on the size of the packets in the file and an approximate duration for each buffer
        {
            bool isFormatVBR = (myInfo.mDataFormat[myInfo.mCurrentAudioFile].mBytesPerPacket == 0 || myInfo.mDataFormat[myInfo.mCurrentAudioFile].mFramesPerPacket == 0);

            // first check to see what the max size of a packet is - if it is bigger
            // than our allocation default size, that needs to become larger
            UInt32 maxPacketSize;
            size = sizeof(maxPacketSize);
            XThrowIfError(AudioFileGetProperty(myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize), "couldn't get file's max packet size");

            // adjust buffer size to represent about a second of audio based on this format
            CalculateBytesForTime(myInfo.mDataFormat[myInfo.mCurrentAudioFile], maxPacketSize, 1.0/*seconds*/, &bufferByteSize, &myInfo.mNumPacketsToRead);

            if (isFormatVBR) {
                myInfo.mPacketDescs = new AudioStreamPacketDescription [myInfo.mNumPacketsToRead];
            } else {
                myInfo.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
            }

            printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)bufferByteSize, (int)myInfo.mNumPacketsToRead);
        }

        // if the file has a magic cookie, we should get it and set it on the AQ
        size = sizeof(UInt32);
        OSStatus result = AudioFileGetPropertyInfo (myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyMagicCookieData, &size, NULL);

        if (!result && size) {
            char* cookie = new char [size];     
            XThrowIfError (AudioFileGetProperty (myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyMagicCookieData, &size, cookie), "get cookie from file");
            XThrowIfError (AudioQueueSetProperty(myInfo.mQueue[myInfo.mCurrentAudioFile], kAudioQueueProperty_MagicCookie, cookie, size), "set cookie on queue");
            delete [] cookie;
        }

        // channel layout?
        OSStatus err = AudioFileGetPropertyInfo(myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyChannelLayout, &size, NULL);
        AudioChannelLayout *acl = NULL;
        if (err == noErr && size > 0) {
            acl = (AudioChannelLayout *)malloc(size);
            XThrowIfError(AudioFileGetProperty(myInfo.mAudioFile[myInfo.mCurrentAudioFile], kAudioFilePropertyChannelLayout, &size, acl), "get audio file's channel layout");
            XThrowIfError(AudioQueueSetProperty(myInfo.mQueue[myInfo.mCurrentAudioFile], kAudioQueueProperty_ChannelLayout, acl, size), "set channel layout on queue");
        }

        //allocate the input read buffer
        XThrowIfError(AudioQueueAllocateBuffer(myInfo.mQueue[myInfo.mCurrentAudioFile], bufferByteSize, &myInfo.mBuffer[myInfo.mCurrentAudioFile]), "AudioQueueAllocateBuffer");

        // prepare a canonical interleaved capture format
        CAStreamBasicDescription captureFormat;
        captureFormat.mSampleRate = myInfo.mDataFormat[myInfo.mCurrentAudioFile].mSampleRate;
        captureFormat.SetAUCanonical(myInfo.mDataFormat[myInfo.mCurrentAudioFile].mChannelsPerFrame, true); // interleaved
        XThrowIfError(AudioQueueSetOfflineRenderFormat(myInfo.mQueue[myInfo.mCurrentAudioFile], &captureFormat, acl), "set offline render format");         

        ExtAudioFileRef captureFile;

        // prepare a 16-bit int file format, sample channel count and sample rate
        CAStreamBasicDescription dstFormat;
        dstFormat.mSampleRate = myInfo.mDataFormat[myInfo.mCurrentAudioFile].mSampleRate;
        dstFormat.mChannelsPerFrame = myInfo.mDataFormat[myInfo.mCurrentAudioFile].mChannelsPerFrame;
        dstFormat.mFormatID = kAudioFormatLinearPCM;
        dstFormat.mFormatFlags = kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger; // little-endian
        dstFormat.mBitsPerChannel = 16;
        dstFormat.mBytesPerPacket = dstFormat.mBytesPerFrame = 2 * dstFormat.mChannelsPerFrame;
        dstFormat.mFramesPerPacket = 1;

        // create the capture file
        XThrowIfError(ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &dstFormat, acl, kAudioFileFlags_EraseFile, &captureFile), "ExtAudioFileCreateWithURL");

        // set the capture file's client format to be the canonical format from the queue
        XThrowIfError(ExtAudioFileSetProperty(captureFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &captureFormat), "set ExtAudioFile client format");

        // allocate the capture buffer, just keep it at half the size of the enqueue buffer
        // we don't ever want to pull any faster than we can push data in for render
        // this 2:1 ratio keeps the AQ Offline Render happy
        const UInt32 captureBufferByteSize = bufferByteSize / 2;

        AudioQueueBufferRef captureBuffer;
        AudioBufferList captureABL;

        XThrowIfError(AudioQueueAllocateBuffer(myInfo.mQueue[myInfo.mCurrentAudioFile], captureBufferByteSize, &captureBuffer), "AudioQueueAllocateBuffer");

        captureABL.mNumberBuffers = 1;
        captureABL.mBuffers[0].mData = captureBuffer->mAudioData;
        captureABL.mBuffers[0].mNumberChannels = captureFormat.mChannelsPerFrame;

        // lets start playing now - stop is called in the AQTestBufferCallback when there's
        // no more to read from the file
        XThrowIfError(AudioQueueStart(myInfo.mQueue[myInfo.mCurrentAudioFile], NULL), "AudioQueueStart failed");

        AudioTimeStamp ts;
        ts.mFlags = kAudioTimeStampSampleTimeValid;
        ts.mSampleTime = 0;

        // we need to call this once asking for 0 frames
        XThrowIfError(AudioQueueOfflineRender(myInfo.mQueue[myInfo.mCurrentAudioFile], &ts, captureBuffer, 0), "AudioQueueOfflineRender");

        // we need to enqueue a buffer after the queue has started
        AQTestBufferCallback(&myInfo, myInfo.mQueue[myInfo.mCurrentAudioFile], myInfo.mBuffer[myInfo.mCurrentAudioFile]);

        while (true) {
            UInt32 reqFrames = captureBufferByteSize / captureFormat.mBytesPerFrame;

            XThrowIfError(AudioQueueOfflineRender(myInfo.mQueue[myInfo.mCurrentAudioFile], &ts, captureBuffer, reqFrames), "AudioQueueOfflineRender");

            captureABL.mBuffers[0].mData = captureBuffer->mAudioData;
            captureABL.mBuffers[0].mDataByteSize = captureBuffer->mAudioDataByteSize;
            UInt32 writeFrames = captureABL.mBuffers[0].mDataByteSize / captureFormat.mBytesPerFrame;

            printf("t = %.f: AudioQueueOfflineRender:  req %d fr/%d bytes, got %d fr/%d bytes\n", ts.mSampleTime, (int)reqFrames, (int)captureBufferByteSize, (int)writeFrames, (int)captureABL.mBuffers[0].mDataByteSize);

            XThrowIfError(ExtAudioFileWrite(captureFile, writeFrames, &captureABL), "ExtAudioFileWrite");

            if (myInfo.mFlushed) break;

            ts.mSampleTime += writeFrames;
        }

        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 1, false);

        XThrowIfError(AudioQueueDispose(myInfo.mQueue[myInfo.mCurrentAudioFile], true), "AudioQueueDispose(true) failed");
        XThrowIfError(AudioFileClose(myInfo.mAudioFile[myInfo.mCurrentAudioFile]), "AudioFileClose failed");
        XThrowIfError(ExtAudioFileDispose(captureFile), "ExtAudioFileDispose failed");

        if (myInfo.mPacketDescs) delete [] myInfo.mPacketDescs;
        if (acl) free(acl);
    }
    catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }

    return;
}


#if __cplusplus
}
#endif
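For reference, a call from the app side looks something like this (the paths below are made-up placeholders; note that the code above writes a CAF file via kAudioFileCAFType, so name the destination accordingly). The same calls work from an Objective-C source file, where an NSURL * can be cast to CFURLRef:

// hypothetical usage - the .wav/.caf paths are placeholders
CFURLRef src1 = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/tmp/word1.wav"), kCFURLPOSIXPathStyle, false);
CFURLRef src2 = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/tmp/word2.wav"), kCFURLPOSIXPathStyle, false);
CFURLRef src3 = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/tmp/word3.wav"), kCFURLPOSIXPathStyle, false);
CFURLRef dst  = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/tmp/sentence.caf"), kCFURLPOSIXPathStyle, false);

CombineAudioFiles(src1, src2, src3, dst);

CFRelease(src1); CFRelease(src2); CFRelease(src3); CFRelease(dst);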

1> iPhone Guy..:

OK - found the answer to this one - though it only works for wav files...

I used NSData to concatenate each file into one master data object, then rewrote the header (the first 44 bytes) according to the wav file spec.

This approach works well... The most complicated part of the process was rewriting the header information, but once that was figured out the whole thing worked nicely.
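Here is the same idea in plain C++ (in the app I did the concatenation with NSData, but the byte layout is identical) - a minimal sketch assuming canonical 44-byte-header PCM wav files that all share the same format:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

// Sketch: append the sample data of several canonical PCM wav files (44-byte
// headers, identical formats), then patch the two size fields in the header.
// Real-world wavs can carry extra chunks, so a robust version should walk the
// chunk list instead of assuming a fixed 44-byte layout.
static bool ConcatWavFiles(const std::vector<std::string> &sources, const std::string &destination)
{
    const size_t kHeaderSize = 44;
    std::vector<uint8_t> out;

    for (size_t i = 0; i < sources.size(); ++i) {
        FILE *f = fopen(sources[i].c_str(), "rb");
        if (!f) return false;
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        fseek(f, 0, SEEK_SET);

        std::vector<uint8_t> bytes(len);
        fread(&bytes[0], 1, len, f);
        fclose(f);

        if (i == 0)
            out = bytes;                         // keep the first file's header
        else                                     // append data only, skip the header
            out.insert(out.end(), bytes.begin() + kHeaderSize, bytes.end());
    }

    // patch the RIFF chunk size (bytes 4-7) and data chunk size (bytes 40-43);
    // both are little-endian, which matches the iPhone's ARM byte order
    uint32_t riffSize = (uint32_t)(out.size() - 8);
    uint32_t dataSize = (uint32_t)(out.size() - kHeaderSize);
    memcpy(&out[4],  &riffSize, 4);
    memcpy(&out[40], &dataSize, 4);

    FILE *dst = fopen(destination.c_str(), "wb");
    if (!dst) return false;
    fwrite(&out[0], 1, out.size(), dst);
    fclose(dst);
    return true;
}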



