Live555 Streaming from a live source


    https://www.mail-archive.com/live-devel@lists.live555.com/msg05506.html
    -----ask--------------------------------
    Hi, 
    
    We are trying to stream from a live source with Live555. 
    
    We implemented our own DeviceSource class. In this class we implement
    doGetNextFrame in the following (logical) way; we have removed the
    implementation details so you can see the idea.
    
    If no frame is available, we do the following:

        nextTask() = envir().taskScheduler().scheduleDelayedTask(30000,
            (TaskFunc*)nextTime, this);

    If a frame is available, we do the following:

        if (fFrameSize < fMaxSize) {
            memcpy(fTo, Buffer_getUserPtr(hEncBuf), fFrameSize); // copy the frame to Live555
            nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
                (TaskFunc*)FramedSource::afterGetting, this);
        } else {
            // What should we do here? (We do not understand what to do in this case.)
        }
    
    As you can see, we would like to feed Live555 frame by frame from the
    live source. However, after a few calls to doGetNextFrame, fMaxSize
    becomes smaller than fFrameSize and the application ends up in a
    deadlocked state.
    
    We do not understand what we should do in order to eliminate this state.
    
    We could give part of a frame to Live555, but then we would no longer be
    feeding the library frame by frame. (We could build a byte buffer between
    the live source and Live555, but we are not sure that is the right way.)
    
    Please let us know the preferred way of handling this issue.
    
    Thanks,
    Sagi
    
    -----ans--------------------------------
    This should be "<=", not "<".
    
    Also, I hope you are setting "fFrameSize" properly before you get to this "if" statement. 
    You can probably replace this last statement with:
            FramedSource::afterGetting(this);
    
    which is more efficient (and will not cause infinite recursion, because you're reading from a live source).
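
    For reference, a minimal sketch of such a doGetNextFrame(), along the lines of
    the "DeviceSource" skeleton that ships with the library (liveMedia/DeviceSource.cpp).
    frameIsAvailable(), currentFrameSize() and nextTime() stand in for the poster's
    device/polling API, and hEncBuf / Buffer_getUserPtr() come from the original
    post; none of these are library calls:

        void CapDeviceSource::doGetNextFrame() {
          if (!frameIsAvailable()) {
            // No encoded data yet: poll again in 30 ms (the poster's approach; an
            // event trigger would be preferable for a real device).
            nextTask() = envir().taskScheduler().scheduleDelayedTask(30000,
                (TaskFunc*)nextTime, this);
            return;
          }

          unsigned frameSize = currentFrameSize(); // size of the encoded frame
          if (frameSize <= fMaxSize) {
            fFrameSize = frameSize;
            fNumTruncatedBytes = 0;
          } else {
            // Downstream buffer too small: deliver what fits, report the rest as truncated.
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = frameSize - fMaxSize;
          }
          memcpy(fTo, Buffer_getUserPtr(hEncBuf), fFrameSize);
          gettimeofday(&fPresentationTime, NULL); // live source: stamp with 'wall clock' time

          // Safe to call directly (rather than via a delayed task) because the data
          // arrived asynchronously from a live source:
          FramedSource::afterGetting(this);
        }

    With a suitable downstream object (see the "*DiscreteFramer" discussion below),
    fMaxSize should normally stay large enough that the truncation branch is never taken.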

    -----ask--------------------------------
    Hi Ross, 
    
    We are setting fFrameSize to the size of the frame before the posted code.
    I am familiar with fNumTruncatedBytes, but as you say the data will be
    dropped, and we do not want that to happen.
    I am not sure I understand your last statement: "make sure that your
    downstream object always has enough buffer space to avoid truncation - i.e.,
    so that fMaxSize is always >= fFrameSize". How can we ensure that? The
    Live555 library requests exactly 150,000 bytes. We feed it frame by frame,
    and on the last frame the sizes do not match exactly, so we end up in the
    situation where fMaxSize < fFrameSize.
    
    If I understand you correctly, we have two options:

    1. Feed Live555 frame by frame and, on the last frame, truncate the frame
    and lose the data.
    2. Keep an internal buffer inside our DeviceSource so that we can give
    Live555 part of a frame on the last call. That means Live555 would have to
    handle frame recognition, and in that scenario I do not understand what
    fPresentationTime should be, because we would be sending only part of a
    frame to the Live555 library, and the next call would send the following
    part of the frame.
    
    What is the preferred course of action?
    
    Thanks,
    Sagi

    -----ans--------------------------------
    This is true only for the "StreamParser" class, which you should *not* be using, because you are delivering discrete frames - rather than a byte stream - to your downstream object. In particular, you should be using a "*DiscreteFramer" object downstream, and not a "*Framer". What objects (classes) do you have 'downstream' from your input device, and what type of data (i.e., what codec) is your "DeviceSource" object trying to deliver? (This may help identify the problem.)

    -----ask--------------------------------
    Hi Ross, 
    
    OK, we were using the StreamParser class, and that is probably what caused
    the problem we had.
    This is our Device class 
    
    class CapDeviceSource: public FramedSource {
    
    We are trying to stream MPEG-4 video (later on we will move to H.264).
    
    What is the best class to derive from, instead of FramedSource, in order to
    use a DiscreteFramer downstream object?
     
    If I understood you correctly, it is MPEG4VideoStreamDiscreteFramer, and we
    should implement the function doGetNextFrame; but looking at the code we
    thought it would be best to implement the function afterGettingFrame1, yet
    it is not virtual, so we are probably missing something.
    
    Thanks,
    Sagi

    -----ans--------------------------------
    Provided that your source object delivers one frame at a time, you should be able to feed it directly into a "MPEG4VideoStreamDiscreteFramer", with no modifications.
    No, there's nothing more for you to implement; just use "MPEG4VideoStreamDiscreteFramer" as is. (For H.264, however, it'll be a bit more complicated; you will need to implement your own subclass of "H264VideoStreamFramer" for that.)
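
    For illustration, a sketch of how that wiring might look inside an
    "OnDemandServerMediaSubsession" subclass. "CapDeviceMediaSubsession" and the
    CapDeviceSource::createNew() factory are made-up names following the poster's
    class; the framer and sink calls are standard library classes:

        FramedSource* CapDeviceMediaSubsession::createNewStreamSource(
            unsigned /*clientSessionId*/, unsigned& estBitrate) {
          estBitrate = 500; // kbps; a rough estimate for the encoder output
          FramedSource* source = CapDeviceSource::createNew(envir());
          // The device delivers one complete frame at a time, so wrap it in a
          // *Discrete* framer rather than the byte-stream parsing framer:
          return MPEG4VideoStreamDiscreteFramer::createNew(envir(), source);
        }

        RTPSink* CapDeviceMediaSubsession::createNewRTPSink(
            Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
            FramedSource* /*inputSource*/) {
          return MPEG4ESVideoRTPSink::createNew(envir(), rtpGroupsock,
                                                rtpPayloadTypeIfDynamic);
        }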

    -----ask--------------------------------
    Hi Ross, 
    
    Thanks for the hint; we now understand our problem. We were using
    MPEG4VideoStreamFramer instead of MPEG4VideoStreamDiscreteFramer. We changed
    this and now it looks much better.
    Again, thank you very much for your great support and library. 
    
    For the next stage we would like to use the H.264 codec, so I think we should
    write our own H264VideoStreamDiscreteFramer; is that correct?
    
    Thanks,
    Sagi

    -----ans--------------------------------
    Yes, you need to write your own subclass of "H264VideoStreamFramer"; see http://www.live555.com/liveMedia/faq.html#h264-streaming
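
    (Current releases of the library also ship an "H264VideoStreamDiscreteFramer",
    which plays exactly this role for discrete H.264 NAL units.) As a sketch, and
    assuming the device delivers one NAL unit at a time without the 0x00000001
    start code, the H.264 wiring mirrors the MPEG-4 case above; the subsession and
    source class names are again illustrative:

        FramedSource* H264DeviceMediaSubsession::createNewStreamSource(
            unsigned /*clientSessionId*/, unsigned& estBitrate) {
          estBitrate = 1000; // kbps; a rough estimate
          FramedSource* source = CapDeviceSource::createNew(envir());
          return H264VideoStreamDiscreteFramer::createNew(envir(), source);
        }

        RTPSink* H264DeviceMediaSubsession::createNewRTPSink(
            Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
            FramedSource* /*inputSource*/) {
          return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                             rtpPayloadTypeIfDynamic);
        }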

    -----ask--------------------------------
    Hi Ross, 
    
    We are checking audio stream support in Live555, and we would like to know
    whether we can stream the AAC-LC and/or AAC-HE codecs through the library.
    Thanks,
    Sagi

    -----ans--------------------------------
    Yes, you can do so using a "MPEG4GenericRTPSink", created with appropriate parameters to specify AAC audio. (Note, for example, how "ADTSAudioFileServerMediaSubsession" streams AAC audio that comes from an ADTS-format file.)
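
    For illustration, a sketch modeled on what "ADTSAudioFileServerMediaSubsession"
    does; "AACDeviceMediaSubsession" is an illustrative name, and the sampling
    frequency, channel count and AudioSpecificConfig string below are example
    values for AAC-LC, 44.1 kHz stereo:

        RTPSink* AACDeviceMediaSubsession::createNewRTPSink(
            Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
            FramedSource* /*inputSource*/) {
          // The "AAC-hbr" mode plus the AudioSpecificConfig (as a hex string) tell
          // clients how to decode the stream:
          return MPEG4GenericRTPSink::createNew(envir(), rtpGroupsock,
                                                rtpPayloadTypeIfDynamic,
                                                44100,           // sampling frequency
                                                "audio", "AAC-hbr",
                                                "1210",          // AudioSpecificConfig
                                                2);              // number of channels
        }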

    -----ask--------------------------------
    Hi Ross,
    
    We have implemented a stream for AAC audio and it works great; we have also
    implemented a stream for H.264 and it also works great. We would now like to
    combine these two streams under one name.
    Currently we have one stream called h264Video and another stream called
    aacAudio (separate streams, i.e. separate DESCRIBEs). We would like to have
    one stream called audioVideo whose DESCRIBE sets up two subsessions, one for
    the video and one for the audio.
    Can you please let us know the best way to implement this?
    Thanks,
    Sagi

    -----ask--------------------------------
    Hi Ross, 
    
    We successfully combined the two streams into one stream and it works great;
    the audio and video are at the same URL. The audio and video appear to be
    synchronized, but we are not sure whether we need to handle this in some way
    (other than setting the presentation time) or whether it is all handled in
    your library. The only thing we currently do is update the presentation time
    for the audio and for the video. We would appreciate your input on this
    matter.
    Thanks,
    Sagi

    -----ans--------------------------------
    Good. As you figured out, you can do this just by creating a single "ServerMediaSession" object, and adding two separate "ServerMediaSubsessions" to it.
    Yes, if the presentation times of the two streams are in sync, and aligned with 'wall clock' time (i.e., the time that you'd get by calling "gettimeofday()"), and you are using RTCP (which is implemented by default in "OnDemandServerMediaSubsession"), then you will see A/V synchronization in standards-compliant clients.
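
    For reference, a sketch of that combination, with "env" and "rtspServer" set up
    as in the library's test programs and the subsession classes from the sketches
    above (the names remain illustrative):

        ServerMediaSession* sms = ServerMediaSession::createNew(*env,
            "audioVideo", "audioVideo", "combined H.264 + AAC stream");
        sms->addSubsession(H264DeviceMediaSubsession::createNew(*env));
        sms->addSubsession(AACDeviceMediaSubsession::createNew(*env));
        rtspServer->addServerMediaSession(sms);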

    -----ask--------------------------------
    How are the presentation times of two streams synchronized?
    I have to synchronize an MPEG-4 ES and a WAV file. I am able to send the two
    streams together by creating a single ServerMediaSession and adding two
    separate ServerMediaSubsessions, but they are not synchronized.
    In the case of MPEG-4 ES video, gettimeofday() is called when the constructor
    of MPEGVideoStreamFramer runs, and in the case of the WAV file, in
    WAVAudioFileSource::doGetNextFrame(). I think this is why the video and audio
    are not synchronized. So, in this case, how should I synchronize the audio
    and video?
    Regards,
    Nisha

    -----ans--------------------------------
    > How are the presentation times of two streams synchronized?
    Please read the FAQ!
    You *must* set accurate "fPresentationTime" values for each frame of each of your sources. These values - and only these values - are what are used for synchronization. If the "fPresentationTime" values are not accurate - and synchronized - at the server, then they cannot possibly become synchronized at a client.
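
    As a sketch of what that means for the WAV + MPEG-4 case above: take one
    wall-clock reference when streaming starts and derive every frame's
    fPresentationTime from that same clock in both sources, e.g. in the audio
    source (fStreamStartTime, fFirstFrame, fSamplesDelivered and
    fSamplingFrequency are illustrative member variables, not library fields):

        // In the audio source's frame-delivery routine:
        if (fFirstFrame) {
          gettimeofday(&fStreamStartTime, NULL); // common wall-clock reference
          fPresentationTime = fStreamStartTime;
          fFirstFrame = False;
        } else {
          // Advance by the media time delivered so far (samples / sampling rate),
          // so these timestamps stay aligned with the video's wall-clock timestamps.
          u_int64_t uSecs = (u_int64_t)fSamplesDelivered * 1000000 / fSamplingFrequency;
          fPresentationTime.tv_sec = fStreamStartTime.tv_sec + uSecs / 1000000;
          fPresentationTime.tv_usec = fStreamStartTime.tv_usec + uSecs % 1000000;
          if (fPresentationTime.tv_usec >= 1000000) {
            fPresentationTime.tv_usec -= 1000000;
            ++fPresentationTime.tv_sec;
          }
        }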