• (10) Writing a UVC Driver



    title: Writing a UVC Driver
    date: 2019/4/23 20:20:00
    toc: true

    Writing a UVC driver

    To be honest, I did not fully understand everything here; if a particular function needs to be studied in depth later, refer to "Linux摄像头驱动2——UVC".

    For more reference material, go back to 01-V4L2学习流程.md.

    Flow overview

    A quick review of how the code is structured (a minimal skeleton sketch follows this list):

    1. Construct a usb_driver
    2. Fill it in:
       .id_table:
       .probe:
            2.1 Allocate a video_device: video_device_alloc
            2.2 Set it up:
               .fops
               .ioctl_ops (11 entries have to be provided)
               If the kernel's buffer helpers are used, also construct a videobuf_queue_ops
            2.3 Register it: video_register_device
    3. Register the driver: usb_register
    
    4. Implement the 11 ioctl functions
    5. Implement the stream-on / stream-off functions
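
    A minimal skeleton of steps 1-3, just to show how the pieces plug together. This is my own sketch, not the course code; myuvc_fops and myuvc_ioctl_ops are the tables sketched later in the "11 ioctl functions" section, and VFL_TYPE_GRABBER is the 2.x/3.x name of the device type.

    #include <linux/module.h>
    #include <linux/usb.h>
    #include <media/v4l2-dev.h>

    static struct video_device *myuvc_vdev;

    /* match any UVC VideoControl / VideoStreaming interface */
    static const struct usb_device_id myuvc_ids[] = {
        { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, 0) },  /* VideoControl */
        { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 2, 0) },  /* VideoStreaming */
        { }
    };

    static int myuvc_probe(struct usb_interface *intf, const struct usb_device_id *id)
    {
        /* 2.1 allocate a video_device */
        myuvc_vdev = video_device_alloc();

        /* 2.2 set it up: fops, the 11 ioctls, release */
        myuvc_vdev->fops      = &myuvc_fops;       /* sketched later */
        myuvc_vdev->ioctl_ops = &myuvc_ioctl_ops;  /* sketched later */
        myuvc_vdev->release   = video_device_release;

        /* 2.3 register it: /dev/videoX appears after this */
        return video_register_device(myuvc_vdev, VFL_TYPE_GRABBER, -1);
    }

    static void myuvc_disconnect(struct usb_interface *intf)
    {
        video_unregister_device(myuvc_vdev);
    }

    static struct usb_driver myuvc_driver = {
        .name       = "myuvc",
        .probe      = myuvc_probe,
        .disconnect = myuvc_disconnect,
        .id_table   = myuvc_ids,
    };

    /* 3. register the usb_driver */
    static int __init myuvc_init(void)
    {
        return usb_register(&myuvc_driver);
    }

    static void __exit myuvc_exit(void)
    {
        usb_deregister(&myuvc_driver);
    }

    module_init(myuvc_init);
    module_exit(myuvc_exit);
    MODULE_LICENSE("GPL");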
    
    ===============================================================================
    1. open
    2. Query whether this is a video device
    3. Enumerate the supported formats: the app indexes the formats by number; we reject any index >= 1, i.e. we report exactly 1 supported format
    4. Return the current format; a global variable holds it:
    	static struct v4l2_format myuvc_format; /* current format */
    5. Try (test) a format
    6. Set a format
    	This calls the try-format code of step 5 and then just copies the result into the global current format; nothing is sent to the hardware yet
    7. Request buffers; a queue management structure is constructed:
    	struct myuvc_queue {
    		void *mem;                      /* memory returned by vmalloc_32 */
    		int count;                      /* number of buffers actually allocated */
    		int buf_size;                   /* size of each buffer, roughly one frame = pixels * bytes-per-pixel */
    		struct myuvc_buffer buffer[32]; /* at most 32 buffers can be requested */
    		struct list_head mainqueue;     /* consumed by the APP */
    		struct list_head irqqueue;      /* filled by the low-level driver */
    	};
    	
    	struct myuvc_buffer {
    		struct v4l2_buffer buf;
    		int state;               /* buffer state; arguably this could be merged with vma_use_count below */
    		int vma_use_count;       /* whether the buffer has been mmap'ed */
    		wait_queue_head_t wait;  /* the APP sleeps here when it reads a buffer that has no data yet */
    		struct list_head stream; /* list node linking this buffer into mainqueue (the APP side) */
    		struct list_head irq;    /* list node linking this buffer into irqqueue (the driver side) */
    	};
    	
    	
    
    	myuvc_free_buffers                       /* free any buffers allocated previously */
    	mem = vmalloc_32(.........)              /* allocate the memory */
    	INIT_LIST_HEAD(&myuvc_queue.mainqueue);  /* initialise the queues */
    	INIT_LIST_HEAD(&myuvc_queue.irqqueue);
    	
    	for (i = 0; i < nbuffers; ++i) {
    		myuvc_queue.buffer[i].buf.index = i;              /* index */
    		myuvc_queue.buffer[i].buf.m.offset = i * bufsize; /* offset */
    		myuvc_queue.buffer[i].state     = VIDEOBUF_IDLE;  /* state */
    		...
    		init_waitqueue_head(&myuvc_queue.buffer[i].wait); /* per-buffer wait queue */
    	}
    	myuvc_queue.mem = mem;          /* start address of the whole allocation */
        myuvc_queue.count = nbuffers;   /* how many buffers; capped at 32 here */
        myuvc_queue.buf_size = bufsize; /* size of each buffer */
    	
    
    7.1 In other words:
    	-[buf0]-[buf1]-----[buf31]
    	 - v4l2_buffer
    	 - vma_use_count : whether the buffer has been mmap'ed
    	 - state
    	 - wait    /* the APP sleeps here when it reads a buffer that has no data yet */
    	 - stream
    	 - irq
    
    8. Query buffer state
    	Use myuvc_queue.buffer[v4l2_buf->index].vma_use_count and .state to update that buffer's flags
    9. Put a buffer onto the queues. Why there are two queues is best seen in the instructor's diagram; in the initial state this gets called once per buffer
    	1. Update the buffer state
    	2. Put the buffer onto the queues; there are two lists:
    		mainqueue, used by the app
    		irqqueue,  used by the driver
    		/* 2. add to the 2 queues */
    		/* queue 1: for the APP
    		*
    		* once a buffer has data, the APP takes it from mainqueue
    		*/
    		list_add_tail(&buf->stream, &myuvc_queue.mainqueue);
    
    		/* queue 2: for the code that produces data
    		* when data is captured, the first buffer is taken from irqqueue and the data stored into it
    		*/
    		list_add_tail(&buf->irq, &myuvc_queue.irqqueue);
    
    10. Take a buffer off the queue; this is the application asking for data
    	// find the first node through the queue head
    	buf = list_first_entry(&myuvc_queue.mainqueue, struct myuvc_buffer, stream);
    	list_del(&buf->stream);
    
    11. mmap: when the application calls mmap it passes an offset, and that offset is used to find the corresponding buffer
    	
    
    12. poll
    	 buf = list_first_entry(&myuvc_queue.mainqueue, struct myuvc_buffer, stream); /* first queued buffer */
    	 poll_wait(file, &buf->wait, wait); /* wait for data */
    
    (An app-side sketch of this whole open-to-poll sequence is shown right below.)
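
    To see these steps from the application's side, here is a rough userspace sketch of the usual V4L2 capture sequence. This is my own sketch: error handling is omitted, and /dev/video0 and the buffer count of 4 are arbitrary assumptions, not values from the course code.

    #include <fcntl.h>
    #include <string.h>
    #include <poll.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);                   /* 1. open */

        struct v4l2_capability cap;
        ioctl(fd, VIDIOC_QUERYCAP, &cap);                        /* 2. is it a capture device? */

        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.width  = 640;
        fmt.fmt.pix.height = 480;
        ioctl(fd, VIDIOC_S_FMT, &fmt);                           /* 5./6. try + set format */

        struct v4l2_requestbuffers req = { .count = 4,
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_MMAP };
        ioctl(fd, VIDIOC_REQBUFS, &req);                         /* 7. request buffers */

        void *mem[4];
        for (int i = 0; i < (int)req.count; i++) {
            struct v4l2_buffer buf = { .index = i,
                .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_MMAP };
            ioctl(fd, VIDIOC_QUERYBUF, &buf);                    /* 8. query buffer (get its offset) */
            mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);         /* 11. mmap the buffer */
            ioctl(fd, VIDIOC_QBUF, &buf);                        /* 9. queue it */
        }

        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);                       /* start streaming */

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        poll(&pfd, 1, -1);                                       /* 12. wait until a frame is ready */

        struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                   .memory = V4L2_MEMORY_MMAP };
        ioctl(fd, VIDIOC_DQBUF, &buf);                           /* 10. dequeue a filled buffer */
        /* process mem[buf.index], buf.bytesused bytes, then put the buffer back */
        ioctl(fd, VIDIOC_QBUF, &buf);
        return 0;
    }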
    
    
    A URB records everything about one complete transfer: how much is moved each time, how many times, where the data goes, and so on.
    URB initialisation
    	1. Allocate the usb_buffers; this is the actual memory:
    	myuvc_queue.urb_buffer[i] = usb_buffer_alloc(... &myuvc_queue.urb_dma[i]); /* returns both the virtual and the DMA address */
    	2. Allocate the urb itself; this is the management structure for that buffer:
    	myuvc_queue.urb[i] = usb_alloc_urb
    	3. Set up the urb
    		1. Naturally, bind the buffer to the urb:
    		urb->transfer_buffer = myuvc_queue.urb_buffer[i]; /* virtual address */
    		urb->transfer_dma = myuvc_queue.urb_dma[i];       /* DMA address */
    		urb->complete = myuvc_video_complete;             /* completion handler */
    		2. Other settings, e.g. the endpoint; each urb targets one endpoint, and here several urbs share one endpoint:
    			urb->pipe = usb_rcvisocpipe(myuvc_udev, myuvc_bEndpointAddress);
    URB submission
    	for (i = 0; i < MYUVC_URBS; ++i)
    		usb_submit_urb(myuvc_queue.urb[i], GFP_KERNEL);
    
    URB completion handler
    	1. Check the status
    	2. If myuvc_queue.irqqueue is not empty there is a free buffer to store data into:
    		if (!list_empty(&myuvc_queue.irqqueue))
    		// take the head of the queue
    		buf = list_first_entry(&myuvc_queue.irqqueue, struct myuvc_buffer, irq);
    		A complete URB is made up of several packets, so the data is merged here:
    		for (i = 0; i < urb->number_of_packets; ++i)
    		{
    			src  = urb->transfer_buffer + urb->iso_frame_desc[i].offset;     /* source of each packet */
    			dest = myuvc_queue.mem + buf->buf.m.offset + buf->buf.bytesused; /* destination of the data */
    			..
    			memcpy(dest, src + src[0], nbytes); /* copy into the buffer */
    		}
    		// once the frame is complete, remove this buffer from the queue and wake the app;
    		// after processing, the app is expected to queue the buffer again
    		list_del(&buf->irq);
    		wake_up(&buf->wait);
    	3. Resubmit the urb and keep looping
    		usb_submit_urb
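
    The urb, urb_buffer, urb_dma and urb_size fields used above are not part of the struct myuvc_queue shown in step 7; presumably the real structure also carries this URB bookkeeping, roughly like below. MYUVC_URBS = 5 is my assumption, mirroring the uvc driver's UVC_URBS.

    #define MYUVC_URBS 5   /* assumption: same value as the uvc driver's UVC_URBS */

    struct myuvc_queue {
        /* ... the fields shown in step 7 ... */

        /* URB bookkeeping used by the init/completion code above */
        struct urb  *urb[MYUVC_URBS];        /* the urb management structures */
        char        *urb_buffer[MYUVC_URBS]; /* data buffers from usb_buffer_alloc */
        dma_addr_t   urb_dma[MYUVC_URBS];    /* their DMA addresses */
        unsigned int urb_size;               /* size of each urb buffer */
    };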
    		
    
    
    13. vidioc_streamon: start streaming
    	1. Set the parameters
    		If we just set whatever we like, the camera may not support that format and the later data parsing may go wrong.
    		So we first send a trial set of parameters; the camera stores them and corrects them according to its own capabilities (see the comments around ctrl->bmHint = 1 in the code).
    		usb_control_msg(... data packed here, VS_PROBE_CONTROL): this goes to the VideoStreaming interface; VS_PROBE_CONTROL only negotiates (probes), it does not commit anything.
    		Then read the (possibly adjusted) settings back,
    		then commit them:
    		usb_control_msg(... data packed here, VS_COMMIT_CONTROL): VS_PROBE_CONTROL means "probe the parameters", VS_COMMIT_CONTROL means "commit the parameters".
    
    		Also select the interface setting that gives enough bandwidth:
    		/* an interface has several alternate settings; get the current interface numbers */
    			myuvc_control_intf = intf->cur_altsetting->desc.bInterfaceNumber;
    			myuvc_streaming_intf = intf->cur_altsetting->desc.bInterfaceNumber;
    		/* choose alternate setting 8 */
    		usb_set_interface(myuvc_udev, myuvc_streaming_intf, 8);
    
    

    The 11 ioctl functions

    Let's implement the ioctl functions first, following how drivers/media/usb/uvc/uvc_driver.c hooks up uvc_ioctl_ops; video_usercopy is what copies the argument from user space into the kernel and then calls the actual handler.

    uvc_register_video
    	vdev->fops = &uvc_fops;				
    	vdev->ioctl_ops = &uvc_ioctl_ops;	/* linux-4.13.1/drivers/media/usb/uvc/uvc_v4l2.c */
    
    uvc_fops.unlocked_ioctl
    	 video_usercopy(file, cmd, arg, __video_do_ioctl);
    			vdev->ioctl_ops = uvc_ioctl_ops
    			
    // On a 3.x kernel it looks like this instead, i.e. uvc_v4l2_do_ioctl is what finally runs:
    vdev->fops = &uvc_fops;
    	uvc_v4l2_ioctl
    		video_usercopy(file, cmd, arg, uvc_v4l2_do_ioctl)
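
    For our own driver the corresponding wiring would look roughly like this. This is my sketch, not the course code: the member names are the 2.6/3.x V4L2 ones (on older kernels the member is .ioctl rather than .unlocked_ioctl), and myuvc_open/myuvc_close are kept trivial.

    static int myuvc_open(struct file *file)
    {
        return 0;
    }

    static int myuvc_close(struct file *file)
    {
        /* make sure streaming is stopped when the device is closed (sketch) */
        myuvc_vidioc_streamoff(NULL, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE);
        return 0;
    }

    static const struct v4l2_file_operations myuvc_fops = {
        .owner          = THIS_MODULE,
        .open           = myuvc_open,
        .release        = myuvc_close,
        .mmap           = myuvc_mmap,
        .poll           = myuvc_poll,
        .unlocked_ioctl = video_ioctl2,   /* copies the user argument and dispatches to ioctl_ops */
    };

    /* the 11 ioctls implemented in the rest of these notes */
    static const struct v4l2_ioctl_ops myuvc_ioctl_ops = {
        .vidioc_querycap         = myuvc_vidioc_querycap,
        .vidioc_enum_fmt_vid_cap = myuvc_vidioc_enum_fmt_vid_cap,
        .vidioc_g_fmt_vid_cap    = myuvc_vidioc_g_fmt_vid_cap,
        .vidioc_try_fmt_vid_cap  = myuvc_vidioc_try_fmt_vid_cap,
        .vidioc_s_fmt_vid_cap    = myuvc_vidioc_s_fmt_vid_cap,
        .vidioc_reqbufs          = myuvc_vidioc_reqbufs,
        .vidioc_querybuf         = myuvc_vidioc_querybuf,
        .vidioc_qbuf             = myuvc_vidioc_qbuf,
        .vidioc_dqbuf            = myuvc_vidioc_dqbuf,
        .vidioc_streamon         = myuvc_vidioc_streamon,
        .vidioc_streamoff        = myuvc_vidioc_streamoff,
    };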
    

    For these specific cmd values we find the following definitions:

    #define VIDIOC_QUERYCAP		 _IOR('V',  0, struct v4l2_capability)
    #define VIDIOC_RESERVED		  _IO('V',  1)
    #define VIDIOC_ENUM_FMT       _IOWR('V',  2, struct v4l2_fmtdesc)
    
    

    The third argument is the structure we set or query, so internally it is generally used like this:

    struct v4l2_capability *cap = arg;
    ... and then this cap is filled in or parsed
    

    Querying capabilities: VIDIOC_QUERYCAP

    // uvc_v4l2.c > uvc_v4l2_do_ioctl
    case VIDIOC_QUERYCAP:
    {
        struct v4l2_capability *cap = arg;
    
        memset(cap, 0, sizeof *cap);
        strlcpy(cap->driver, "uvcvideo", sizeof cap->driver);
        strlcpy(cap->card, vdev->name, sizeof cap->card);
        usb_make_path(stream->dev->udev,
                      cap->bus_info, sizeof(cap->bus_info));
        cap->version = LINUX_VERSION_CODE;
        if (stream->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
            cap->capabilities = V4L2_CAP_VIDEO_CAPTURE
            | V4L2_CAP_STREAMING;
        else
            cap->capabilities = V4L2_CAP_VIDEO_OUTPUT
            | V4L2_CAP_STREAMING;
        break;
    }
    
    

    Our modified version:

    static int myuvc_vidioc_querycap(struct file *file, void  *priv,
    					struct v4l2_capability *cap)
    {    
        memset(cap, 0, sizeof *cap);
        strcpy(cap->driver, "myuvc");
        strcpy(cap->card, "myuvc");
        cap->version = 1;
        
        cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
     
    	return 0;
    }
    

    Enumerating formats: VIDIOC_ENUM_FMT

    Enumerating means the app queries with index = 0, 1, ... to discover which formats the hardware supports. (A small app-side loop is sketched below.)
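
    A minimal userspace sketch of that enumeration loop (my own illustration; fd is assumed to be an already opened /dev/videoX):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* keep increasing fmt.index until the driver returns -EINVAL */
    static void enum_formats(int fd)
    {
        struct v4l2_fmtdesc fmt;
        int i;

        for (i = 0; ; i++) {
            memset(&fmt, 0, sizeof(fmt));
            fmt.index = i;
            fmt.type  = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            if (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) < 0)
                break;                 /* no more formats */
            printf("format %d: %s (fourcc 0x%08x)\n", i, fmt.description, fmt.pixelformat);
        }
    }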

    // uvc_v4l2.c > uvc_v4l2_do_ioctl
    case VIDIOC_ENUM_FMT:
    {
        struct v4l2_fmtdesc *fmt = arg;
        struct uvc_format *format;
        enum v4l2_buf_type type = fmt->type;
        __u32 index = fmt->index;	// the index the app is querying
    
        if (fmt->type != stream->type ||
            fmt->index >= stream->nformats)
            return -EINVAL;
    
        memset(fmt, 0, sizeof(*fmt));
        fmt->index = index;
        fmt->type = type;
    	
        //
        format = &stream->format[fmt->index];
        fmt->flags = 0;
        if (format->flags & UVC_FMT_FLAG_COMPRESSED)
            fmt->flags |= V4L2_FMT_FLAG_COMPRESSED;
        strlcpy(fmt->description, format->name,
                sizeof fmt->description);
        fmt->description[sizeof fmt->description - 1] = 0;
        fmt->pixelformat = format->fcc;
        break;
    }
    
    
    
    

    Here the actual format comes straight from stream->format[fmt->index]; where does that get filled in?

    The format is identified by a 16-byte GUID. In uvc_driver.c > uvc_parse_format, uvc_format_by_guid looks it up in the uvc_fmts array:

    static struct uvc_format_desc uvc_fmts[] = {
    	{
    		.name		= "YUV 4:2:2 (YUYV)",
    		.guid		= UVC_GUID_FORMAT_YUY2,
    		.fcc		= V4L2_PIX_FMT_YUYV,
    	},
    	{
    		.name		= "YUV 4:2:2 (YUYV)",
    		.guid		= UVC_GUID_FORMAT_YUY2_ISIGHT,
    		.fcc		= V4L2_PIX_FMT_YUYV,
    	},
    	{
    		.name		= "YUV 4:2:0 (NV12)",
    		.guid		= UVC_GUID_FORMAT_NV12,
    		.fcc		= V4L2_PIX_FMT_NV12,
    	},
    	{
    		.name		= "MJPEG",
    		.guid		= UVC_GUID_FORMAT_MJPEG,
    		.fcc		= V4L2_PIX_FMT_MJPEG,
    	},
    	{
    		.name		= "YVU 4:2:0 (YV12)",
    		.guid		= UVC_GUID_FORMAT_YV12,
    		.fcc		= V4L2_PIX_FMT_YVU420,
    	},
    	{
    		.name		= "YUV 4:2:0 (I420)",
    		.guid		= UVC_GUID_FORMAT_I420,
    		.fcc		= V4L2_PIX_FMT_YUV420,
    	},
    	{
    		.name		= "YUV 4:2:0 (M420)",
    		.guid		= UVC_GUID_FORMAT_M420,
    		.fcc		= V4L2_PIX_FMT_M420,
    	},
    	{
    		.name		= "YUV 4:2:2 (UYVY)",
    		.guid		= UVC_GUID_FORMAT_UYVY,
    		.fcc		= V4L2_PIX_FMT_UYVY,
    	},
    	{
    		.name		= "Greyscale (8-bit)",
    		.guid		= UVC_GUID_FORMAT_Y800,
    		.fcc		= V4L2_PIX_FMT_GREY,
    	},
    	{
    		.name		= "Greyscale (16-bit)",
    		.guid		= UVC_GUID_FORMAT_Y16,
    		.fcc		= V4L2_PIX_FMT_Y16,
    	},
    	{
    		.name		= "RGB Bayer",
    		.guid		= UVC_GUID_FORMAT_BY8,
    		.fcc		= V4L2_PIX_FMT_SBGGR8,
    	},
    	{
    		.name		= "RGB565",
    		.guid		= UVC_GUID_FORMAT_RGBP,
    		.fcc		= V4L2_PIX_FMT_RGB565,
    	},
    	{
    		.name		= "H.264",
    		.guid		= UVC_GUID_FORMAT_H264,
    		.fcc		= V4L2_PIX_FMT_H264,
    	},
    };
    

    For now we only support one format:

    static int myuvc_vidioc_enum_fmt_vid_cap(struct file *file, void  *priv,
    					struct v4l2_fmtdesc *f)
    {
        /* Manual inspection of the descriptors shows our camera supports only 1 format */
    	if (f->index >= 1)
    		return -EINVAL;
    
        /* Which format is it?
         * The VideoStreaming interface descriptor gives the GUID
         * "59 55 59 32 00 00 10 00 80 00 00 aa 00 38 9b 71" (i.e. YUY2)
         */
    	strcpy(f->description, "4:2:2, packed, YUYV");
    	f->pixelformat = V4L2_PIX_FMT_YUYV;    
        
    	return 0;
    }
    

    Getting the current format: VIDIOC_G_FMT

    case VIDIOC_G_FMT:
    	return uvc_v4l2_get_format(stream, arg);
    

    Here we can simply return the structure stored in our global variable:

    static int myuvc_vidioc_g_fmt_vid_cap(struct file *file, void *priv,
    					struct v4l2_format *f)
    {
        memcpy(f, &myuvc_format, sizeof(myuvc_format));
    	return (0);
    }
    

    Trying a format: VIDIOC_TRY_FMT

    Reference: uvc_v4l2_try_format and myvivi_vidioc_try_fmt_vid_cap

    case VIDIOC_TRY_FMT:
    {
        struct uvc_streaming_control probe;
    
        return uvc_v4l2_try_format(stream, arg, &probe, NULL, NULL);
            fmt->fmt.pix.width = frame->wWidth;
            fmt->fmt.pix.height = frame->wHeight;
            fmt->fmt.pix.field = V4L2_FIELD_NONE;
            fmt->fmt.pix.bytesperline = format->bpp * frame->wWidth / 8;
            fmt->fmt.pix.sizeimage = probe->dwMaxVideoFrameSize;
            fmt->fmt.pix.colorspace = format->colorspace;
            fmt->fmt.pix.priv = 0;
    
    }
    

    The code is below; on real hardware, replace V4L2_PIX_FMT_YUYV with whatever format the camera actually supports:

    static struct frame_desc frames[] = {{640, 480}, {352, 288}, {320, 240}, {176, 144}, {160, 120}};
    
    static int myuvc_vidioc_try_fmt_vid_cap(struct file *file, void *priv,
    			struct v4l2_format *f)
    {
        if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
        {
            return -EINVAL;
        }
    
        if (f->fmt.pix.pixelformat != V4L2_PIX_FMT_YUYV)
            return -EINVAL;
        
        /* Adjust the format's width and height,
         * and calculate bytesperline and sizeimage
         */
    
        /* The supported resolutions were determined by reading the descriptors by hand */
        f->fmt.pix.width  = frames[frame_idx].width;
        f->fmt.pix.height = frames[frame_idx].height;
         
    	f->fmt.pix.bytesperline =
    		(f->fmt.pix.width * bBitsPerPixel) >> 3;
    	f->fmt.pix.sizeimage =
    		f->fmt.pix.height * f->fmt.pix.bytesperline;
        
        return 0;
    }
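
    The code above also relies on a few definitions these notes never show; presumably something like the following, declared before frames[] in the real file. The concrete values are my assumptions: frame_idx = 1 would select 352x288, and 16 bits per pixel matches the YUYV format descriptor.

    struct frame_desc {
        int width;
        int height;
    };

    static int frame_idx = 1;        /* which entry of frames[] is currently selected (assumption: 352x288) */
    static int bBitsPerPixel = 16;   /* bits per pixel from the YUYV format descriptor */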
    

    Setting a format: VIDIOC_S_FMT (nothing sent over USB)

    case VIDIOC_S_FMT:
    	uvc_v4l2_set_format
    		uvc_v4l2_try_format(stream, fmt, &probe, &format, &frame);
    		memcpy(&stream->ctrl, &probe, sizeof probe);
            stream->cur_format = format;
            stream->cur_frame = frame;
    

    Nothing is transferred to the USB device here; we just store the result in the global variable:

    static int myuvc_vidioc_s_fmt_vid_cap(struct file *file, void *priv,
    					struct v4l2_format *f)
    {
    	int ret = myuvc_vidioc_try_fmt_vid_cap(file, NULL, f);
    	if (ret < 0)
    		return ret;
    
        memcpy(&myuvc_format, f, sizeof(myuvc_format));
        
        return 0;
    }
    

    Requesting buffers: VIDIOC_REQBUFS

    case VIDIOC_REQBUFS:
    	uvc_alloc_buffers(&stream->queue, arg);
    		ret = vb2_reqbufs(&queue->queue, rb);
    			__vb2_queue_free
    				/* Release video buffer memory */
                    __vb2_free_mem(q, buffers);
    
                    /* Free videobuf buffers */
                    for (buffer = q->num_buffers - buffers; buffer < q->num_buffers;
                         ++buffer) {
                        kfree(q->bufs[buffer]);
                        q->bufs[buffer] = NULL;
                    }
    
                    q->num_buffers -= buffers;
                    if (!q->num_buffers)
                        q->memory = 0;
                    INIT_LIST_HEAD(&q->queued_list);
    				
    		
    

    The instructor's video uses a 2.x kernel, so the reference code is this:

    uvc_v4l2.c
      uvc_v4l2_do_ioctl
        uvc_alloc_buffers
        
    unsigned int bufsize = PAGE_ALIGN(buflength);  
      unsigned int i;
        void *mem = NULL;
        int ret;
    
        if (nbuffers > UVC_MAX_VIDEO_BUFFERS) /*#define UVC_MAX_VIDEO_BUFFERS   32*/
            nbuffers = UVC_MAX_VIDEO_BUFFERS;
    
        mutex_lock(&queue->mutex);
    
        if ((ret = uvc_free_buffers(queue)) < 0)
            goto done;
    
        /* Bail out if no buffers should be allocated. */
        if (nbuffers == 0)
            goto done;
    
        /* Decrement the number of buffers until allocation succeeds. */
        for (; nbuffers > 0; --nbuffers) {
            mem = vmalloc_32(nbuffers * bufsize);
            if (mem != NULL)
                break;
        }
    
        if (mem == NULL) {
            ret = -ENOMEM;
            goto done;
        }
    
        for (i = 0; i < nbuffers; ++i) {
            memset(&queue->buffer[i], 0, sizeof queue->buffer[i]);
            queue->buffer[i].buf.index = i;
            queue->buffer[i].buf.m.offset = i * bufsize;
            queue->buffer[i].buf.length = buflength;
            queue->buffer[i].buf.type = queue->type;
            queue->buffer[i].buf.sequence = 0;
            queue->buffer[i].buf.field = V4L2_FIELD_NONE;
            queue->buffer[i].buf.memory = V4L2_MEMORY_MMAP;
            queue->buffer[i].buf.flags = 0;
            init_waitqueue_head(&queue->buffer[i].wait);
        }
    
        queue->mem = mem;
        queue->count = nbuffers;
        queue->buf_size = bufsize;
        ret = nbuffers;
    
    done:
        mutex_unlock(&queue->mutex);
        return ret;
    

    The flow is basically:

    1. Free the buffers: if buffers were allocated before, free them
    2. Allocate the memory block
    3. Zero the bookkeeping structure
    4. Initialise the two queue heads
    5. Fill in the fields of every buffer

    Why two queues? Because one is for the driver to put data into, and the other is for the APP to take data from.


    The actual code:

    static int myuvc_vidioc_reqbufs(struct file *file, void *priv,
    			  struct v4l2_requestbuffers *p)
    {
        int nbuffers = p->count;
        int bufsize  = PAGE_ALIGN(myuvc_format.fmt.pix.sizeimage);
        unsigned int i;
        void *mem = NULL;
        int ret;
    
        if ((ret = myuvc_free_buffers()) < 0)
            goto done;
    
        /* Bail out if no buffers should be allocated. */
        if (nbuffers == 0)
            goto done;
    
        /* Decrement the number of buffers until allocation succeeds. */
        for (; nbuffers > 0; --nbuffers) {
            mem = vmalloc_32(nbuffers * bufsize);
            if (mem != NULL)
                break;
        }
    
        if (mem == NULL) {
            ret = -ENOMEM;
            goto done;
        }
    
        /* all the buffers are allocated in one go as a single block */
        memset(&myuvc_queue, 0, sizeof(myuvc_queue));
    
    	INIT_LIST_HEAD(&myuvc_queue.mainqueue);
    	INIT_LIST_HEAD(&myuvc_queue.irqqueue);
    
        for (i = 0; i < nbuffers; ++i) {
            myuvc_queue.buffer[i].buf.index = i;
            myuvc_queue.buffer[i].buf.m.offset = i * bufsize;
            myuvc_queue.buffer[i].buf.length = myuvc_format.fmt.pix.sizeimage; /* buffer length (size of one image) */
            myuvc_queue.buffer[i].buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;      /* buffer type (video capture) */
            myuvc_queue.buffer[i].buf.sequence = 0;
            myuvc_queue.buffer[i].buf.field = V4L2_FIELD_NONE;
            myuvc_queue.buffer[i].buf.memory = V4L2_MEMORY_MMAP;
            myuvc_queue.buffer[i].buf.flags = 0;
            myuvc_queue.buffer[i].state     = VIDEOBUF_IDLE;
            /* Initialise the wait queue: frames end up in these buffers, and an application that reads
             * a buffer with no data yet will sleep here, so each buffer keeps a queue of waiting processes */
            init_waitqueue_head(&myuvc_queue.buffer[i].wait);
        }
    
        myuvc_queue.mem = mem;           /* address of the whole memory block */
        myuvc_queue.count = nbuffers;    /* number of buffers in the queue */
        myuvc_queue.buf_size = bufsize;  /* size of each buffer (page aligned) */
        ret = nbuffers;
    
    done:
        return ret;
    }
    
    static int myuvc_free_buffers(void) /* free the buffers */
    {
        vfree(myuvc_queue.mem);  /* the memory came from vmalloc_32, so vfree is the right call (kfree would be a bug) */
        memset(&myuvc_queue, 0, sizeof(myuvc_queue)); /* clear the bookkeeping structure */
        return 0;
    }
    

    Querying a buffer: VIDIOC_QUERYBUF

    VIDIOC_QUERYBUF
    	uvc_query_buffer
    		vb2_querybuf(&queue->queue, buf);
    			__fill_v4l2_buffer(vb, b);
    				switch (vb->state) {
                        case VB2_BUF_STATE_QUEUED:
                        case VB2_BUF_STATE_ACTIVE:
                            b->flags |= V4L2_BUF_FLAG_QUEUED;
                            break;
                        case VB2_BUF_STATE_ERROR:
                            b->flags |= V4L2_BUF_FLAG_ERROR;
                            /* fall through */
                        case VB2_BUF_STATE_DONE:
                            b->flags |= V4L2_BUF_FLAG_DONE;
                            break;
                        case VB2_BUF_STATE_PREPARED:
                            b->flags |= V4L2_BUF_FLAG_PREPARED;
                            break;
                        case VB2_BUF_STATE_DEQUEUED:
                            /* nothing */
                            break;
                        }
    			if (__buffer_in_use(q, vb))
    				b->flags |= V4L2_BUF_FLAG_MAPPED;
    
    

    The 3.x and 2.x kernels differ here; let's look at the 2.x version:

    int uvc_query_buffer(struct uvc_video_queue *queue,
            struct v4l2_buffer *v4l2_buf)
    {
        int ret = 0;
        if (v4l2_buf->index >= queue->count) {
            ret = -EINVAL;
            goto done;
        }
        __uvc_query_buffer(&queue->buffer[v4l2_buf->index], v4l2_buf);
    done:
        return ret;
    }
    
    static void __uvc_query_buffer(struct uvc_buffer *buf,
            struct v4l2_buffer *v4l2_buf)
    {
        memcpy(v4l2_buf, &buf->buf, sizeof *v4l2_buf);
    
        if (buf->vma_use_count)
            v4l2_buf->flags |= V4L2_BUF_FLAG_MAPPED;
    
        switch (buf->state) {
        case UVC_BUF_STATE_ERROR:
        case UVC_BUF_STATE_DONE:
            v4l2_buf->flags |= V4L2_BUF_FLAG_DONE;
            break;
        case UVC_BUF_STATE_QUEUED:
        case UVC_BUF_STATE_ACTIVE:
            v4l2_buf->flags |= V4L2_BUF_FLAG_QUEUED;
            break;
        case UVC_BUF_STATE_IDLE:
        default:
            break;
        }
    }
    

    Our actual code is as follows:

    /* A8: query the state of a buffer, e.g. its address info (so the APP can mmap it)
     * Reference: uvc_query_buffer
     */
    static int myuvc_vidioc_querybuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
    {
        int ret = 0;
        
    	if (v4l2_buf->index >= myuvc_queue.count) {
    		ret = -EINVAL;
    		goto done;
    	}
    
        memcpy(v4l2_buf, &myuvc_queue.buffer[v4l2_buf->index].buf, sizeof(*v4l2_buf));
    
        /* update the flags */
    	if (myuvc_queue.buffer[v4l2_buf->index].vma_use_count)
    		v4l2_buf->flags |= V4L2_BUF_FLAG_MAPPED;
    
    
    	switch (myuvc_queue.buffer[v4l2_buf->index].state) {
        	case VIDEOBUF_ERROR:
        	case VIDEOBUF_DONE:
        		v4l2_buf->flags |= V4L2_BUF_FLAG_DONE;
        		break;
        	case VIDEOBUF_QUEUED:
        	case VIDEOBUF_ACTIVE:
        		v4l2_buf->flags |= V4L2_BUF_FLAG_QUEUED;
        		break;
        	case VIDEOBUF_IDLE:
        	default:
        		break;
    	}
    
    done:    
    	return ret;
    }
    

    Queuing a buffer: VIDIOC_QBUF

    VIDIOC_QBUF
    	uvc_queue_buffer(&stream->queue, arg)
        	vb2_qbuf(&queue->queue, buf)
    	
    

    Straight to the code:

    static int myuvc_vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
    {
        struct myuvc_buffer *buf;
        int ret;
    
        /* 0. the v4l2_buf passed in by the APP may be invalid, so validate it */
    
    	if (v4l2_buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
    	    v4l2_buf->memory != V4L2_MEMORY_MMAP) {
    		return -EINVAL;
    	}
    
    	if (v4l2_buf->index >= myuvc_queue.count) {
    		return -EINVAL;
    	}
    
        buf = &myuvc_queue.buffer[v4l2_buf->index];
    
    	if (buf->state != VIDEOBUF_IDLE) {
    		return -EINVAL;
    	}
    
    
        /* 1. update the state */
    	buf->state = VIDEOBUF_QUEUED;
    	buf->buf.bytesused = 0;
    
        /* 2. add the buffer to the 2 queues */
        /* queue 1: used by the APP
         * a buffer with no data yet sits on mainqueue;
         * once a buffer has data, the APP takes it from mainqueue
         */
    	list_add_tail(&buf->stream, &myuvc_queue.mainqueue);

        /* queue 2: used by the code that produces data
         * when data is captured, the first buffer is taken from irqqueue and the data stored into it
         */
    	list_add_tail(&buf->irq, &myuvc_queue.irqqueue);
        
    	return 0;
    }
    
    

    Dequeuing a buffer: VIDIOC_DQBUF

    VIDIOC_DQBUF
    	uvc_dequeue_buffer
    		vb2_dqbuf(&queue->queue, buf, nonblocking)
    			/* Fill buffer information for the userspace */
                __fill_v4l2_buffer(vb, b);
                /* Remove from videobuf queue */
                list_del(&vb->queued_entry);
    
                dprintk(1, "dqbuf of buffer %d, with state %d\n",
                        vb->v4l2_buf.index, vb->state);
    
                vb->state = VB2_BUF_STATE_DEQUEUED;
    
    

    The 2.x version is a bit different; here is the code:

    static int myuvc_vidioc_dqbuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
    {
        /* once the APP sees that data is ready, it takes this buffer off mainqueue */
    
        struct myuvc_buffer *buf;
        int ret = 0;
    
    	if (list_empty(&myuvc_queue.mainqueue)) {
    		ret = -EINVAL;
    		goto done;
    	}
        
    	buf = list_first_entry(&myuvc_queue.mainqueue, struct myuvc_buffer, stream);
    
    	switch (buf->state) {
    	case VIDEOBUF_ERROR:
    		ret = -EIO;
    		/* fall through: the buffer is dequeued even on error */
    	case VIDEOBUF_DONE:
    		buf->state = VIDEOBUF_IDLE;
    		break;
    
    	case VIDEOBUF_IDLE:
    	case VIDEOBUF_QUEUED:
    	case VIDEOBUF_ACTIVE:
    	default:
    		ret = -EINVAL;
    		goto done;
    	}
    
    	list_del(&buf->stream);
    
    done:
    	return ret;
    }
    
    

    MMAP

    uvc_v4l2_mmap
    	uvc_queue_mmap
    		vb2_mmap(&queue->queue, vma)
        		__find_plane_by_offset
        		call_memop(q, mmap, vb->planes[plane].mem_priv, vma)
    

    For the actual code we can simply follow vivi:

    static int myuvc_mmap(struct file *file, struct vm_area_struct *vma)
    {
        struct myuvc_buffer *buffer;
        struct page *page;
        unsigned long addr, start, size;
        unsigned int i;
        int ret = 0;
    
        start = vma->vm_start;
        size = vma->vm_end - vma->vm_start;
    
        /* When the application calls mmap it passes an offset;
         * use that offset to find the corresponding buffer
         */
        for (i = 0; i < myuvc_queue.count; ++i) {
            buffer = &myuvc_queue.buffer[i];
            if ((buffer->buf.m.offset >> PAGE_SHIFT) == vma->vm_pgoff)
                break;
        }
    
        if (i == myuvc_queue.count || size != myuvc_queue.buf_size) {
            ret = -EINVAL;
            goto done;
        }
    
        /*
         * VM_IO marks the area as being an mmaped region for I/O to a
         * device. It also prevents the region from being core dumped.
         */
        vma->vm_flags |= VM_IO;
    
        /* walk the buffer's virtual addresses and find the corresponding page structs */
        addr = (unsigned long)myuvc_queue.mem + buffer->buf.m.offset;
        while (size > 0) {
            page = vmalloc_to_page((void *)addr);
    
            /* hook the page up to the virtual address passed in by the APP */
            if ((ret = vm_insert_page(vma, start, page)) < 0)
                goto done;
    
            start += PAGE_SIZE;
            addr += PAGE_SIZE;
            size -= PAGE_SIZE;
        }
    
        vma->vm_ops = &myuvc_vm_ops;
        vma->vm_private_data = buffer;
        myuvc_vm_open(vma);
    
    done:
        return ret;
    }
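
    The vm_ops referenced above (myuvc_vm_ops / myuvc_vm_open) are not listed in these notes; following the uvc driver's uvc_vm_ops they would just track the mapping count, roughly like this (my own sketch):

    static void myuvc_vm_open(struct vm_area_struct *vma)
    {
        struct myuvc_buffer *buffer = vma->vm_private_data;
        buffer->vma_use_count++;          /* the buffer is now mmap'ed; VIDIOC_QUERYBUF reports it as MAPPED */
    }

    static void myuvc_vm_close(struct vm_area_struct *vma)
    {
        struct myuvc_buffer *buffer = vma->vm_private_data;
        buffer->vma_use_count--;          /* the mapping went away */
    }

    static struct vm_operations_struct myuvc_vm_ops = {
        .open  = myuvc_vm_open,
        .close = myuvc_vm_close,
    };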
    

    poll

    uvc_v4l2_poll
    	uvc_queue_poll
    		vb2_poll(&queue->queue, file, wait)
                if (list_empty(&q->queued_list))
                	return POLLERR;
        		poll_wait(file, &q->done_wq, wait);
    			if (vb && (vb->state == VB2_BUF_STATE_DONE
                    || vb->state == VB2_BUF_STATE_ERROR)) {
                return (V4L2_TYPE_IS_OUTPUT(q->type)) ? POLLOUT | POLLWRNORM :
                    POLLIN | POLLRDNORM;
    

    The APP calls poll/select to find out whether a buffer is ready (has data). On a 3.x kernel I think you could simply call vb2_poll directly.

    static unsigned int myuvc_poll(struct file *file, struct poll_table_struct *wait)
    {
    	struct myuvc_buffer *buf;
    	unsigned int mask = 0;
        
        /* take the first buffer from mainqueue */

        /* check its state; if it is not ready, sleep */
    
        if (list_empty(&myuvc_queue.mainqueue)) {
            mask |= POLLERR;
            goto done;
        }
        
        buf = list_first_entry(&myuvc_queue.mainqueue, struct myuvc_buffer, stream);
    
        poll_wait(file, &buf->wait, wait);
        if (buf->state == VIDEOBUF_DONE ||
            buf->state == VIDEOBUF_ERROR)
            mask |= POLLIN | POLLRDNORM;
        
    done:
        return mask;
    }
    

    streamon (set parameters & URBs)

    Turning the camera on and starting the transfer means sending parameters to the camera first. How are the parameters set? The flow is:


    VIDIOC_STREAMON
    	uvc_video_enable
    		uvc_queue_enable(&stream->queue, 1)
        		vb2_streamon
        	uvc_commit_video(stream, &stream->ctrl)
    		uvc_set_video_ctrl	// the parameters are set here
        			size = stream->dev->uvc_version >= 0x0110 ? 34 : 26;
    				data = kzalloc(size, GFP_KERNEL);
    				...
                    __uvc_query_ctrl
                    	usb_control_msg
    	uvc_init_video(stream, GFP_KERNEL) // find the endpoint
            	uvc_video_stats_start
            	if (intf->num_altsetting > 1)
                {
                    	uvc_find_endpoint
                        /* Check if the bandwidth is high enough. */
                ....
                }
        	usb_set_interface  // select the interface (alternate setting)
        uvc_init_video_isoc  // allocate and set up the urbs        
    		usb_submit_urb // submit the urbs
    

    So how is that data packet constructed? Search for bmHint:

    uvc_v4l2_try_format in uvc_v4l2.c (drivers/media/video/uvc) : 	probe->bmHint = 1;	/* dwFrameInterval */
    uvc_get_video_ctrl in uvc_video.c (drivers/media/video/uvc) : 	ctrl->bmHint = le16_to_cpup((__le16 *)&data[0]);
    uvc_set_video_ctrl in uvc_video.c (drivers/media/video/uvc) : 	*(__le16 *)&data[0] = cpu_to_le16(ctrl->bmHint);
    uvc_streaming_control in video.h (include/linux/usb) : 	__u16 bmHint;
    
    

    So it can either be filled in by hand, or read back via uvc_get_video_ctrl and then modified:

    uvc_get_video_ctrl
    		ctrl->bmHint = le16_to_cpup((__le16 *)&data[0]);
    		....
     uvc_set_video_ctrl
     		*(__le16 *)&data[0] = cpu_to_le16(ctrl->bmHint);
    		...
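
    These fields belong to struct uvc_streaming_control (the notes' search result above places it in include/linux/usb/video.h); the myuvc_streaming_control used below presumably mirrors it, roughly:

    struct myuvc_streaming_control {      /* same layout as the kernel's uvc_streaming_control */
    	__u16 bmHint;
    	__u8  bFormatIndex;
    	__u8  bFrameIndex;
    	__u32 dwFrameInterval;
    	__u16 wKeyFrameRate;
    	__u16 wPFrameRate;
    	__u16 wCompQuality;
    	__u16 wCompWindowSize;
    	__u16 wDelay;
    	__u32 dwMaxVideoFrameSize;
    	__u32 dwMaxPayloadTransferSize;
    	__u32 dwClockFrequency;
    	__u8  bmFramingInfo;
    	__u8  bPreferedVersion;
    	__u8  bMinVersion;
    	__u8  bMaxVersion;
    } __attribute__((__packed__));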
    

    On a 2.x kernel we can write it as below. Note that we also have to pick the alternate setting (and thus the endpoint) of the interface, i.e. first

    1. determine the required bandwidth, and then
    2. choose the alternate setting whose endpoint wMaxPacketSize can carry that bandwidth.
    static int myuvc_vidioc_streamon(struct file *file, void *priv, enum v4l2_buf_type t)
    {
        int ret;
        int i;
        
        /* 1. Send the parameters to the USB camera: which format to use, which frame (resolution) of that format, etc.
         * Reference: uvc_set_video_ctrl / uvc_get_video_ctrl
         * 1.1 Build the data packet from a struct uvc_streaming_control: either fill it in by hand, or read it back and modify it
         * 1.2 Send the packet with usb_control_msg
         */
    
        /* a. probe (try) the parameters */
        ret = myuvc_try_streaming_params(&myuvc_params);
        printk("myuvc_try_streaming_params ret = %d\n", ret);
    
        /* b. read the parameters back */
        ret = myuvc_get_streaming_params(&myuvc_params);
        printk("myuvc_get_streaming_params ret = %d\n", ret);
    
        /* c. commit the parameters */
        ret = myuvc_set_streaming_params(&myuvc_params);
        printk("myuvc_set_streaming_params ret = %d\n", ret);
        
        myuvc_print_streaming_params(&myuvc_params);
    
        /* d. Choose the alternate setting of the VideoStreaming interface
         * d.1 work out the bandwidth from myuvc_params
         * d.2 find the alternate setting whose endpoint wMaxPacketSize
         *     can satisfy that bandwidth
         */
        /* Determined by hand:
         * bandwidth = myuvc_params.dwMaxPayloadTransferSize = 1024
         * from "lsusb -v -d 0x1e4e:" the endpoint's transfer size is
         *                wMaxPacketSize     0x0400  1x 1024 bytes
         * bAlternateSetting       8
         */
        usb_set_interface(myuvc_udev, myuvc_streaming_intf, myuvc_streaming_bAlternateSetting);
        
        /* 2. Allocate and set up the URBs */
        ret = myuvc_alloc_init_urbs();
        if (ret)
            printk("myuvc_alloc_init_urbs err : ret = %d\n", ret);
    
        /* 3. Submit the URBs to start receiving data */
    	for (i = 0; i < MYUVC_URBS; ++i) {
    		if ((ret = usb_submit_urb(myuvc_queue.urb[i], GFP_KERNEL)) < 0) {
    			printk("Failed to submit URB %u (%d).\n", i, ret);
    			myuvc_uninit_urbs();
    			return ret;
    		}
    	}
        
    	return 0;
    }
    

    The parameter print/get/try/set helpers:

    static void myuvc_print_streaming_params(struct myuvc_streaming_control *ctrl)
    {
        printk("video params:\n");
        printk("bmHint                   = %d\n", ctrl->bmHint);
        printk("bFormatIndex             = %d\n", ctrl->bFormatIndex);
        printk("bFrameIndex              = %d\n", ctrl->bFrameIndex);
        printk("dwFrameInterval          = %d\n", ctrl->dwFrameInterval);
        printk("wKeyFrameRate            = %d\n", ctrl->wKeyFrameRate);
        printk("wPFrameRate              = %d\n", ctrl->wPFrameRate);
        printk("wCompQuality             = %d\n", ctrl->wCompQuality);
        printk("wCompWindowSize          = %d\n", ctrl->wCompWindowSize);
        printk("wDelay                   = %d\n", ctrl->wDelay);
        printk("dwMaxVideoFrameSize      = %d\n", ctrl->dwMaxVideoFrameSize);
        printk("dwMaxPayloadTransferSize = %d\n", ctrl->dwMaxPayloadTransferSize);
        printk("dwClockFrequency         = %d\n", ctrl->dwClockFrequency);
        printk("bmFramingInfo            = %d\n", ctrl->bmFramingInfo);
        printk("bPreferedVersion         = %d\n", ctrl->bPreferedVersion);
        printk("bMinVersion              = %d\n", ctrl->bMinVersion);
        printk("bMaxVersion              = %d\n", ctrl->bMaxVersion);
    }
    
    
    /* Reference: uvc_get_video_ctrl
     (ret = uvc_get_video_ctrl(video, probe, 1, GET_CUR)) 
     static int uvc_get_video_ctrl(struct uvc_video_device *video,
         struct uvc_streaming_control *ctrl, int probe, __u8 query)
     */
    static int myuvc_get_streaming_params(struct myuvc_streaming_control *ctrl)
    {
    	__u8 *data;
    	__u16 size;
    	int ret;
    	__u8 type = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
    	unsigned int pipe;
    
    	size = uvc_version >= 0x0110 ? 34 : 26;
    	data = kmalloc(size, GFP_KERNEL);
    	if (data == NULL)
    		return -ENOMEM;
       
    	pipe = (GET_CUR & 0x80) ? usb_rcvctrlpipe(myuvc_udev, 0)
    			      : usb_sndctrlpipe(myuvc_udev, 0);
    	type |= (GET_CUR & 0x80) ? USB_DIR_IN : USB_DIR_OUT;
    
    	ret = usb_control_msg(myuvc_udev, pipe, GET_CUR, type, VS_PROBE_CONTROL << 8,
    			0 << 8 | myuvc_streaming_intf, data, size, 5000);
    
        if (ret < 0)
            goto done;
    
    	ctrl->bmHint = le16_to_cpup((__le16 *)&data[0]);
    	ctrl->bFormatIndex = data[2];
    	ctrl->bFrameIndex = data[3];
    	ctrl->dwFrameInterval = le32_to_cpup((__le32 *)&data[4]);
    	ctrl->wKeyFrameRate = le16_to_cpup((__le16 *)&data[8]);
    	ctrl->wPFrameRate = le16_to_cpup((__le16 *)&data[10]);
    	ctrl->wCompQuality = le16_to_cpup((__le16 *)&data[12]);
    	ctrl->wCompWindowSize = le16_to_cpup((__le16 *)&data[14]);
    	ctrl->wDelay = le16_to_cpup((__le16 *)&data[16]);
    	ctrl->dwMaxVideoFrameSize = get_unaligned_le32(&data[18]);
    	ctrl->dwMaxPayloadTransferSize = get_unaligned_le32(&data[22]);
    
    	if (size == 34) {
    		ctrl->dwClockFrequency = get_unaligned_le32(&data[26]);
    		ctrl->bmFramingInfo = data[30];
    		ctrl->bPreferedVersion = data[31];
    		ctrl->bMinVersion = data[32];
    		ctrl->bMaxVersion = data[33];
    	} else {
    		//ctrl->dwClockFrequency = video->dev->clock_frequency;
    		ctrl->bmFramingInfo = 0;
    		ctrl->bPreferedVersion = 0;
    		ctrl->bMinVersion = 0;
    		ctrl->bMaxVersion = 0;
    	}
    
    done:
        kfree(data);
        
        return (ret < 0) ? ret : 0;
    }
    
    /* Reference: uvc_v4l2_try_format / uvc_probe_video
     *            uvc_set_video_ctrl(video, probe, 1)
     */
    static int myuvc_try_streaming_params(struct myuvc_streaming_control *ctrl)
    {
        __u8 *data;
        __u16 size;
        int ret;
    	__u8 type = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
    	unsigned int pipe;
        
    	memset(ctrl, 0, sizeof *ctrl);
        
    	ctrl->bmHint = 1;	/* dwFrameInterval */
    	ctrl->bFormatIndex = 1;
    	ctrl->bFrameIndex  = frame_idx + 1;
    	ctrl->dwFrameInterval = 333333;
    
    
        size = uvc_version >= 0x0110 ? 34 : 26;
        data = kzalloc(size, GFP_KERNEL);
        if (data == NULL)
            return -ENOMEM;
    
        *(__le16 *)&data[0] = cpu_to_le16(ctrl->bmHint);
        data[2] = ctrl->bFormatIndex;
        data[3] = ctrl->bFrameIndex;
        *(__le32 *)&data[4] = cpu_to_le32(ctrl->dwFrameInterval);
        *(__le16 *)&data[8] = cpu_to_le16(ctrl->wKeyFrameRate);
        *(__le16 *)&data[10] = cpu_to_le16(ctrl->wPFrameRate);
        *(__le16 *)&data[12] = cpu_to_le16(ctrl->wCompQuality);
        *(__le16 *)&data[14] = cpu_to_le16(ctrl->wCompWindowSize);
        *(__le16 *)&data[16] = cpu_to_le16(ctrl->wDelay);
        put_unaligned_le32(ctrl->dwMaxVideoFrameSize, &data[18]);
        put_unaligned_le32(ctrl->dwMaxPayloadTransferSize, &data[22]);
    
        if (size == 34) {
            put_unaligned_le32(ctrl->dwClockFrequency, &data[26]);
            data[30] = ctrl->bmFramingInfo;
            data[31] = ctrl->bPreferedVersion;
            data[32] = ctrl->bMinVersion;
            data[33] = ctrl->bMaxVersion;
        }
    
        pipe = (SET_CUR & 0x80) ? usb_rcvctrlpipe(myuvc_udev, 0)
                      : usb_sndctrlpipe(myuvc_udev, 0);
        type |= (SET_CUR & 0x80) ? USB_DIR_IN : USB_DIR_OUT;
    
        ret = usb_control_msg(myuvc_udev, pipe, SET_CUR, type, VS_PROBE_CONTROL << 8,
                0 << 8 | myuvc_streaming_intf, data, size, 5000);
    
        kfree(data);
        
        return (ret < 0) ? ret : 0;
        
    }
    
    
    /* Reference: uvc_v4l2_try_format / uvc_probe_video
     *            uvc_set_video_ctrl(video, probe, 1)
     */
    static int myuvc_set_streaming_params(struct myuvc_streaming_control *ctrl)
    {
        __u8 *data;
        __u16 size;
        int ret;
    	__u8 type = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
    	unsigned int pipe;
        
        size = uvc_version >= 0x0110 ? 34 : 26;
        data = kzalloc(size, GFP_KERNEL);
        if (data == NULL)
            return -ENOMEM;
    
        *(__le16 *)&data[0] = cpu_to_le16(ctrl->bmHint);
        data[2] = ctrl->bFormatIndex;
        data[3] = ctrl->bFrameIndex;
        *(__le32 *)&data[4] = cpu_to_le32(ctrl->dwFrameInterval);
        *(__le16 *)&data[8] = cpu_to_le16(ctrl->wKeyFrameRate);
        *(__le16 *)&data[10] = cpu_to_le16(ctrl->wPFrameRate);
        *(__le16 *)&data[12] = cpu_to_le16(ctrl->wCompQuality);
        *(__le16 *)&data[14] = cpu_to_le16(ctrl->wCompWindowSize);
        *(__le16 *)&data[16] = cpu_to_le16(ctrl->wDelay);
        put_unaligned_le32(ctrl->dwMaxVideoFrameSize, &data[18]);
        put_unaligned_le32(ctrl->dwMaxPayloadTransferSize, &data[22]);
    
        if (size == 34) {
            put_unaligned_le32(ctrl->dwClockFrequency, &data[26]);
            data[30] = ctrl->bmFramingInfo;
            data[31] = ctrl->bPreferedVersion;
            data[32] = ctrl->bMinVersion;
            data[33] = ctrl->bMaxVersion;
        }
    
        pipe = (SET_CUR & 0x80) ? usb_rcvctrlpipe(myuvc_udev, 0)
                      : usb_sndctrlpipe(myuvc_udev, 0);
        type |= (SET_CUR & 0x80) ? USB_DIR_IN : USB_DIR_OUT;
    
        ret = usb_control_msg(myuvc_udev, pipe, SET_CUR, type, VS_COMMIT_CONTROL << 8,
                0 << 8 | myuvc_streaming_intf, data, size, 5000);
    
        kfree(data);
        
        return (ret < 0) ? ret : 0;
        
    }
    
    The myuvc_alloc_init_urbs helper called above is listed, with line-by-line comments, in the "Setting up URBs" section further below, so it is not repeated here.
    
    

    streamoff

    uvc_video_enable
    		uvc_uninit_video(stream, 1);
    		usb_set_interface(stream->dev->udev, stream->intfnum, 0);
    		uvc_queue_enable(&stream->queue, 0);
    		uvc_video_clock_cleanup(stream);
    

    This one is fairly simple; reference: uvc_video_enable(video, 0):

    static int myuvc_vidioc_streamoff(struct file *file, void *priv, enum v4l2_buf_type t)
    {
    	struct urb *urb;
    	unsigned int i;
    
        /* 1. kill URB */
    	for (i = 0; i < MYUVC_URBS; ++i) {
    		if ((urb = myuvc_queue.urb[i]) == NULL)
    			continue;
    		usb_kill_urb(urb);
    	}
    
        /* 2. free URB */
        myuvc_uninit_urbs();
    
        /* 3. switch the VideoStreaming interface back to alternate setting 0 */
        usb_set_interface(myuvc_udev, myuvc_streaming_intf, 0);
        
        return 0;
    }
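
    myuvc_uninit_urbs, used here and in the streamon path, is not shown in these notes; modelled on uvc_uninit_video / the uvc driver's URB cleanup it would be roughly like this (my own sketch, using the old usb_buffer_free API to match usb_buffer_alloc):

    static void myuvc_uninit_urbs(void)
    {
        int i;

        for (i = 0; i < MYUVC_URBS; ++i) {
            /* free the data buffer that was allocated with usb_buffer_alloc */
            if (myuvc_queue.urb_buffer[i]) {
                usb_buffer_free(myuvc_udev, myuvc_queue.urb_size,
                                myuvc_queue.urb_buffer[i], myuvc_queue.urb_dma[i]);
                myuvc_queue.urb_buffer[i] = NULL;
            }

            /* free the urb management structure itself */
            if (myuvc_queue.urb[i]) {
                usb_free_urb(myuvc_queue.urb[i]);
                myuvc_queue.urb[i] = NULL;
            }
        }
    }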
    

    Setting up URBs

    URB setup was already touched on in the streamon section above (the VIDIOC_STREAMON call chain is the same one listed there); the function to study is uvc_init_video_isoc:

    uvc_init_video_isoc
    	uvc_alloc_urb_buffers	// the buffers that hold the data
    		usb_alloc_coherent  // === equivalent to the older usb_buffer_alloc
    			for (i = 0; i < UVC_URBS; ++i)
    				kmalloc(stream->urb_size, gfp_flags | __GFP_NOWARN);
    	// === usb_buffer_alloc is just a wrapper around it:
    	static inline void *usb_buffer_alloc
    				return usb_alloc_coherent(dev, size, mem_flags, dma);
    
    	
    	for (i = 0; i < UVC_URBS; ++i) {
    	urb = usb_alloc_urb(npackets, gfp_flags);	// management structure; a pointer in it refers to the buffer above
    	if (urb == NULL) {
    		uvc_uninit_video(stream, 1);
    		return -ENOMEM;
    	}
    

    The actual code is as follows:

    /* Reference: uvc_init_video_isoc */
    static int myuvc_alloc_init_urbs(void)
    {
    	u16 psize;
    	u32 size;
        int npackets;
        int i;
        int j;
    
        struct urb *urb;
    
    	psize = wMaxPacketSize; /* max number of bytes the isochronous endpoint can carry per transaction */
    	size  = myuvc_params.dwMaxVideoFrameSize;  /* max length of one frame of data */
        npackets = DIV_ROUND_UP(size, psize);
        if (npackets > 32)
            npackets = 32;
    
        size = myuvc_queue.urb_size = psize * npackets;
        
        for (i = 0; i < MYUVC_URBS; ++i) {
            /* 1. allocate the usb_buffers */
            
            myuvc_queue.urb_buffer[i] = usb_buffer_alloc(
                myuvc_udev, size,
                GFP_KERNEL | __GFP_NOWARN, &myuvc_queue.urb_dma[i]);
    
            /* 2. allocate the urb */
    		myuvc_queue.urb[i] = usb_alloc_urb(npackets, GFP_KERNEL);
    
            if (!myuvc_queue.urb_buffer[i] || !myuvc_queue.urb[i])
            {
                myuvc_uninit_urbs();
                return -ENOMEM;
            }
    
        }
    
    
        /* 3. set up the urb */
        for (i = 0; i < MYUVC_URBS; ++i) {
            urb = myuvc_queue.urb[i];
            
            urb->dev = myuvc_udev;
            urb->context = NULL;
            // myuvc_bEndpointAddress is the endpoint address; picking an alternate setting of the interface gives us this endpoint
            urb->pipe = usb_rcvisocpipe(myuvc_udev, myuvc_bEndpointAddress); // pipe setup
            urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
            urb->interval = 1; // bInterval from the endpoint descriptor
            urb->transfer_buffer = myuvc_queue.urb_buffer[i]; // which urb_buffer this urb uses
            urb->transfer_dma = myuvc_queue.urb_dma[i];
            urb->complete = myuvc_video_complete; // completion handler, called (in interrupt context) when this urb finishes
            urb->number_of_packets = npackets; // how many packets this urb transfers
            urb->transfer_buffer_length = size; // total length of the data
           
           // where each packet's data lives (offset and length)
            for (j = 0; j < npackets; ++j) {
                urb->iso_frame_desc[j].offset = j * psize;
                urb->iso_frame_desc[j].length = psize;
            }
        
        }
        
        return 0;
    }
    

    The URB completion (interrupt) handler

    Reference code:

    uvc_video_complete
    
    static void uvc_video_complete(struct urb *urb)
    {
    	struct uvc_streaming *stream = urb->context;
    	struct uvc_video_queue *queue = &stream->queue;
    	struct uvc_buffer *buf = NULL;
    	unsigned long flags;
    	int ret;
    
    	switch (urb->status) {
    	case 0:
    		break;
    
    	default:
    		uvc_printk(KERN_WARNING, "Non-zero status (%d) in video "
    			"completion handler.\n", urb->status);
    
    	case -ENOENT:		/* usb_kill_urb() called. */
    		if (stream->frozen)
    			return;
    
    	case -ECONNRESET:	/* usb_unlink_urb() called. */
    	case -ESHUTDOWN:	/* The endpoint is being disabled. */
    		uvc_queue_cancel(queue, urb->status == -ESHUTDOWN);
    		return;
    	}
    
    	spin_lock_irqsave(&queue->irqlock, flags);
    	if (!list_empty(&queue->irqqueue))
    		buf = list_first_entry(&queue->irqqueue, struct uvc_buffer,
    				       queue);
    	spin_unlock_irqrestore(&queue->irqlock, flags);
    
    	stream->decode(urb, stream, buf); // extract the data from the urb
    
    	if ((ret = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
    		uvc_printk(KERN_ERR, "Failed to resubmit video URB (%d).\n",
    			ret);
    	}
    }
    
    
    To find the decode function, search for "decode": uvc_video_init selects uvc_video_decode_isoc:
    uvc_video_init
    		if (stream->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
    		if (stream->dev->quirks & UVC_QUIRK_BUILTIN_ISIGHT)
    			stream->decode = uvc_video_decode_isight;
    		else if (stream->intf->num_altsetting > 1)
    			stream->decode = uvc_video_decode_isoc;
    		else
    			stream->decode = uvc_video_decode_bulk;
    

    The actual code is below. Note that this runs once per completed URB, and a full frame is normally made up of several URBs.

    /* Reference: uvc_video_complete / uvc_video_decode_isoc */
    static void myuvc_video_complete(struct urb *urb)
    {
    	u8 *src;
        u8 *dest;
    	int ret, i;
        int len;
        int maxlen;
        int nbytes;
        struct myuvc_buffer *buf;
        
    	switch (urb->status) {
    	case 0:
    		break;
    
    	default:
    		printk("Non-zero status (%d) in video "
    			"completion handler.\n", urb->status);
    		return;
    	}
    
        /* take the first buffer from the irqqueue */
    	if (!list_empty(&myuvc_queue.irqqueue))
    	{
    		buf = list_first_entry(&myuvc_queue.irqqueue, struct myuvc_buffer, irq);
        
    
        	for (i = 0; i < urb->number_of_packets; ++i) {
        		if (urb->iso_frame_desc[i].status < 0) {
        			printk("USB isochronous frame "
        				"lost (%d).\n", urb->iso_frame_desc[i].status);
        			continue;
        		}
    
                src  = urb->transfer_buffer + urb->iso_frame_desc[i].offset;
    
                dest = myuvc_queue.mem + buf->buf.m.offset + buf->buf.bytesused;
    
                len = urb->iso_frame_desc[i].actual_length;
                /* check whether the data is valid */
                /* meaning of the URB payload header:
                 * data[0] : header length
                 * data[1] : error status
                 */
                if (len < 2 || src[0] < 2 || src[0] > len)
                    continue;
                
                /* Skip payloads marked with the error bit ("error frames"). */
                if (src[1] & UVC_STREAM_ERR) {
                    printk("Dropping payload (error bit set).\n");
                    continue;
                }
    
                /* data length after stripping the header */
                len -= src[0];
    
                /* how much room is left in the buffer */
                maxlen = buf->buf.length - buf->buf.bytesused;
                nbytes = min(len, maxlen);
    
                /* copy the data */
                memcpy(dest, src + src[0], nbytes);
                buf->buf.bytesused += nbytes;
    
                /* has a complete frame been received? */
                if (len > maxlen) {
                    buf->state = VIDEOBUF_DONE;
                }
                
                /* Mark the buffer as done if the EOF marker is set. */
                if (src[1] & UVC_STREAM_EOF && buf->buf.bytesused != 0) {
                    printk("Frame complete (EOF found).\n");
                    if (len == 0)
                        printk("EOF in empty payload.\n");
                    buf->state = VIDEOBUF_DONE;
                }
    
        	}
    
            /* Once a full frame has been received,
             * remove this buffer from irqqueue
             * and wake up the process waiting for data
             */
            if (buf->state == VIDEOBUF_DONE ||
                buf->state == VIDEOBUF_ERROR)
            {
                list_del(&buf->irq);
                wake_up(&buf->wait);
            }
    	}
    
        /* resubmit the URB */
    	if ((ret = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
    		printk("Failed to resubmit video URB (%d).\n", ret);
    	}
    }
    
    
  • Original post: https://www.cnblogs.com/zongzi10010/p/10764256.html