• Embedded RPC / RPMsg IPC communication: open-source solutions


    Recommended: EmbeddedRPC and RPMsg-Lite

    https://github.com/EmbeddedRPC/erpc

    https://github.com/NXPmicro/rpmsg-lite

    https://github.com/OpenAMP/open-amp

    https://nxpmicro.github.io/rpmsg-lite/

    https://gitee.com/l0km/erpcdemo

    RPMsg Component

    This documentation describes the RPMsg-Lite component, which is a lightweight implementation of the Remote Processor Messaging (RPMsg) protocol. The RPMsg protocol defines a standardized binary interface used to communicate between multiple cores in a heterogeneous multicore system.

    Compared to the RPMsg implementation of the Open Asymmetric Multi Processing (OpenAMP) framework (https://github.com/OpenAMP/open-amp), the RPMsg-Lite offers a code size reduction, API simplification, and improved modularity. On smaller Cortex-M0+ based systems, it is recommended to use RPMsg-Lite.

    The RPMsg-Lite is an open-source component developed by NXP Semiconductors and released under a BSD-compatible license.

    Motivation to create RPMsg-Lite

    There are multiple reasons why RPMsg-Lite was developed. One reason is the need for a small-footprint, RPMsg-protocol-compatible communication component; another is the desire to simplify the extensive API of the OpenAMP RPMsg implementation.

    The RPMsg protocol was not documented, and its only definition was given by the Linux kernel and legacy OpenAMP implementations. This changed with [1], a standardization effort that allows multiple different implementations to coexist while remaining mutually compatible.

    Small MCU-based systems often do not implement dynamic memory allocation. The addition of a static API in RPMsg-Lite enables a further reduction of resource usage. Dynamic allocation not only adds roughly 5 KB of code size, it also makes communication slower and less deterministic. The following table shows a rough comparison between the OpenAMP RPMsg implementation and the RPMsg-Lite implementation:

    Component / Configuration                     Flash [B]   RAM [B]
    OpenAMP RPMsg / Release (reference)           5547        456 + dynamic
    RPMsg-Lite / Dynamic API, Release             3462        56 + dynamic
    Relative difference [%]                       ~62.4%      ~12.3%
    RPMsg-Lite / Static API (no malloc), Release  2926        352
    Relative difference [%]                       ~52.7%      ~77.2%

    Implementation

    The implementation of RPMsg-Lite can be divided into three sub-components, of which two are optional. The core component is situated in rpmsg_lite.c. Two optional components are used to implement a blocking receive API (in rpmsg_queue.c) and a dynamic "named" endpoint creation and deletion announcement service (in rpmsg_ns.c).

    The actual "media access" layer is implemented in virtqueue.c, which is one of the few files shared with the OpenAMP implementation. This layer mainly defines the shared memory model, and internally defines used components such as vring or virtqueue.

    The porting layer is split into two sub-layers: the environment layer and the platform layer. The first sublayer must be implemented separately for each environment (a bare-metal environment already exists in rpmsg_env_bm.c, a FreeRTOS environment in rpmsg_env_freertos.c, and so on); only the source file matching the environment in use is included in the target application project. The second sublayer is implemented in rpmsg_platform.c and mainly defines low-level functions for enabling, disabling, and triggering interrupts. The situation is described in the following figure:

    rpmsg_lite_arch.png
    RPMsg-Lite Architecture

    RPMsg-Lite core sub-component

    This subcomponent implements a blocking send API and a callback-based receive API. The RPMsg protocol is part of the transport layer. This is realized by using so-called endpoints; each endpoint can be assigned a different receive callback function. However, note that in the current design the callback executes in interrupt context, so actions such as memory allocation should not be performed in the callback (a minimal callback sketch follows the figure). The following figure shows the role of RPMsg in an ISO/OSI-like layered model:

    rpmsg_isoosi.png
    RPMsg ISO/OSI Layered Model
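
    To make the interrupt-context constraint concrete, the following is a minimal receive-callback sketch. It assumes the rl_ept_rx_cb_t callback signature and the RL_RELEASE return value from rpmsg_lite.h; app_fifo_put() is a hypothetical application helper that merely stores the payload for later processing outside the ISR.

    #include "rpmsg_lite.h"

    /* hypothetical application helper: copies the payload into an app-owned FIFO */
    extern void app_fifo_put(void *fifo, const void *data, uint32_t len);

    static int32_t my_ept_cb(void *payload, uint32_t payload_len, uint32_t src, void *priv)
    {
        /* executes in interrupt context: no allocation, no blocking calls here */
        app_fifo_put(priv, payload, payload_len);
        (void)src;
        return RL_RELEASE; /* the received virtio buffer can go back to the pool */
    }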

    Queue sub-component (optional)

    This subcomponent is optional and requires the env_*_queue() functions to be implemented in the environment porting layer. It provides a blocking receive API, which is common in RTOS environments, and supports both copy and no-copy blocking receive functions.

    Name Service sub-component (optional)

    This subcomponent is a minimal implementation of the name service present in the Linux kernel implementation of RPMsg. It allows a communicating node both to send announcements about "named" endpoint (in other words, channel) creation or deletion and to receive such announcements, taking any user-defined action in an application callback. The endpoint address used to receive name service announcements is arbitrarily fixed to 53 (0x35). A short usage sketch follows.
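
    The sketch below is a hedged illustration based on the rpmsg_ns_bind() and rpmsg_ns_announce() helpers declared in rpmsg_ns.h; the callback signature, the RL_NS_CREATE/RL_NS_DESTROY flags, and the channel name are assumptions to be checked against the SDK version in use.

    #include "rpmsg_lite.h"
    #include "rpmsg_ns.h"

    /* called when the other side announces endpoint creation or destruction */
    static void ns_cb(uint32_t new_ept, const char *new_ept_name, uint32_t flags, void *user_data)
    {
        /* flags is expected to be RL_NS_CREATE or RL_NS_DESTROY */
        (void)new_ept; (void)new_ept_name; (void)flags; (void)user_data;
    }

    void ns_example(struct rpmsg_lite_instance *rl, struct rpmsg_lite_endpoint *ept)
    {
        rpmsg_ns_handle ns = rpmsg_ns_bind(rl, ns_cb, RL_NULL);          /* listen for announcements */
        rpmsg_ns_announce(rl, ept, "rpmsg-demo-channel", RL_NS_CREATE);  /* announce our endpoint */
        (void)ns;
    }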

    Usage

    The application should add the /rpmsg_lite/lib/include directory to the include path and include the rpmsg_lite.h header file, optionally together with rpmsg_queue.h and/or rpmsg_ns.h. Both porting sublayers should be provided for you by NXP; if you plan to use your own RTOS, all you need to do is implement your own environment layer (in other words, rpmsg_env_myrtos.c) and include it in the project build.

    The stack is initialized by calling rpmsg_lite_master_init() on the master side and rpmsg_lite_remote_init() on the remote side. This initialization function must be called prior to any other RPMsg-Lite API call. After the init, it is wise to create a communication endpoint, otherwise communication is not possible; this is done by calling the rpmsg_lite_create_ept() function. When the RL_USE_STATIC_API option is set to 1, it accepts an optional last argument in which the internal endpoint context is created; otherwise the stack internally calls env_alloc() to allocate dynamic memory for it. If callback-based receiving is used, an ISR callback with a user-defined callback data pointer is registered for each new endpoint. If blocking receive is desired (in an RTOS environment), the rpmsg_queue_create() function must be called before rpmsg_lite_create_ept(); the queue handle is passed to the endpoint creation function as the callback data argument and the callback function is set to rpmsg_queue_rx_cb(). It is then possible to use the rpmsg_queue_recv() function to listen on a queue object for incoming messages. The rpmsg_lite_send() function is used to send messages to the other side. A condensed sketch of this flow is shown below.
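
    The following is a condensed sketch of that flow on the remote (RTOS) side with the dynamic API (RL_USE_STATIC_API = 0). SHMEM_BASE, LINK_ID, the local endpoint address 40, and the echo logic are placeholders; the function names come from rpmsg_lite.h and rpmsg_queue.h, but the exact signatures should be verified against the SDK headers in use.

    #include "rpmsg_lite.h"
    #include "rpmsg_queue.h"

    #define SHMEM_BASE ((void *)0x20020000) /* assumed shared-memory base address */
    #define LINK_ID    (0U)                 /* assumed RPMsg link id */

    void app_rpmsg_task(void *param)
    {
        struct rpmsg_lite_instance *rl = rpmsg_lite_remote_init(SHMEM_BASE, LINK_ID, RL_NO_FLAGS);

        /* create the queue first, then the endpoint that feeds it */
        rpmsg_queue_handle q = rpmsg_queue_create(rl);
        struct rpmsg_lite_endpoint *ept = rpmsg_lite_create_ept(rl, 40, rpmsg_queue_rx_cb, q);

        char     buf[64];
        uint32_t len = 0;
        uint32_t src = 0;

        /* blocking, copy receive; echo the message back to its sender */
        if (rpmsg_queue_recv(rl, q, &src, buf, sizeof(buf), &len, RL_BLOCK) == RL_SUCCESS)
        {
            rpmsg_lite_send(rl, ept, src, buf, len, RL_BLOCK);
        }
        (void)param;
    }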

    The RPMsg-Lite also implements no-copy mechanisms for both sending and receiving. These mechanisms have specific requirements that must be considered when they are used in an application.

    no-copy-send mechanism: This mechanism allows sending messages without the cost of copying data from the application buffer to the RPMsg/virtio buffer in the shared memory (see the sketch after this list). The sequence of no-copy sending steps is as follows:

    • Call the rpmsg_lite_alloc_tx_buffer() function to get the virtio buffer and provide the buffer pointer to the application.
    • Fill the data to be sent into the pre-allocated virtio buffer. Ensure that the filled data does not exceed the buffer size (provided as the rpmsg_lite_alloc_tx_buffer() size output parameter).
    • Call the rpmsg_lite_send_nocopy() function to send the message to the destination endpoint. Consider the cache functionality and the virtio buffer alignment. See the rpmsg_lite_send_nocopy() function description below.
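
    A minimal sketch of these three steps, reusing the rl and ept handles from the earlier sketch; the destination address 40 and the text payload are assumptions.

    #include <stdio.h>
    #include "rpmsg_lite.h"

    void nocopy_send_example(struct rpmsg_lite_instance *rl, struct rpmsg_lite_endpoint *ept)
    {
        uint32_t size = 0;
        /* step 1: borrow a virtio TX buffer from the shared-memory pool */
        void *tx = rpmsg_lite_alloc_tx_buffer(rl, &size, RL_BLOCK);
        if (tx != RL_NULL)
        {
            /* step 2: fill it in place, never exceeding 'size' */
            int used = snprintf((char *)tx, size, "hello");
            /* step 3: hand the buffer over to the stack; do not touch it afterwards */
            rpmsg_lite_send_nocopy(rl, ept, 40, tx, (uint32_t)used + 1U);
        }
    }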

    no-copy-receive mechanism: This mechanism allows reading messages without the cost of copying data from the virtio buffer in the shared memory to the application buffer (see the sketch after this list). The sequence of no-copy receiving steps is as follows:

    • Call the rpmsg_queue_recv_nocopy() function to get the virtio buffer pointer to the received data.
    • Read received data directly from the shared memory.
    • Call the rpmsg_queue_nocopy_free() function to release the virtio buffer and to make it available for the next data transfer.
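
    The receiving counterpart, again as a sketch, assuming the queue handle q from the earlier sketch; process_in_place() is a hypothetical application function.

    #include "rpmsg_lite.h"
    #include "rpmsg_queue.h"

    extern void process_in_place(const void *data, uint32_t len); /* hypothetical */

    void nocopy_recv_example(struct rpmsg_lite_instance *rl, rpmsg_queue_handle q)
    {
        char    *rx  = RL_NULL;
        uint32_t len = 0;
        uint32_t src = 0;

        /* step 1: get a pointer straight into the shared-memory virtio buffer */
        if (rpmsg_queue_recv_nocopy(rl, q, &src, &rx, &len, RL_BLOCK) == RL_SUCCESS)
        {
            /* step 2: consume the data where it lies */
            process_in_place(rx, len);
            /* step 3: release the buffer so it can be reused for the next transfer */
            rpmsg_queue_nocopy_free(rl, rx);
        }
    }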

    The user is responsible for destroying any RPMsg-Lite objects they have created before deinitialization. To do this, rpmsg_queue_destroy() is used to destroy a queue, rpmsg_lite_destroy_ept() is used to destroy an endpoint, and finally rpmsg_lite_deinit() is used to deinitialize the RPMsg-Lite intercore communication stack. Deinitialize all endpoints using a queue before deinitializing the queue; otherwise you are invalidating a queue handle that is still in use, which is not allowed. RPMsg-Lite does not check this internally, since its main aim is to be lightweight.
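
    Sketched in code (argument order follows the public headers, so treat it as an assumption), the tear-down order is:

    rpmsg_lite_destroy_ept(rl, ept);  /* endpoints first            */
    rpmsg_queue_destroy(rl, q);       /* then the queues they fed   */
    rpmsg_lite_deinit(rl);            /* finally the stack instance */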

    rpmsg_lite_send_receive.png
    RPMsg Lite copy and no-copy interface, multiple scenarios

    Configuration options

    The RPMsg-Lite can be configured at compile time. The default configuration is defined in the rpmsg_default_config.h header file. This configuration can be customized by providing an rpmsg_config.h file with custom settings. The following table summarizes all possible RPMsg-Lite configuration options; an example rpmsg_config.h sketch follows the table.

    Configuration option (default value): usage
    RL_MS_PER_INTERVAL (1): Delay in milliseconds used in non-blocking API functions for polling.
    RL_BUFFER_PAYLOAD_SIZE (496): Size of the buffer payload; must be equal to 2^n - 16 (240, 496, 1008, ...).
    RL_BUFFER_COUNT (2): Number of buffers; must be a power of two (2, 4, ...).
    RL_API_HAS_ZEROCOPY (1): Zero-copy API functions enabled/disabled.
    RL_USE_STATIC_API (0): Static API functions (no dynamic allocation) enabled/disabled.
    RL_CLEAR_USED_BUFFERS (0): Clearing used buffers before returning them to the pool of free buffers enabled/disabled.
    RL_USE_MCMGR_IPC_ISR_HANDLER (0): When enabled, IPC interrupts are managed by the Multicore Manager (IPC interrupt router); when disabled, RPMsg-Lite manages IPC interrupts by itself.
    RL_USE_ENVIRONMENT_CONTEXT (0): When enabled, the environment layer uses its own context. Required for some environments (QNX). The default value is 0 (no context, saves some RAM).
    RL_DEBUG_CHECK_BUFFERS (0): When enabled, debug checks of the buffers passed to the rpmsg_lite_send_nocopy() and rpmsg_lite_release_rx_buffer() functions are performed. Do not use in RPMsg-Lite to Linux configurations.
    RL_ALLOW_CONSUMED_BUFFERS_NOTIFICATION (0): When enabled, the opposite side is notified each time received buffers are consumed and put into the queue of available buffers. Enable this option in RPMsg-Lite to Linux configurations to allow unblocking of the Linux blocking send. The default value is 0 (RPMsg-Lite to RPMsg-Lite communication).
    RL_ALLOW_CUSTOM_SHMEM_CONFIG (0): Allows defining a custom shared memory configuration, replacing the shared-memory-related global settings from rpmsg_config.h. This is useful when multiple instances run in parallel but require different shared memory arrangements (vring size and alignment, buffer size and count). The default value is 0 (all RPMsg-Lite instances use the same shared memory arrangement as defined by the common config macros).
    RL_ASSERT (see rpmsg_default_config.h): Assert implementation.
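
    As an illustration, a custom rpmsg_config.h overriding a few of these defaults might look like the sketch below; the values are arbitrary examples, not recommendations.

    /* rpmsg_config.h - example overrides; the file must be on the RPMsg-Lite include path */
    #ifndef RPMSG_CONFIG_H_
    #define RPMSG_CONFIG_H_

    #define RL_BUFFER_PAYLOAD_SIZE (496U) /* must be 2^n - 16: 240, 496, 1008, ... */
    #define RL_BUFFER_COUNT        (4U)   /* must be a power of two */
    #define RL_USE_STATIC_API      (1)    /* avoid dynamic allocation */
    #define RL_CLEAR_USED_BUFFERS  (1)    /* wipe buffers before recycling them */

    #endif /* RPMSG_CONFIG_H_ */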

    References

    [1] M. Novak, M. Cingel, Lockless Shared Memory Based Multicore Communication Protocol

     

    https://github.com/sonydevworld/spresense/tree/master/examples/fft

    https://github.com/qicosmos/rest_rpc

    https://github.com/EmbeddedRPC/erpc

    This tutorial introduces the eRPC (embedded remote procedure call) open-source project.

    The eRPC (Embedded Remote Procedure Call) is a Remote Procedure Call (RPC) system created by NXP. An RPC is a mechanism used to invoke a software routine on a remote system using a simple local function call. The remote system may be any CPU connected by an arbitrary communications channel: a server across a network, another CPU core in a multicore system, and so on. To the client, it is just like calling a function in a library built into the application. The only difference is any latency or unreliability introduced by the communications channel.

    Important links: 

    eRPC supports both multicore and multiprocessor types of applications.

    Where to find eRPC:

    • Multicore examples
    • Multiprocessor examples
      • frdmk22f, frdmk28f, frdmk64f, frdmk66f, frdmk82f, frdmkl25z, frdmkl27z, frdmkl43z - MCUXpresso download page
     
     

    https://github.com/alibaba/AliOS-Things/tree/master/components/uservice

    uService (microservice) is a service mechanism that supports RPC-style request/response interaction and publication of status messages. A client can invoke a service provided by a uService by sending a request message and waiting for the reply, or it can subscribe to the service's events and handle the service's event states.

    serviceTask (service task) uses the operating system's multitasking facilities to implement message dispatching; each serviceTask creates one OS task. Multiple microservices can be registered under a single serviceTask, and the messages of all microservices under the same service task are processed in first-in, first-out order.

     

     

    BufferQueue

    https://blog.csdn.net/rabbyheathy/article/details/103748551

    https://github.com/nesl/Android-IPC/blob/master/ashmemIPC/FibonacciClient/jni/android/include/frameworks/native/include/gui/BufferQueue.h

     

     

    https://yoc.docs.t-head.cn/yocbook/Chapter4-%E6%A0%B8%E5%BF%83%E6%A8%A1%E5%9D%97/%E5%BC%82%E6%9E%84%E5%A4%9A%E6%A0%B8IPC.html

    Heterogeneous Multicore Communication (IPC)

    Overview

    With the development of information technology and growing requirements, traditional single-core SoCs can no longer meet demand, which has driven the development of homogeneous and heterogeneous multicore systems. In a heterogeneous multicore architecture, the system consists of several cores built from different processors with different performance levels and purposes; each core plays to its own computing strengths, achieving an optimal allocation of resources, improving overall performance, reducing power consumption, and simplifying the development model. The inter-core communication mechanism and the software programming model are therefore crucial to exploiting multicore performance. This article introduces a heterogeneous inter-core communication mechanism designed with the following considerations:

    • Lightweight, applicable to NOS/RTOS scenarios
    • Simplify the development model of each core
    • Based on the characteristics of inter-core cooperation, workloads are divided into general-purpose computing, parallel computing, streaming data, and frame data, covering a variety of heterogeneous scenarios
    • Portability: the framework can easily be ported to Linux and to different RTOSes

    Heterogeneous multicore processors mostly adopt a master/slave structure, which divides the processor cores into a master core and slave cores according to their functions. The master core is generally more complex in structure and function; it manages and schedules global resources and tasks and handles booting and loading the slave cores. The slave cores are mainly managed by the master core; they run the tasks assigned by the master core and have local task scheduling and management capabilities. Depending on the structure of each core, the cores of a multicore processor may run the same or different operating systems.

    How heterogeneous multicore communication works

    @Platform integration diagram

    As the figure above shows, to enable communication between heterogeneous cores, the chip is designed so that one core can send an interrupt request to another core; the corresponding interrupt handler then performs the address hand-off, a shared memory region is used to pass the data, and each core's access to peripherals is configured over the control bus. To implement inter-core communication, each core therefore needs the following capabilities (see the sketch after this list):

    • The master processor manages the slave processors
    • Information is transferred and exchanged between the processors
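
    A purely illustrative sketch of this shared-memory-plus-interrupt pattern is shown below; every address and register name in it is hypothetical, since real chips use a vendor-specific mailbox/IPI block.

    #include <stdint.h>
    #include <string.h>

    #define SHARED_BUF   ((volatile uint8_t *)0x20040000u)  /* hypothetical shared RAM region */
    #define MBOX_TRIGGER ((volatile uint32_t *)0x40080000u) /* hypothetical "raise IRQ on peer" register */

    static void notify_peer(const void *data, uint32_t len)
    {
        /* 1. place the payload where both cores can see it */
        memcpy((void *)SHARED_BUF, data, len);
        /* 2. raise an inter-core interrupt; the peer's ISR then reads the shared buffer */
        *MBOX_TRIGGER = 1u;
    }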

    Inter-core interrupts

    Shared memory

    Inter-core communication protocol

    IPC architecture

    Channel

    A Channel implements the communication mechanism used to exchange data between cores.

    struct channel {
        slist_t    next;     /* list node; channels are kept in a singly linked list */
        channel_cb cb;       /* callback invoked when data arrives on this channel */
        void      *priv;     /* user data passed to the callback */
        uint32_t   des_id;   /* destination CPU id */
        void      *context;  /* lower-layer (transport) context */
    };
    

    Event

    Event is an abstraction over inter-core interrupts: an interrupt is modeled as an event. The system can define 32 events. Events are prioritized: the smaller the EventId, the higher the priority, so event 0 has the highest priority and priority decreases as the EventId grows. When multiple events are triggered, the one with the highest priority is serviced first.

    MessageQ (message queue)

    MessageQ (not yet implemented) provides queue-based message passing with the following characteristics:

    It supports variable-length messages between processors; messages are passed entirely by operating on message queues; each message queue can have multiple writers but only one reader; each task can read from and write to multiple message queues; a host must create a message queue before it can receive messages, and must open the intended receiving queue before it can send messages.

    FrameQ (frame buffer queue)

    FrameQ (not yet implemented) is a component designed specifically for passing video frames. Its basic data structure is a queue that frames can be enqueued to and dequeued from; it encapsulates the video frame buffer pointer, frame data type, frame width, frame height, timestamp, and other information.

    The FrameQ module has the following characteristics: it supports multiple readers but only a single writer; frames can be allocated and freed; the same memory region can be allocated multiple times and initialized as new frame buffers; FrameQ allows multiple queues, and in multi-channel use cases video frames are placed into the corresponding frame queue according to their channel number.

    RingQ (ring buffer)

    RingIO (not yet implemented) is a stream-oriented ring buffer, optimized for the characteristics of audio and video data.

    RingIO supports the following features: only one reader and one writer; reads and writes are relatively independent and can be performed simultaneously from different processes or processors.

    Interface definitions

    Initializing the IPC

    ipc_t *ipc_get(int des_cpu_id);
    

    Initializes the channel to the target CPU and obtains an IPC handle.

    • Parameters
      • des_cpu_id: target CPU id
    • Return value
      • Returns a pointer to ipc_t on success, NULL on failure

    Sending an IPC message

    int ipc_message_send(ipc_t *ipc, message_t *msg, int timeout_ms);
    

    Sends an IPC message to the remote side over the channel. When msg is a synchronous message, the function waits for the peer's acknowledgement before returning; when msg is an asynchronous message, it returns as soon as the message has been sent. A minimal synchronous call sketch follows the parameter list below.

    typedef struct message_msg message_t;
    struct message_msg {
        uint8_t         flag;            /** flag: MESSAGE_SYNC or MESSAGE_ACK */
        uint8_t         service_id;      /** id of the service to communicate with */
        uint16_t        command;         /** command id provided by the service */
        uint32_t        seq;             /** message sequence number */
        void           *req_data;        /** message request data */
        int             req_len;         /** message request data length */
        aos_queue_t     queue;           /** queue used for SYNC messages */
        void           *resp_data;       /** message response data */
        int             resp_len;        /** message response data length */
    };
    
    • Parameters
      • ipc: IPC handle
      • msg: IPC message
      • timeout_ms: timeout in milliseconds
    • Return value
      • Returns 0 on success, -1 on failure
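
    A minimal synchronous call sketch using only the types and functions documented above; the service id, command, timeout, and the assumption that the caller provides the resp_data buffer are illustrative only.

    #include <string.h>

    int demo_sync_call(ipc_t *ipc)
    {
        static char reply[64];
        message_t msg;
        memset(&msg, 0, sizeof(msg));

        msg.flag       = MESSAGE_SYNC;   /* wait for the peer's ipc_message_ack() */
        msg.service_id = 20;
        msg.command    = 1;
        msg.req_data   = (void *)"ping";
        msg.req_len    = 5;
        msg.resp_data  = reply;          /* assumed: caller-provided response buffer */
        msg.resp_len   = sizeof(reply);

        return ipc_message_send(ipc, &msg, 1000 /* timeout in ms */);
    }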

    Acknowledging an IPC message

    int ipc_message_ack(ipc_t *ipc, message_t *msg, int timeout_ms);
    

    Call this function to send the acknowledgement back to the sender when msg is a synchronous message.

    • Parameters
      • ipc: IPC handle
      • msg: IPC message
      • timeout_ms: timeout in milliseconds
    • Return value
      • Returns 0 on success, -1 on failure

    Adding a service

    int ipc_add_service(ipc_t *ipc, int service_id, ipc_process_t cb, void *priv);
    

    Registers a service with the IPC instance.

    typedef void (*ipc_process_t)(ipc_t *ipc, message_t *msg, void *priv);
    
    • Parameters
      • ipc: IPC handle
      • service_id: service id
      • cb: user-defined IPC service handler
      • priv: user-defined argument
    • Return value
      • Returns 0 on success, -1 on failure

    Example code

    • AP
      #define IPC_BUF_LEN (4096*2)
    
      typedef struct {
          ipc_t *ipc;
          char data[IPC_BUF_LEN] __attribute__ ((aligned(64)));
      } ipc_test_t;
    
      ipc_test_t g_test[2];
    
      #define TAG "AP"
      /* Example: synchronous IPC call */
      int ipc_sync(void)
      {
          message_t msg;
          memset(&msg, 0, sizeof(message_t)); /* clear seq/queue fields before use, as in the async example */
    
          /* set the message command to 103 */
          msg.command = 103;
          /* set the synchronous flag */
          msg.flag    = MESSAGE_SYNC;
          /* set the message service_id to 20 */
          msg.service_id = 20;
          msg.resp_data = NULL;
          msg.resp_len = 0;
    
          /* amount of data to send per message */
          int snd_len = 4096;
          int offset = 0;
          char *send = (char *)src_audio_music_raw;
    
    
          while (offset < src_audio_music_raw_len) {
              /* set the request data pointer of the message */
              msg.req_data    = send + offset;

              /* set the request data length of the message */
              snd_len = 4096 < (src_audio_music_raw_len - offset)? 4096 : (src_audio_music_raw_len - offset);
              msg.req_len     = snd_len;
              /* send the message */
              ipc_message_send(g_test[0].ipc, &msg, AOS_WAIT_FOREVER);
              /* advance the offset by the amount just sent */
              offset += snd_len;
          }
    
          printf("ipc sync done\n");
          return 0;
      }
    
      // async message demo
    
      static char *s[] = {
          "00000000",
          "11111111",
          "22222222",
          "33333333",
          "44444444",
          "55555555",
          "66666666",
          "77777777",
          "88888888",
          "99999999",
      };
      /* Example: asynchronous IPC call */
      int ipc_async(void)
      {
          message_t msg;
          memset(&msg, 0, sizeof(message_t));
    
          /* set the message service_id to 20 */
          msg.service_id = 20;
          /* set the message command to 104 */
          msg.command = 104;
          /* set the asynchronous flag */
          msg.flag   = MESSAGE_ASYNC;
    
          for (int i = 0; i < 100; i++) {
              msg.req_data    = s[i%10];
              msg.req_len     = strlen(s[i%10]) + 1;
    
              /* send the message */
              ipc_message_send(g_test[0].ipc, &msg, AOS_WAIT_FOREVER);
          }
    
          printf("ipc async done\n");
    
          return 0;
      }
    
      int ipc_server_init(void)
      {
          ipc_test_t *i = &g_test[0];
          /* get the IPC handle for CPU id 2 */
          i->ipc = ipc_get(2);
    
          /* register IPC service (service_id: 20) */
          ipc_add_service(i->ipc, 20, NULL, NULL);
    
          return 0;
      }
    
    • CP
    #define TAG "CP"
    #define IPC_BUF_LEN (1024)
    
    typedef struct {
        ipc_t *ipc;
        char data[IPC_BUF_LEN] __attribute__ ((aligned(64)));
    } ipc_test_t;
    
    ipc_test_t g_test[2];
    
    static char *s[] = {
        "00000000",
        "11111111",
        "22222222",
        "33333333",
        "44444444",
        "55555555",
        "66666666",
        "77777777",
        "88888888",
        "99999999",
    };
    
    static void cli_ipc_process(ipc_t *ipc, message_t *msg, void *priv)
    {
        switch (msg->command) {
    
            case 104: {
                /* handle the asynchronous command */
                static int offset = 0;
                /* check that the asynchronously sent data is correct */
                int ret = memcmp(s[offset % 10], msg->req_data, msg->req_len);
    
                offset ++;
    
                if (ret != 0) {
                    printf("ipc async err!!!\n");
                }
    
                if (offset == 100) {
                    printf("ipc async ok!!!\n");
                    offset = 0;
                }
                break;
            }
    
            case 103: {
                char *music = (char *)src_audio_music_raw;
                static int music_len = 0;
    
                /* check that the synchronously sent data is correct */
                int ret = memcmp(music + music_len, msg->req_data, msg->req_len);
                music_len += msg->req_len;
    
                if (ret != 0) {
                    printf("ipc sync err!!!\n");
                }
                if (music_len == src_audio_music_raw_len) {
                    /* the whole file has been received; report success */
                    printf("music recv ok, total:(%d)\n", music_len);
                    music_len = 0;
                }
                /* acknowledge the synchronous message */
                ipc_message_ack(ipc, msg, AOS_WAIT_FOREVER);
                break;
            }
    
            default :
                break;
        }
    }
    
    int ipc_cli_init(void)
    {
        ipc_test_t *i = &g_test[1];
    /* get the IPC handle for CPU id 0 */
        i->ipc = ipc_get(0);
    
    /* register IPC service (service_id: 20) */
        ipc_add_service(i->ipc, 20, cli_ipc_process, i);
    
        return 0;
    }



    https://help.aliyun.com/document_detail/311304.html

     

     

    Heterogeneous Multicore Communication (IPC)

    https://github.com/T-head-Semi/open-yoc/blob/master/components/ipc/README.md

    https://occ.t-head.cn/doc/docs/Chapter4-E6A0B8E5BF83E6A8A1E59D97/E5BC82E69E84E5A49AE6A0B8IPC.html

     

     

     

    https://zhuanlan.zhihu.com/p/530307453

     

     

     

     
