• Semaphore Classification


    Semaphore classification:
    Mutual Exclusion Semaphores: a special binary semaphore optimized specifically for mutual exclusion.
    Binary Semaphores: the best mechanism for mutual exclusion and synchronization; the fastest and most commonly used.
    Counting Semaphores: similar to binary semaphores, but they record how many times the semaphore has been given, so they can track multiple instances of the same resource.

    ======== Mutual Exclusion Semaphores ==============================

    A mutual exclusion semaphore is a special binary semaphore designed to address the problems
    that arise when an ordinary binary semaphore is used for mutual exclusion.
    It mainly adds handling for priority inversion, deletion safety, and recursive access.
    1. A mutex semaphore can be used only for mutual exclusion.
    2. It can be released only by the task that has taken it.
    3. An interrupt service routine (ISR) must not release (semGive()) a mutex semaphore.
    4. Mutex semaphores do not support the semFlush() operation.
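
    A minimal VxWorks-style sketch of these rules, assuming the standard semLib/semMLib calls
    (semMCreate(), semTake(), semGive() and the SEM_* option flags); the guarded counter is a
    hypothetical example:

    #include <vxWorks.h>
    #include <semLib.h>

    static SEM_ID mutexId;          /* mutual exclusion semaphore            */
    static int    sharedCounter;    /* hypothetical resource guarded by it   */

    void initMutex(void)
    {
        /* Priority-ordered queuing, priority inheritance, deletion safety. */
        mutexId = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE | SEM_DELETE_SAFE);
    }

    void updateSharedCounter(void)
    {
        semTake(mutexId, WAIT_FOREVER);   /* only the taking task may give it back      */
        sharedCounter++;                  /* critical section                           */
        semGive(mutexId);                 /* released by the same task; never by an ISR */
    }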

    A mutual exclusion (mutex) semaphore is a special binary semaphore that supports
    ownership, recursive access, task deletion safety, and one or more protocols
    for avoiding problems inherent to mutual exclusion.

    When a task owns the mutex, it is not possible for any other task to lock or unlock that mutex.
    Contrast this concept with the binary semaphore, which can be released by any task,
    even a task that did not originally acquire the semaphore.

    A mutex is a synchronization object that can have only two states:
    Not owned.
    Owned.

    Two operations are defined for mutexes:
    Lock : This operation attempts to take ownership of a mutex;
    if the mutex is already owned by another thread, the invoking thread is queued.
    Unlock : This operation relinquishes ownership of a mutex.
    If there are queued threads, one thread is removed from the queue and resumed,
    and ownership is implicitly assigned to that thread.

    ======== Binary Semaphores ======================================

    1. Mutual exclusion: different tasks use the semaphore to access a critical resource mutually exclusively.
    This form of mutual exclusion has a much finer granularity than the other two mechanisms,
    disabling interrupts (interrupt disable) and locking out preemption (preemptive locks).
    For mutual exclusion, the semaphore is created in the available state (SEM_FULL),
    and semTake() and semGive() are called as a pair, in order, within the same task.

    2. Synchronization: a task uses the semaphore to control its own rate of progress,
    synchronizing itself with a set of external events.
    For synchronization, the semaphore is created in the unavailable state (SEM_EMPTY),
    and semTake() and semGive() are each called on their own, from different tasks.
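
    A minimal sketch of both uses, assuming the standard VxWorks semBLib calls
    (semBCreate(), semTake(), semGive()); the task bodies and the event source are hypothetical:

    #include <vxWorks.h>
    #include <semLib.h>

    static SEM_ID mutexSem;   /* mutual exclusion: created full (available)    */
    static SEM_ID syncSem;    /* synchronization:  created empty (unavailable) */

    void initSems(void)
    {
        mutexSem = semBCreate(SEM_Q_PRIORITY, SEM_FULL);
        syncSem  = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    }

    void taskUsingResource(void)   /* mutual exclusion: take/give as a pair in one task */
    {
        semTake(mutexSem, WAIT_FOREVER);
        /* ... access the critical resource ... */
        semGive(mutexSem);
    }

    void waitingTask(void)         /* synchronization: one task waits ...               */
    {
        semTake(syncSem, WAIT_FOREVER);
        /* ... handle the event ... */
    }

    void eventSource(void)         /* ... another task (or an ISR) signals              */
    {
        semGive(syncSem);
    }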

    Because Wait may cause a thread to block (i.e., when the counter is zero),
    it has an effect similar to the lock operation of a mutex lock.
    Similarly, Signal may release a waiting thread,
    and is similar to the unlock operation.

    In fact, semaphores can be used as mutex locks.
    Consider a semaphore S with initial value 1.
    Then, Wait and Signal correspond to lock and unlock:
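
    (A minimal illustration using the POSIX calls sem_init(), sem_wait(), sem_post() in place of
    the generic Wait/Signal; the protected counter is hypothetical.)

    #include <semaphore.h>

    static sem_t s;         /* semaphore used as a lock     */
    static int shared;      /* hypothetical shared data     */

    void init(void)
    {
        sem_init(&s, 0, 1); /* initial value 1: "unlocked"  */
    }

    void criticalSection(void)
    {
        sem_wait(&s);       /* Wait   == lock               */
        shared++;           /* critical section             */
        sem_post(&s);       /* Signal == unlock             */
    }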

    A binary semaphore can have a value of either 0 or 1.
    When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty);
    when the value is 1, the binary semaphore is considered available (or full).

    Note that when a binary semaphore is first created, it can be initialized to
    either available or unavailable (1 or 0, respectively).

    However, there is an advantage to using semaphores.
    When a mutex lock is created, it is always in the "unlocked" position.
    If a binary semaphore is used and initialized to 0, it is equivalent to having a mutex lock
    that is locked initially. Therefore, the use of binary semaphores is a little more flexible.

    A binary semaphore is a synchronization object that can have only two states:
    Not taken.
    Taken.

    Two operations are defined:
    Take : Taking a binary semaphore brings it into the “taken” state;
    trying to take a semaphore that is already taken places the invoking thread in a waiting queue.
    Release : Releasing a binary semaphore brings it into the “not taken” state
    if there are no queued threads. If there are queued threads, one thread is removed
    from the queue and resumed, and the binary semaphore remains in the “taken” state.
    Releasing a semaphore that is already in its “not taken” state has no effect.

    ======== Counting Semaphores ======================================

    Both counting semaphores and binary semaphores can be used for synchronization and mutual exclusion between tasks.
    The difference is that a counting semaphore records how many times it has been given,
    so it can be used to monitor the availability of multiple instances of the same resource.

    A counting semaphore is a synchronization object that can have an arbitrarily large number of states.
    The internal state is defined by a signed integer variable, the counter.
    The counter value (N) has a precise meaning:
    Negative: there are exactly -N threads queued on the semaphore.
    Zero: no waiting threads; a wait operation would queue the invoking thread.
    Positive: no waiting threads; a wait operation would not queue the invoking thread.

    Two operations are defined for counting semaphores:
    Wait : This operation decreases the semaphore counter;
    if the result is negative, the invoking thread is queued.
    Signal : This operation increases the semaphore counter;
    if the result is non-negative, a waiting thread is removed from the queue and resumed.
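
    A minimal sketch using the VxWorks counting-semaphore calls (semCCreate(), semTake(), semGive());
    the pool size of 4 is a hypothetical example value:

    #include <vxWorks.h>
    #include <semLib.h>

    #define NUM_BUFFERS 4            /* hypothetical pool of identical buffers */

    static SEM_ID bufSem;

    void initPool(void)
    {
        /* The counter starts at the number of free resources. */
        bufSem = semCCreate(SEM_Q_FIFO, NUM_BUFFERS);
    }

    void useBuffer(void)
    {
        semTake(bufSem, WAIT_FOREVER);   /* Wait: decrement; block if none free */
        /* ... use one buffer ... */
        semGive(bufSem);                 /* Signal: increment; wake a waiter    */
    }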

    ======== Mutexes ====================================================

    Mutexes are binary semaphores that include a priority inheritance mechanism. <priority inheritance>

    Whereas binary semaphores are the better choice for implementing synchronisation
    (between tasks or between tasks and an interrupt),
    mutexes are the better choice for implementing
    simple mutual exclusion (hence 'MUT'ual 'EX'clusion).

    When used for mutual exclusion the mutex acts
    like a token that is used to guard a resource.
    When a task wishes to access the resource it must first obtain ('take') the token.
    When it has finished with the resource it must 'give' the token back -
    allowing other tasks the opportunity to access the same resource.

    Priority inheritance does not cure priority inversion!

    It just minimises its effect in some situations.
    Hard real time applications should be designed such that priority inversion
    does not happen in the first place.
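
    A minimal FreeRTOS-style sketch, assuming the standard semaphore API (xSemaphoreCreateMutex(),
    xSemaphoreTake(), xSemaphoreGive()); the guarded resource is hypothetical:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xMutex;     /* the token guarding the resource */

    void vSetupMutex(void)
    {
        xMutex = xSemaphoreCreateMutex();
    }

    void vTaskUsingResource(void *pvParameters)
    {
        for (;;)
        {
            if (xSemaphoreTake(xMutex, portMAX_DELAY) == pdTRUE)
            {
                /* ... access the shared resource ... */
                xSemaphoreGive(xMutex);  /* give the token back promptly */
            }
        }
    }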

    ======== Recursive Mutexes ======================================

    A mutex used recursively can be 'taken' repeatedly by the owner.
    The mutex doesn't become available again until the owner has called
    xSemaphoreGiveRecursive() for each successful xSemaphoreTakeRecursive() request.
    For example, if a task successfully 'takes' the same mutex 5 times then the mutex
    will not be available to any other task until it has also 'given'
    the mutex back exactly five times.
    This type of semaphore uses a priority inheritance mechanism, so a task
    'taking' a semaphore MUST ALWAYS 'give' the semaphore back
    once the semaphore is no longer required.

    Mutex type semaphores cannot be used from within interrupt service routines.

    Task() ----- xSemaphoreTakeRecursive()
        |-- funcA --- xSemaphoreTakeRecursive(), xSemaphoreGiveRecursive()
        |-- funcB --- xSemaphoreTakeRecursive(), xSemaphoreGiveRecursive()
    Task() ----- xSemaphoreGiveRecursive()
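
    A minimal sketch matching the call pattern above, assuming the FreeRTOS recursive-mutex API
    (xSemaphoreCreateRecursiveMutex(), xSemaphoreTakeRecursive(), xSemaphoreGiveRecursive());
    funcA() is a hypothetical helper:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xRecMutex;

    static void funcA(void)                                    /* hypothetical helper   */
    {
        xSemaphoreTakeRecursive(xRecMutex, portMAX_DELAY);     /* nested take: count 2  */
        /* ... */
        xSemaphoreGiveRecursive(xRecMutex);                    /* back to count 1       */
    }

    void vTask(void *pvParameters)
    {
        xRecMutex = xSemaphoreCreateRecursiveMutex();

        xSemaphoreTakeRecursive(xRecMutex, portMAX_DELAY);     /* count 1               */
        funcA();                                               /* nested take/give      */
        xSemaphoreGiveRecursive(xRecMutex);                    /* count 0: available    */
    }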

    ======== Binary Semaphores ======================================

    Binary semaphores are used for both mutual exclusion and synchronisation purposes.
    Binary semaphores and mutexes are very similar but have some subtle differences:
    Mutexes include a priority inheritance mechanism; binary semaphores do not. <no priority inheritance>
    This makes binary semaphores the better choice for implementing synchronisation
    (between tasks or between tasks and an interrupt),
    and mutexes the better choice for implementing simple mutual exclusion.

    Think of a binary semaphore as a queue that can only hold one item.
    The queue can therefore only be empty or full (hence binary).
    Tasks and interrupts using the queue don't care what the queue holds
    - they only want to know if the queue is empty or full.
    This mechanism can be exploited to synchronise (for example) a task with an interrupt.
    Consider the case where a task is used to service a peripheral.
    Polling the peripheral would be wasteful of CPU resources,
    and prevent other tasks from executing.
    It is therefore preferable that the task spends most of its time
    in the Blocked state (allowing other tasks to execute) and
    executes only when there is actually something for it to do.

    This is achieved using a binary semaphore by having the task Block
    while attempting to 'take' the semaphore.
    An interrupt routine is then written for the peripheral that just 'gives'
    the semaphore when the peripheral requires servicing.
    The task always 'takes' the semaphore (reads from the queue to make the queue empty),
    but never 'gives' it.
    The interrupt always 'gives' the semaphore (writes to the queue to make it full)
    but never takes it.
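
    A minimal sketch of this pattern, assuming the FreeRTOS calls xSemaphoreCreateBinary(),
    xSemaphoreTake(), xSemaphoreGiveFromISR() and portYIELD_FROM_ISR(); the peripheral ISR
    name and the servicing code are hypothetical:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xBinSem;

    void vSetupHandler(void)
    {
        xBinSem = xSemaphoreCreateBinary();   /* created empty: the task blocks first */
    }

    void vHandlerTask(void *pvParameters)     /* only ever 'takes'                    */
    {
        for (;;)
        {
            if (xSemaphoreTake(xBinSem, portMAX_DELAY) == pdTRUE)
            {
                /* ... service the peripheral ... */
            }
        }
    }

    void vPeripheralISR(void)                 /* only ever 'gives' (hypothetical ISR) */
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;
        xSemaphoreGiveFromISR(xBinSem, &xHigherPriorityTaskWoken);
        portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
    }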

    Task prioritisation can be used to ensure peripherals get serviced in a timely manner
    - effectively generating a 'deferred interrupt' scheme.
    An alternative approach is to use a queue in place of the semaphore.
    When this is done the interrupt routine can capture the data associated with the peripheral event
    and send it on a queue to the task. The task unblocks when data becomes available on the queue,
    retrieves the data from the queue, then performs any data processing that is required.
    This second scheme permits interrupts to remain as short as possible,
    with all post processing instead occurring within a task.
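
    A minimal sketch of the queue-based alternative, assuming the FreeRTOS queue API
    (xQueueCreate(), xQueueReceive(), xQueueSendFromISR()); the event structure and ISR
    name are hypothetical:

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "queue.h"

    typedef struct { uint32_t data; } Event_t;    /* hypothetical peripheral event */

    static QueueHandle_t xEventQueue;

    void vSetupQueue(void)
    {
        xEventQueue = xQueueCreate(8, sizeof(Event_t));
    }

    void vProcessingTask(void *pvParameters)
    {
        Event_t ev;
        for (;;)
        {
            if (xQueueReceive(xEventQueue, &ev, portMAX_DELAY) == pdTRUE)
            {
                /* ... post-processing happens here, in task context ... */
            }
        }
    }

    void vPeripheralISR(void)
    {
        Event_t ev = { 0 };                        /* capture the peripheral data   */
        BaseType_t xWoken = pdFALSE;
        xQueueSendFromISR(xEventQueue, &ev, &xWoken);
        portYIELD_FROM_ISR(xWoken);
    }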

    ======== Counting Semaphores(计数信号量)======================================

    Just as binary semaphores can be thought of as queues of length one,
    counting semaphores can be thought of as queues of length greater than one.
    Again, users of the semaphore are not interested in the data that is stored in the queue
    - just whether the queue is empty or not.
    Counting semaphores are typically used for two things:

    Counting events.

    In this usage scenario an event handler will 'give' a semaphore each time an event occurs
    (incrementing the semaphore count value), and a handler task will 'take' a semaphore each time
    it processes an event (decrementing the semaphore count value).
    The count value is therefore the difference between the number of events that have occurred
    and the number that have been processed.
    In this case it is desirable for the count value to be zero when the semaphore is created.

    Resource management.

    In this usage scenario the count value indicates the number of resources available.
    To obtain control of a resource a task must first obtain a semaphore
    - decrementing the semaphore count value.
    When the count value reaches zero there are no free resources.
    When a task finishes with the resource it 'gives' the semaphore back
    - incrementing the semaphore count value.

    In this case it is desirable for the count value to be equal to the maximum count value
    when the semaphore is created.
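
    A minimal sketch of both uses, assuming the FreeRTOS call xSemaphoreCreateCounting(uxMaxCount,
    uxInitialCount); the counts shown are hypothetical example values:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xEventSem;      /* event counting: starts at zero      */
    static SemaphoreHandle_t xResourceSem;   /* resource management: starts at max  */

    void vSetupCountingSems(void)
    {
        xEventSem    = xSemaphoreCreateCounting(10, 0);   /* up to 10 latched events */
        xResourceSem = xSemaphoreCreateCounting(3, 3);    /* pool of 3 resources     */
    }

    void vUseResource(void)
    {
        xSemaphoreTake(xResourceSem, portMAX_DELAY);   /* count--; zero means none free */
        /* ... use one resource ... */
        xSemaphoreGive(xResourceSem);                  /* count++; resource returned    */
    }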

    ======== Typical Semaphore Use =================================================
    Semaphores are useful either for synchronizing execution of multiple tasks
    or for coordinating access to a shared resource.

    The following examples and general discussions illustrate using different types of semaphores
    to address common synchronization design requirements effectively, as listed:
    wait-and-signal synchronization
    multiple-task wait-and-signal synchronization
    credit-tracking synchronization
    single shared-resource-access synchronization
    multiple shared-resource-access synchronization
    recursive shared-resource-access synchronization

    Deadlock (or Deadly Embrace)
    A deadlock, also called a deadly embrace, occurs when two tasks wait indefinitely
    for resources held by each other.
    Suppose task T1 has exclusive use of resource R1 and task T2 has exclusive use of resource R2;
    now T1 also wants exclusive use of R2 and T2 also wants exclusive use of R1,
    so neither task can make progress: a deadlock has occurred.
    The simplest way to prevent deadlock is to have every task:
    acquire all of the resources it needs before doing any further work,
    request multiple resources in the same order, and
    release the resources in the reverse order.
    Most kernels let the caller specify a wait timeout when requesting a semaphore, which breaks the deadlock.
    If the wait exceeds the given time and the semaphore is still unavailable,
    the call returns some form of timeout error code, telling the task that
    it did not obtain the resource but instead hit a system error.
    Deadlocks generally occur in large multitasking systems and are less common in embedded systems.
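
    A minimal sketch of a semaphore request with a timeout, assuming the VxWorks calls semTake()
    and sysClkRateGet() (one second is a hypothetical timeout value):

    #include <vxWorks.h>
    #include <semLib.h>
    #include <sysLib.h>

    /* Try to take the semaphore, but give up after roughly one second. */
    STATUS takeWithTimeout(SEM_ID semId)
    {
        if (semTake(semId, sysClkRateGet()) == ERROR)   /* sysClkRateGet() = ticks per second */
        {
            /* Timed out (or another error): the resource was NOT obtained. */
            return ERROR;
        }
        /* ... use the resource ... */
        semGive(semId);
        return OK;
    }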

    Priority inversion : HP_task in effect drops to LP_task's priority.
    HP_task waits for a resource held by LP_task and therefore pends. A medium-priority MP_task
    then becomes ready and preempts LP_task, so the lower-priority MP_task ends up executing
    before the higher-priority HP_task. This phenomenon is priority inversion.

    Priority inheritance : LP_task's priority is raised to HP_task's priority.
    HP_task waits for a resource held by LP_task and therefore pends; LP_task is then raised to
    HP_task's priority, and after LP_task calls semGive() its original priority is restored.
    This prevents tasks with priority below HP_task from running while HP_task is waiting.
    This mechanism is priority inheritance: LP_task inherits HP_task's priority.

    The rule to go by for the scheduler is:

    Activate the task that has the highest priority of all tasks in the READY state.

    But what happens if the highest-priority task is blocked because it is waiting for a resource owned by a lower-priority task?
    According to the above rule, it would wait until the low-priority task runs again and releases the resource.
    Up to this point, everything works as expected.

    Problems arise when a task with medium priority becomes ready while the higher-priority task is blocked.
    While the higher-priority task is suspended waiting for the resource, the task with medium priority runs
    until it finishes its work, because it has a higher priority than the low-priority task.
    In this scenario, a task with medium priority runs before the task with high priority.
    This is known as priority inversion.

    The low priority task claims the semaphore with OS_Use().
    An interrupt activates the high priority task, which also calls OS_Use().
    Meanwhile a task with medium priority became ready and runs while the high priority task is suspended.
    After doing some operations, the task with medium priority calls OS_Delay() and is therefore suspended.
    The task with lower priority continues now and calls OS_Unuse() to release the resource semaphore.
    After the low priority task releases the semaphore, the high priority task is activated and claims the semaphore.

    To avoid this kind of situation, the low-priority task that is blocking the highest-priority task gets assigned the highest priority
    until it releases the resource, unblocking the task which originally had highest priority. This is known as priority inheritance.

    With priority inheritance, the low priority task inherits the priority of the waiting high priority task
    as long as it holds the resource semaphore. The lower priority task is activated instead of the medium priority task
    when the high priority task tries to claim the semaphore.

    mutex — specify the task-waiting order and enable task deletion safety, recursion,
    and priority-inversion avoidance protocols, if supported.
    binary — specify the initial semaphore state and the task-waiting order.
    counting — specify the initial semaphore count and the task-waiting order.
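
    In the VxWorks semLib API, for example, these creation choices map onto semMCreate(),
    semBCreate(), and semCCreate(); the option combinations below are just one plausible
    configuration, not a recommendation:

    #include <vxWorks.h>
    #include <semLib.h>

    void createSems(void)
    {
        /* mutex: task-waiting order plus deletion-safety and inversion-avoidance options */
        SEM_ID mutexSem = semMCreate(SEM_Q_PRIORITY | SEM_DELETE_SAFE | SEM_INVERSION_SAFE);

        /* binary: initial state (SEM_EMPTY or SEM_FULL) and task-waiting order */
        SEM_ID binSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);

        /* counting: initial count and task-waiting order */
        SEM_ID cntSem = semCCreate(SEM_Q_PRIORITY, 5);

        (void)mutexSem; (void)binSem; (void)cntSem;   /* silence unused-variable warnings */
    }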

  • Original: https://www.cnblogs.com/shangdawei/p/3939376.html