Spin Lock
A spinlock is essentially a mutex that never sleeps: a contending CPU keeps spinning in a busy loop until it acquires the lock. Because it does not sleep, a spinlock is safe to use in interrupt context. Spinlocks must be used carefully, however; held too long or used in the wrong place, they can degrade system performance.
The spinlock APIs are defined in "linux/spinlock.h". Spinlocks come in several variants, which can be chosen as per requirement.
The irq version:
You can declare a ‘spinlock_t' and initialize it statically with ‘DEFINE_SPINLOCK()' (the older ‘SPIN_LOCK_UNLOCKED' initializer has been removed from recent kernels), or call ‘spin_lock_init()' in your initialization code.
Locking and Unlocking:
spin_lock_irqsave()
spin_unlock_irqrestore()
This grabs the spinlock and disables all interrupts on the local CPU, saving the previous interrupt state in the flags argument (see the example code); the unlock call restores that state.
How to use the irq version of the spinlock is shown below:
/* Header file */
#include <linux/spinlock.h>

/* Declaration */
spinlock_t mylock;

/* Dynamic initialization (starts in the unlocked state) */
spin_lock_init(&mylock);

/* Use */
unsigned long flags;

spin_lock_irqsave(&mylock, flags);
/* ... critical section here ... */
spin_unlock_irqrestore(&mylock, flags);
The rw-irq version:
If your data accesses have a natural pattern where you mostly read the shared variables, the reader-writer (rwlock) variants of the spinlocks are sometimes useful. They allow multiple readers into the same critical section at once, but anyone who wants to change the variables must take an exclusive write lock. Like the irq spinlock, these variants disable all interrupts on the local CPU for the duration of the critical section.
Reader-writer locks require more atomic memory operations than the simple irq version of spinlocks, so unless the read-side critical section is long, you are better off just using spinlocks.
You can declare an ‘rwlock_t' and initialize it statically with ‘DEFINE_RWLOCK()' (the older ‘RW_LOCK_UNLOCKED' initializer has been removed from recent kernels), or call ‘rwlock_init()' in your initialization code.
/* Header file */
#include <linux/spinlock.h>

/* Declaration */
rwlock_t mylock;

/* Dynamic initialization (starts in the unlocked state) */
rwlock_init(&mylock);

/* Use */
unsigned long flags;

/* Read-only access */
read_lock_irqsave(&mylock, flags);
/* ... critical section that only reads the info ... */
read_unlock_irqrestore(&mylock, flags);

/* Read-write access */
write_lock_irqsave(&mylock, flags);
/* ... exclusive read and write access to the info ... */
write_unlock_irqrestore(&mylock, flags);
The irq versions of the spinlock are the safest of these locks: by disabling all local interrupts for the duration of the critical section, they rule out deadlock against interrupt handlers. That safety comes at the cost of latency, since interrupts stay disabled while the lock is held.
If you have to protect a data structure across several CPUs and you know the lock is never taken in an interrupt handler, you can use the non-irq version of the spinlock. Declaration, initialization, and the usage pattern of the non-irq spinlock are similar to the irq version. The locking and unlocking APIs of the non-irq spinlock are given below:
spin_lock(&mylock);
spin_unlock(&mylock);
Equivalent reader-writer versions of the non-irq spinlock APIs are also available:
void read_lock(rwlock_t *lock);
void read_unlock(rwlock_t *lock);
void write_lock(rwlock_t *lock);
void write_unlock(rwlock_t *lock);
If you hold the spinlock while interrupts are still enabled, you can be interrupted by a bottom half, which may then spin forever trying to take the same spinlock. If the lock is only shared between a bottom half and user (process) context, a better choice is ‘spin_lock_bh()'. It disables software interrupts (bottom halves) before taking the lock but leaves hardware interrupts enabled, so interrupts can still be serviced; on most architectures this is cheaper than blocking all interrupts. The APIs used to lock and unlock this spinlock are:
void spin_lock_bh(spinlock_t *lock);
void spin_unlock_bh(spinlock_t *lock);
Equivalent reader-writer versions of the bh spinlock APIs are also available:
void read_lock_bh(rwlock_t *lock);
void read_unlock_bh(rwlock_t *lock);
void write_lock_bh(rwlock_t *lock);
void write_unlock_bh(rwlock_t *lock);