author     Chris Metcalf <cmetcalf@tilera.com>   2011-03-01 18:30:15 (GMT)
committer  Chris Metcalf <cmetcalf@tilera.com>   2011-03-10 21:10:41 (GMT)
commit     3c5ead52ed68406c0ee789024c4ae581be8bcee4 (patch)
tree       cd634aba3710115640b372b4fc49fee5ead75acf /Documentation
parent     5c7707554858eca8903706b6df7cba5c0f802244 (diff)
arch/tile: fix deadlock bugs in rwlock implementation
The first issue fixed in this patch is that pending rwlock write locks
could lock out new readers; this could cause a deadlock if a read lock was
held on cpu 1, a write lock was then attempted on cpu 2 and was pending,
and cpu 1 was interrupted and attempted to re-acquire a read lock.
The write lock code was modified to not lock out new readers.
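To make the failure mode concrete, here is a minimal C11 userspace sketch
(not the actual arch/tile code; the lock layout and the WRITER_HELD and
WRITER_PENDING flag names are illustrative assumptions) contrasting the
buggy writer-preference read path with the reader-admitting fix:

#include <stdatomic.h>

#define WRITER_HELD    (1u << 31)  /* a writer owns the lock */
#define WRITER_PENDING (1u << 30)  /* a writer is spinning, waiting */

/* Buggy: a merely pending writer also excludes new readers.  If the
 * interrupted cpu already holds a read lock, the pending writer can
 * never get in (a reader is active) and the new reader can never get
 * in (a writer is pending): deadlock. */
static void buggy_read_lock(atomic_uint *lock)
{
	unsigned int val = atomic_load(lock);
	do {
		while (val & (WRITER_HELD | WRITER_PENDING))
			val = atomic_load(lock);
	} while (!atomic_compare_exchange_weak(lock, &val, val + 1));
}

/* Fixed: only an actually held write lock excludes readers, so a
 * reader arriving from interrupt context still gets through. */
static void fixed_read_lock(atomic_uint *lock)
{
	unsigned int val = atomic_load(lock);
	do {
		while (val & WRITER_HELD)
			val = atomic_load(lock);
	} while (!atomic_compare_exchange_weak(lock, &val, val + 1));
}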
The second issue fixed is that there was a narrow race window where a tns
instruction had been issued (setting the lock value to "1") and the store
instruction to reset the lock value correctly had not yet been issued.
In this case, if an interrupt occurred and the same cpu then tried to
manipulate the lock, it would find the lock value set to "1" and spin
forever, assuming some other cpu was partway through updating it. The fix
is to enforce an interrupt critical section around the tns/store pair.
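A hedged sketch of that race and its fix, again in C11 userspace terms:
tns is modeled with an atomic exchange, and irq_critical_enter() /
irq_critical_exit() are assumed stand-ins for however the architecture
defers interrupts around the pair.

#include <stdatomic.h>

void irq_critical_enter(void);	/* assumed helper: defer interrupts */
void irq_critical_exit(void);

/* tns atomically writes 1 into the word and returns the old value;
 * emulated here with a C11 exchange. */
static inline unsigned int tns(atomic_uint *p)
{
	return atomic_exchange(p, 1);
}

/* The payload (e.g. the reader count) lives in bits 1..31; bit 0 is
 * the "in flux" marker that tns leaves behind.  Without the critical
 * section, an interrupt landing between the tns and the store would
 * see bit 0 set and spin forever against its own cpu. */
static void lock_word_update(atomic_uint *lock, unsigned int delta)
{
	unsigned int val;

	irq_critical_enter();
	while ((val = tns(lock)) & 1)
		;	/* another cpu is mid-update; retry */
	atomic_store(lock, val + (delta << 1));
	irq_critical_exit();
}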
In addition, this change now arranges to always validate that after
taking a read lock we have not wrapped around the count of readers,
which is only eight bits.
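A sketch of that wrap check, assuming purely for illustration that the
count lives in the top byte of the lock word:

#include <assert.h>

#define RD_COUNT_SHIFT 24		/* assumed position of the field */
#define RD_COUNT_MASK  (0xffu << RD_COUNT_SHIFT)

/* Bump the eight-bit reader count and verify it did not wrap to zero;
 * a wrapped count would make a read-locked word look unlocked. */
static unsigned int read_count_bump(unsigned int val)
{
	val += 1u << RD_COUNT_SHIFT;
	assert((val & RD_COUNT_MASK) != 0);	/* BUG_ON-style wrap check */
	return val;
}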
Since these changes make the rwlock "fast path" code heavier weight,
I decided to move all the rwlock code out of line, leaving only the
conventional spinlock code with fastpath inlines. Since the read_lock
and read_trylock implementations ended up very similar, I just expressed
read_lock in terms of read_trylock.
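The resulting shape is roughly the classic trylock retry loop; in this
sketch, the trylock's 1-on-success convention and the use of cpu_relax()
are assumptions, and the real out-of-line code may differ in details
such as backoff.

typedef struct { unsigned int lock; } arch_rwlock_t;

int arch_read_trylock(arch_rwlock_t *rwlock);	/* 1 on success, 0 if busy */
void cpu_relax(void);				/* spin-wait hint */

/* read_lock expressed as "retry the trylock until it succeeds". */
void arch_read_lock(arch_rwlock_t *rwlock)
{
	while (!arch_read_trylock(rwlock))
		cpu_relax();
}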
As part of this change I also eliminate support for the now-obsolete
tns_atomic mode.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>