HardenedBSD/sys/vm
jhb 8af56fb687 If a thread that is swapped out is made runnable, then the setrunnable()
routine wakes up proc0 so that proc0 can swap the thread back in.
Historically, this has been done by waking up proc0 directly from
setrunnable() itself via a wakeup().  When waking up a sleeping thread
that was swapped out (the usual case when waking proc0 since only sleeping
threads are eligible to be swapped out), this resulted in a bit of
recursion (e.g. wakeup() -> setrunnable() -> wakeup()).

With sleep queues having separate locks in 6.x and later, this caused a
spin lock LOR (sleepq lock -> sched_lock/thread lock -> sleepq lock).
An attempt was made to fix this in 7.0 by making the proc0 wakeup use
the ithread mechanism for doing the wakeup.  However, this required
grabbing proc0's thread lock to perform the wakeup.  If proc0 was asleep
elsewhere in the kernel (e.g. waiting for disk I/O), then this degenerated
into the same LOR since the thread lock would be some other sleepq lock.

Fix this by deferring the wakeup of the swapper until after the sleepq
lock held by the upper layer has been released.  The setrunnable() routine
now returns a boolean value to indicate whether or not proc0 needs to be
woken up.  The end result is that consumers of the sleepq API such as
*sleep/wakeup, condition variables, sx locks, and lockmgr have to wake up
proc0 if they get a non-zero return value from sleepq_abort(),
sleepq_broadcast(), or sleepq_signal().
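
For reference, the pattern this imposes on a sleepq consumer looks roughly
like the sketch below: record the return value while the sleep queue lock is
held, drop the lock, and only then wake the swapper.  This is a minimal
sketch; the helper name kick_proc0() and the exact sleepq_broadcast()
arguments follow FreeBSD convention but are assumptions here, not quoted
from this tree.

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/proc.h>
	#include <sys/sleepqueue.h>

	/*
	 * Sketch of a sleepq consumer after this change: sleepq_broadcast()
	 * reports whether a swapped-out thread was made runnable, and the
	 * swapper is only woken once the sleepq lock has been released,
	 * which is what avoids the sleepq lock -> sleepq lock LOR.
	 */
	void
	example_wakeup(void *ident)
	{
		int wakeup_swapper;

		sleepq_lock(ident);
		wakeup_swapper = sleepq_broadcast(ident, SLEEPQ_SLEEP, 0, 0);
		sleepq_release(ident);
		if (wakeup_swapper)
			kick_proc0();	/* assumed helper that wakes proc0 */
	}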

Discussed with:	jeff
Glanced at by:	sam
Tested by:	Jurgen Weber  jurgen - ish com au
MFC after:	2 weeks
2008-08-05 20:02:31 +00:00
default_pager.c
device_pager.c Preset a device object's alignment ("pg_color") based upon the 2008-05-17 16:26:34 +00:00
memguard.c Provide the new argument to kmem_suballoc(). 2008-05-10 23:39:27 +00:00
memguard.h
phys_pager.c
pmap.h Retire pmap_addr_hint(). It is no longer used. 2008-05-18 04:16:57 +00:00
redzone.c Modify stack(9) stack_print() and stack_sbuf_print() routines to use new 2007-12-01 22:04:16 +00:00
redzone.h
swap_pager.c If the kernel has run out of metadata for swap, then explicitly panic() 2008-07-30 21:12:15 +00:00
swap_pager.h
uma_core.c
uma_dbg.c
uma_dbg.h
uma_int.h
uma.h
vm_contig.c
vm_extern.h Introduce a new parameter "superpage_align" to kmem_suballoc() that is 2008-05-10 21:46:20 +00:00
vm_fault.c
vm_glue.c If a thread that is swapped out is made runnable, then the setrunnable() 2008-08-05 20:02:31 +00:00
vm_init.c Introduce a new parameter "superpage_align" to kmem_suballoc() that is 2008-05-10 21:46:20 +00:00
vm_kern.c Eliminate stale comments from kmem_malloc(). 2008-07-18 17:41:31 +00:00
vm_kern.h Enable the creation of a kmem map larger than 4GB. 2008-07-05 19:34:33 +00:00
vm_map.c KERNBASE is not necessarily an address within the kernel map, e.g., 2008-06-21 21:02:13 +00:00
vm_map.h Generalize vm_map_find(9)'s parameter "find_space". Specifically, add 2008-05-10 18:55:35 +00:00
vm_meter.c
vm_mmap.c Fill in a few sysctl descriptions. 2008-08-03 14:26:15 +00:00
vm_object.c Fill in a few sysctl descriptions. 2008-08-03 14:26:15 +00:00
vm_object.h Allow VM object creation in ufs_lookup. (If vfs.vmiodirenable is set) 2008-05-20 19:05:43 +00:00
vm_page.c Essentially, neither madvise(..., MADV_DONTNEED) nor madvise(..., MADV_FREE) 2008-06-06 18:38:43 +00:00
vm_page.h
vm_pageout.c Fill in a few sysctl descriptions. 2008-08-03 14:26:15 +00:00
vm_pageout.h
vm_pager.c
vm_pager.h
vm_param.h
vm_phys.c Introduce vm_reserv_reclaim_contig(). This function is used by 2008-04-06 18:09:28 +00:00
vm_phys.h
vm_reserv.c Introduce vm_reserv_reclaim_contig(). This function is used by 2008-04-06 18:09:28 +00:00
vm_reserv.h Introduce vm_reserv_reclaim_contig(). This function is used by 2008-04-06 18:09:28 +00:00
vm_unix.c
vm_zeroidle.c Fill in a few sysctl descriptions. 2008-08-03 14:26:15 +00:00
vm.h
vnode_pager.c A few more whitespace fixes. 2008-07-30 21:18:08 +00:00
vnode_pager.h