memcg: fix deadlock between lock_page_cgroup and mapping tree_lock
mapping->tree_lock can be acquired from interrupt context. Then, the
following deadlock can occur.
Assume "A" is a page.
CPU0:
  lock_page_cgroup(A)
  interrupted
          -> take mapping->tree_lock.
CPU1:
  take mapping->tree_lock
          -> lock_page_cgroup(A)
This patch fixes the above deadlock by moving memcg's hooks out of
mapping->tree_lock. Charge/uncharge of pagecache/swapcache is protected
by the page lock, not by tree_lock.
After this patch, lock_page_cgroup() is not called under mapping->tree_lock.
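As an illustration, here is a minimal sketch of the pagecache-removal path after a change of this kind. It is modeled on remove_from_page_cache() in mm/filemap.c; the helper names and surrounding code are assumptions for illustration, not copied from this diff.

/*
 * Sketch only: modeled on mm/filemap.c, not the literal patched code.
 *
 * Before the fix, the memcg uncharge (which takes lock_page_cgroup())
 * ran inside __remove_from_page_cache(), i.e. with the IRQ-safe
 * mapping->tree_lock held -- the nesting that allows the deadlock
 * described above.
 *
 * After the fix, the radix-tree removal still happens under tree_lock,
 * but the uncharge is done only after the lock is dropped.  The page
 * is still locked (PG_locked), which is what serializes charge and
 * uncharge of a pagecache page.
 */
void remove_from_page_cache(struct page *page)
{
	struct address_space *mapping = page->mapping;

	BUG_ON(!PageLocked(page));

	spin_lock_irq(&mapping->tree_lock);
	__remove_from_page_cache(page);		/* radix-tree removal and stats only */
	spin_unlock_irq(&mapping->tree_lock);

	/* lock_page_cgroup() is taken here, with no spinlock held */
	mem_cgroup_uncharge_cache_page(page);
}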
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit e767e0561d
parent bd6daba909

6 changed files with 15 additions and 7 deletions
include/linux/swap.h
@@ -437,6 +437,11 @@ static inline int mem_cgroup_cache_charge_swapin(struct page *page,
 	return 0;
 }
 
+static inline void
+mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
+{
+}
+
 #endif /* CONFIG_SWAP */
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */
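For context, a sketch of how the swapcache side is expected to call the new hook, modeled on delete_from_swap_cache() in mm/swap_state.c; it is abridged and assumed, not part of the hunk above. The uncharge runs only after swapper_space.tree_lock has been dropped, while the page is still locked.

/* Sketch only: abridged from the expected mm/swap_state.c shape. */
void delete_from_swap_cache(struct page *page)
{
	swp_entry_t entry;

	entry.val = page_private(page);

	spin_lock_irq(&swapper_space.tree_lock);
	__delete_from_swap_cache(page);		/* no memcg work under tree_lock */
	spin_unlock_irq(&swapper_space.tree_lock);

	/* safe: only the page lock is held here */
	mem_cgroup_uncharge_swapcache(page, entry);
	swap_free(entry);
	page_cache_release(page);
}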