pve-kernel-thunderx/swapops-0004-mm-ksm-handle-protnone-saved-writes-when-making-page.patch
Fabian Grünbichler 2b834b083d add proposed fix for LP#1674838
Patches and rationale by Seth Forshee[1]:

My testing shows that the patches from the "POWER9: Additional
power9 patches" series are responsible, two of them in particular:

 - mm: introduce page_vma_mapped_walk()
 - mm, ksm: convert write_protect_page() to use page_vma_mapped_walk()

These patches don't appear to be included for any
functionality they provide, but rather to make "mm/ksm:
handle protnone saved writes when making page write protect"
a clean cherry pick instead of a backport. But the backport
isn't that difficult, so as far as I can tell we can do away
with the other two patches.

1: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1674838/comments/108
Date: 2017-05-05 09:12:20 +02:00


From 361de9fb44163c4e693022786af380a2b2298c6d Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Date: Fri, 24 Feb 2017 14:59:19 -0800
Subject: [PATCH 4/4] mm/ksm: handle protnone saved writes when making page
 write protect

Without this, KSM will consider the page write-protected, but a NUMA
fault can later mark the page writable. This can result in memory
corruption.

Link: http://lkml.kernel.org/r/1487498625-10891-3-git-send-email-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(backported from commit 595cd8f256d24face93b2722927ec9c980419c26)
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
---
 include/asm-generic/pgtable.h | 8 ++++++++
 mm/ksm.c                      | 9 +++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index b6f3a8a4b738..8c8ba48bef0b 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -200,6 +200,10 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #define pte_mk_savedwrite pte_mkwrite
 #endif
 
+#ifndef pte_clear_savedwrite
+#define pte_clear_savedwrite pte_wrprotect
+#endif
+
 #ifndef pmd_savedwrite
 #define pmd_savedwrite pmd_write
 #endif
@@ -208,6 +212,10 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #define pmd_mk_savedwrite pmd_mkwrite
 #endif
 
+#ifndef pmd_clear_savedwrite
+#define pmd_clear_savedwrite pmd_wrprotect
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_SET_WRPROTECT
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
diff --git a/mm/ksm.c b/mm/ksm.c
index fed4afd8293b..099dfa45d596 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -878,7 +878,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	if (!ptep)
 		goto out_mn;
 
-	if (pte_write(*ptep) || pte_dirty(*ptep)) {
+	if (pte_write(*ptep) || pte_dirty(*ptep) ||
+	    (pte_protnone(*ptep) && pte_savedwrite(*ptep))) {
 		pte_t entry;
 
 		swapped = PageSwapCache(page);
@@ -903,7 +904,11 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 		}
 		if (pte_dirty(entry))
 			set_page_dirty(page);
-		entry = pte_mkclean(pte_wrprotect(entry));
+
+		if (pte_protnone(entry))
+			entry = pte_mkclean(pte_clear_savedwrite(entry));
+		else
+			entry = pte_mkclean(pte_wrprotect(entry));
 		set_pte_at_notify(mm, addr, ptep, entry);
 	}
 	*orig_pte = *ptep;
--
2.7.4
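
For context, here is a minimal user-space sketch of the saved-write logic
this patch fixes. The pte_t bit layout and helper bodies below are invented
for illustration; only the names mirror the kernel helpers. The idea: on
ppc64, a PROT_NONE (NUMA-hinting) pte stashes its write permission in a
"saved write" bit so the NUMA fault handler can restore it later. The
pre-patch write_protect_page() tested only pte_write()/pte_dirty(), skipped
such ptes, and the saved write could then resurrect writability behind
KSM's back.

/*
 * User-space model of the saved-write handling, NOT kernel code.
 * Bit layout and helper bodies are invented for illustration;
 * only the names mirror the kernel's.  Build with: cc -o demo demo.c
 */
#include <stdio.h>
#include <stdbool.h>

#define PTE_WRITE      0x1UL	/* hardware-visible write permission */
#define PTE_DIRTY      0x2UL	/* page has been written to */
#define PTE_PROTNONE   0x4UL	/* NUMA-hinting fault pending */
#define PTE_SAVEDWRITE 0x8UL	/* write bit stashed across protnone */

typedef unsigned long pte_t;

static bool pte_write(pte_t p)       { return p & PTE_WRITE; }
static bool pte_dirty(pte_t p)       { return p & PTE_DIRTY; }
static bool pte_protnone(pte_t p)    { return p & PTE_PROTNONE; }
static bool pte_savedwrite(pte_t p)  { return p & PTE_SAVEDWRITE; }

static pte_t pte_wrprotect(pte_t p)        { return p & ~PTE_WRITE; }
static pte_t pte_clear_savedwrite(pte_t p) { return p & ~PTE_SAVEDWRITE; }

/* The post-patch condition from write_protect_page(). */
static bool needs_wrprotect(pte_t p)
{
	return pte_write(p) || pte_dirty(p) ||
	       (pte_protnone(p) && pte_savedwrite(p));
}

/* The post-patch write-protect step: a protnone pte must have its
 * saved write cleared, or a later NUMA fault restores writability. */
static pte_t do_wrprotect(pte_t p)
{
	return pte_protnone(p) ? pte_clear_savedwrite(p) : pte_wrprotect(p);
}

int main(void)
{
	/* A writable pte currently under a NUMA-hinting (protnone) fault:
	 * the write bit is stashed in PTE_SAVEDWRITE. */
	pte_t pte = PTE_PROTNONE | PTE_SAVEDWRITE;

	/* Pre-patch, write_protect_page() tested only write/dirty and
	 * would have skipped this pte entirely. */
	printf("pre-patch check fires:  %d\n",
	       pte_write(pte) || pte_dirty(pte));

	/* Post-patch, the saved write is noticed and cleared, so the NUMA
	 * fault handler can no longer hand back a writable mapping. */
	if (needs_wrprotect(pte))
		pte = do_wrprotect(pte);
	printf("post-patch savedwrite:  %d\n", (int)pte_savedwrite(pte));
	return 0;
}

Note that on architectures without saved-write support, the generic
fallbacks this patch adds to include/asm-generic/pgtable.h alias
pte_clear_savedwrite to pte_wrprotect (and pte_savedwrite is already
aliased to pte_write there), so the new condition and the protnone
branch collapse back to the old behavior.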