mm: fix possible cause of a page_mapped BUG
Robert Swiecki reported a BUG_ON(page_mapped) from a fuzzer, punching
a hole with madvise(,, MADV_REMOVE).  That path is under mutex, and
cannot be explained by lack of serialization in unmap_mapping_range().

Reviewing the code, I found one place where vm_truncate_count handling
should have been updated, when I switched at the last minute from one
way of managing the restart_addr to another: mremap move changes the
virtual addresses, so it ought to adjust the restart_addr.

But rather than exporting the notion of restart_addr from memory.c, or
converting to restart_pgoff throughout, simply reset vm_truncate_count
to 0 to force a rescan if mremap move races with preempted truncation.

We have no confirmation that this fixes Robert's BUG, but it is a fix
that's worth making anyway.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:

parent 2aa15890f3
commit a3e8cc643d

1 changed file with 1 addition and 3 deletions
mm/mremap.c

@@ -94,8 +94,6 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		 */
 		mapping = vma->vm_file->f_mapping;
 		spin_lock(&mapping->i_mmap_lock);
-		if (new_vma->vm_truncate_count &&
-		    new_vma->vm_truncate_count != vma->vm_truncate_count)
-			new_vma->vm_truncate_count = 0;
+		new_vma->vm_truncate_count = 0;
 	}
 