mm: rework mapcount accounting to enable 4k mapping of THPs
We're going to allow mapping of individual 4k pages of a THP compound
page. This means we need to track the mapcount on a per-small-page basis.

The straightforward approach is to use ->_mapcount in all subpages to
track how many times each subpage is mapped with PMDs and PTEs
combined. But this is rather expensive: mapping or unmapping a THP
page with a PMD would require HPAGE_PMD_NR atomic operations instead
of the single one we have now.
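
To make the cost concrete, here is a minimal sketch (illustrative, not
part of the patch; function names are made up) contrasting the two
schemes:

	/* Naive scheme: a PMD map touches every subpage's counter. */
	static void naive_pmd_map(struct page *head)
	{
		int i;

		for (i = 0; i < HPAGE_PMD_NR; i++)	/* 512 atomics on x86-64 */
			atomic_inc(&head[i]._mapcount);
	}

	/* This patch: a PMD map touches one counter in the first tail page. */
	static void compound_pmd_map(struct page *head)
	{
		atomic_inc(compound_mapcount_ptr(head));	/* one atomic */
	}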

The idea is to store separately how many times the page was mapped as
a whole -- the compound_mapcount. This frees up ->_mapcount in the
subpages to track the PTE mapcount.

We use the same approach as with the compound page destructor and
compound order to store the compound_mapcount: space in the first tail
page, ->mapping this time.

Any time we map/unmap a whole compound page (THP or hugetlb) we
increment/decrement the compound_mapcount. When we map part of a
compound page with a PTE, we operate on ->_mapcount of the subpage.

page_mapcount() counts both PTE and PMD mappings of the page.
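
For example (an illustrative case): for a THP that is PMD-mapped by
two processes and has no PTE mappings, compound_mapcount() is 2, each
subpage's ->_mapcount still encodes zero mappings, and page_mapcount()
of any subpage returns 0 + 2 = 2.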

Basically, the mapcount of a subpage is spread over two counters, which
makes it tricky to detect when the last mapping of a page goes away.

We introduce PageDoubleMap() for this. When we split a THP's PMD for
the first time and another PMD mapping remains, we offset ->_mapcount
in all subpages up by one and set PG_double_map on the compound page.
These additional references go away with the last compound_mapcount.

This approach provides a way to detect when the last mapping goes away
on a per-small-page basis without introducing new overhead for the most
common cases.
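
As a rough sketch of the first-split transition (illustrative and
simplified; the real logic lives in __split_huge_pmd_locked(), and
locking, rmap and PTE setup are omitted):

	static void sketch_first_pmd_split(struct page *head)
	{
		int i;

		/*
		 * Another PMD mapping remains: keep the subpages accounted
		 * on its behalf by offsetting every subpage's ->_mapcount.
		 */
		if (compound_mapcount(head) > 1 && !TestSetPageDoubleMap(head)) {
			for (i = 0; i < HPAGE_PMD_NR; i++)
				atomic_inc(&head[i]._mapcount);
		}
		/* The offset is dropped with the last compound_mapcount. */
	}

To check the arithmetic: if process A splits its PMD while process B
still holds a PMD mapping, compound_mapcount() drops to 1 (B's PMD),
each subpage's ->_mapcount reads as 2 (A's PTE plus the PG_double_map
offset), and page_mapcount() returns 2 + 1 - 1 = 2 -- one mapping per
process, as expected.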

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
kiryl authored and sfrothwell committed Jan 13, 2016
1 parent e0ae60f commit c7a92fe
Showing 11 changed files with 160 additions and 35 deletions.
26 changes: 24 additions & 2 deletions include/linux/mm.h
@@ -410,6 +410,19 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 
 extern void kvfree(const void *addr);
 
+static inline atomic_t *compound_mapcount_ptr(struct page *page)
+{
+	return &page[1].compound_mapcount;
+}
+
+static inline int compound_mapcount(struct page *page)
+{
+	if (!PageCompound(page))
+		return 0;
+	page = compound_head(page);
+	return atomic_read(compound_mapcount_ptr(page)) + 1;
+}
+
 /*
  * The atomic page->_mapcount, starts from -1: so that transitions
  * both from it and to it can be tracked, using atomic_inc_and_test
@@ -422,8 +435,17 @@ static inline void page_mapcount_reset(struct page *page)
 
 static inline int page_mapcount(struct page *page)
 {
+	int ret;
 	VM_BUG_ON_PAGE(PageSlab(page), page);
-	return atomic_read(&page->_mapcount) + 1;
+
+	ret = atomic_read(&page->_mapcount) + 1;
+	if (PageCompound(page)) {
+		page = compound_head(page);
+		ret += atomic_read(compound_mapcount_ptr(page)) + 1;
+		if (PageDoubleMap(page))
+			ret--;
+	}
+	return ret;
 }
 
 static inline int page_count(struct page *page)
@@ -934,7 +956,7 @@ static inline pgoff_t page_file_index(struct page *page)
  */
 static inline int page_mapped(struct page *page)
 {
-	return atomic_read(&(page)->_mapcount) >= 0;
+	return atomic_read(&(page)->_mapcount) + compound_mapcount(page) >= 0;
 }
 
 /*
1 change: 1 addition & 0 deletions include/linux/mm_types.h
@@ -54,6 +54,7 @@ struct page {
 					 * see PAGE_MAPPING_ANON below.
 					 */
 		void *s_mem;			/* slab first object */
+		atomic_t compound_mapcount;	/* first tail page */
 	};
 
 	/* Second double word */
37 changes: 37 additions & 0 deletions include/linux/page-flags.h
@@ -126,6 +126,9 @@ enum pageflags {
 
 	/* SLOB */
 	PG_slob_free = PG_private,
+
+	/* Compound pages. Stored in first tail page's flags */
+	PG_double_map = PG_private_2,
 };
 
 #ifndef __GENERATING_BOUNDS_H
@@ -523,10 +526,44 @@ static inline int PageTransTail(struct page *page)
 	return PageTail(page);
 }
 
+/*
+ * PageDoubleMap indicates that the compound page is mapped with PTEs as well
+ * as PMDs.
+ *
+ * This is required for optimization of rmap operations for THP: we can postpone
+ * per small page mapcount accounting (and its overhead from atomic operations)
+ * until the first PMD split.
+ *
+ * For the page PageDoubleMap means ->_mapcount in all sub-pages is offset up
+ * by one. This reference will go away with last compound_mapcount.
+ *
+ * See also __split_huge_pmd_locked() and page_remove_anon_compound_rmap().
+ */
+static inline int PageDoubleMap(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	return test_bit(PG_double_map, &page[1].flags);
+}
+
+static inline int TestSetPageDoubleMap(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	return test_and_set_bit(PG_double_map, &page[1].flags);
+}
+
+static inline int TestClearPageDoubleMap(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	return test_and_clear_bit(PG_double_map, &page[1].flags);
+}
+
 #else
 TESTPAGEFLAG_FALSE(TransHuge)
 TESTPAGEFLAG_FALSE(TransCompound)
 TESTPAGEFLAG_FALSE(TransTail)
+TESTPAGEFLAG_FALSE(DoubleMap)
+TESTSETFLAG_FALSE(DoubleMap)
+TESTCLEARFLAG_FALSE(DoubleMap)
 #endif
 
 /*
4 changes: 2 additions & 2 deletions include/linux/rmap.h
@@ -183,9 +183,9 @@ void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
 void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 			unsigned long);
 
-static inline void page_dup_rmap(struct page *page)
+static inline void page_dup_rmap(struct page *page, bool compound)
 {
-	atomic_inc(&page->_mapcount);
+	atomic_inc(compound ? compound_mapcount_ptr(page) : &page->_mapcount);
 }
 
 /*
5 changes: 4 additions & 1 deletion mm/debug.c
@@ -79,9 +79,12 @@ static void dump_flags(unsigned long flags,
 void dump_page_badflags(struct page *page, const char *reason,
 		unsigned long badflags)
 {
-	pr_emerg("page:%p count:%d mapcount:%d mapping:%p index:%#lx\n",
+	pr_emerg("page:%p count:%d mapcount:%d mapping:%p index:%#lx",
 		page, atomic_read(&page->_count), page_mapcount(page),
 		page->mapping, page->index);
+	if (PageCompound(page))
+		pr_cont(" compound_mapcount: %d", compound_mapcount(page));
+	pr_cont("\n");
 	BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS);
 	dump_flags(page->flags, pageflag_names, ARRAY_SIZE(pageflag_names));
 	if (reason)
2 changes: 1 addition & 1 deletion mm/huge_memory.c
@@ -1020,7 +1020,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	src_page = pmd_page(pmd);
 	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
 	get_page(src_page);
-	page_dup_rmap(src_page);
+	page_dup_rmap(src_page, true);
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
4 changes: 2 additions & 2 deletions mm/hugetlb.c
@@ -3102,7 +3102,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			entry = huge_ptep_get(src_pte);
 			ptepage = pte_page(entry);
 			get_page(ptepage);
-			page_dup_rmap(ptepage);
+			page_dup_rmap(ptepage, true);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
 			hugetlb_count_add(pages_per_huge_page(h), dst);
 		}
@@ -3585,7 +3585,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		ClearPagePrivate(page);
 		hugepage_add_new_anon_rmap(page, vma, address);
 	} else
-		page_dup_rmap(page);
+		page_dup_rmap(page, true);
 	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
 				&& (vma->vm_flags & VM_SHARED)));
 	set_huge_pte_at(mm, address, ptep, new_pte);
2 changes: 1 addition & 1 deletion mm/memory.c
@@ -864,7 +864,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	page = vm_normal_page(vma, addr, pte);
 	if (page) {
 		get_page(page);
-		page_dup_rmap(page);
+		page_dup_rmap(page, false);
 		rss[mm_counter(page)]++;
 	}
 
2 changes: 1 addition & 1 deletion mm/migrate.c
@@ -165,7 +165,7 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
 		if (PageAnon(new))
 			hugepage_add_anon_rmap(new, vma, addr);
 		else
-			page_dup_rmap(new);
+			page_dup_rmap(new, false);
 	} else if (PageAnon(new))
 		page_add_anon_rmap(new, vma, addr, false);
 	else
13 changes: 10 additions & 3 deletions mm/page_alloc.c
@@ -469,6 +469,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 		p->mapping = TAIL_MAPPING;
 		set_compound_head(p, page);
 	}
+	atomic_set(compound_mapcount_ptr(page), -1);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
@@ -733,7 +734,7 @@ static inline int free_pages_check(struct page *page)
 	const char *bad_reason = NULL;
 	unsigned long bad_flags = 0;
 
-	if (unlikely(page_mapcount(page)))
+	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		bad_reason = "nonzero mapcount";
 	if (unlikely(page->mapping != NULL))
 		bad_reason = "non-NULL mapping";
@@ -857,7 +858,13 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 		ret = 0;
 		goto out;
 	}
-	if (page->mapping != TAIL_MAPPING) {
+	/* mapping in first tail page is used for compound_mapcount() */
+	if (page - head_page == 1) {
+		if (unlikely(compound_mapcount(page))) {
+			bad_page(page, "nonzero compound_mapcount", 0);
+			goto out;
+		}
+	} else if (page->mapping != TAIL_MAPPING) {
 		bad_page(page, "corrupted mapping in tail page", 0);
 		goto out;
 	}
@@ -1335,7 +1342,7 @@ static inline int check_new_page(struct page *page)
 	const char *bad_reason = NULL;
 	unsigned long bad_flags = 0;
 
-	if (unlikely(page_mapcount(page)))
+	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		bad_reason = "nonzero mapcount";
 	if (unlikely(page->mapping != NULL))
 		bad_reason = "non-NULL mapping";