From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2013 14:47:22 -0700
From: Ning Qu
Subject: [PATCHv2 07/13] mm, thp, tmpfs: initial support for huge page in write_begin/write_end in tmpfs
Message-ID: <20131021214722.GH29870@hippobay.mtv.corp.google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Sender: owner-linux-mm@kvack.org
List-ID:
To: Andrea Arcangeli, Andrew Morton, "Kirill A. Shutemov", Hugh Dickins
Cc: Al Viro, Wu Fengguang, Jan Kara, Mel Gorman, linux-mm@kvack.org,
        Andi Kleen, Matthew Wilcox, Hillf Danton, Dave Hansen,
        Alexander Shishkin, linux-fsdevel@vger.kernel.org,
        linux-kernel@vger.kernel.org, Ning Qu

For now, try to grab a huge cache page when the minimum requirement is
satisfied: the write position lies beyond the first huge-page-sized
range of the file, which spares most small files the huge page overhead.

Signed-off-by: Ning Qu
---
 mm/shmem.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0dd6689..af56731 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1635,8 +1635,20 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
         struct inode *inode = mapping->host;
         pgoff_t index = pos >> PAGE_CACHE_SHIFT;
         gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
+        int ret = 0;
+        int getpage_flags = 0;
+
+        /*
+         * Do not allocate a huge page in the first huge page range in page
+         * cache. This way we can avoid most small files overhead.
+         */
+        if (pos >= HPAGE_PMD_SIZE)
+                getpage_flags |= AOP_FLAG_TRANSHUGE;
 
-        return shmem_getpage(inode, index, pagep, SGP_WRITE, gfp, 0, NULL);
+        ret = shmem_getpage(inode, index, pagep, SGP_WRITE, gfp,
+                        getpage_flags, NULL);
+
+        return ret;
 }
 
 static int
@@ -1650,10 +1662,18 @@ shmem_write_end(struct file *file, struct address_space *mapping,
                 i_size_write(inode, pos + copied);
 
         if (!PageUptodate(page)) {
-                if (copied < PAGE_CACHE_SIZE) {
-                        unsigned from = pos & (PAGE_CACHE_SIZE - 1);
-                        zero_user_segments(page, 0, from,
-                                        from + copied, PAGE_CACHE_SIZE);
+                if (copied < len) {
+                        unsigned from;
+                        if (PageTransHugeCache(page)) {
+                                from = pos & ~HPAGE_PMD_MASK;
+                                zero_huge_user(page, 0, from);
+                                zero_huge_user(page, from + copied,
+                                                HPAGE_PMD_SIZE);
+                        } else {
+                                from = pos & ~PAGE_CACHE_MASK;
+                                zero_user_segments(page, 0, from,
+                                                from + copied, PAGE_CACHE_SIZE);
+                        }
                 }
                 SetPageUptodate(page);
         }
-- 
1.8.4
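
[Editorial note on the patch above]

The offset arithmetic in write_begin/write_end is easy to get wrong, so
below is a standalone userspace sketch of what the patch computes for
the huge page case; it is an illustration, not kernel code. The
constants are assumptions for x86-64 (4 KiB base pages, 2 MiB PMD-sized
huge pages); in the kernel they derive from PAGE_CACHE_SHIFT and
HPAGE_PMD_SHIFT. One hedged observation: if zero_huge_user() in this
series takes a (start, len) pair rather than a (start, end) pair, the
second call in shmem_write_end() presumably wants
HPAGE_PMD_SIZE - from - copied rather than HPAGE_PMD_SIZE.

/*
 * Standalone userspace sketch (NOT kernel code) of the offset
 * arithmetic in shmem_write_begin()/shmem_write_end() above.
 * Constants are assumptions for x86-64: 4 KiB base page, 2 MiB
 * PMD-sized huge page.
 */
#include <stdio.h>

#define PAGE_CACHE_SIZE 4096UL
#define PAGE_CACHE_MASK (~(PAGE_CACHE_SIZE - 1))
#define HPAGE_PMD_SIZE  (2UL * 1024 * 1024)
#define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))

int main(void)
{
        unsigned long pos = 3 * 1024 * 1024 + 123; /* write offset in file */
        unsigned long copied = 200;                /* bytes actually copied */

        /* write_begin: only use a huge page past the first 2 MiB range */
        int use_huge = pos >= HPAGE_PMD_SIZE;

        /* write_end: offset of the copied data inside its cache page */
        unsigned long page_size = use_huge ? HPAGE_PMD_SIZE : PAGE_CACHE_SIZE;
        unsigned long from = use_huge ? (pos & ~HPAGE_PMD_MASK)
                                      : (pos & ~PAGE_CACHE_MASK);

        /* a short write leaves two holes that must be zeroed before
         * the page can be marked uptodate */
        printf("use_huge=%d from=%lu\n", use_huge, from);
        printf("zero [0, %lu) and [%lu, %lu)\n",
               from, from + copied, page_size);
        return 0;
}

For pos = 3 MiB + 123 this prints from = 1048699 (the offset within the
2 MiB page) and the two ranges [0, 1048699) and [1048899, 2097152) that
the patch zeroes around the copied bytes.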