From: Balbir Singh <bsingharora@gmail.com>
To: sedat.dilek@gmail.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-next <linux-next@vger.kernel.org>
Subject: Re: [3.19-final|next-20150204] LTP OOM testsuite causes call-traces
Date: Tue, 10 Feb 2015 16:15:28 +0530
Message-ID: <CAKTCnz=ABrmbQrAEYJ=D0=s2+fRj9FH4D5oG6aWW-qVMoYLdEA@mail.gmail.com>
In-Reply-To: <CA+icZUWLJvuZknXhamKJxyGb+OYdkeD5z0V_jn=BQVtq8F5XUQ@mail.gmail.com>

On Tue, Feb 10, 2015 at 3:12 PM, Sedat Dilek <sedat.dilek@gmail.com> wrote:
> Hi,
>
> I first noticed call-traces in next-20150204 and tested on v3.19-final
> out of curiosity.
>
> So, oom3 | oom4 | oom5 from the LTP tests produce call-traces in my
> logs in both releases.
> Yesterday, I sent a tarball to linux-mm/Shutemov which contains the
> material for next-20150204.
> The for-lkml tarball contains the material for v3.19-final.
>
> As an example (please see dmesg files in attached tarball(s)):
> ...
> +[  143.591734] oom03 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
> +[  143.591789] oom03 cpuset=/ mems_allowed=0
> +[  143.591828] CPU: 0 PID: 2904 Comm: oom03 Not tainted 3.19.0-1-iniza-small #1
> +[  143.591830] Hardware name: SAMSUNG ELECTRONICS CO., LTD. 530U3BI/530U4BI/530U4BH/530U3BI/530U4BI/530U4BH, BIOS 13XK 03/28/2013
> +[  143.591831]  ffff880034a64800 ffff880032c57bf8 ffffffff8175c66c 0000000000000008
> +[  143.591835]  ffff8800681a54d0 ffff880032c57c88 ffffffff8175ac3a ffff880032c57c28
> +[  143.591838]  ffffffff810c329d 0000000000000206 ffffffff81c74040 ffff880032c57c38
> +[  143.591841] Call Trace:
> +[  143.591848]  [<ffffffff8175c66c>] dump_stack+0x4c/0x65
> +[  143.591852]  [<ffffffff8175ac3a>] dump_header+0x9e/0x259
> +[  143.591857]  [<ffffffff810c329d>] ? trace_hardirqs_on_caller+0x15d/0x200
> +[  143.591860]  [<ffffffff810c334d>] ? trace_hardirqs_on+0xd/0x10
> +[  143.591863]  [<ffffffff81184cd2>] oom_kill_process+0x1d2/0x3c0
> +[  143.591868]  [<ffffffff811ebf40>] mem_cgroup_oom_synchronize+0x630/0x670
> +[  143.591871]  [<ffffffff811e6ac0>] ? mem_cgroup_reset+0xb0/0xb0
> +[  143.591874]  [<ffffffff81185628>] pagefault_out_of_memory+0x18/0x90
> +[  143.591877]  [<ffffffff8106317d>] mm_fault_error+0x8d/0x190
> +[  143.591879]  [<ffffffff810637a8>] __do_page_fault+0x528/0x600
> +[  143.591883]  [<ffffffff8113a847>] ? __acct_update_integrals+0xb7/0x120
> +[  143.591886]  [<ffffffff81765a1b>] ? _raw_spin_unlock+0x2b/0x40
> +[  143.591889]  [<ffffffff810a8ac1>] ? vtime_account_user+0x91/0xa0
> +[  143.591892]  [<ffffffff8117ff83>] ? context_tracking_user_exit+0xb3/0x110
> +[  143.591895]  [<ffffffff810638b1>] do_page_fault+0x31/0x70
> +[  143.591898]  [<ffffffff817687b8>] page_fault+0x28/0x30
> +[  143.591934] Task in /1 killed as a result of limit of /1
> +[  143.591940] memory: usage 1048576kB, limit 1048576kB, failcnt 24350
> +[  143.591942] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
> +[  143.591943] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
> +[  143.591944] Memory cgroup stats for /1: cache:0KB rss:1048576KB rss_huge:0KB mapped_file:0KB writeback:12060KB inactive_anon:524284KB active_anon:524192KB inactive_file:0KB active_file:0KB unevictable:0KB
> +[  143.592007] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
> +[  143.592155] [ 2903]     0  2903     1618      436       9        0             0 oom03
> +[  143.592159] [ 2904]     0  2904   788050   245188     616    65535             0 oom03
> +[  143.592162] Memory cgroup out of memory: Kill process 2904 (oom03) score 921 or sacrifice child
> +[  143.592167] Killed process 2904 (oom03) total-vm:3152200kB, anon-rss:979724kB, file-rss:1028kB
> +[  144.526653] oom03 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0

Looks like we ran out of memory: the limit is 1048576kB (1 GiB) and we
hit it with a fail count of 24350. So basically the cgroup /1 hit its
limit and the task got OOM-killed. Isn't that exactly what you were
testing for? Was process 2904 the victim you expected?
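
For reference, this kind of memcg OOM is easy to trigger outside LTP.
Below is a minimal sketch, not LTP's actual oom03 source: the group
name "1", the 1 GiB limit, and the 2 GiB allocation size are chosen to
mirror the log above, and it assumes a cgroup v1 memory controller
mounted at /sys/fs/cgroup/memory (as on a 3.19-era system). Run as
root.

/* Minimal memcg-OOM reproducer sketch (illustrative, not LTP code). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define CGROUP "/sys/fs/cgroup/memory/1"
#define LIMIT  "1073741824"            /* 1 GiB, matching "limit 1048576kB" */
#define ALLOC  (2UL << 30)             /* touch 2 GiB, twice the limit */

static void write_file(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                exit(1);
        }
        close(fd);
}

int main(void)
{
        char buf[32];
        char *mem;
        size_t i;

        mkdir(CGROUP, 0755);                        /* create cgroup /1 */
        write_file(CGROUP "/memory.limit_in_bytes", LIMIT);
        snprintf(buf, sizeof(buf), "%d", (int)getpid());
        write_file(CGROUP "/tasks", buf);           /* move self into /1 */

        mem = malloc(ALLOC);
        if (!mem)
                return 1;
        /*
         * Fault in every page. Once the group's usage hits the limit
         * and reclaim fails, the memcg OOM killer fires on the page
         * fault path, as in the trace above (pagefault_out_of_memory
         * -> mem_cgroup_oom_synchronize -> oom_kill_process).
         */
        for (i = 0; i < ALLOC; i += 4096)
                mem[i] = 1;
        return 0;
}

The expected outcome is exactly what your log shows: the hog is killed
with "Memory cgroup out of memory" while the rest of the system stays
up, since only the charge against /1 is exhausted.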

Thanks,
Balbir Singh.

