From: "Antonino A. Daplas" <adaplas@gmail.com>
To: linux-fbdev-devel@lists.sourceforge.net
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
James Simmons <jsimmons@infradead.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Linux Kernel Development <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, Paul Mundt <lethal@linux-sh.org>
Subject: Re: [Linux-fbdev-devel] [PATCH 2.6.20 1/1] fbdev, mm: hecuba/E-Ink fbdev driver
Date: Thu, 22 Feb 2007 07:43:36 +0800
Message-ID: <1172101416.4217.19.camel@daplas>
In-Reply-To: <45a44e480702210855t344441c1xf8e081c82ece4e63@mail.gmail.com>
On Wed, 2007-02-21 at 11:55 -0500, Jaya Kumar wrote:
> On 2/20/07, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > Don't you need a way to specify the maximum deferral time? E.g. a field in
> > fb_info.
> >
>
> You are right. I will need that. I could put that into struct
> fb_deferred_io. So drivers would setup like:
>
Would it also be possible to let drivers do the 'deferred_io'
themselves? Say, a driver that flushes the dirty pages on every
VBLANK interrupt.
> static struct fb_deferred_io hecubafb_defio = {
> .delay = HZ,
> .deferred_io = hecubafb_dpy_update,
> };
>
> where that would be:
> struct fb_deferred_io {
> unsigned long delay; /* delay between mkwrite and deferred handler */
> struct mutex lock; /* mutex that protects the page list */
> struct list_head pagelist; /* list of touched pages */
> struct delayed_work deferred_work;
> void (*deferred_io)(struct fb_info *info, struct list_head *pagelist); /* callback */
> };
>
> and the driver would do:
> ...
> info->fbdefio = &hecubafb_defio;
> register_framebuffer...
>
> When the driver calls register_framebuffer and unregister_framebuffer,
> I can then do the init and destruction of the other members of that
> struct. Does this sound okay?
It would be better if separate registration functions were created for
this functionality (i.e. deferred_io_register()/deferred_io_unregister()).
Tony
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>
Thread overview: 18+ messages
2007-02-17 10:42 [PATCH 2.6.20 1/1] fbdev,mm: " Jaya Kumar
2007-02-17 12:34 ` Peter Zijlstra
2007-02-17 13:25 ` Jaya Kumar
2007-02-17 13:59 ` Paul Mundt
2007-02-18 11:31 ` Jaya Kumar
2007-02-18 23:57 ` Paul Mundt
2007-02-20 4:13 ` Jaya Kumar
2007-02-20 4:38 ` Paul Mundt
2007-02-20 6:11 ` Jaya Kumar
2007-02-21 16:46 ` Jaya Kumar
2007-02-20 8:07 ` Geert Uytterhoeven
2007-02-21 16:55 ` Jaya Kumar
2007-02-21 21:52 ` James Simmons
2007-02-21 23:22 ` Jaya Kumar
2007-02-28 16:50 ` [Linux-fbdev-devel] [PATCH 2.6.20 1/1] fbdev, mm: " James Simmons
2007-02-21 23:43 ` Antonino A. Daplas [this message]
2007-02-21 23:47 ` Jaya Kumar
2007-02-21 23:43 ` Antonino A. Daplas