dm: only run the queue on completion if congested or no requests pending
On really fast storage it can be beneficial to delay running the
request_queue to allow the elevator more opportunity to merge requests.

Otherwise, it has been observed that requests are being sent to
q->request_fn much quicker than is ideal on IOPS-bound backends.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
snitm committed Apr 15, 2015
1 parent ff36ab3 commit 9a0e609
Showing 1 changed file with 9 additions and 3 deletions.
drivers/md/dm.c
@@ -1024,10 +1024,13 @@ static void end_clone_bio(struct bio *clone, int error)
  */
 static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 {
+	int nr_requests_pending;
+
 	atomic_dec(&md->pending[rw]);
 
 	/* nudge anyone waiting on suspend queue */
-	if (!md_in_flight(md))
+	nr_requests_pending = md_in_flight(md);
+	if (!nr_requests_pending)
 		wake_up(&md->wait);
 
 	/*
@@ -1036,8 +1039,11 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 	 * back into ->request_fn() could deadlock attempting to grab the
 	 * queue lock again.
 	 */
-	if (run_queue)
-		blk_run_queue_async(md->queue);
+	if (run_queue) {
+		if (!nr_requests_pending ||
+		    (nr_requests_pending >= md->queue->nr_congestion_on))
+			blk_run_queue_async(md->queue);
+	}
 
 	/*
 	 * dm_put() must be at the end of this function. See the comment above
