
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

2029.0. "task scheduling problem" by FLOYD::YODER (MFY) Tue Feb 20 1996 09:54

Suppose you have 1 processor and n tasks T[i] (1<=i<=n), where each task has two
properties: a deadline d[i], and a duration t[i].  If your goal is to make all
deadlines, it is easy to show that an optimal strategy is "earliest deadline
first" or EDF scheduling.

Suppose that you instead want to maximize the number of deadlines you make.  Is
there a simple way to find an optimal strategy?  (You can obviously try EDF
first, and if it succeeds in making all deadlines you are done.)  I think it is
easy to show that it doesn't help to be able to switch from a task you have
started, so one method is to try all n! orderings of tasks and pick the best
solution.  But is there, say, a polynomial-time method?

I don't have a solution for this problem; it is just one suggested by some
reading I did.  It may well be solved in the literature.
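For reference, the EDF feasibility test mentioned above takes only a few lines.
Here is a minimal sketch (Python; the function name and the representation of
tasks as (deadline, duration) pairs are illustrative choices, not from the
original):

```python
def edf_feasible(tasks):
    """Return True if running the tasks in earliest-deadline-first order
    on one processor meets every deadline.
    tasks: list of (deadline, duration) pairs."""
    time = 0
    for deadline, duration in sorted(tasks):  # sort by deadline
        time += duration                      # finish time of this task
        if time > deadline:
            return False
    return True
```

If this returns False, some subset of the tasks must be dropped, which is
exactly the question posed above.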
2029.1. by EVMS::HALLYB (Fish have no concept of fire) Wed Feb 21 1996 08:04
    It seems to me one first tries EDF scheduling and sees if it works.
    
    If that fails then remove the task with the longest duration from the set 
    and start over. This works well if one has a "near fit", but not so well
    if there are several tasks of long duration with widely varying deadlines.
    
    The above suggests introducing a further complication: unequal
    weighting of task completion.  For each task T[i] define an
    "importance" weight w[i] for its deadline.  In .0 we have w[i] = 1
    for all i.  What if the w[i] are not all equal and we want to
    maximize the sum of w[i] over the tasks that make their deadlines?
    
      John
2029.2. "Will you settle for O(n log n)?" by WIBBIN::NOYCE (EV5 issues 4 instructions per meter) Wed Feb 21 1996 08:39
You don't need to try n! orderings, you only need to try 2^n subsets,
since if a subset can be made to fit, EDF will work for it.

But I think there's a simple O(n log n) algorithm that works (for the
unweighted problem in .0), if you look at the problem from the other
end.

Suppose a work order for task i arrives at time -t(i), and the task
has duration d(i).  (Time runs backward here: .0's deadline t(i)
becomes an arrival time, so the letters d and t swap roles relative
to .0.)  We're not allowed to work on any task before its work order
arrives.  How should we schedule to complete the maximum number of
tasks by time 0?

The solution is simply to keep a list (actually a heap, for O(log n)
insertion and removal of the smallest item) of available tasks.  At
all times we work on a task that has the minimum remaining duration.
Thus, when we finish one task, we pick a new task to work on that
has minimum remaining duration; when a work order arrives, we put
it into the heap, put our current task into the heap (after adjusting
its remaining duration to account for the work we've done), and then
pull a new task from the heap.  (To write the program, I would keep
working on the current task unless the new one has smaller duration
than what's left in the current task -- but I think you get an equally
optimal solution if you switch.)

When time 0 arrives, we have a bunch of tasks that aren't done.  I think
you can show that a schedule that completes one of those cannot complete
a larger number of tasks than the schedule we just produced.

Clearly this problem is equivalent to the deadline scheduling problem.
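Run forward in time, the heap-based rule above might be sketched like this
(an illustrative Python version, not the only way to code it; jobs are
(release, duration) pairs and `horizon` plays the role of time 0):

```python
import heapq

def max_completed(jobs, horizon):
    """Always work on the available job with the smallest remaining
    duration, preempting at arrivals, and count jobs finished by
    `horizon`.  jobs: list of (release, duration) pairs."""
    jobs = sorted(jobs)                      # by release time
    heap = []                                # remaining durations of available jobs
    now = done = i = 0
    n = len(jobs)
    while now < horizon:
        while i < n and jobs[i][0] <= now:   # admit newly released jobs
            heapq.heappush(heap, jobs[i][1])
            i += 1
        if not heap:
            if i == n:
                break                        # nothing left to do
            now = jobs[i][0]                 # idle until the next release
            continue
        rem = heapq.heappop(heap)            # shortest remaining duration
        next_release = jobs[i][0] if i < n else horizon
        step = min(rem, next_release - now, horizon - now)
        now += step
        rem -= step
        if rem == 0:
            done += 1                        # job completed
        else:
            heapq.heappush(heap, rem)        # preempted, or out of time
    return done
```

With one processor this does O(log n) work per arrival and per completion,
giving the O(n log n) bound claimed in the title.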
2029.3. "Re: .1" by FLOYD::YODER (MFY) Wed Feb 21 1996 10:16
I like the suggestion of weighted deadlines.  If we are fortunate, a solution
for that case could fall out of the more specialized problem.

"It seems to me one first tries EDF scheduling and sees if it works."
    
"If that fails then remove the task with the longest duration from the set and
start over."

But this can be wrong: suppose the task with the longest duration also has a
deadline so late that it's easy to make whatever you do.  Perhaps you want to
remove the task with the earliest latest-possible start time (i.e., the one
minimizing d[i]-t[i])?  This is clearly right if that time is in the past, in
which case the task is impossible to complete.
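A concrete instance of this failure mode (hypothetical numbers, Python): the
longest task C has so late a deadline that removing it cannot help, while
removing the shorter B does.

```python
def edf_meets_all(tasks):
    """True if earliest-deadline-first meets every deadline on one
    processor; tasks are (deadline, duration) pairs."""
    time = 0
    for deadline, duration in sorted(tasks):
        time += duration
        if time > deadline:
            return False
    return True

# A: deadline 2, duration 2;  B: deadline 3, duration 2;
# C: deadline 100, duration 5 -- the longest task, but its deadline is easy.
A, B, C = (2, 2), (3, 2), (100, 5)

assert not edf_meets_all([A, B, C])   # the full set is infeasible
assert not edf_meets_all([A, B])      # dropping the longest task (C) doesn't help
assert edf_meets_all([A, C])          # dropping B instead meets two deadlines
```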
2029.4. "Re: .3" by FLOYD::YODER (MFY) Thu Feb 22 1996 10:16
I like this solution, especially since it seems to generalize to the case I'm
really interested in, where you have k processors.

Your result follows from the following, which is another rearrangement of the
problem:

Conjecture.  Given n tasks T[i], 1<=i<=n, with durations d[i], such that no task
may be worked on until time t[i]>=0, and k processors P[i], 1<=i<=k.  Assume two
processors may not work on the same task simultaneously, and that switching
tasks is instantaneous.  For some schedule s of assigning processors to tasks,
let its "score" score(s,t) at time t be the number of tasks completed by time t.

Let a schedule S (starting at t=0) be such that, at every instant, the k tasks
which are worked on are the ones with the k smallest times to completion
(assuming there are at least k tasks to work on), and in which all available
tasks are worked on if there are at most k tasks which can be run.  Show that
for *any* other schedule s and any t>=0, score(S,t) >= score(s,t).

(End of conjecture.)

For a *fixed* deadline d, can you always rearrange the 0<=t<=d part of schedule
S into one s such that s has no task switching except upon task completion?  (It
is easy to show that this holds if no task is ever split between processors.)
2029.5. "Parallelism requires extra switching" by WIBBIN::NOYCE (EV5 issues 4 instructions per meter) Thu Feb 22 1996 11:10
> For a *fixed* deadline d, can you always rearrange the 0<=t<=d part of schedule
> S into one s such that s has no task switching except upon task completion?  (It
> is easy to show that this holds if no task is ever split between processors.)

No.  Consider two processors, and three tasks that all become available at t=0,
with duration 2, and a deadline of 3.  You can complete the three tasks with
this schedule:

	  P1				P2

t=0:  start T1			    start T2

t=1:				    drop T2, start T3

t=2:  finish T1, start T2

t=3:  finish T2			    finish T3


Come to think of it, this looks like a counterexample to the conjecture too.
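As a sanity check, the schedule above can be replayed in unit time steps (a
small Python sketch; the task and processor names follow the table):

```python
# Each entry maps the step [t, t+1) to the tasks run on (P1, P2).
schedule = {
    0: ("T1", "T2"),
    1: ("T1", "T3"),   # P2 drops T2 and starts T3
    2: ("T2", "T3"),   # P1 finished T1 at t=2, picks up T2
}

work = {"T1": 0, "T2": 0, "T3": 0}
for t, (p1, p2) in schedule.items():
    work[p1] += 1
    work[p2] += 1

# All three duration-2 tasks receive their 2 units of work by the deadline t=3.
assert all(w == 2 for w in work.values())
```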
2029.6. "new conjecture" by FLOYD::YODER (MFY) Thu Feb 22 1996 16:56
Yes, that demolishes the conjecture.  In fact, I have realized that disallowing
switching processors makes the question "can you make deadline d with k
processors" equivalent to the question "can you pack the durations into k bins
of size d" if all starting times are 0.  So with that restriction the problem is
NP-hard.

However, you can show that if all starting times are 0, the following is
optimal.  Order the tasks from shortest to longest, but discard those with
duration >d (these are impossible to complete).  (That is, you pretend you are
scheduling for a single processor.)  Now slice this imaginary schedule at times
d, 2d,..., kd and give processor i a schedule that "looks like" the schedule
between (i-1)d and id.  There is never a conflict, because although a task can
get split between two processors, the fact that its length is <=d ensures that
it won't be scheduled for the same time on those two processors.  (If it "wraps
around," it will get a slice 0..x on one processor and y..d on another, but x<=y
always.)
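This slicing is easy to sketch (Python; `wrap_around` is an illustrative name,
and tasks with duration > d are assumed to have been discarded already):

```python
def wrap_around(durations, k, d):
    """Lay the tasks out shortest-first on one imaginary processor, then
    cut the line at d, 2d, ..., kd and hand slice i to processor i, as
    described above.  Assumes every duration is <= d and the total work
    fits, i.e. sum(durations) <= k*d.
    Returns, per processor, a list of (task, start, end) intervals."""
    assert all(t <= d for t in durations) and sum(durations) <= k * d
    out = [[] for _ in range(k)]
    proc, offset = 0, 0
    for task, t in sorted(enumerate(durations), key=lambda p: p[1]):
        while t > 0:
            piece = min(t, d - offset)        # run until done or slice ends
            out[proc].append((task, offset, offset + piece))
            t -= piece
            offset += piece
            if offset >= d:                   # move to the next processor's slice
                proc, offset = proc + 1, 0
    return out
```

On the example from .5 -- three duration-2 tasks, k=2, d=3 -- the middle task
wraps around, getting [2,3] on the first processor and [0,1] on the second,
two pieces that don't overlap in time.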

Now, then, I propose the following method.  For some interval of length L in
which the set of tasks that can be worked on is fixed, order the tasks from
shortest to longest, but if a task has duration greater than L, only allocate a
time slice of size L in the schedule.  (Such a task can't get more than L's
worth of work in the interval.)  Then, as above, take the [0, kL] part of the
1-processor schedule and use it to produce a schedule for k processors.  At the
end of this interval, subtract from each task's duration whatever amount of
processor time it got, discard those that reach duration 0, and continue.

Conjecture.  The above produces an optimal schedule.