Real-Time article summaries

Please send corrections and comments to David L. Levine



Maher Awad, Juha Kuusela, and Jurgen Ziegler
``Octopus: Subsystem Analysis/Design and Performance Analysis''
Embedded Systems Programming, November 1996

Describes an OO-based system analysis and design approach for real-time systems. It uses OMT and Statecharts, and looks OK. Of interest is the process (``thread'', ``task'') allocation strategy: it's done after deciding whether each object interaction should be synchronous or asynchronous based on concurrency requirements, significance and time requirements of the events, service duration, and other subsystem communication. Then, groups of objects with synchronous interaction form processes. This is the textbook way of doing things, but it often doesn't happen that way in practice.

The last section of the paper is ``Performance Analysis''. It discusses priority assignment and timing analysis. Priority assignment is based on pairwise comparison of process importance, repeated methodically (and mechanically) until all possible pairings have been analyzed. It looks like a good approach: priorities are global, so this gets there eventually. It differs from rate monotonic scheduling, but could be a useful extension to it for handling importance.

The timing analysis isn't as rigorous as we need. The authors suggest estimating durations of services, taking into account the effects of message passing and synchronization, and checking the most critical event sequence. We need more precision and assurance that all sequences have been checked.


Ronald E. Barkley and T. Paul Lee
``A Heap-based Callout Implementation to Meet Real-Time Needs''
Proceedings of the USENIX Summer Conference, June, 1988, pp. 213--222

Describes a heap-based callout table for real-time event scheduling on UNIX systems. It provides much faster and more predictable callout scheduling operations.
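
The idea is simple enough to sketch with the C++ standard library; this is only an illustration of the data structure, not the authors' in-kernel implementation, and the names are mine:

  // Illustrative heap-based callout table (names are mine, not the paper's).
  // schedule() and each expiration are O(log n), in contrast to the O(n)
  // insertion of the traditional sorted-list callout table.
  #include <cstdint>
  #include <functional>
  #include <queue>
  #include <vector>

  struct Callout {
    std::uint64_t expiry_tick;    // absolute tick at which to fire
    std::function<void()> func;   // handler to invoke
  };

  struct LaterExpiry {
    bool operator()(const Callout &a, const Callout &b) const {
      return a.expiry_tick > b.expiry_tick;  // yields a min-heap on expiry
    }
  };

  class CalloutTable {
   public:
    void schedule(std::uint64_t expiry_tick, std::function<void()> func) {
      heap_.push(Callout{expiry_tick, std::move(func)});  // O(log n)
    }
    // Called from the clock tick: fire everything that is due.
    void expire(std::uint64_t now_tick) {
      while (!heap_.empty() && heap_.top().expiry_tick <= now_tick) {
        Callout c = heap_.top();
        heap_.pop();  // O(log n)
        c.func();
      }
    }
   private:
    std::priority_queue<Callout, std::vector<Callout>, LaterExpiry> heap_;
  };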

Alan Burns, Ken Tindell, and Andy Wellings
``Effective Analysis for Engineering Real-Time Fixed Priority Schedulers''
IEEE Trans. Software Engineering 21:5, May 1995

Presents methods for considering scheduler overhead: clock overheads, queue manipulations, and release delays. This is more of interest at the OS scheduling level, rather than at our application scheduling level.

Darren Cathey
``RTOS Benchmarking -- All Things Considered . . .''
Real-Time Magazine, Second Quarter 1993, reprinted by Wind River Systems

Shows results of VxWorks, pSOS+, VRTX32, LynxOS, and PDOS (to a limited extent) on a set of benchmark tests.

A few examples: on a Motorola MVME 147S board with a 25 MHz 68030 processor, a VxWorks context switch takes 18 microseconds; on a 25 MHz MVME-167 board, it takes 4 microseconds. The time to get and release a semaphore on the 147S is 11 microseconds.


[DLS:96]
Z. Deng, J. W.-S. Liu, and J. Sun
``Dynamic Scheduling of Hard Real-Time Applications in Open System Environment''
17th IEEE Real-Time Systems Symposium, Work in Progress Session 2, Washington, DC, December 4-6, 1996

Presents an approach for running both real-time and non-real-time applications on a general purpose platform (workstation), while guaranteeing that hard deadlines will be met. The scheduling scheme has these objectives:
  1. The schedulability of the tasks in each real-time application can be validated in isolation from other applications.
  2. A simple acceptance test determines whether a new real-time application should be admitted.
  3. Schedulability is guaranteed for the tasks in admitted real-time applications.
  4. A certain level of responsiveness is maintained for non-real-time applications.
  5. It does not rely on fixed allocation or fine-grain time-slicing.

The scheduling system has a two-level hierarchy. Each real-time application has its own constant utilization server, and all non-real-time applications share a single constant utilization server. The constant utilization servers are responsible for dispatching for their application(s), and have a fixed size U that reflects their application(s)' CPU time requirements. The lower-level OS scheduler coordinates the constant utilization servers. It also decides whether to admit new applications, by comparing the sum of the current servers' U values plus that of the new application against the processor capacity.

This seems like a good approach. The constant utilization servers may map nicely to LWPs on Solaris.
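
A rough sketch of the admission test as I read it, assuming server sizes are CPU utilizations normalized to a capacity of 1.0 (names are mine, not the authors'):

  // Illustrative admission test: a new server is admitted only while the
  // sum of all server sizes stays within the processor's capacity.
  #include <vector>

  class OpenSystemScheduler {
   public:
    bool try_admit(double new_server_size) {
      double total = new_server_size;
      for (double u : server_sizes_) total += u;
      if (total > capacity_) return false;       // reject: would overload
      server_sizes_.push_back(new_server_size);  // admit the new server
      return true;
    }
   private:
    double capacity_ = 1.0;             // normalized CPU capacity
    std::vector<double> server_sizes_;  // size U of each admitted server
  };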


Vincent Encontre
``How to use modeling to implement verifiable, scalable, and efficient real-time application programs''
Real-time Engineering, Fall 1997


Vincent Encontre
``Modeling and Implementing Correct, Scalable and Efficient Real-Time Applications with ObjectGEODE''
Real-Time Magazine, First Quarter 1996, reprinted by Verilog USA

Describes Verilog's ObjectGEODE analysis, design, verification, and validation toolset. The verification and validation aspects are supported by rapid prototyping and simulation. ITU-T's Specification and Description Language (SDL) and Message Sequence Charts (MSC) are used to specify the system and verify properties. SDL is FSM based and MSC can be used to describe scenarios; the two can be used together to validate the expected system behavior. Looks very interesting.

[FL:97]
Wu-chun Feng and Jane W.-S. Liu
``Algorithms for Scheduling Real-Time Tasks with Input Error and End-to-End Deadlines''
IEEE Trans. Software Engineering 23:2, February 1997

Describes real-time scheduling algorithms for preemptive, imprecise, composite tasks. Uses two-level scheduling: the higher level schedules tasks on a single processor taking into account their imprecise nature. The lower level scheduler allocates the time budgeted to a composite task across its component tasks.

Well presented and interesting, but primarily for systems that can take advantage of imprecise calculations where the more time that can be spent by a task on calculating a result, the better the result. Our application doesn't lend itself to that, at least initially. The lower level allocation across component tasks is similar to what we do when allocating CPU time to operations in a thread.


Colin Fidge
``Fundamentals of Distributed System Observation''
IEEE Software 13:6, November 1996, pp. 77-83

Interesting, very readable discussion of the difficulties of observing event ordering in distributed systems. They're summarized in Table 1 of the article; a y in a cell indicates that the mechanism overcomes the respective observability effect:
Table 1: Effectiveness of Time-Stamping Mechanisms
(columns: real-time stamps using local clocks, real-time stamps using a
global clock, totally ordered logical time stamps, and partially ordered
logical time stamps)

  Effect                                            local   global   totally   partially
  Multiple observers see different orderings          y       y         y         y
  Incorrectly perceived orderings                             y         y         y
  Same computation exhibits different orderings                         y         y
  Arbitrary orderings adopted                                                     y
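
For reference, the partially ordered logical time stamps in the last column are vector clocks. A minimal sketch (mine, not from the article):

  // Minimal vector clock: one counter per process; "a happened before b"
  // holds iff a's vector is <= b's componentwise and differs somewhere.
  #include <algorithm>
  #include <cstddef>
  #include <vector>

  struct VectorClock {
    std::vector<unsigned> counts;  // one entry per process
    explicit VectorClock(std::size_t n_processes) : counts(n_processes, 0) {}

    void local_event(std::size_t self) { ++counts[self]; }

    // On message receipt, merge the sender's time stamp, then tick.
    void receive(std::size_t self, const VectorClock &msg_stamp) {
      for (std::size_t i = 0; i < counts.size(); ++i)
        counts[i] = std::max(counts[i], msg_stamp.counts[i]);
      ++counts[self];
    }
  };

  // True iff the event stamped 'a' is known to have happened before 'b'.
  inline bool happened_before(const VectorClock &a, const VectorClock &b) {
    bool strictly_less_somewhere = false;
    for (std::size_t i = 0; i < a.counts.size(); ++i) {
      if (a.counts[i] > b.counts[i]) return false;
      if (a.counts[i] < b.counts[i]) strictly_less_somewhere = true;
    }
    return strictly_less_somewhere;  // otherwise concurrent or equal
  }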

[Fohler:92]
Gerhard Fohler
``Realizing Changes of Operational Modes with Pre Run-Time Scheduled Hard Real-Time Systems.''
In Proc. of the Second International Workshop on Responsive Computer Systems, Saitama, Japan, October 1992.

Discusses the requirements for mode changes in statically scheduled hard real-time systems, and issues for handling them. Good intro to the issues involved with mode changes. The focus is on strictly periodic systems, though.

[GPS:95]
Richard Gerber, William Pugh, and Manas Saksena
``Parametric Dispatching of Hard Real-Time Tasks''
appeared in IEEE Transactions on Computers, 44(3), March 1995.

Parametric dispatching allows dependencies between tasks and ranges of execution times. It separates the problem into off-line and on-line components. The schedule is partially produced off-line, but actual task dispatch times are not computed until run-time.

R. Gopalakrishnan and Gurudatta M. Parulkar
``Bringing Real-time Scheduling Theory and Practice Closer for Multimedia Computing''
appeared in ACM SIGMETRICS Conference, Philadelphia

see Gopal's dissertation chapter citation below.

R. Gopalakrishnan
Chapter 4: The Real-time Upcall Mechanism
chapter of dissertation

Describes the real-time upcall (RTU) mechanism, which is what you'd get if you made a thread so lightweight that it became a function. The state of an RTU doesn't get saved on preemption, because a preempted RTU returns synchronously. Rather than truly preempting, the RTU is allowed to finish its current iteration (for the target application of real-time protocol processing) before returning control to the dispatcher thread. Therefore, RTUs need not have a separate context.

Another byproduct of the target application is that scheduling is dynamic, and uses a modified rate monotonic scheme (simplified to account for delayed preemption). This application decomposes nicely into iterative handlers. Maximizing CPU utilization is important. For some other real-time applications, these assumptions and goals may not apply, of course. But, if they do, and the application can live with the delayed preemption, then the performance benefits should be significant.
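
A minimal sketch of the run-to-completion idea behind RTUs (my illustration, not the dissertation's code; names are made up):

  // Handlers are plain functions invoked by a single dispatcher thread in
  // priority order, so no per-handler context needs to be saved.  A higher
  // priority handler that becomes ready waits for the current iteration to
  // finish (delayed preemption).
  #include <functional>
  #include <queue>
  #include <vector>

  struct Rtu {
    int priority;                         // higher value = more urgent
    std::function<void()> one_iteration;  // e.g., process one packet batch
  };

  struct LowerPriority {
    bool operator()(const Rtu &a, const Rtu &b) const {
      return a.priority < b.priority;
    }
  };

  class RtuDispatcher {
   public:
    void make_ready(Rtu rtu) { ready_.push(std::move(rtu)); }
    void run_ready() {
      while (!ready_.empty()) {
        Rtu next = ready_.top();
        ready_.pop();
        next.one_iteration();  // runs to completion; no context switch
      }
    }
   private:
    std::priority_queue<Rtu, std::vector<Rtu>, LowerPriority> ready_;
  };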


Scott Herzinger
``Using C++: Some Words to the Wise''
Electronic Design, 21 February 1994, reprinted by Wind River Systems

Very brief, introductory discussion to uses of inlining, virtual functions, and static const object initialization in embedded systems.

Kevin B. Kenny and Kwei-Jay Lin
``Building Flexible Real-Time Systems Using the Flex Language''
Computer 24:5, May 1991

Describes Flex, a derivative of C++. It supports computations with adjustable execution times by allowing them to return imprecise results. In addition, the runtime system can choose a version of a function based on performance constraints; this is called performance polymorphism. Interesting in that it's almost the opposite of conventional real-time approaches, in which fixed computation times are required (or assumed). However, I don't think this approach is directly applicable for our application. (The mechanism for specifying timing and resource constraints, on blocks of code, might be of interest.)
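
Performance polymorphism can be approximated by hand in ordinary C++, roughly as follows; this is only a sketch of the idea, not Flex's mechanism or syntax, and the version table and numbers are made up:

  // Hand-rolled analogue of performance polymorphism: pick the most precise
  // version of a computation whose estimated worst-case execution time fits
  // the remaining time budget.
  #include <vector>

  struct Version {
    double wcet_ms;          // estimated worst-case execution time
    double (*compute)(int);  // implementation at some precision level
  };

  double approx_result(int x) { return x * 0.5; }          // fast, imprecise
  double refined_result(int x) { return x * 0.5 + 0.01; }  // slower, better

  // 'versions' is assumed ordered from most to least precise.
  inline double run_within_budget(const std::vector<Version> &versions,
                                  double budget_ms, int arg) {
    for (const Version &v : versions)
      if (v.wcet_ms <= budget_ms) return v.compute(arg);
    return 0.0;  // nothing fits: the caller must handle the degraded case
  }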

[KSZ:92] Sandeep Khanna, Michael Sebrée, and John Zolnowsky
``Realtime Scheduling in SunOS 5.0''

Everything you wanted to know about Solaris real-time scheduling. Solaris 2.x provides:

  1. priority-based scheduling for user tasks, so the application has control of scheduling,

  2. bounded dispatch latency, necessary for realtime, and

  3. bounded priority inversion via priority inheritance for mutexes, and to a limited degree (for the first reader only) for readers/writers locks.

Also, hidden scheduling overhead is reduced by separating timeouts into non-realtime and realtime timeouts. Non-realtime timeouts are processed by a kernel thread that has the highest sys class priority, which is below the realtime class priorities (with the default kernel configuration). See also [SunRT:95].
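
For reference, a minimal sketch of how an application might request fixed-priority realtime scheduling through the portable POSIX interface; the native Solaris interface is priocntl(2), and this example is mine, not from the paper:

  // Minimal sketch: request fixed-priority realtime scheduling via POSIX.
  // Requires appropriate privileges (typically superuser on Solaris 2.x).
  #include <sched.h>
  #include <cstdio>

  int main() {
    struct sched_param param;
    param.sched_priority = sched_get_priority_min(SCHED_FIFO);  // be modest

    if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &param) != 0) {
      std::perror("sched_setscheduler");  // typically EPERM without privilege
      return 1;
    }
    // From here on, this process preempts timeshare-class work.
    return 0;
  }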

Mark H. Klein, Thomas Ralya, Bill Pollak, Ray Obenza, and Michael González Harbour
A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems, 1993
Kluwer Academic Publishers

The RMA bible.

[RT-MachProtocolProcessing:93]
Chen Lee, Katsuhiko Yoshida, Cliff Mercer, and Ragunathan Rajkumar
``Predictable Communication Protocol Processing in Real-Time Mach''
Second IEEE Real-Time Technology and Applications Symposium, June 10-12, 1996, Boston, MA

Discusses communication protocol processing in real-time operating systems, in particular Real-Time Mach. Protocol processing in real-time is categorized as follows:
Prioritized Processing:
Instead of using FIFO queues, priority queues are used in protocol stacks. This does not solve the problem, however, in interrupt-driven protocol processing approaches where low-priority network traffic can indefinitely preempt higher-priority processing.

Shared Communication Protocol Server:
A separate server can be employed for protocol processing. The protocol stack then acts as a shared resource. The priority of the server can be controlled by applications, but priority inversion is still possible with this approach.

Prioritized Threads:
If priorities are associated with packets, they can be queued and serviced, prior to demultiplexing, in priority order. By using threads for the servicing, the OS dispatcher provides preemption for higher priority packets.

Application-Level Protocol Processing:
In this approach, which is used in RT-Mach, the protocol stack resides in application space. It is in library form so that it can be linked into each application process. The advantage of this approach in RT-Mach is that packets need not be processed by the Unix server.

The RT-Mach protocol processing approach may be used with either fixed priority scheduling or with the RT-Mach processor reservation scheduling policies. The experiments that the authors use to demonstrate the benefits of the RT-Mach reservation policy in combination with application-level protocol processing are interesting. There are five scenarios:
  1. Single thread (Net App) that transmits 10 UDP packets every 40 ms.
  2. Net App and non-real-time application with 5 compute-bound threads.
  3. Net App and non-real-time application with 5 threads that make IPC calls to the Unix server.
  4. Net App and one low-priority thread that also sends out 10 UDP packets every 10 ms.
  5. Net App and all three of the above non-real-time competing applications, i.e., compute bound, IPC, and background networking.
In each scenario, the processor usage of the Net App thread is measured. Smoother usage over time results from better scheduling and elimination of priority inversion.

[LY:86]
Dennis W. Leinbaugh and Mohamad-Reza Yamini
``Guaranteed Response Times in a Distributed Hard-Real-Time Environment''
IEEE Trans. Software Engineering SE-12:12, December 1986

Presents an analysis approach that considers processing time, shared device access, critical sections, and limited communication errors. The analysis has some intuitive appeal but I'm not comfortable with it. The key is to evaluate blocking; it starts by simply figuring out the worst case blocking. That is almost always too conservative, so transformation rules are applied to eliminate infeasible blocking. While this may be tractable for small systems, it won't scale well.

C. L. Liu and J. W. Layland
``Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment''
JACM 20:1, pp. 46-61, 1973

The original paper on rate-monotonic scheduling. It starts with a set of simplifying assumptions: all tasks are periodic and independent, deadlines coincide with the end of each period, execution times are constant, and aperiodic work has no hard deadlines.

It defines some useful terms, which are expressed in our terms here:

deadline:
The next scheduled execution time for a task.

overflow:
Overflow occurs at a particular time if that time is the deadline of a task that has not completed its execution.

feasible:
A scheduling algorithm is feasible for a task set if no overflow ever occurs.

response time:
The time interval between when a task is able to execute and when it completes its execution.

critical instant:
The particular time for a task at which its response time is maximum.

critical time zone:
The time interval between a critical instant for a task and when it completes its execution.

rate-monotonic priority assignment:
Tasks with higher execution rates are assigned higher priorities.

Some of the more useful theorems on fixed priority scheduling presented in the paper are:
Theorem 1. A critical instant for any task occurs whenever the task is requested simultaneously with requests for all higher priority tasks.

Theorem 2. If a feasible priority assignment exists for some task set, then the rate-monotonic priority assignment is feasible for that task set.

Theorem 5. For a set of m tasks with fixed priority order, the least upper bound on processor utilization is U = m (2^(1/m) - 1).

For the special case in which the period of each task is a multiple of the period of each lower priority task, the least upper bound on processor utilization is 1.

In addition to fixed priority scheduling, the paper introduces the dynamic deadline driven scheduling algorithm, now known as earliest deadline first.

Theorem 6. When the deadline driven scheduling algorithm is used to schedule a set of tasks on a processor, there is no processor idle time prior to an overflow.

Theorem 7. For a given set of tasks, the deadline driven scheduling algorithm is feasible if and only if the sum of the task utilizations is less than or equal to 1.

(I wrote this out instead of using an equation because that doesn't work too well with HTML 2.0. Task utilization is the ratio of a task's execution time to its period.)
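
A tiny sketch of the resulting schedulability tests (Theorems 5 and 7), with task utilization as just defined; this is my illustration, not code from the paper:

  // Utilization-based schedulability tests, under the paper's assumptions
  // (independent, periodic tasks with deadlines at the end of each period).
  #include <cmath>
  #include <vector>

  struct Task {
    double execution_time;  // C_i
    double period;          // T_i
  };

  inline double total_utilization(const std::vector<Task> &tasks) {
    double u = 0.0;
    for (const Task &t : tasks) u += t.execution_time / t.period;
    return u;
  }

  // Theorem 5: sufficient (not necessary) test for rate-monotonic priorities.
  inline bool rm_bound_test(const std::vector<Task> &tasks) {
    double m = static_cast<double>(tasks.size());
    return total_utilization(tasks) <= m * (std::pow(2.0, 1.0 / m) - 1.0);
  }

  // Theorem 7: necessary and sufficient test for deadline driven (EDF)
  // scheduling.
  inline bool edf_test(const std::vector<Task> &tasks) {
    return total_utilization(tasks) <= 1.0;
  }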


Cynthia Mavros and Ray Obenza
Guaranteeing Real-Time Performance Using RMA
Embedded Systems Conference, San Jose, CA, September 12-15 1995

Presentation slides of basics: real-time system goals are fast response, guaranteed deadlines, and stability in overload (critical deadlines still met even if all deadlines can't be met). Rate monotonic scheduling assigns highest priorities to tasks of highest rates (assuming uniprocessor, periodic tasks, independent tasks, task deadlines at end of period, no interrupts or task suspension, and no context switch overhead). Rate monotonic analysis is used to determine schedulability and reason about system timing behavior. First, if the sum of all task utilizations (CPU time required per period of time) is less than a utilization bound (which is a function of the number of tasks), then the task set is schedulable. If this conservative test fails, then the response-time test can be applied: for independent, periodic tasks, if each task meets its first deadline with worst-case task phasing, then the deadlines can be met.
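
The response-time (completion time) test mentioned above can be computed with the usual fixed-point iteration. A sketch under the same assumptions (independent, periodic tasks, deadlines at ends of periods), with tasks sorted by decreasing priority; this is my illustration, not from the slides:

  // Iterate R = C_i + sum over higher priority tasks j of ceil(R/T_j) * C_j
  // until it converges or exceeds the deadline (taken here as the period).
  #include <cmath>
  #include <cstddef>
  #include <vector>

  struct Task {
    double execution_time;  // C
    double period;          // T; deadline assumed equal to period
  };

  // 'tasks' must be sorted from highest to lowest priority.  Returns true
  // if the task at 'index' meets its first deadline under worst-case phasing.
  inline bool meets_first_deadline(const std::vector<Task> &tasks,
                                   std::size_t index) {
    const Task &ti = tasks[index];
    double response = ti.execution_time;
    for (;;) {
      double next = ti.execution_time;
      for (std::size_t j = 0; j < index; ++j)
        next += std::ceil(response / tasks[j].period) * tasks[j].execution_time;
      if (next > ti.period) return false;  // misses its deadline
      if (next == response) return true;   // converged
      response = next;
    }
  }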

Extensions: context switching can be modelled as increased execution time. Preperiod deadlines can be handled with a modified utilization bound. Interrupts can be handled by analyzing the utilization of each task individually, and adding terms to account for possible preemption by other (higher priority) tasks.

Priority inversion can be caused by synchronization and mutual exclusion, non-preemptable code regions, and FIFO queues. It is handled similarly to interrupts, by adding a term to account for the inversion. Synchronization protocols such as no preemption, priority inheritance, highest locker's priority, and priority ceiling can bound priority inversion. Aperiodic tasks can be handled with sporadic servers, by modeling them as periodic tasks with a fixed execution budget C in replenishment interval T, and then assigning priorities.

The presentation finishes with an interesting case study. A six-event system that was not designed with RMS was analyzed with RMA. Net result: the schedule was redesigned to be feasible, requiring 3 people for 3 weeks of effort.


Ashish Mehra, Atri Indiresan, and Kang G. Shin
``Structuring Communication Software for Quality-of-Service Guarantees''
IEEE Transactions on Software Engineering, 23(10), October 1997

Very interesting discussion of the architectural considerations for providing QoS guarantees in host communication software.

Sathis Menon, Partha Dasgupta, and Richard J. LeBlanc, Jr.
``Asynchronous Event Handling in Distributed Object-Based Systems''

Discusses event handling in passive, object-based environments. Identifies the necessity for both thread-based and object-based event notification.

It isn't directly relevant to our problem, though.

I'm not completely sure what they mean by event; I thought that event delivery is always asynchronous. The last paragraph of the Section 3 introduction confirms that. But in the introduction to Section 5, the authors say they are going to discuss ``synchronous and asynchronous delivery''. Does that really mean ``notification'', which is from the perspective of the object that initiates the event?


Klara Nahrstedt and Jonathan M. Smith
``End-to-End QoS Guarantees: Lessons Learned from OMEGA''

Experience report on an implementation of OMEGA architecture on a telerobotics application. OMEGA is an end-point architecture for provision of end-to-end QoS guarantees. Interesting for systems that must support QoS provision and negotiation; for us, that won't be until we get to dynamic scheduling.

The lessons learned about using AIX for real-time applications suggest that it isn't as good a real-time platform as Solaris. In particular, it had to be restricted to having only one user (does AIX have a real-time scheduling class with higher priority than all others?). They also restricted the implementation of the application and transport protocols to a single user process, and used joint scheduling by the protocol stack (what does that mean?). They recognize that their scheduler implementation has major limitations, but it's not discussed further here (they reference Nahrstedt's PhD thesis).


[PublisherSubscriber:95]
Ragunathan Rajkumar, Mike Gagliardi, and Lui Sha
``The Real-Time Publisher/Subscriber Inter-Process Communication Model for Distributed Real-Time Systems: Design and Implementation''
First IEEE Real-Time Technology and Applications Symposium, May 1995.

Describes the real-time publisher/subscriber model. It looks a lot like our Event Channel, with publishers analogous to suppliers and subscribers analogous to consumers. They don't appear to have a QoS specification mechanism, though. It looks like scheduling is based on thread properties, which aren't addressed by the publisher/subscriber model. One interesting aspect of the model is the notion of separating priorities for subscription and data transfer, because those activities are handled by different threads.

[Saksena:94]
Manas Saksena
``Parametric Scheduling for Hard Real-Time Systems''
Doctoral Thesis


Lui Sha and John B. Goodenough
Real-Time Scheduling Theory and Ada
Computer 23:4, April 1990

Lists measures of merit for real-time systems, then reviews rate monotonic theory. For a set of independent periodic tasks, higher (fixed) priorities are assigned to tasks with shorter periods.
Theorem 1: A set of n independent periodic tasks scheduled by the rate monotonic algorithm will always meet its deadlines, for all task phasings, if C_1/T_1 + ... + C_n/T_n <= n(2^(1/n) - 1) = U(n),
where C_i and T_i are the execution time and period of task i, respectively. U(n) characterizes schedulability and converges to ln 2 (about 69 percent) as n approaches infinity.

A less pessimistic test is based on the critical zone theorem:
Theorem 2: For a set of independent periodic tasks, if each task meets its first deadline when all tasks are started at the same time, then the deadlines will always be met for any combination of start times.

If a critical task is excluded from the schedulable set, it might be possible to include it by using period transformation. The period of the task is reduced, so that it only performs part of its processing in each smaller period; for example, a task with period 100 ms and execution time 40 ms can be run as a task with period 50 ms and execution time 20 ms. This could allow its priority to be raised.

Next, scheduling of both aperiodic and periodic tasks is considered. If an aperiodic task can preempt a periodic task, then its response time can typically be drastically reduced. Sporadic servers service aperiodic events but can be scheduled using Theorems 1 and 2, because they are viewed as periodic tasks that perform polling.

Priority inversion, in which a high priority task is delayed by a lower priority task, can occur when the high priority task is blocked by another. The priority ceiling protocol, in which a lower priority task holding a lock inherits the priority of the higher priority tasks it blocks, can help alleviate this problem.

Guidelines for programming real-time systems in Ada in order to support rate monotonic analysis are presented. They include sharing data between periodic tasks using a monitor, rather than calling each other directly. Most of the remainder of the guidelines have to do with priorities.


Lui Sha, Ragunathan Rajkumar, John Lehoczky, and Krithi Ramamritham
``Mode Change Protocols for Priority-Driven Preemptive Scheduling''
J. Real-Time Systems, Vol. 1, 1989, pp. 243-264
Reprinted in John A. Stankovic and Krithi Ramamritham, Advances in Real-Time Systems, IEEE Computer Society Press, 1992.

Discusses mode changes, due to addition, deletion, or parameter changes of tasks. Presents a mode change protocol that is compatible with rate monotonic analysis. The protocol models a task parameter change as a task deletion/task addition pair. The protocol shows how to handle semaphores that are used for task synchronization. The analysis looks like a straightforward augmentation to RMS, and very useful.

The paper does not discuss the application-specific aspects of mode changes, such as deciding the task changes and the order in which they should be implemented, or the really hard issues, like deciding what each task should do when the ``global'' mode changes.


Lui Sha and Shirish S. Sathaye
``A Systematic Approach to Designing Distributed Real-Time Systems''
Computer 26:9, September 1993, pp. 68-78

Describes extensions to generalized rate monotonic scheduling (GRMS) to support distributed systems. Schedulability is extended to include the delay a message incurs reaching its destination on a network, which includes the transmission delay and propagation delay. Preemption in distributed systems is more complicated than in centralized systems because increasing preemptability doesn't always lead to minimizing priority inversion. In special (over-preemption) situations, preemption does not reduce the worst-case duration of priority inversion.

The authors address over-preemption with a preemption control protocol. It includes phase control and rate control. The authors also discuss analysis of systems that weren't designed to support GRMS. They quantify the effect on schedulability of limiting the granularity of priority. There are several detailed examples; I haven't gone through them yet.


Marc Shepard
``Managing Delays for Optimal Real-Time Performance''
Electronic Design Embedded Controllers Applications Issue, 1 October 1993, reprinted by Wind River Systems

Very brief, introductory discussion of delays due to priority assignment (scheduling), shared critical regions (access to shared resources), and interrupt lockouts. Common scheduling strategies include greater importance, most frequent deadlines (rate monotonic), and dynamic priorities. Greater importance has the disadvantage that short tasks that aren't as important as longer ones may not be scheduled frequently enough to always meet their deadlines. Rate monotonic has the disadvantage that tasks with infrequent deadlines may not be scheduled at sufficiently high priority. Dynamic priorities attempt to address this deficiency by raising the priority of a task as its deadline approaches, at the cost of additional overhead. Discussion of the other contributors to delay is also brief: critical regions, interrupt disable intervals, and interrupt service routines should be as short as possible.

David B. Stewart and Pradeep K. Khosla
``Real-Time Scheduling of Sensor-Based Control Systems''
Real-Time Programming, ed. by W. Halang and K. Ramamritham, (Tarrytown, New York: Pergamon Press Inc.), 1992.

Interesting article that proposes a new scheduling algorithm, maximum-urgency-first (MUF), which combines the advantages of rate-monotonic, earliest-deadline first, and minimum laxity first scheduling algorithms. It has a schedulable bound of 100 percent for the critical set, the critical set is guaranteed to meet all of its deadlines, and it allows the scheduler to detect missed deadlines and notify the offending/affected tasks. MUF is the default scheduler for the Chimera RTOS. It is best suited to dynamic scheduling applications.
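
As a rough illustration only (not Chimera's implementation), the MUF dispatching decision can be thought of as a lexicographic comparison: criticality first, then laxity, then a static user priority:

  // Rough sketch: among ready tasks, prefer higher criticality, then smaller
  // laxity (time to deadline minus remaining execution), then a static user
  // priority as the tie-breaker.  Field names are illustrative, not Chimera's.
  #include <tuple>
  #include <vector>

  struct ReadyTask {
    int criticality;        // tasks in the critical set get higher values
    double deadline;        // absolute deadline
    double remaining_exec;  // remaining execution time
    int user_priority;      // static tie-breaker
  };

  inline const ReadyTask *most_urgent(const std::vector<ReadyTask> &ready,
                                      double now) {
    const ReadyTask *best = nullptr;
    for (const ReadyTask &t : ready) {
      double laxity = (t.deadline - now) - t.remaining_exec;
      double best_laxity =
          best ? (best->deadline - now) - best->remaining_exec : 0.0;
      if (!best || std::make_tuple(t.criticality, -laxity, t.user_priority) >
                       std::make_tuple(best->criticality, -best_laxity,
                                       best->user_priority))
        best = &t;
    }
    return best;  // null if nothing is ready
  }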


David B. Stewart and Pradeep K. Khosla
``Mechanisms for Detecting and Handling Timing Errors''
Communications of the ACM 40:1, January 1997, pp. 87-93

Interesting article on kernel support for detection and handling of timing errors in hard and soft real-time systems. Timing errors include missed deadlines due to failure to complete execution on time, and incorrect estimates of execution time. (Though I think the distinction isn't clear for underestimates. It is useful for overestimates, which can lead to underutilization.)

The mechanisms are policy-independent, and have been incorporated into the Chimera RTOS. The overhead per rescheduler operation on a 25 MHz MC68030 is less than 1 microsecond.

This approach is most useful in hard real-time systems that need to handle thread overrun. The authors discuss some applications to soft real-time systems, using an application-specific policy such as ``borrowing'' execution time from the thread's next execution cycle. Imprecise computations and adaptive (dynamic) real-time scheduling can be readily handled with this approach as well.
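
A crude user-level analogue of overrun detection might look like the sketch below; the kernel mechanisms in the article catch the overrun while it is happening, rather than after the fact as here:

  // Time an operation against its budget and call a handler on overrun.
  #include <chrono>
  #include <functional>

  using Clock = std::chrono::steady_clock;

  inline void run_with_budget(const std::function<void()> &operation,
                              std::chrono::microseconds budget,
                              const std::function<void()> &overrun_handler) {
    const auto start = Clock::now();
    operation();
    if (Clock::now() - start > budget)
      overrun_handler();  // e.g., skip or degrade the next cycle
  }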


[SunRT:95] Sun Microsystems
``Realtime Programming and Administration''
Sun System Interfaces Guide -- November 1995

Useful information about realtime programming on Suns: scheduling classes, memory locking, early dynamic linking, IPC, asynchronous I/O, asynchronous networking, and time functions. See also [KSZ:92].

Ken Tindell and Hans Hansson
``Real Time Systems by Fixed Priority Scheduling''

Good intro to fixed priority scheduling.

Hideyuki Tokuda and Tatsuo Nakajima
``Evaluation of Real-Time Synchronization in Real-Time Mach''
Proceedings of the USENIX Mach Symposium, November 20-22, 1991.

Interesting discussion of the synchronization model and primitives in Real-Time Mach. Several synchronization policies, namely kernelized monitor, basic priority, basic priority inheritance, priority ceiling, and restartable critical sections, are supported. Includes measurements of their execution time cost on a 25 MHz MC68030:
Execution Time for Lock/Unlock Operation Pair with Single Thread

  Locking protocol                Lock/unlock time (microseconds)
  kernelized monitor                            288
  basic priority                                372
  basic priority inheritance                    388
  priority ceiling                              488
  restartable critical section                  452

[WBTK:95]
Victor Fay-Wolfe, John K. Black, Bhavani Thuraisingham, and Peter Krupp
``Real-time Method Invocations in Distributed Environments''
University of Rhode Island, Department of Computer Science and Statistics Technical Report 95-244, 1995.

This paper describes the essential element of real-time CORBA extensions: how to do timed, distributed method invocations in a CORBA environment.

[URI_RT_CORBA:97]
Victor Fay-Wolfe, Lisa Cingiser DiPippo, Roman Ginis, Michael Squadrito, Steven Wohlever, Igor Zykh, and Russell Johnston
``Real-Time CORBA''
Real-Time Technology and Applications Symposium '97, Montreal, Quebec, Canada

Describes requirements for real-time extensions to CORBA, and surveys current efforts.

Discusses the US Navy NRaD/University of Rhode Island (URI) RT CORBA system, which supports expression and enforcement of end-to-end timing constraints. Timing constraints are supported through timed distributed method invocations (TDMIs); a TDMI corresponds to TAO's RT_Operation, and an RT_Environment structure contains QoS parameters similar to those in TAO's RT_Info. One difference between the two approaches is that TDMIs express required timing constraints, e.g., deadlines relative to the current time, while RT_Operations publish their resource requirements, e.g., CPU time. The difference in approaches may reflect the different time scales, seconds versus milliseconds, respectively, and scheduling requirements, dynamic versus static, of the initial application targets. However, the approaches should be equivalent with respect to system schedulability and analysis.

In addition, NRaD/URI supply a new CORBA Global Priority Service, analogous to TAO's Scheduling Service, and augment the CORBA Concurrency and Event Services. The initial implementation uses earliest-deadline-first within importance level dynamic, on-line scheduling, supported by global priorities. A global priority is associated with each TDMI, and all processing associated with that TDMI inherits that priority. In contrast, TAO's scheduling is initially static and off-line, and uses importance as a ``tie-breaker'' following the analysis of other requirements such as data dependencies. Both NRaD/URI and TAO readily support changing scheduling policy by encapsulating it in their respective CORBA Global Priority and Scheduling Services.


[XP:90]
Jia Xu and David Lorge Parnas
``Scheduling Processes with Release Times, Deadlines, Precedence, and Exclusion Relations''
IEEE Trans. Software Engineering 16:3, March 1990, 360 - 369

Presents an algorithm for finding an optimal static schedule on a single processor. Interesting in that it considers precedence and exclusion relations. Benefits include elimination of the use of (costly) synchronization and mutual exclusion mechanisms, because all execution is pre-arranged to satisfy the specified precedence and exclusion relations. Similarly, the unpredictability and context switching cost of interrupts are eliminated, because they are handled synchronously. Our RT_Info would work very well for specifying thread parameters with this approach, after just adding specification of exclusion relations.

[XP:93]
Jia Xu and David Lorge Parnas
``On Satisfying Timing Constraints in Hard-Real-Time Systems''
IEEE Trans. Software Engineering 19:1, January 1993, 70 - 84

Interesting survey and discussion of important issues in static scheduling. Very useful in explaining the motivation behind [XP:90].

Matthew J. Zelesko and David R. Cheriton
``Specializing Object-Oriented RPC for Functionality and Performance''
(not reviewed)

John A. Zinky, David E. Bakken, and Richard Schantz
``Overview of Quality of Service for Objects''
Extends CORBA IDL with a quality of service description language (QDL). That's an interesting and useful approach to handling QoS, and the paper discusses some relevant issues such as end-to-end QoS, proxy objects, connection state (with a probably canonical example), adaptivity, and multiple connections. QDL isn't really described in this paper, but the authors do have some presentation slides with examples. From those, it appears that we'll have to develop our own QoS specification language.