This page describes the current status of MICO/MT and explains how to contribute to the project.

How to help

Look at the project page on SourceForge. Get MICO/MT and try it. Please
report any problems and offer suggestions.

Where to get MICO/MT

The latest source is always available in the CVS Repository.

Keep in mind that MICO-MT has been compiled and preliminarily tested only on HP/UX 11.0 with HP aCC and on SUSE Linux with glibc 2.1. Other recent RedHat-based Linux systems should also work, but they are untested.

Tarballed releases and Andreas's thesis are also mirrored on this server at http://micomt.sourceforge.net/dist/

Current Status as of 05/08/2001

MICO/mt has new developers. We are assisting Andreas in the completion of MICO/mt. The game plan is to document the MICO source to lower the barrier to entry for new developers, push out a minimal multi-threaded release, make reentrant the services that are not, and merge MICO/mt with the main MICO distribution.

Network I/O is multi-threaded in and out of a thread-safe GIOPConn class, and single-threaded upcalls to POA-based servants have been successful. We will likely release a version of MICO containing just the multi-threaded I/O features, with invocations still single-threaded, so the MICO community can check it out and offer criticism, fixes, kudos, etc.

The MICO development graph

We are aware that MICO version 2.3.1 is nearly complete; however, the current MICO multi-threading work is based on MICO 2.3.0-1 as released September 6, 1999, with the mico-2.3.0-1.diffs.gz patch applied. The plan is that MICO-MT will be merged into the then-current MICO 2.3.X release as MICO-MT (or at least parts of it) stabilizes. In summary, the MICO development graph will look like this:

The version designated "MICO.2.3.X" above is whichever future version (hopefully as soon as 2.3.X!!) receives the merged multi-threaded capabilities. All subsequent MICO development continues from that version.

Goals of MICO/MT

Multi-threaded capability will be added (initially) to the following areas:

The Architecture of MICO-MT

"Layer 1" - the bottom-most layer...OSThread::Thread

The multi-threaded framework of MICO-MT is being developed and tested simultaneously on SUSE Linux and HP/UX 11.00. Both operating systems provide thread packages that conform (more or less) to the POSIX Draft 10 PThreads specification. The OS-specific interface details of each potential thread package are masked by an abstraction layer: MICO::OSThread. Using this layer, support for Solaris threads, DCE Draft 4 threads, etc. can be added (easily) without requiring changes to the upper levels of MICO/MT.
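To illustrate the idea, such an abstraction layer might look like the following sketch over POSIX threads. The class and method names here are invented for the example; they are not MICO's actual MICO::OSThread API, and a real backend for DCE or Solaris threads would supply the same interface over different calls.

```cpp
// Sketch of an OSThread-style abstraction over POSIX threads.
// Upper layers subclass Thread and override run(); only this file
// would change to support a different thread package.
#include <pthread.h>

class Thread {
public:
    virtual ~Thread() {}
    void start() { pthread_create(&tid_, 0, &Thread::entry, this); }
    void wait()  { pthread_join(tid_, 0); }     // join the thread
protected:
    virtual void run() = 0;                     // subclass supplies the body
private:
    // Static trampoline: pthreads knows nothing about C++ objects,
    // so we pass `this` through the void* argument.
    static void* entry(void* self) {
        static_cast<Thread*>(self)->run();
        return 0;
    }
    pthread_t tid_;
};

// Example worker: counts in its own thread.
class Counter : public Thread {
public:
    Counter() : value(0) {}
    int value;
protected:
    void run() { for (int i = 0; i < 1000; ++i) ++value; }
};
```

A caller would write `Counter c; c.start(); c.wait();` and then read `c.value`; the join in `wait()` makes the result visible to the creating thread.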

"Layer 2" - the Thread Manager, Operation Providers, etc...

On "top" of Layer 1 sits a layer of software that provides thread management, scheduling, and the "connection" of one thread to another - the "meat" of MICO-MT. Layer 2 closely follows the traditional Boss/Worker model.

Thread Pool Managers

The thread pool manager (TPM) is responsible for creating pools of threads and assigning work to them via message channels (described below). Each TPM has exactly one "Input Message Channel" associated with it, from which the manager obtains the next message and "hands" it off to the next available Operation Provider (thread).

Operation Providers

An Operation Provider (OP) is a construct that performs a quantized unit of work, e.g. Decode, Invoke, Demarshal, Update, etc. OPs can be built to perform useful work at the CORBA level or at the application level; indeed, it is a goal of this architecture to let application developers use it for their own application-level needs. An OP "registers" itself with a thread pool manager so the TPM "knows" the OP is available, then blocks, waiting for a message from its TPM or from another OP. OPs are "chained" to each other via "message channels" (MCs). A message (the result of processing by an OP) is passed from the OP to an MC. If the MC "belongs" to the "next" OP, no context switch or block occurs: the "next" OP sees the message directly as a method invocation and performs its operation. If the MC belongs to a TPM, the TPM gets the message and places it in the appropriate TPM's MC; additionally, a context switch occurs when this message is acted upon.

Message Channels

Message Channels (MCs) are the mechanism that makes communication between OPs possible. For "inter-thread" communication (i.e. "hopping" from one thread to another), a message is placed in a TPM's MC. For "intra-thread" communication (no context switch), a message is placed in the destination OP's MC. A thread pool manager obtains the next message of the appropriate "type" from its input message channel; an OP obtains the next message of the appropriate type from its own MC. This flexibility makes it possible to select (even at run-time) the "best" thread (OP) allocation strategy for the circumstances. For example, network reads/writes could be handled in one OP, the CORBA method invocation in a second thread, and the reply in a third. Remember: to "hop" threads, an OP's message is put in the thread pool manager's MC; to stay in the same thread, the message is handed off to an OP's MC.
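The two delivery paths can be sketched as follows. This is a minimal, single-threaded illustration of the dispatch rule only; the names are invented for the example and in real MICO-MT the TPM channel would be drained by a pool thread (the context switch), not by the caller.

```cpp
#include <queue>
#include <string>
#include <vector>

struct Message { std::string payload; };

// An OP processes a message; chained OPs are reached through their MC.
class OperationProvider {
public:
    explicit OperationProvider(OperationProvider* next = 0) : next_(next) {}
    std::vector<std::string> seen;               // record of processed payloads
    // Intra-thread path: putting a message in an OP's MC is just a
    // direct method call -- no queue, no context switch.
    void channel_put(const Message& m) { process(m); }
    void process(const Message& m) {
        seen.push_back(m.payload);
        if (next_) next_->channel_put(m);        // stays on the current thread
    }
private:
    OperationProvider* next_;
};

// Inter-thread path: the TPM's MC is a real queue. Putting a message
// here defers it until some pool thread drains the queue.
class TPMChannel {
public:
    void put(const Message& m) { q_.push(m); }
    // Stand-in for a pool thread's dispatch loop.
    bool drain(OperationProvider& op) {
        bool any = !q_.empty();
        while (!q_.empty()) { op.process(q_.front()); q_.pop(); }
        return any;
    }
private:
    std::queue<Message> q_;
};
```

The design point the sketch shows is that the sender chooses the destination channel, and that choice alone decides whether the hand-off is a plain call or a queued, cross-thread hop.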

"Layer 3" - MICO ORB/POA, etc.

The final MICO-MT layer consists of adding locking mechanisms (rwLocks, mutexes, etc.) around key containers in the ORB core - i.e. _invokes, _adapters, etc. - to make them thread-safe.
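For example, an adapter table might be guarded like this. The member names follow the text, but the class is a hypothetical sketch: the real MICO containers, value types, and lock choices (a rwLock would allow concurrent readers) differ.

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical Layer 3 sketch: every access to the shared container
// goes through the same lock, so concurrent threads cannot corrupt it.
class ORBCore {
public:
    void register_adapter(const std::string& name, void* adapter) {
        std::lock_guard<std::mutex> lk(adapters_lock_);
        adapters_[name] = adapter;
    }
    void* find_adapter(const std::string& name) {
        std::lock_guard<std::mutex> lk(adapters_lock_);
        std::map<std::string, void*>::const_iterator it = adapters_.find(name);
        return it == adapters_.end() ? 0 : it->second;
    }
private:
    std::mutex adapters_lock_;            // serializes all container access
    std::map<std::string, void*> adapters_;
};
```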

"Layer 4" - Application level multi-threading

All of the OP/MC/TPM components discussed above are useful to developers who want to make their CORBA applications multi-threaded and still "play nicely" with MICO-MT's internals.

List of Initial Contributors to this project

see: http://www.mico.org/FrameDescription.html#authors

MICO/mt specific:

Andreas Schultz, CS Master's degree student at the University of Magdeburg, Germany.
    Creating a working implementation of MICO-MT as a (useful) side effect of his thesis on the topic.
    Restructuring the ORB core to make it thread-safe.
    Enhancing the OSThread abstraction and creating the thread scheduling model.

Andy Kersting, Software Engineer.
    Initial OSThread abstraction class for HP/UX 11 PThreads, HP/UX 10.20 DCE Threads, and Linux PThreads.
    Making sure the code compiles and runs on HP/UX 11.00.
    General contributions, suggestions, and ideas throughout the project, including this page.