4. Copland's Design
This chapter describes various aspects of Copland’s design. It will probably only interest those who want to dig into Copland’s internals.
Copland borrows much of its design from the HiveMind IoC container for Java. However, Copland was written without significant experience with HiveMind’s internals, so although the two may look similar on the outside, their implementations differ significantly.
Copland uses HiveMind’s concepts of “modules” (but calls them “packages”), “service points”, “configuration points”, and “contributions”. The “service models” are also borrowed from HiveMind, although Copland implements them in a more flexible (and extensible) manner.
The idea of a service factory was also borrowed from HiveMind. HiveMind supports digester-like “rules” for processing descriptor files, but because of Ruby’s highly dynamic nature, such complexity was not necessary in Copland.
Interceptors, and the way they are created (via special “interceptor factories”), are identical to their HiveMind counterparts.
HiveMind uses XML for its module descriptors. Copland uses YAML.
HiveMind’s configuration points revolve around XML—you define a configuration point and a schema to go with it, and all contributions to that configuration point must match that schema. Copland’s approach is simpler: simply declare whether a configuration point is a list or a map, and then all contributions must be either in the form of list elements to append to the configuration point, or map elements to merge into it.
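The list-or-map behavior can be sketched in a few lines of Ruby. This is a hypothetical illustration, not Copland’s actual class; the names `ConfigurationPoint` and `#contribute` are assumptions.

```ruby
# Hypothetical sketch of Copland-style configuration points: a point is
# declared as either a list or a map, and each contribution is either
# appended to the list or merged into the map.
class ConfigurationPoint
  attr_reader :value

  def initialize(type)
    @type  = type                       # :list or :map
    @value = (type == :list ? [] : {})
  end

  # Apply a single contribution from a package descriptor.
  def contribute(contribution)
    case @type
    when :list then @value.concat(Array(contribution))
    when :map  then @value.merge!(contribution)
    end
  end
end

point = ConfigurationPoint.new(:list)
point.contribute(["a", "b"])
point.contribute(["c"])
point.value   # list contributions are appended in order

mapped = ConfigurationPoint.new(:map)
mapped.contribute("host" => "localhost")
mapped.contribute("port" => 8080)
mapped.value  # map contributions are merged together
```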
Copland also names the service models differently than HiveMind does.
Events and listeners exist in HiveMind, too, but because of Ruby’s dynamic nature they can be implemented much more simply in Copland. Therefore, the way they are declared and used is different in Copland.
Multicast services do not exist in HiveMind, nor do services that are backed by anything other than objects. Copland allows services to be backed by classes and singletons, as well as objects.
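The idea behind a multicast service can be sketched as a single front-end object that forwards every call to all of its backing services. This is an illustration only, assuming a hypothetical `Multicaster` class, and is not Copland’s actual implementation.

```ruby
# Illustrative multicast front-end: forwards each method call to every
# registered backing service via method_missing.
class Multicaster
  def initialize(*services)
    @services = services
  end

  # Forward any invocation to all wrapped services; the individual
  # return values are collected, though callers typically ignore them.
  def method_missing(name, *args, &block)
    @services.map { |svc| svc.public_send(name, *args, &block) }
  end

  def respond_to_missing?(name, include_private = false)
    @services.all? { |svc| svc.respond_to?(name, include_private) }
  end
end

log = []
a = Object.new
b = Object.new
a.define_singleton_method(:notify) { |msg| log << "a:#{msg}" }
b.define_singleton_method(:notify) { |msg| log << "b:#{msg}" }

caster = Multicaster.new(a, b)
caster.notify("ping")   # both backing services receive the call
```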
The registry’s initialization process consists of two phases. In the first phase, the package descriptors are loaded and parsed. Once all of the packages have been loaded, the second phase begins, in which each package processes those parts of itself that refer to other packages. This includes contributions to configuration points and the processing of any referenced parameter schema definitions.
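The two phases can be sketched as follows. The class names and interfaces here are hypothetical simplifications; real Copland parses YAML package descriptors rather than taking in-memory hashes.

```ruby
# Condensed sketch of the two-phase registry build: phase 1 loads every
# package, phase 2 resolves cross-package references such as
# contributions to configuration points defined elsewhere.
class Package
  attr_reader :name, :contributions

  def initialize(name, contributions = {})
    @name = name
    @contributions = contributions   # config-point name => contribution
  end
end

class Registry
  attr_reader :configuration_points

  def initialize(descriptors)
    # Phase 1: load and parse every package descriptor.
    @packages = descriptors.map { |name, contribs| Package.new(name, contribs) }
    @configuration_points = Hash.new { |h, k| h[k] = [] }

    # Phase 2: apply each package's contributions, which may target
    # configuration points owned by other packages.
    @packages.each do |pkg|
      pkg.contributions.each do |point, value|
        @configuration_points[point].concat(Array(value))
      end
    end
  end
end

registry = Registry.new(
  "copland" => { "copland.Startup" => ["logger"] },
  "myapp"   => { "copland.Startup" => ["mailer"] }
)
registry.configuration_points["copland.Startup"]   # contributions merged
```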
Once initialization has completed, the registry (which is itself exported as a service) loads the copland.Startup service and instructs it to start any services that have registered themselves with it (via its configuration point).
The way a service is instantiated depends on two things: its service model, and its instantiator (or implementor).
The instantiator defines how the service is instantiated. It may be either a string (in which case the service is created simply by calling #new on the class named by that string) or a map (in which case creation of the service is delegated to a service factory). These two approaches are implemented inside Copland as “instantiation factories”, with the first being Copland::Instantiator::Simple, and the second being Copland::Instantiator::Complex.
The service models define when the service is instantiated. For example, the “singleton” model says that a service is only instantiated the first time it is requested, with subsequent requests returning the cached instance. The “singleton-deferred” model indicates that a service should only be instantiated the first time any of its methods are invoked (this level of granularity is achieved via a proxy object that defers instantiation of its wrapped service).
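The difference between the two models can be sketched with a deferring proxy. The class names below are hypothetical simplifications of the idea, assuming a service point object that responds to #instantiate.

```ruby
# "singleton": instantiate on first request, cache thereafter.
class SingletonModel
  def initialize(service_point)
    @point = service_point
  end

  def instance
    @instance ||= @point.instantiate
  end
end

# Proxy that postpones instantiation until the first message arrives.
class DeferredProxy
  def initialize(service_point)
    @point = service_point
  end

  def method_missing(name, *args, &block)
    @service ||= @point.instantiate
    @service.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

# "singleton-deferred": hand back a proxy immediately; the real service
# is only created when one of its methods is invoked.
class SingletonDeferredModel
  def initialize(service_point)
    @point = service_point
  end

  def instance
    @proxy ||= DeferredProxy.new(@point)
  end
end

creations = 0
point = Object.new
point.define_singleton_method(:instantiate) do
  creations += 1
  service = Object.new
  service.define_singleton_method(:ping) { "pong" }
  service
end

model = SingletonDeferredModel.new(point)
proxy = model.instance   # no instantiation has happened yet
proxy.ping               # first invocation triggers instantiation
```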
Thus, when a user requests a service, the registry calls the #instance method of the corresponding service point. This in turn calls the #instance method of the service point’s model, and the service model may then either call the service point’s #instantiate method (to actually instantiate the service) or return a cached instance.
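That delegation can be sketched as follows, using hypothetical simplified classes: the registry asks the service point, the service point defers the “when” decision to its model, and the model calls back into the service point when an actual object is needed.

```ruby
# Sketch of the lookup flow: ServicePoint#instance delegates to the
# model, and the model calls ServicePoint#instantiate when required.
class ServicePoint
  def initialize(model_class, &builder)
    @builder = builder
    @model   = model_class.new(self)
  end

  # Called by the registry when a user requests the service.
  def instance
    @model.instance
  end

  # Called by the model when an actual object must be built.
  def instantiate
    @builder.call
  end
end

# A minimal caching model: instantiate once, return the cached
# instance on every subsequent request.
class CachingModel
  def initialize(service_point)
    @point = service_point
  end

  def instance
    @cached ||= @point.instantiate
  end
end

built = 0
point  = ServicePoint.new(CachingModel) { built += 1; Object.new }
first  = point.instance
second = point.instance   # the cached instance is returned
```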
The service point’s #instantiate method calls the #instantiate method of the instantiation factory for that service point. The Simple factory simply calls #new on the service point’s backing class, while the Complex factory “executes” the parameter schema for the specified service factory and then invokes the service factory’s #create_instance method. Either way, the backing object for the service is returned.
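The two instantiation strategies can be sketched like this. The real classes are Copland::Instantiator::Simple and Copland::Instantiator::Complex; the interfaces shown here are simplified assumptions, and schema validation is elided.

```ruby
# Sketch of the "simple" strategy: resolve the named class and call #new.
class SimpleInstantiator
  def initialize(class_name)
    @class_name = class_name
  end

  def instantiate
    Object.const_get(@class_name).new
  end
end

# Sketch of the "complex" strategy: delegate creation to a service
# factory, handing it the (already schema-checked) parameters.
class ComplexInstantiator
  def initialize(service_factory, parameters)
    @factory    = service_factory
    @parameters = parameters
  end

  def instantiate
    @factory.create_instance(@parameters)
  end
end

class Greeter
  def greet
    "hello"
  end
end

simple = SimpleInstantiator.new("Greeter")
simple.instantiate.greet   # backing object created via Greeter.new

factory = Object.new
factory.define_singleton_method(:create_instance) { |params| params["greeting"] }
complex = ComplexInstantiator.new(factory, "greeting" => "hi")
complex.instantiate        # creation delegated to the service factory
```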
Then, any interceptors that were specified are added to the new service, the new service is added as a listener to any requested event producers, and the new service is returned.
Interceptor chains are implemented as chains of proxy objects, each of which wraps an interceptor instance. Each method of the service being intercepted is renamed and replaced with a new method that simply executes the #process_next method on the first element of the chain. When the last element of the chain invokes #process_next on what it believes to be the next link, the renamed original method of the service is invoked.

This makes each interceptor appear to be a filter for the method invocation, which indeed it is. Each interceptor may invoke the #process_next method of the next link in the chain, or not, as it chooses.
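The rename-and-chain mechanism can be sketched in Ruby. This is an illustrative reconstruction of the idea, not Copland’s actual code; the class and method names (other than #process_next) are assumptions.

```ruby
# One link in the chain: wraps an interceptor and knows its successor.
class ChainLink
  def initialize(interceptor, next_link)
    @interceptor = interceptor
    @next_link   = next_link
  end

  def process_next(name, args)
    @interceptor.call(name, args, @next_link)
  end
end

# Terminal link: invokes the renamed original method on the service.
class ChainTail
  def initialize(service, renamed)
    @service = service
    @renamed = renamed
  end

  def process_next(name, args)
    @service.public_send(@renamed[name], *args)
  end
end

# Rename each public method of the service and install a stub that
# enters the interceptor chain instead.
def intercept!(service, interceptors)
  renamed = {}
  service.public_methods(false).each do |name|
    renamed[name] = :"__intercepted_#{name}"
    service.singleton_class.send(:alias_method, renamed[name], name)
  end
  tail  = ChainTail.new(service, renamed)
  chain = interceptors.reverse.inject(tail) { |nxt, i| ChainLink.new(i, nxt) }
  renamed.each_key do |name|
    service.define_singleton_method(name) { |*args| chain.process_next(name, args) }
  end
end

log = []
logging = lambda do |name, args, next_link|
  log << "before:#{name}"                        # filter: pre-processing
  result = next_link.process_next(name, args)    # pass along the chain
  log << "after:#{name}"                         # filter: post-processing
  result
end

service = Object.new
service.define_singleton_method(:greet) { |who| "hello, #{who}" }

intercept!(service, [logging])
service.greet("world")   # now passes through the interceptor chain
```

An interceptor that chooses not to call #process_next short-circuits the chain entirely, which is what makes filters like security checks possible.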