Notes for a talk I gave on OO frameworks, in April 1992.
New paradigms always seem to start at languages, and filter backwards and upwards:
For example: structured languages came along first, then structured design, then structured analysis methods.
The first OO language, Simula, was invented a quarter of a century ago. Even whizzy Smalltalk is over a decade old! OO design is beginning to be established (e.g. Booch). But there aren't really any OO analysis methods (some attempts, but a little simplistic).
Need the same paradigm throughout. Can't easily do a structured analysis followed by an OO design --- change of concepts gives a non-obvious translation step (even Ed Yourdon agrees!)
So, the ORCA project aims to bring OO to the beginning of the "lifecycle"
We're quite proud of having an OO acronym that has only a single O in it! And our other acronyms are "whale oriented", as you will notice. We've developed 4 notations ...
Extrinsic behaviour is how something needs to behave in its environment; intrinsic behaviour is what it can do. Reusable things can be over-capable (a screwdriver can be used as a lever; an electric screwdriver is no good for that).
We need the two descriptions only because of reuse: "what I want" versus "what I've got" --- can I use this to do that? They're not necessary for top-down decomposition.
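The "can I use this to do that?" check can be pictured as a simple capability test: the intrinsic capabilities must cover the extrinsic requirements, and over-capability is fine. This is an invented illustration (the function and capability names are mine, not ORCA notation):

```python
# Hypothetical sketch: a reusable component can play a role when its
# intrinsic capabilities cover the extrinsic behaviour the role demands.

def can_realise(intrinsic, extrinsic):
    """True if what the component can do covers what the role needs."""
    return extrinsic <= intrinsic  # set inclusion: over-capability is fine

screwdriver = {"drive screws", "lever things open"}   # over-capable
electric_screwdriver = {"drive screws"}

lever_role = {"lever things open"}

print(can_realise(screwdriver, lever_role))           # True
print(can_realise(electric_screwdriver, lever_role))  # False
```

The asymmetry is the point: top-down decomposition only ever needs the requirement side, but reuse needs both sides so they can be compared.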
Analysis: discover the purpose of the system; develop extrinsic behaviour that fulfils this purpose; find (or design) reusable components with intrinsic behaviour that realises the extrinsic behaviour.
I'm going to talk about just one component of ORCA, that of frameworks.
A framework is an abstraction of a system.
In ORCA-speak, a system realises a framework. Many different systems can realise a given framework, which can be thought of as a system template.
The framework is the type, the system is the instance.
But why do we need system architectures? Surely classes are enough?
Well, a class library gives you a catalogue of components that you can use or modify for your own application.
Think of Meccano: you get lots of reusable bits, falling into families (metal strips, plates, wheels, gears, etc). They are even modifiable: some pieces can be bent (though I never liked doing it --- they never straighten properly again).
But you also get a booklet of "ideas", things to build. A big number 10 set, with thousands of pieces, would be a bit daunting without these instructions for trucks and bridges to build. And it's easier to modify a given truck design than to build one from scratch.
The same is true of a class library. We need some standard ideas of how to put the pieces together to solve standard problems, to build generic applications.
All this should come as no surprise.
OO programming has been using a few standard architectures for ages.
The best known one is probably Smalltalk's MVC architecture : a way of building user interfaces.
Why bring in this concept of a system? Why not just have a big object encapsulating the complex structure?
Well, we don't want to encapsulate.
In the MVC example, we want to talk to the controller. We don't want to talk to the MVC object which then passes the message on to the controller.
Not only would that be inefficient, but it wouldn't allow us to compose subsystems together in some of the ways we want to, by overlapping them.
Here's a simplified view of the meta-model we use in ORCA to relate the framework concepts to the more familiar classic OO concepts.
As always, objects are instances of classes.
Objects implement systems. This is not one-to-one: even a primitive system could need several objects to implement it (all together, or sequentially one after another), or one object could implement several simple subsystems.
The objects that implement systems belong to classes, which are related to the relevant framework via the concept of a pod. (For the less whale-aware of you, a pod is a group of whales. It can also be called a school, but we thought a school of classes was going a bit far!)
A pod collects together those classes whose instances implement a system, for example, Model, View and Controller classes.
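A pod can be pictured as nothing more than a named collection of classes. The sketch below is purely illustrative (the dictionary representation is my invention, not ORCA's): instantiating every class in the pod yields a set of objects that together implement one system.

```python
# Illustrative sketch of a pod: a named collection of the classes whose
# instances together implement a system. The class bodies are stubs.

class Model: pass
class View: pass
class Controller: pass

# The "MVC" pod collects the three classes of the Smalltalk-style example.
mvc_pod = {"MVC": [Model, View, Controller]}

# One system realising the MVC framework: one instance of each pod class.
system = [cls() for cls in mvc_pod["MVC"]]
print([type(obj).__name__ for obj in system])  # ['Model', 'View', 'Controller']
```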
It's conceivable that objects belonging to classes not related by inheritance could implement the same system. In theory there is an abstract superclass relating them, but in practice they might be in unrelated and fixed class libraries.
Macbeth example: the king is a system implemented by different objects at different times, and a single object might play multiple parts to cut costs.
Having the concept of a framework isn't enough. We also need a language to describe and manipulate the things.
Why? Consider good old MVC again. Here is a more detailed picture of what's going on. There can be multiple controller-view pairs, and there need not even be a model!
The pattern of messages is: either the controller talks to the model, which then talks to the relevant view, or the controller talks directly to its view.
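The two message paths can be sketched in code. This is a much-simplified illustration, not Smalltalk's actual MVC protocol (which uses a richer dependency/update mechanism); all the method names here are invented:

```python
# Simplified sketch of the two MVC message paths described above.

class View:
    """Displays values; here it just records what it was asked to show."""
    def __init__(self):
        self.shown = []
    def display(self, value):
        self.shown.append(value)

class Model:
    """Holds state and tells its dependent views when it changes."""
    def __init__(self):
        self.views = []
        self.value = None
    def update(self, value):
        self.value = value
        for view in self.views:       # model -> relevant view
            view.display(value)

class Controller:
    """Interprets user events, using one of the two message paths."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_keystroke(self, value):
        self.model.update(value)      # path 1: via the model
    def handle_scroll(self, position):
        self.view.display(position)   # path 2: directly to its own view

model, view = Model(), View()
model.views.append(view)
controller = Controller(model, view)
controller.handle_keystroke(42)
controller.handle_scroll(7)
print(view.shown)   # [42, 7]
```

Note that we send events straight to the controller, as discussed earlier: there is no enclosing "MVC object" mediating the traffic.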
Given candidate classes, we need to be able to determine whether their intrinsic capabilities satisfy the extrinsic requirements.
And this is a simple architecture! To capture the sort of complexity for a real system, a language is needed.
So, in ORCA, we've developed a framework language, and called it Beluga.
This abstraction leads to a relatively complex notation. (It eventually has to be refined away, but initially we want to express it --- different refinements can correspond to different design decisions.)
Beluga has three different concrete syntaxes!
The diagrams are the sort of things you might sketch on the back of an envelope while analysing a system. They give partial views of the system, highlighting different aspects.
The textual form gives a complete description of the system, and would be used for more formal manipulations.
The organization diagram shows the static structure of the frameworks and how they communicate.
Here's MVC again, with the user events shown explicitly.
The boxes are subframeworks. The little icons in them indicate
The lines indicate communication paths
Here is the timeline diagram.
This shows that the MVC framework behaviour is a choice between two subframework behaviours: one communicating via the model, the other bypassing it.
And here is the textual form, which says all that both the previous diagrams did, in a form suitable for manipulation, but not necessarily so suitable for comprehension!
As you saw in the previous examples, frameworks are built from subframeworks.
For example, a Communication joins two frameworks together by saying they communicate somehow with each other.
The communication can be targeted at an internal sub-framework if required.
Composition is the complementary way of joining two frameworks to produce a bigger one.
It joins two together by overlapping subcomponents.
An interpretation is that the joint subsystem plays two different roles in the two frameworks, but has been identified as being the "same thing" in the overall system.
An Ordering joins frameworks together by saying the behaviour of the second follows on after the behaviour of the first.
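One toy way to picture the combinators denotationally is to treat a framework's behaviour as a set of event traces. Then an Ordering concatenates traces, and the choice we saw in the MVC timeline is a union of trace sets. This is my own invented illustration, not the actual Beluga semantics (and Communication and Composition, which join frameworks by shared events and by overlap, are harder to capture this crudely):

```python
# Toy denotational sketch: a behaviour is a set of event traces (tuples).

def ordering(first, second):
    """Behaviour of `second` follows on after behaviour of `first`."""
    return {a + b for a in first for b in second}

def choice(one, other):
    """Behave as `one` or as `other` (as in the MVC timeline diagram)."""
    return one | other

via_model = {("user-event", "controller->model", "model->view")}
direct    = {("user-event", "controller->view")}

mvc_behaviours = choice(via_model, direct)
print(sorted(len(trace) for trace in mvc_behaviours))  # [2, 3]
```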
So frameworks provide not only spatial, static structure, but also temporal, dynamic structure. They can be thought of as chopping 4-dimensional space-time up into meaningful chunks.
Any notation that is more than just marks on the paper must have a semantics.
The semantics of a Beluga framework is defined as the set of all possible behaviours an instance may exhibit. Any particular instance will, obviously, exhibit only one of these behaviours.
So the job of an analyst is to define a framework whose set of behaviours is big enough to capture every desired possibility, yet small enough to exclude every undesired possibility. In other words, to get the right level of abstraction.
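That "big enough yet small enough" condition is just two set checks. A purely illustrative sketch, with behaviours reduced to abstract labels of my own invention:

```python
# Sketch of the analyst's target, with behaviours as an abstract set.

def right_abstraction(allowed, desired, undesired):
    """The framework is right when every desired behaviour is allowed
    and no undesired behaviour is."""
    return desired <= allowed and not (allowed & undesired)

allowed   = {"b1", "b2", "b3"}   # behaviours the framework permits
desired   = {"b1", "b2"}
undesired = {"b4"}

print(right_abstraction(allowed, desired, undesired))  # True

# Overspecifying, so the framework now excludes the acceptable b2:
print(right_abstraction({"b1"}, desired, undesired))   # False
```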
(Black sheep joke)
The usual mistake is one of overspecification: excluding perfectly acceptable behaviours.
You might be surprised that I said analysis consists of building a model.
Often you hear that the analysis should be done with an "unbiased approach", without any preconceptions, or whatever.
But this is impossible. You always have some preconceptions, used to filter the data. (Should you note the colour of the carpet in the IT manager's office, or the star sign of the interviewer? No, because you know these are unimportant in your model.)
As the analysis goes on, you start interpreting new data to fit the model you are building.
So let's make that model explicit: then at least it might become clear when analyst and client are talking at cross purposes.
(does he beat you up?)
So let's consider analysis to be the process of modelling the problem domain --- of building a framework of it. (Design can be considered as building a framework of the solution.)
This approach provides an explicit model that can be challenged and criticized.
It also permits reuse, because the framework might be one developed for a different problem.
And the framework can be used to structure the analysis process, rather than just gathering data in a haphazard fashion.
Let's assume we are analysing a well understood domain. By well understood I mean that there is a kit bag of tried and tested reusable frameworks available to the analyst.
The first step is to hypothesize a framework, or set of candidate frameworks, as being ones that describe this particular problem. "You want a four-bedroomed dwelling".
This hypothesis can be tested: gathering analysis data to test it is analogous to doing experiments to test a scientific hypothesis.
The results of the experiments might show that the framework needs to be changed ("you want a five-bedroomed dwelling" or, if the first guess was very bad, "you want a shopping arcade"). The framework might be okay but too abstract ("you want a four-bedroomed bungalow"), in which case it needs elaboration (adding more detail corresponds to making design decisions).
So a hypothesis should be abandoned if wrong, or refined if too abstract. But it should not need to be modified in any essentials.
This cycle can be continued until the problem has been analysed to a sufficient degree of detail.
As I said, the analysis process can be structured as a set of experiments to test the hypotheses.
These experiments can be designed with a particular purpose in mind.
There are analogies with hypotheses in science. The analysis data can be used to refute hypotheses (Oh, you don't want 4 bedrooms, you want 3 bedrooms and a study.)
Missing requirements are often a devil to discover, but these hypotheses can help. "You want a house with bedrooms, bathroom and lounge-diner. All my hypotheses also have a kitchen. Do you want one?" If the answer is no, it's still worth digging around to find out why. It might be cheaper to provide a standard, over-capable, solution than a customized one that fits perfectly.
In science, if the hypothesis doesn't fit the data, you are supposed to change the hypothesis. But it might be a very good hypothesis that has worked well until now, so you are entitled to be suspicious of the data: is someone mistaken? Is someone lying?
Sometimes there won't be an appropriate hypothetical framework available.
The domain might be only partially analysed, or even completely new.
Taking the scientific analogy further, we now have to conjecture a framework.
The experimental approach is still used, but now it is much more likely that the conjecture will have to be modified in some fairly major way (remember, the reusable hypothetical frameworks were replaced if they didn't fit, or elaborated if at too high a level of abstraction, but not hacked about).
Even once the conjecture is right for this system, it will probably still be too detailed to become a new hypothetical framework. A good hypothesis is an abstraction of a kind of system: to do abstraction it is usually necessary to have several systems to abstract from, to indicate what is essential to the system, and what is particular.
But this is the usual problem with reuse.