I documented an enormous hack that worked around the fact that in the original EJB 2.0 spec you had the horrific idea (am I being strong enough?) of "DependentObjects?" only being manipulable inside of a single EntityBean.
The EJB 2.0 "final spec 2" removes the restriction by getting rid of dependent objects altogether. This is an incredible improvement. It does this through introducing a simpler idea -- local interfaces. Here's how they work. To quote from the new spec: "Session and entity beans may have local clients. A local client is a client that is collocated in the same JVM with the session or entity bean that provides the local client view and which may be tightly coupled to the bean."
What this does is allow vendors a LOT of flexibility in building their persistence systems. Now that Entity Beans can be local, it is much simpler, and more efficient, to build Entity beans as fine-grained persistent objects. There will be fewer classes involved (no stubs and skeletons) and the local semantics are now pass-by-reference instead of pass-by-value.
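The semantic difference can be sketched in plain Java (hypothetical classes, not the javax.ejb API): a remote call hands the callee a serialized copy of its arguments, while a local call hands over the very same object reference.

```java
import java.io.*;

public class CallSemantics {
    // A hypothetical mutable value passed to a bean method.
    static class LineItem implements Serializable {
        int quantity;
        LineItem(int q) { quantity = q; }
    }

    // Remote semantics: the callee receives a serialized copy (pass-by-value).
    static LineItem passByValue(LineItem item) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(item);
        out.flush();
        return (LineItem) new ObjectInputStream(
            new ByteArrayInputStream(buf.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        LineItem original = new LineItem(1);

        // Local interface: same reference, mutations are visible to the caller.
        LineItem local = original;
        local.quantity = 5;
        System.out.println(original.quantity);   // 5

        // Remote interface: a copy, mutations stay on the callee's side.
        LineItem remote = passByValue(original);
        remote.quantity = 99;
        System.out.println(original.quantity);   // still 5
    }
}
```

The point of the sketch: with local pass-by-reference the copy in `passByValue` simply never happens, which is exactly the cost the new spec lets vendors avoid.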
This works so much better in the SessionBeanWrapsEntityBeans pattern that we have envisioned as the best access pattern for EJBs. For instance, to quote from the spec again:
"For example, a group of related entity beans—Order, LineItem?, and Customer—might all be defined as having only local interfaces, with a remotable session bean containing the business logic that drives their invocation."
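The pattern in that quote can be sketched with plain-Java stand-ins for the beans (the names and interfaces here are illustrative, not the javax.ejb API): a coarse-grained, remotable session facade drives fine-grained entities that only ever have local interfaces.

```java
import java.util.*;

public class SessionFacadeSketch {
    // Fine-grained entities exposed only through local interfaces (illustrative).
    interface CustomerLocal { String getName(); }
    interface OrderLocal { void addLineItem(String sku, int qty); int itemCount(); }

    static class Order implements OrderLocal {
        private final List<String> items = new ArrayList<>();
        public void addLineItem(String sku, int qty) { items.add(sku + " x" + qty); }
        public int itemCount() { return items.size(); }
    }

    // The remotable session bean holding the business logic.
    static class OrderService {
        int placeOrder(String[] skus) {
            OrderLocal order = new Order();   // in a real container: local home lookup
            for (String sku : skus) order.addLineItem(sku, 1);
            return order.itemCount();         // only coarse results cross the wire
        }
    }

    public static void main(String[] args) {
        System.out.println(new OrderService().placeOrder(new String[]{"A", "B"}));
    }
}
```

Only `OrderService` would need a remote interface; the many fine-grained calls to `Order` stay in-JVM, by reference.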
I think this is a great idea. For once, Sun made things simpler, rather than more complex.
It is a good idea, but not a great idea. First we need to remember that they basically screwed things up themselves with "dependents", which could have been a very nice candidate for AntiPattern catalogues. It brings me to what I think is an organizational AntiPattern: first you had two guys, with no responsibility whatsoever, writing a spec that the whole industry followed blindly, and now you have three others, and once the spec settles under DesignByCommittee auspices, the industry will continue to blindly implement it.
Conceptually, though, it is not perfectly consistent. You could have a single interface, exposed everywhere, and gain uniformity and flexibility, while still allowing for local call optimizations by stating that neither the caller nor the callee should assume any particular parameter-passing semantics. Certainly the introduction of a <const> keyword as in C++ would have helped a great deal.
On the other hand, it is logical that certain interfaces should not be exposed to certain clients, so you could have N possible interfaces for a component, letting the developer choose whether N=1, N=2, or the general case is appropriate - as happens in ComponentObjectModel.
But introducing this duality will make life miserable for those who foolishly believe that EntityBean is a proper way to do design. Not to mention that now you have four kinds of Entity animals: local BMP, remote BMP, local CMP, remote CMP, and some of them don't interoperate with each other. Quite a conceptual nightmare for the "architects" out there trying to decide between a combinatorial explosion of options, don't you think?
No, I don't think that it makes that much of a difference. The point is that distributed design is not the same as "regular" OO design. There are different drivers. This finally admits that they ARE different, rather than trying to blindly assume that they are not. It is difficult to reach TransparentDistribution, and in fact it's not even a good idea in most cases.
Agreed; therefore the Bean provider should have the liberty to choose N interfaces in the best case, as in DCOM.
Whether an interface is to be distributed or not can easily be marked by inheriting a dummy interface like Remote.
Forcing the discussion to stay at the magical number of 2 possible interfaces will create some confusion.
My first proposition meant that EJB clients should program against the worst-case assumption (i.e. distribution), while allowing the app server to optimize (nothing would then be lost, except that network failures cannot happen), or allowing the deployer to enforce "locality" upon reference resolution (lookup) with a markup. TransparentDistribution failed when it started with the best-case scenario, as for example NFS was built on the regular file API. However, I have reason to believe (see DCOM or JINI, for example) that when the model starts with the worst-case assumption, then there's nothing to be lost if the infrastructure layer optimizes for locality.
Moreover, because of the complexity of the technologies involved, EJB clients and inter-EJB calls have to be programmed with a lot of possible failures in mind, so just removing the network failures doesn't add a lot of convenience. As a matter of fact, the problem they were trying to solve was not that developers had to catch remote exceptions - nobody really complained about that - but that they specified too strict an invocation semantic (serialize/deserialize), which prevented local call optimization; that was one of the big sources of dissatisfaction.
Let's say the infrastructure discovers that a certain service - one that is remote to the current lookup - is too expensive to reinstantiate locally, so it decides to return the remote reference in response to a certain lookup. But if locality semantics are enforced, it cannot. Moreover, because of the extra effort involved, a bean provider may decide to write only the remote interface to his bean. In that case the container still has to enforce remote semantics on a local call (i.e. at least serializing/deserializing the arguments).
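The worst-case-first approach above could be sketched like this (hypothetical names; real containers and JNDI differ): the client always codes against an interface whose methods may fail, and the lookup - not the client - decides whether to hand back a direct local reference or a remote-semantics proxy.

```java
import java.util.*;

public class LookupSketch {
    // Worst-case contract: every call may fail, so it declares an exception.
    interface OrderService { int itemCount() throws Exception; }

    static class LocalImpl implements OrderService {
        public int itemCount() { return 3; }  // direct call, no copying
    }

    static class RemoteProxy implements OrderService {
        public int itemCount() throws Exception { return 3; } // would serialize args/results
    }

    // Hypothetical naming context: the deployer marks which names must stay local.
    static OrderService lookup(String name, Set<String> forcedLocal) {
        return forcedLocal.contains(name) ? new LocalImpl() : new RemoteProxy();
    }

    public static void main(String[] args) throws Exception {
        Set<String> forcedLocal = new HashSet<>(Collections.singletonList("orders"));
        OrderService svc = lookup("orders", forcedLocal);
        // Client code is identical either way: it assumes distribution.
        System.out.println(svc.itemCount());
    }
}
```

Nothing in the client changes when the infrastructure swaps `RemoteProxy` for `LocalImpl`; that is exactly the optimization that enforced locality semantics would forbid.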
So we lose a lot of flexibility. Anyway, probably the right thing to do is to allow an object (either logical - a group of objects - or physical - a single instance) to expose several interfaces. This allows for a lot of good things, chief among which is compatible interface evolution.
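Exposing several interfaces from one object, COM-style, is easy to sketch in plain Java (illustrative names): old clients keep compiling against the V1 interface while new clients discover the V2 extension at runtime.

```java
public class InterfaceEvolution {
    interface OrderV1 { int itemCount(); }
    interface OrderV2 extends OrderV1 { String status(); }  // evolved, still compatible

    static class Order implements OrderV2 {  // one object, several exposed interfaces
        public int itemCount() { return 1; }
        public String status() { return "OPEN"; }
    }

    public static void main(String[] args) {
        OrderV1 oldClientView = new Order();           // old clients compile unchanged
        System.out.println(oldClientView.itemCount()); // 1
        if (oldClientView instanceof OrderV2) {        // new clients query for more
            System.out.println(((OrderV2) oldClientView).status());
        }
    }
}
```

The `instanceof` probe plays the role of COM's QueryInterface: a client asks for a richer interface and degrades gracefully if it isn't there.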
-- this section added mid01 HadTheLastWord?
to EJBs? Is this page still relevant in Sep05?