I once did a short presentation on SoftwareArchitecture. In the audience was a guy from ICL who had worked on one of the very earliest CORBA ORB implementations. Afterwards he came up to me and asked: "Why do we need to worry about this stuff [architecture] now that we have CORBA?" His basic premise was that since CORBA allows us to define interfaces, all we need to do is split the system up into a number of components with well-defined interfaces, and CORBA will take care of the rest.
I'm not going to try to deal with 'all we need to do is split the system up into a number of components'. This is HARD, perhaps one of the hardest problems faced by anyone concerned with the design/architecture of a non-trivial system. But once we've found these components and defined these interfaces, does CORBA really take care of the rest? I think this is one of the myths of top-down-design-type thinking. An interface is more than just an IDL spec. IDL describes only the syntax of an interface, not its semantics. If we're going to get components to communicate, we need to ensure that we understand not only what they communicate but also the nature of what they communicate. If we're passing an event object between two components, they not only need to share an understanding of what methods an event supports, but also of what is meant by an event and how it should be handled. CORBA (or DCOM, or any of these other infrastructure-type frameworks) can't do this, and this is what we need an architecture for.
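A toy sketch of the point above (the handler classes and event shapes here are hypothetical, invented purely for illustration): two components can satisfy exactly the same syntactic interface, yet hold incompatible, undocumented ideas of what an "event" actually is. Type-checking the interface catches nothing; only a shared semantic agreement would.

```python
import json
from abc import ABC, abstractmethod

# The shared "IDL": both handlers implement the same syntactic interface.
class EventHandler(ABC):
    @abstractmethod
    def handle(self, event) -> str:
        """Return the event's 'name' field."""

# Component A assumes an event is an already-parsed dict.
class DictHandler(EventHandler):
    def handle(self, event) -> str:
        return event["name"]

# Component B assumes an event is a JSON-encoded string.
class JsonHandler(EventHandler):
    def handle(self, event) -> str:
        return json.loads(event)["name"]

dict_event = {"name": "login"}
json_event = json.dumps(dict_event)

# Each works under its own (undocumented) assumption...
assert DictHandler().handle(dict_event) == "login"
assert JsonHandler().handle(json_event) == "login"

# ...but swap the representations and the 'matching' interface breaks:
try:
    DictHandler().handle(json_event)
    raise AssertionError("expected a TypeError")
except TypeError:
    pass  # string indices must be integers
```

The interface declaration is identical in both cases; the disagreement lives entirely in the semantics, which is exactly the part IDL cannot express.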
There's a good paper that describes experiences with combining five COTS software packages to build a contextual diagramming tool. All five should, in theory, work together, so the job should 'simply' have been a matter of integrating them. The tool took five man-years to build and was slow, bloated, and buggy when it was finished. Worth reading:
, Carnegie Mellon University Technical Report, August 1994.
I'd like to elaborate a little more on the sentence "they ... need to share understanding of ... what is meant by an event ...".
CORBA provides metadata, allowing a client to examine an interface at runtime. Together with CORBA's DII (Dynamic Invocation Interface), this is usually advertised as the foundation for a system architecture that can cope with components it was never designed to work with. DCOM has a similar mechanism with the IUnknown interface and its QueryInterface method.
There are, however, practical drawbacks. It is really hard to implement an analysis of an unknown interface and an appropriate dynamic adaptation of a client to that interface (anyone offering some patterns for this?). But, even more important, this does not guarantee appropriate usage of the interface in terms of state, capacity, performance, concurrency, argument interpretation, result interpretation, and so on.
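A small sketch of why runtime metadata falls short (ServiceA/ServiceB and dynamic_call are hypothetical names, standing in for a DII-style client): introspection can confirm that two interfaces are syntactically identical, yet it says nothing about how an argument is interpreted.

```python
import inspect

class ServiceA:
    def set_timeout(self, value):
        self.timeout_s = value          # interprets value as seconds

class ServiceB:
    def set_timeout(self, value):
        self.timeout_s = value / 1000   # interprets value as milliseconds

# A 'DII-style' client: discover the method at runtime and invoke it.
def dynamic_call(obj, method_name, *args):
    return getattr(obj, method_name)(*args)

a, b = ServiceA(), ServiceB()

# Runtime metadata says the two interfaces are identical...
assert inspect.signature(a.set_timeout) == inspect.signature(b.set_timeout)

dynamic_call(a, "set_timeout", 30)
dynamic_call(b, "set_timeout", 30)

# ...but the same call means a 30-second timeout to one service
# and a 0.03-second timeout to the other.
assert a.timeout_s == 30
assert b.timeout_s == 0.03
```

The metadata answered "what can I call?" perfectly, and "what does the call mean?" not at all — which is the gap an architecture has to fill.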
I have observed similar misconceptions about metadata in other areas too, e.g. with Java and especially JavaBeans. I have also seen conventional data structures supercharged with metadata - at the expense of a much more complicated system, but with no added value. That's what I call the MythOfMetadata.
And of course, if you have components interact all by themselves, without moderation (as defined in an architecture), then you leave yourself open to problems such as deadly embrace (deadlock), race conditions of all kinds, etc., etc., etc. Real-time systems are full of this. Architecture defines not only which components are responsible for which efforts, but also which conditions govern the starting/stopping/pausing signals that need to be dispatched, by whom, to whom, and when. Without an architecture in place you gots yersef a potentially chaotic mish-mash of components running willy-nilly all over data that may or may not be relevant depending on prevailing system conditions.
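One concrete example of the kind of rule an architecture supplies and raw component interaction does not (a minimal sketch; the lock names and ordered_transfer function are invented for illustration): avoiding deadly embrace by imposing a single global lock-acquisition order on every component. If one thread took lock_a-then-lock_b while another took lock_b-then-lock_a, each could end up waiting forever on the other; a fixed order makes that cycle impossible.

```python
import threading

# Two shared resources guarded by locks.
lock_a, lock_b = threading.Lock(), threading.Lock()

# The architectural rule: every component acquires locks in one fixed
# global order (lock_a before lock_b), so a wait cycle cannot form.
def ordered_transfer(name, results):
    with lock_a:        # always acquired first
        with lock_b:    # always acquired second
            results.append(name)

results = []
threads = [threading.Thread(target=ordered_transfer, args=(i, results))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All threads completed — no deadly embrace.
assert sorted(results) == [0, 1, 2, 3]
```

The point is that nothing in either component's interface expresses this ordering rule; it exists only at the level of the architecture, which is exactly the "moderation" the paragraph above is talking about.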