One common pattern I've been using is to wrap all persistence logic in a resource class, or perhaps two (one reader and one writer). Lately, I've been seeing more and more about message-driven systems. A common pattern in message-oriented systems is to publish commands and events to a message queue, which can have zero to many message handlers.
The resource classes mentioned above would be consumed by middle-layer services, which encapsulate business logic and program flow. One policy of this architecture is that resources cannot communicate with anything else. However, this topology, combined with event-driven design, leaves the system open to some holes.
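To make the layering concrete, here's a minimal sketch of the idea. The names (`OrderReader`, `OrderWriter`, `OrderService`) are hypothetical, not anything from a real system:

```python
class OrderReader:
    """Resource class: read-side persistence only."""
    def __init__(self, store):
        self._store = store

    def get(self, order_id):
        return self._store.get(order_id)


class OrderWriter:
    """Resource class: write-side persistence only."""
    def __init__(self, store):
        self._store = store

    def save(self, order_id, order):
        self._store[order_id] = order


class OrderService:
    """Middle-layer service: business logic and program flow.
    Resources talk only to the datastore; the service composes them."""
    def __init__(self, reader, writer):
        self._reader = reader
        self._writer = writer

    def rename(self, order_id, name):
        order = dict(self._reader.get(order_id) or {})
        order["name"] = name  # business logic lives here, not in the resources
        self._writer.save(order_id, order)
        return order
```

The point of the split is that the resources stay dumb: no business rules, no calls outward.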
If a middle-layer service calls a resource to make some update and then sends the event message, every other consumer of that resource would need to do the same. If any of them doesn't, some important events may be missed. I don't believe the resource implementation should raise the event either, because that is beyond its concern.
A possible solution would be an event publisher via AOP. AOP in WCF can be implemented via a message interceptor that passes a copy of the message off to some event publisher. This interceptor would need information about the transaction (success or failure); the transaction starts in the client. It also needs some basic data to send.
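A language-agnostic sketch of the interception idea, written here as a Python decorator rather than a WCF interceptor: the wrapper observes the outcome of the call and hands a copy of the operation data to the publisher only on success. The `publish_after` name and the event payload shape are my own invention for illustration:

```python
import functools


def publish_after(publisher, event_name):
    """Cross-cutting concern: publish an event after a successful call.
    The wrapped method stays unaware that publishing happens at all."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception:
                # transaction failed: swallow nothing, publish nothing
                raise
            # success: pass a copy of the operation data to the publisher
            publisher.publish(event_name, {"args": args[1:], "result": result})
            return result
        return wrapper
    return decorate
```

The consuming services then don't need to remember to publish; the aspect does it uniformly for every caller.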
The event publisher would know whether, how, and what event data to publish. Event subscribers would register to queues, and the event publisher would be completely unaware of subscribers. The message would be sent one-way to a queue; something else can broadcast it if desired, or relay it to multiple subscribers in a different way (coordinated program flow in response to an event).
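Sketched out (again with hypothetical names), that separation might look like this: the publisher only drops messages on named queues, and a separate relay is the thing that fans a message out to subscribers:

```python
from collections import deque


class EventPublisher:
    """Decides whether/what to publish; knows only named queues,
    never the subscribers behind them."""
    def __init__(self):
        self._queues = {}

    def register_queue(self, event_name):
        return self._queues.setdefault(event_name, deque())

    def publish(self, event_name, data):
        q = self._queues.get(event_name)
        if q is not None:
            q.append(data)  # one-way: drop the message and return


class Relay:
    """The 'something else' that broadcasts: drains a queue and
    fans each message out to its handlers."""
    def __init__(self, queue, handlers):
        self._queue = queue
        self._handlers = handlers

    def pump(self):
        while self._queue:
            msg = self._queue.popleft()
            for handler in self._handlers:
                handler(msg)
```

Because the publisher never sees the handlers, swapping broadcast for a coordinated relay doesn't touch the publishing side at all.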
One pitfall I can think of is multiple subscribers reacting to a broadcast without being coordinated in some way. Either each subscriber would need to handle contention issues, cascading events, race conditions, throttling, and other timing-based issues; or a coordinator should handle the event rather than many separate subscribers to a broadcast.
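The coordinator option is almost trivially simple to sketch: a single handler subscribes to the event and runs the reactions as ordered steps, so the timing issues between them disappear by construction. A minimal illustration:

```python
class EventCoordinator:
    """Single subscriber that sequences the reactions itself,
    instead of leaving N independent subscribers to race."""
    def __init__(self, steps):
        self._steps = steps  # ordered list of callables

    def handle(self, event):
        # each step runs to completion before the next starts
        return [step(event) for step in self._steps]
```

The trade-off is obvious: you give up the loose coupling of independent subscribers to get deterministic ordering.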
One possibility for handling broadcast events while minimizing the impact of timing issues would be to use bounded contexts. Some data from other bounded contexts would be duplicated in any given bounded context, and read-only data can be fetched from other sources. However, any updates would need to be managed from within the context by some command coordinator.
Here's how that would work: let's say context 1 sent an update event and context 2 received it. The event handler in context 2 sends a command to update the data from the event within its own datastore. The command processor, which would need to handle commands in a certain order, would queue up the command from the event handler and do its updating. If updates to its own entities raise events, those events are published by the event publisher. Those events can be handled by any bounded context, even context 2 (the source of the event).
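The context 2 side of that flow can be sketched as an event handler that only translates the incoming event into a command, plus a processor that works through its queue strictly in order. The command shapes and the `entity_updated` event name are made up for the example:

```python
from collections import deque


class CommandProcessor:
    """Handles commands strictly in arrival order within the context."""
    def __init__(self, publisher):
        self._queue = deque()
        self._publisher = publisher
        self.store = {}  # this context's own datastore

    def enqueue(self, command):
        self._queue.append(command)

    def run(self):
        while self._queue:
            cmd = self._queue.popleft()
            if cmd["type"] == "update_replica":
                # duplicated data from another context: update quietly
                self.store[cmd["key"]] = cmd["value"]
            elif cmd["type"] == "update_entity":
                # our own entity changed: update and raise a new event
                self.store[cmd["key"]] = cmd["value"]
                self._publisher.publish("entity_updated", dict(cmd))


def on_context1_updated(event, processor):
    """Event handler in context 2: translate the event into a command
    rather than touching the datastore directly."""
    processor.enqueue({"type": "update_replica",
                       "key": event["key"], "value": event["value"]})
```

Note that only entity updates re-enter the event stream; replica updates don't, which is one simple guard against cascading events echoing between contexts forever.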
In any case, I'd like to really tackle this concept and put it into use for a bit before I commit to any of these ideas. Perhaps there will be some greenfield application in the near future that will benefit from this sort of architecture/design. If so, I'm sure I'll have many lessons to learn in the process (and to post about).