When reading mammograms, radiologists judge whether a lesion is present by comparing at least two breast projections (views), since a true lesion should be visible in both of them. Most computer-aided detection (CAD) systems, in contrast, analyze each view independently and thus fail to account for the interaction between the breast views. Following the radiologists' practice, in this paper we develop a Bayesian network framework for automatic multi-view mammographic analysis based on causal independence models and the regions detected as suspicious by a single-view CAD system. We have implemented two versions of the framework based on different definitions of multi-view correspondence. The proposed approach is evaluated and compared against the single-view CAD system in an experimental study with real-life data. The results show that incorporating expert knowledge increases the cancer detection rate at the patient level.
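As background for the causal independence models mentioned above, the noisy-OR gate is the canonical example: each cause (e.g. a suspicious finding in one view) independently triggers the effect with its own link probability. The sketch below is illustrative only and does not reproduce the paper's actual network; the function name and parameters are chosen for this example.

```python
def noisy_or(link_probs, active, leak=0.0):
    """Probability of the effect under a noisy-OR causal independence model.

    link_probs -- P(effect | only cause i is active), one per cause
    active     -- booleans indicating which causes are present
    leak       -- probability the effect occurs with no cause active
    """
    # The effect is absent only if the leak and every active cause
    # all fail to trigger it, each failing independently.
    p_absent = 1.0 - leak
    for p, a in zip(link_probs, active):
        if a:
            p_absent *= 1.0 - p
    return 1.0 - p_absent

# Example: two views flag a corresponding region with link
# probabilities 0.8 and 0.7; combining them raises the probability
# above either single view alone.
print(noisy_or([0.8, 0.7], [True, True]))   # 1 - 0.2 * 0.3 = 0.94
```

This multiplicative combination is what makes causal independence models attractive here: the conditional table grows linearly, not exponentially, in the number of views or regions.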