Abstract: We propose a method for improving approximate inference methods that corrects for the influence of loops in the graphical model. The method is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It is an alternative implementation of an idea introduced recently by Montanari and Rizzo (2005). In its simplest form, which amounts to the assumption that no loops are present, the method reduces to the minimal Cluster Variation Method approximation (which uses maximal factors as outer clusters). On the other hand, using estimates of the effect of loops (obtained by some approximate inference algorithm) and applying the Loop Correcting (LC) method usually gives significantly better results than applying the approximate inference algorithm directly without loop corrections. Indeed, we often observe that the loop-corrected error is approximately the square of the error of the approximate inference method used to estimate the effect of loops. We compare different variants of the Loop Correcting method with other approximate inference methods on a variety of graphical models, including "real world" networks, and conclude that the LC approach generally obtains the most accurate results.