This paper considers a class of non-linear stochastic control problems in which the control acts linearly on the dynamics and its cost is quadratic, while the state-dependent cost is arbitrary. I show that for this class the non-linear Hamilton–Jacobi–Bellman equation can be transformed into a linear equation by a logarithmic change of variables, similar to the transformation that relates the classical Hamilton–Jacobi equation to the Schrödinger equation. As a result of the linearity, the usual backward integration of the Bellman equation can be replaced by a forward diffusion process, which can be computed by stochastic integration or by the evaluation of a path integral. I show how the Pontryagin minimum-principle formalism is recovered in the deterministic limit. The significance of the path-integral approach is that it forms the basis for a number of efficient computational methods, such as Monte Carlo sampling, the Laplace approximation and the variational approximation. I demonstrate the effectiveness of the first two methods in a number of examples, which also illustrate the qualitative difference between stochastic and deterministic control and the occurrence of symmetry breaking as a function of the noise.
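The forward Monte Carlo computation mentioned in the abstract can be sketched as follows. This is a minimal illustration for a hypothetical one-dimensional problem, not one of the paper's experiments: dynamics dx = u dt + dξ with noise variance ν dt, quadratic control cost (R/2)u², and only an end cost φ(x_T). For this class, the optimal cost-to-go is J(x,t) = −λ log E[exp(−φ(x_T)/λ)] with λ = νR, where the expectation is over *uncontrolled* forward diffusion started at x, and the optimal control is the weighted average of the first noise increment; all names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1D problem (illustrative, not from the paper):
#   dx = u dt + dxi,  Var(dxi) = nu dt,  cost = phi(x_T) + (R/2) * integral of u^2 dt.
# Path-integral control estimator:
#   u(x, t) dt = E[w * dxi_0] / E[w],  with weight w = exp(-phi(x_T) / lam), lam = nu * R,
# averaged over uncontrolled forward sample paths.

rng = np.random.default_rng(0)
nu, R, T, dt = 1.0, 1.0, 1.0, 0.01
lam = nu * R
phi = lambda x: (x - 1.0) ** 2          # quadratic end cost, target x = 1

def pi_control(x0, n_paths=100_000):
    """Monte Carlo estimate of the optimal control u(x0, 0)."""
    steps = int(T / dt)
    # Sample noise increments for all uncontrolled paths at once.
    dxi = rng.normal(0.0, np.sqrt(nu * dt), size=(n_paths, steps))
    x_T = x0 + dxi.sum(axis=1)          # end points of uncontrolled paths
    w = np.exp(-phi(x_T) / lam)         # path weights exp(-S / lam)
    # Weighted average of the first noise increment, divided by dt.
    return (w @ dxi[:, 0]) / (w.sum() * dt)
```

For this quadratic end cost the estimate can be checked against the closed-form answer u*(x, 0) = (1 − x)/(T + 1/2), e.g. u*(0, 0) ≈ 0.67, which the sampler should approach as the number of paths grows.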