Traditional federated learning optimizes point estimates of the server model's parameters via a maximum likelihood objective. Models trained with such objectives achieve competitive predictive accuracy; however, they are poorly calibrated and provide no reliable uncertainty estimates. Such estimates are particularly important in safety-critical applications of federated learning, such as self-driving cars and healthcare. In this work, we propose FSVI, a method for training Bayesian neural networks in the federated setting. Bayesian neural networks maintain a distribution over the model parameters, from which uncertainty estimates can be obtained. Instead of placing prior distributions over the model parameters and performing inference in weight space, FSVI builds on recent advances in functional variational inference and posits prior distributions directly in the function space of the network. We discuss two approaches to federated FSVI, based on FedAvg and model distillation respectively, and demonstrate their benefits over traditional weight-space inference methods.