In the medical field, several artificial intelligence (AI) applications have already reached clinical and commercial implementation. A major limitation for most studies, particularly deep learning studies involving imaging data, is the collection and handling of patient data for training. Generative methods can address this issue, but currently available image generators (e.g. DALL-E, Stable Diffusion) do not produce realistic medical imaging data. This project aims to develop a method for generating realistic dental radiographs using state-of-the-art deep learning techniques. To this end, two approaches will be explored: generative adversarial networks (GANs) and latent diffusion models (LDMs). Both will be trained on a database of annotated panoramic radiographs, using custom architectures and loss functions. The resulting models can generate a quasi-infinite supply of customizable synthetic radiographs for further research, training, and related applications.
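
To illustrate the first of the two approaches, the sketch below shows a minimal GAN training loop. It is an illustration only, assuming PyTorch and toy fully connected networks with a standard binary cross-entropy objective; the project's custom architectures and loss functions are not specified here, and the radiograph batch is stubbed with random tensors in place of a real panoramic radiograph dataset.

```python
# Minimal GAN training sketch (illustration only; not the project's actual
# architecture or loss). Assumes PyTorch; real radiographs are stubbed with
# random tensors scaled to [-1, 1].
import torch
import torch.nn as nn

LATENT_DIM, IMG_SIZE, BATCH = 128, 64, 16  # toy sizes for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, IMG_SIZE, IMG_SIZE)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_SIZE * IMG_SIZE, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(3):  # a few toy steps
    # Placeholder for a batch of real panoramic radiographs.
    real = torch.rand(BATCH, 1, IMG_SIZE, IMG_SIZE) * 2 - 1

    # Discriminator update: real images labelled 1, generated images labelled 0.
    z = torch.randn(BATCH, LATENT_DIM)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: push the discriminator to label fakes as real.
    z = torch.randn(BATCH, LATENT_DIM)
    loss_g = bce(D(G(z)), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    print(f"step {step}: loss_d={loss_d.item():.3f} loss_g={loss_g.item():.3f}")
```

In practice the toy networks above would be replaced by convolutional generator and discriminator architectures suited to radiographic images, and the second approach (an LDM) would instead train a denoising network in the latent space of a pretrained autoencoder.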