IBM defines generative AI as "deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on" (Martineau, 2023). In a nutshell, these models (Large Language Models, or LLMs, in the case of text) are fed enormous amounts of textual and visual data, which they analyze and encode. For text, the model learns patterns and relationships in its training data, assigning statistical weights that capture which words are most likely to appear before and after any given word. Eventually, the model can generate new content that follows the patterns of the data it was trained on.
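The statistical idea behind "which words are most likely to appear before and after" can be illustrated with a toy sketch. The following is not an actual LLM (which uses deep neural networks over vast corpora) but a minimal bigram counter over an invented miniature corpus, showing how next-word likelihoods can be read off from co-occurrence counts:

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for training data (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

A real LLM generalizes this idea far beyond adjacent word pairs, learning weighted relationships across long spans of context, but the underlying principle of predicting likely continuations from training statistics is the same.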