IBM defines generative AI as "deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on" (Martineau, 2023). In a nutshell, these Large Language Models (LLMs) are fed enormous amounts of textual and visual data, which are then analyzed and encoded. In the case of text, the model learns patterns and relationships in the dataset, assigning each word a statistical weight based on the words most likely to appear before and after it in the training data. Eventually, the model can generate new text that reflects the patterns it learned during training.
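To make the idea of "statistical weights" concrete, here is a minimal sketch of the simplest possible version of this process: a toy bigram model that counts which word follows which in a tiny made-up corpus, then samples new text from those counts. Real LLMs are vastly more sophisticated (they use neural networks and consider long stretches of context, not just the previous word), and the corpus and function names below are hypothetical illustrations, but the core intuition is the same: learn word-to-word statistics from training text, then generate by sampling likely next words.

```python
from collections import Counter, defaultdict
import random

# A tiny stand-in for the "enormous amounts of textual data" an LLM is fed.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (the "statistical weights").
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Generate new text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        counts = transitions.get(words[-1])
        if not counts:  # dead end: no word ever followed this one
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

In this sketch, "the" is followed by "cat" twice but by "mat" and "fish" only once each, so the model is twice as likely to continue "the" with "cat" — the same weighted-prediction idea, at microscopic scale, that underlies an LLM's next-word generation.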
Professional learning session recordings, articles, and other resources from the Teaching Center at Nashville State Community College to help faculty learn about generative artificial intelligence.
This short video gives a very simplified overview of how LLMs work:
Martineau, K. (2023, April 20). What is generative AI? IBM Research Blog. https://research.ibm.com/blog/what-is-generative-AI
