Artificial Intelligence (AI) has been at the top of technology-conversations during 2023. AI suggests many opportunities for innovation as it is worked into applications that make use of large datasets. Patterns learned by AI can result in better human-computer experiences, diagnosis, content creation, pattern predictions and improved workflow. As a tech, AI is such a pervasive game-changer that its emergence is leading to rethinking of laws, industry norms as the court of public opinion raises their concerns. A challenge with AI is that understanding of what it is and its “insider language” is not widely understood. In this series we will cover AI language from A to Z.
In Part IV we cover G to L.
- Garbage in, garbage out (GIGO). GIGO is a concept in computer science that states the quality of a system's output is based on the quality of the input. If the input is trash, the output will be trash. If AI is trained on low-quality data, then the user can expect its output to be shoddy as well.
- Generative adversarial network (GAN). A GAN is a type of machine learning that consists of two neural networks: one that generates data and one that discriminates and refines the data. The two neural networks compete to create more accurate predictions.
- Generative AI. Generative AI is an artificial intelligence technology that creates content by learning patterns in training data and synthesizing new material with the same learned characteristics.
- Graphics processing unit (GPU). A GPU is a type of processor that is suited to powering AI hardware because it can perform more simultaneous computations than a CPU.
- Generative pre-trained transformer (GPT). GPTs are the AI algorithms that power some of the most well-known natural language processing and generative algorithms. GPT-3, GPT-3.5 and GPT-4 are examples of the GPT family of algorithms. They were developed by OpenAI.
- Hallucination. An AI hallucination is when an AI system presents false information framed as the truth. For example, a chatbot prompted to write a five-page research report with citations and links might generate fake links that appear real but lead nowhere or fabricate quotes from public figures as evidence. A deepfake is different than a hallucination in that it is intentionally created as a hoax to trick the viewer.
- Hyperparameter tuning. The process of selecting the best values for the parameters that control the behavior of a machine learning algorithm, such as the learning rate or the number of hidden layers.
- Hypnotize. A prompt injection attack where bad actors inject the prompt to hypnotize an LLM. This can be used to leak confidential information, create vulnerable code, create malicious code, and offer weak security recommendations.
- Knowledge engineering. Knowledge engineering is the field of AI that aims to emulate a human expert's knowledge in a certain field.
- Large language model (LLM). LLMs are deep learning algorithms that understand, summarize, generate and predict new content. They typically contain many parameters and are trained on a large corpus of unlabeled text. GPT-3 is an LLM.
- Large Language Model Meta AI (LLaMA). LLaMA is an open source LLM released by Meta.
Click here to keep up with our Artificial Intelligence: A to Z series.
The following sources were used to build this glossary: