Artificial intelligence (AI) was at the top of technology conversations throughout 2023. AI offers many opportunities for innovation as it is built into applications that make use of large datasets. The patterns AI learns can lead to better human-computer experiences, diagnoses, content creation, predictions and improved workflows. AI is such a pervasive game changer that its emergence is prompting a rethinking of laws and industry norms as the court of public opinion raises its concerns. One challenge with AI is that its concepts and “insider language” are not widely understood. In this series we cover AI language from A to Z.
In this final part of the series, we will cover S to Z.
- Sentiment analysis. Also known as opinion mining, sentiment analysis is the process of analyzing text for tone and opinion with AI.
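To make the idea concrete, here is a minimal lexicon-based sentiment scorer in plain Python. This is a toy sketch, not a production system: real sentiment analysis uses trained models, and the tiny word lists below are invented for illustration.

```python
# Invented mini-lexicons; real systems use much larger, learned vocabularies.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "sad", "poor"}

def sentiment(text: str) -> str:
    # Count positive and negative words to estimate the overall tone.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
```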
- Supervised learning. Supervised learning trains machine learning algorithms on labeled data, where each training example is paired with the correct output.
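A tiny sketch of supervised learning: a one-nearest-neighbor classifier that predicts a label by finding the closest labeled training example. The data points and labels below are invented for illustration.

```python
import math

# Labeled training examples: (features, label) pairs. Invented toy data.
TRAINING_DATA = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.5), "large"),
]

def predict(point):
    # Assign the label of the closest labeled example (1-nearest neighbor).
    nearest = min(TRAINING_DATA, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(predict((1.1, 0.9)))  # small
```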
- Speech recognition. Speech recognition converts spoken language into text using AI.
- Synthetic data. Synthetic data is computer-generated information used for testing and training AI models. Large AI models need large quantities of training data, which traditionally has to be collected from the real world; synthetic data offers an artificial alternative.
- Training data. Training data refers to the information or examples used to train a machine learning model. AI models need a large training data set to learn patterns and guide their behavior.
- Transfer learning. Transfer learning is a machine learning technique in which a model trained on one task is reused as the starting point for a new, related task, so previously learned knowledge does not have to be relearned from scratch.
- Transformer model. Transformer models are a neural network architecture used for NLP. Rather than analyzing words one at a time, a transformer can look at an entire sentence at once. This was an AI breakthrough because it enabled models to understand context and long-term dependencies in language. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important to understanding the meaning of a sentence.
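The self-attention idea can be sketched in plain Python. This is a toy illustration assuming each word is already represented as a small numeric vector; real transformers learn separate query, key and value projections, which are omitted here for brevity.

```python
import math

def softmax(xs):
    # Turn raw scores into weights that are positive and sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    # Each word's vector serves as its own query, key and value here.
    outputs = []
    for q in vectors:
        # Score the current word against every word in the sentence.
        scores = [sum(a * b for a, b in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        # Each output is a weighted mix of all word vectors, so every
        # position "attends" to the whole sentence at once.
        mixed = [sum(w * v[i] for w, v in zip(weights, vectors))
                 for i in range(len(q))]
        outputs.append(mixed)
    return outputs

sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy word vectors
out = self_attention(sentence)
```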
- Turing test. The Turing test is a method of inquiry for determining if a computer is capable of thinking like a human being. The test was created by Alan Turing, an English computer scientist and cryptanalyst. A computer has intelligence if it can mimic human responses under certain conditions, according to the Turing test.
- Token. A token is the basic unit of text that an LLM uses to understand and generate language. It may be a word or part of a word. Paid LLMs, such as GPT-4's API, charge users by token.
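A toy tokenizer makes the point that tokens are not the same as words. Production LLM tokenizers use byte-pair encoding to split rarer words into subword pieces; the simple word-and-punctuation split below only illustrates why token counts differ from word counts.

```python
import re

def tokenize(text):
    # Split into runs of word characters or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("Tokens aren't always whole words.")
print(tokens)      # ['Tokens', 'aren', "'", 't', 'always', 'whole', 'words', '.']
print(len(tokens)) # 8 tokens, but only 5 words
```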
- Unsupervised learning. Unsupervised learning trains machine learning algorithms on unlabeled data, leaving the algorithm to find patterns and groupings on its own.
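A classic unsupervised technique is k-means clustering: given only unlabeled points, it groups them into k clusters without being told what the groups mean. The sketch below uses invented data and a naive initialization for brevity.

```python
import math

def kmeans(points, k, iterations=10):
    # Naive initialization: use the first k points as starting centers.
    centers = points[:k]
    for _ in range(iterations):
        # Assign each unlabeled point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[idx].append(p)
        # Move each center to the mean of the points assigned to it.
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cluster)) if cluster else centers[i]
            for i, cluster in enumerate(clusters)
        ]
    return centers

points = [(1.0, 1.0), (1.5, 1.2), (8.0, 8.0), (8.2, 7.8)]
print(sorted(kmeans(points, 2)))  # two centers, near (1.25, 1.1) and (8.1, 7.9)
```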
- Variational autoencoder. Variational autoencoders are a generative AI model architecture commonly used for signal analysis and for finding efficient encodings of input data. Like GANs, they combine two neural networks, an encoder and a decoder, but the two cooperate rather than compete: the encoder compresses the input and the decoder reconstructs it.
- Zero-shot learning. Zero-shot learning is a machine learning technique in which a model classifies samples from classes it never saw during training by drawing on extra information. For example, an image classifier trained only to recognize house cats could classify an image of a lion if it knows that lions are big cats.
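The cat-and-lion example can be sketched with attribute matching, one simple zero-shot approach: classes are described by attributes, and an input is assigned to the class whose description best matches, even if that class never appeared in training. The class names and attribute sets below are invented for illustration.

```python
# Attribute descriptions for each class. "lion" was never seen in
# training; only its textual description is available.
CLASS_ATTRIBUTES = {
    "house cat": {"feline", "whiskers", "small"},
    "dog":       {"canine", "barks", "loyal"},
    "lion":      {"feline", "whiskers", "big", "mane"},  # unseen class
}

def classify(detected_attributes):
    # Pick the class whose description shares the most attributes
    # with what was detected in the input.
    return max(CLASS_ATTRIBUTES,
               key=lambda c: len(CLASS_ATTRIBUTES[c] & detected_attributes))

print(classify({"feline", "whiskers", "big", "mane"}))  # lion
```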
Click here to see the entire Artificial Intelligence: A to Z series.
The following sources were used to build this glossary: