> **Tip:** Use F3 to search for terms.
| Term | Explanation |
|---|---|
Backpropagation | An algorithm used to calculate the gradient of a loss function with respect to the parameters of a neural network. |
Bias-Variance Trade-off | A concept that describes the relationship between the complexity of a model and its ability to fit the training data while also generalizing to new, unseen data. |
Computer Vision | A subfield of AI that deals with the ability of computers to interpret and understand visual information from the world, including tasks such as image recognition and object detection. |
Data Preprocessing | The process of preparing and cleaning data before feeding it into a model; it includes tasks such as feature extraction, normalization, and handling missing values. |
Deep Learning | A subfield of machine learning that involves training artificial neural networks with many layers to perform tasks such as image or speech recognition. |
Edge Computing | A method of performing data processing and analysis at the source of data, rather than in a centralized location, such as a data center. |
Embedding | A technique used to represent discrete data, such as words, in a continuous vector space, making it easier for a model to process and understand. |
Explainable AI (XAI) | A subfield of AI that aims to make the decision-making process of AI models transparent and interpretable to humans. |
Generative Adversarial Networks (GANs) | A type of deep learning model that consists of two parts, a generator and a discriminator, that are trained together in a competitive manner. |
Generative Models | A type of model that can generate new data samples that are similar to the training data. |
GPT | "Generative Pre-trained Transformer", a family of language models developed by OpenAI that uses deep learning techniques to generate text. GPT-3, the third version, is one of the largest and most powerful language models available, with 175 billion parameters. GPT-3 is pre-trained on a massive dataset of text and can generate coherent, fluent text on a wide range of topics; it can also be fine-tuned for specific tasks such as language translation, text summarization, and question answering. The model can produce human-like text, completing sentences, paragraphs, and even entire articles, and has been used in applications such as chatbots, language translation, and text generation. |
Gradient Descent | An optimization algorithm used to find the values of parameters that minimize a loss function. |
Hyperparameter | A value that determines the behavior of a model, but is set by the modeler, not learned from the data. |
Inference | The process of using a trained model to make predictions or decisions on new data. |
LLM | Large Language Models (LLMs) are AI models that can read, summarize, and translate text and predict the next word in a sentence, letting them generate text similar to how humans talk and write. |
Machine Learning | A method of teaching computers to learn from data, without being explicitly programmed. |
Model | A mathematical representation of a problem or task, which can be trained and used to make predictions or decisions. |
Natural Language Processing (NLP) | A subfield of AI that deals with the interaction between computers and human language, including tasks such as language translation and text summarization. |
Neural Network | A type of algorithm loosely modeled on the structure and function of the human brain, used for tasks such as image or speech recognition. |
Overfitting | When a model fits the training data too closely, including its noise, and as a result performs poorly on new, unseen data. |
Regularization | A technique used to prevent overfitting by adding a penalty term to the loss function. |
Reinforcement Learning | A method of machine learning where the model learns by taking actions in an environment and receiving feedback in the form of rewards or penalties. |
Supervised Learning | A method of machine learning where the model is trained on labeled data, meaning that the correct output is provided for each input. |
Training Data | Data used to train a model. It helps the model to learn from examples and improve its performance. |
Transfer Learning | A technique where a model that has been trained on one task is fine-tuned on a different but related task. |
Unsupervised Learning | A method of machine learning where the model is not provided with labeled data, and must find patterns or structure in the input data on its own. |
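The Gradient Descent entry above can be sketched in a few lines of code. This is a minimal illustration, not a production optimizer: the loss function, learning rate, and step count are hypothetical choices picked for demonstration.

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to minimize a loss function."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move in the direction of steepest descent
    return w

# Example loss: f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# Gradient descent should drive w toward the minimum at w = 3.
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 4))
```

Each iteration shrinks the distance to the minimum by a constant factor here, which is why even this simple loop converges quickly on a smooth convex loss.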
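The Embedding entry can likewise be illustrated as a lookup table from discrete tokens to dense vectors. The vocabulary, vector dimension, and random initialization below are all toy assumptions; in practice these vectors are learned during training.

```python
import random

random.seed(0)  # reproducible toy vectors

vocab = ["cat", "dog", "car"]  # hypothetical vocabulary
dim = 4                        # hypothetical embedding dimension

# Map each discrete token to a point in a continuous vector space.
embedding = {word: [random.uniform(-1, 1) for _ in range(dim)]
             for word in vocab}

print(embedding["cat"])  # a 4-dimensional vector representing "cat"
```

A trained model would adjust these vectors so that related words (e.g. "cat" and "dog") end up close together, which is what makes the representation useful for downstream processing.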