Glossary

In-depth AI glossary of terms, concepts, and definitions by Fairo.
Accountability
Adapters
AI Alignment
AI Copilot
AI Law
AI Policy
AI Risk
AI Risk Management
Algorithmic Methods
Artificial General Intelligence (AGI)
Artificial Intelligence
Assessment
Associative Memory
Auditability
Audit
Brand Risk
Case Studies
Chatbot
ChatGPT
Compliance Risk
Conformity Assessment
Controllability
Conversational AI
Criteria/Checklists
Data Augmentation
Data Quality
Data Sets
Declarations
Deep Learning
Design Patterns
Deterministic Model
Discriminative Model
Explainability
Evidence
Foundation Model
Frameworks/Concepts
Generative Adversarial Networks (GANs)
Generative AI
Generative Pre-Trained Transformer (GPT)
GPT-3
GPT-4
Grounding
Guidelines/Codes of Practice
Hallucination
Impact Assessment
Inclusivity
Instruction-Tuning
Interpretability
Large Language Model (LLM)
License Model
Machine Learning (ML)
Metric
Multimodal Language Model
Multistakeholder Collaboration
N-Shot Learning
Natural Language Ambiguity
Natural Language Generation (NLG)
Natural Language Processing (NLP)
Neural Network
Online Communities
OpenAI
Optimization
Overfitting
Policy
Pre-training
Privacy
Process Models
Recursive Prompting
Registry
Regulation
Reinforcement Learning
Risk Tolerance
Robustness
Rulemaking Guidelines
Safety
Sequence Modeling
Software Assistant
Software Library
Speech to Text
Stacking
Standards
Steerability
Supervised Learning
Test Case
Text to Speech (TTS)
Training/Tutorial
Transparency Report
Trust
Trust Risk
Unsupervised Learning

Accountability

From a technological standpoint, accountability means ensuring that those responsible for creating and deploying Artificial Intelligence (AI) systems recognize and understand the potential impact of those systems. It implies that clear lines of responsibility must be established and maintained for the development and deployment of AI technology, alongside safeguards put in place to address any negative impacts it may cause. To operationalize accountability successfully, methods such as audits and independent monitoring may need to be employed. By reviewing accountabilities regularly, harmful consequences that emerge can be quickly addressed while AI projects continue to advance.

Adapters

Adapters provide an advanced technique for making pre-trained AI models adaptable to new tasks with minimal additional training. The benefits of adapters are manifold, saving time, money, and resources. These modules are highly efficient at repurposing existing models for different tasks in areas like natural language processing, computer vision, and robotics, ultimately making AI more accessible and customizable to a wider range of applications.
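
As a minimal PyTorch sketch of the idea (the backbone layer and dimensions below are hypothetical stand-ins for a real pre-trained model): a small bottleneck module is trained for the new task while the pre-trained weights stay frozen.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the pre-trained representation intact.
        return x + self.up(self.act(self.down(x)))

# Freeze the (hypothetical) pre-trained backbone; train only the adapter.
backbone = nn.Linear(768, 768)   # stand-in for a pre-trained layer
for p in backbone.parameters():
    p.requires_grad = False
adapter = Adapter(hidden_dim=768)
out = adapter(backbone(torch.randn(2, 768)))
```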

AI Alignment

AI alignment is the process of ensuring that AI systems act in accordance with human goals and preferences, in an attempt to avoid unintended or potentially harmful outcomes. Despite the considerable complexity of AI technology and the difficulty of anticipating every potential outcome, effective alignment is still achievable. This is where AI Governance comes in, with its comprehensive framework of regulations, best practices, and policies. Regulating AI with proper oversight helps ensure that AI systems comply with ethical standards and societal norms while minimizing the chances of misalignment.

AI Copilot

Designed to support users in various tasks and decision-making processes across multiple domains in an enterprise environment, these conversational interfaces are powered by large language models. Not only do they make work more efficient and streamlined, but they also serve as a valuable resource for employees who need assistance with complex tasks. The possibilities for AI copilots are endless, as they can be customized to meet the unique needs of each business and industry.

AI Law

AI Law encompasses a range of legal issues around Artificial Intelligence, including but not limited to intellectual property, data & privacy, liability, consumer protection, antitrust, human oversight, human rights, and ethics. Evolving alongside the increasingly widespread and complex use of AI systems worldwide, AI Law guides the adjudication of court cases related to Artificial Intelligence and aims to set legal precedents. As a rapidly emerging field, AI Law requires continuous research and analysis of the implications of AI, and legal professionals must stay abreast of changing rulings to breathe life into the laws governing AI. Complex issues such as AI ethics must be carefully taken into consideration when developing and deploying AI, given the sweeping impacts of technology on not only the legal system but society as a whole.

AI Policy

AI policy is a comprehensive framework implemented by organizations, whether private or public, to govern the development and use of AI technologies. It may incorporate both legal requirements and internal rules covering a wide array of topics, such as data privacy, explainability and transparency, liability and accountability, bias mitigation, and socioeconomic impact. Moreover, rules governing AI use can clarify the oversight and guidance steps needed when developing new capabilities, forming the basis of consistent practices for deploying machine learning. Through the establishment of AI policy, harms from the use of AI can be prevented and the ethical practices of utilizing these technologies can be emphasized.

AI Risk

The United States National Institute of Standards and Technology (NIST) defines risk in the context of the AI Risk Management Framework (RMF) as: the composite measure of an event’s probability of occurring and the magnitude (or degree) of the consequences of the corresponding events. The impacts or consequences of AI systems can be positive, negative, or both and can result in opportunities or threats (Adapted from ISO 31000:2018).

Source: ISO

AI Risk Management

The United States National Institute of Standards and Technology (NIST) defines AI risk management in the context of the AI Risk Management Framework (RMF) as: coordinated activities to direct and control an organization with regard to risk (Source: ISO 31000:2018).

Source: ISO

Algorithmic Methods

Descriptions of computational techniques for implementing or improving ethical aspects of AI systems. This includes pseudocode, graphical representations, and linguistic descriptions from low-level code to mathematical or computing procedures for implementing computational methods, e.g. privacy techniques.

Source: https://doi.org/10.1007/s43681-023-00258-9

Artificial General Intelligence (AGI)

AGI refers to the development of machines that can operate with the same level of cognitive abilities as humans, demonstrating a diverse range of skills across various domains and tasks. Unlike narrow AI, where the system is designed to perform specific tasks, AGI has the potential to learn, reason, and adapt to new situations, much like a human would. This groundbreaking technology has the potential to revolutionize numerous industries and improve our daily lives in unimaginable ways. However, this is not without its challenges, and further research is needed to overcome the complexities and ethical considerations surrounding AGI development.

Artificial Intelligence

Artificial Intelligence (AI) is an exciting and rapidly advancing field of computer science and engineering. Its primary goal is to create intelligent machines that can perform tasks traditionally considered to require human intelligence. The scope of AI is vast and involves developing algorithms, computer programs, and systems that can learn from data and make decisions or predictions based on that learning.

Within the realm of AI, there are various fascinating subfields, each with its own unique focus and techniques. One such subfield is natural language processing, which concentrates on enabling computers to understand, interpret, and communicate in human language. Another is computer vision, where researchers strive to equip machines with the capability to "see" and understand images and videos, just as humans do. In robotics, scientists aim to design and develop robots that possess intelligence akin to humans, enabling them to interact with their environment dynamically. Expert systems form another vital subfield and aim to replicate the expertise of human professionals in specific domains, such as medicine or finance; by capturing the knowledge and reasoning that experts use to make decisions, these systems can provide valuable insights and solutions to complex problems.

Each subfield within AI employs different techniques and methods to simulate human intelligence and create intelligent machines with distinct capabilities. These AI-powered machines have the potential to revolutionize various industries and change the way we live and work.

Assessment

Assessments are a crucial aspect of understanding how Artificial Intelligence (AI) systems perform and interact within a certain environment. These evaluations allow us to pin down how a system reacts to various circumstances and the overall impact it has on its intended task. Through these assessments, we obtain an in-depth insight into the utilization of these complex AI systems, enabling us to get the most out of them.

Associative Memory

Imagine a computer system that thinks like a human brain, able to store vast amounts of information and retrieve it at lightning-fast speed. This is the power of associative memory. Our minds are constantly making connections between different elements of information, allowing us to recall related details quickly and efficiently. The same is true for computer systems equipped with associative memory technology. By processing and storing data based on connections between elements, these systems can quickly identify and retrieve the most relevant information for any given task or decision. This powerful tool could revolutionize industries from healthcare to finance, as companies harness the power of associative memory to make faster and more informed decisions.
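
One classic computational form of associative memory is a Hopfield-style network. The sketch below (pure NumPy, with invented example patterns) stores two patterns in a weight matrix and then recovers one of them from a corrupted probe.

```python
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Build a Hopfield-style weight matrix from +/-1 patterns (one per row)."""
    n = patterns.shape[1]
    w = (patterns.T @ patterns) / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iteratively clean up a noisy probe until it settles on a stored pattern."""
    s = probe.astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
w = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # last element flipped
print(recall(w, noisy))                  # recovers the first stored pattern
```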

Auditability

Auditability, as it pertains to AI systems, is the system’s capacity to undergo an evaluation of the algorithms, data, and design processes specific to that system. A three-pronged approach is used to assess the performance and accuracy of AI: algorithms are examined to determine their effectiveness in generating outputs; the data collected is scrutinized to ensure it is of sufficient quality to maintain accuracy; and the design processes used to create the system must meet certain standards. With these criteria in mind, diligent auditing can help refine an AI system into its most effective form.

Audit

A formal examination of the ethical aspects of an AI system including its components, requirements, system behavior, data, or its impact on users.

Source: https://doi.org/10.1007/s43681-023-00258-9

Brand Risk

Brand risk threatens the reputation, image, and financial health of businesses everywhere. Customer dissatisfaction, product or service quality issues, regulatory violations, legal disputes, and public relations crises can all have adverse effects on a brand’s bottom line. Additionally, the increasing use of AI in business operations and marketing amplifies the risk, as AI can multiply the reach and magnitude of any given brand failure. To this end, companies must take proactive steps to safeguard their brands and keep consumer faith intact. This can take many forms, from close monitoring of operations to comprehensive preparation for possible brand risks. By doing so, companies can protect themselves and bolster their image.

Case Studies

Analyses of ethical aspects of AI applications and algorithms and how they can be addressed.

Source: https://doi.org/10.1007/s43681-023-00258-9

Chatbot

With a user-friendly interface designed to make chatting easy and intuitive, these virtual assistants can provide almost instantaneous answers to users' queries. Whether it's a simple pre-written response or a more complex AI-driven conversation, chatbots are always there to simplify issue resolution.

ChatGPT

ChatGPT is an innovative and versatile language model developed by OpenAI. Trained on an extraordinary amount of internet text data, it can perform an impressive range of natural language tasks. Fine-tuned to optimize its performance, ChatGPT can tackle language translation, text summarization, and question-answering. In short, this powerful tool boasts the ability to understand language and generate coherent and intelligent responses with remarkable accuracy.

Compliance Risk

Compliance risk refers to the threat an organization faces from violations of laws, regulations, or internal policies. Compliance risk can arise from a variety of conditions, ranging from outdated or poorly implemented business standards, substandard employee training, and misrepresentation of authority to other intentionally improper actions. The specific factors that create compliance risk vary with the standards and assessments of each organization.

Conformity Assessment

The International Electrotechnical Commission defines a conformity assessment as: any activity that determines whether a product, system, service, and sometimes people fulfill the requirements and characteristics described in a standard or specification. Such requirements can include performance, safety, efficiency, effectiveness, reliability, durability, or environmental impacts such as pollution or noise, for example. Verification is generally done through testing and/or inspection. This may or may not include ongoing verification.

Source: International Electrotechnical Commission

Controllability

Controllability is a crucial element when it comes to managing AI systems. It is an umbrella term that encompasses a range of processes designed to understand, regulate, and manage an AI system's decision-making process. The importance of this cannot be overstated, as it ensures that the system is accurate, safe, and adheres to ethical standards. With the growth of AI systems in recent years, the potential for undesired consequences is an ever-present reality. However, by implementing controllability measures, we can minimize the likelihood of these consequences occurring.

Conversational AI

Conversational AI refers to technologies that enable machines to understand, process, and respond to human language in natural dialogue, such as chatbots and voice assistants. Combining natural language processing with machine learning, these systems can interpret user intent, maintain context across turns, and generate relevant responses, making interactions with software feel more like speaking with a person.

Criteria/Checklists

Standards set to support decision-making in the design, evaluation, or procurement of systems.

Source: https://doi.org/10.1007/s43681-023-00258-9

Data Augmentation

Data Augmentation is a technique used to expand the size and variety of a dataset by producing modified versions of the existing data. This can be achieved by applying minor changes like flipping, resizing, or altering brightness levels to images. Though it may seem like a small addition, data augmentation can have a significant impact on the performance of a machine-learning model. Providing the model with a more diverse training set helps prevent overfitting and ensures that it can handle a wider range of inputs.
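
A minimal NumPy sketch of the transformations mentioned above, applied to a made-up grayscale image:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Produce simple modified copies of an image (H x W array of 0-255 values)."""
    flipped_lr = np.fliplr(image)               # horizontal flip
    flipped_ud = np.flipud(image)               # vertical flip
    brighter = np.clip(image * 1.2, 0, 255)     # raise brightness by 20%
    darker = np.clip(image * 0.8, 0, 255)       # lower brightness by 20%
    return [flipped_lr, flipped_ud, brighter, darker]

image = np.random.randint(0, 256, size=(32, 32)).astype(float)
augmented = augment(image)
print(f"1 original image -> {len(augmented)} augmented variants")
```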

Data Quality

Data quality is a foundational aspect of Artificial Intelligence (AI). It helps ensure that the data used by an AI system will produce accurate results. Data quality, the degree to which data is accurate, relevant, complete, consistent, and free from bias, errors, or other issues, affects not only the performance of the AI system but also any associated outcomes. Poor data quality can lead to unreliable outputs from AI systems, incorrect decisions, or missed opportunities.
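
As an illustrative sketch (the dataset and the validity rule below are hypothetical), a few quick pandas checks can surface missing values, duplicates, and out-of-range entries before training:

```python
import pandas as pd

# A quick data-quality profile of a (hypothetical) dataset before training.
df = pd.DataFrame({
    "age": [34, 29, None, 29, 120],
    "income": [52000, 48000, 61000, 48000, 61000],
})
print("missing values per column:\n", df.isna().sum())
print("duplicate rows:", df.duplicated().sum())
print("out-of-range ages:", (df["age"] > 100).sum())   # simple validity rule
```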

Data Sets

Data bases that can help create ethical systems, in particular data sets for training machine learning algorithms.

Source: https://doi.org/10.1007/s43681-023-00258-9

Declarations

Statements describing data, algorithms, and systems to provide insights into aspects of an AI system relevant for assessing ethical aspects, e.g. information about training data or potential system bias. This includes proposals regarding the form and content of such statements.

Source: https://doi.org/10.1007/s43681-023-00258-9

Deep Learning

Deep learning, a fascinating and complex subfield of machine learning, has revolutionized the way we think about AI. Using neural networks with multiple layers, or "deep" layers, these algorithms are able to learn from data in a way that was previously impossible. For instance, a deep learning model can recognize objects in an image by analyzing the intricacies of each pixel through successive layers of feature extraction. Such a feat would have been straight out of science fiction a mere couple of decades ago, yet it has become a reality thanks to the limitless potential of deep learning.

Design Patterns

A general, reusable method or good practice to address an ethical aspect based on an existing solution to an already identified problem. Usually, design patterns require adaptation to the problem at hand and can go beyond algorithms in including non-computational aspects.

Source: https://doi.org/10.1007/s43681-023-00258-9

Deterministic Model

Deterministic models offer a clear and predictable path to an outcome. These models operate on a strict set of rules and conditions, working on a cause-and-effect basis to create a definitive result. Unlike their probabilistic counterparts, deterministic models don't leave anything up to chance. They provide a level of consistency that can be comforting, particularly when it comes to important decision-making processes. While some may argue that the strict guidelines and rules of a deterministic model don't allow for flexibility or adaptation to change, there's no denying the peace of mind that comes with knowing exactly what to expect.
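
A toy illustration: the rule-based function below (the thresholds are invented for the example) always maps the same input to the same output, with nothing left to chance.

```python
def loan_decision(income: float, debt: float) -> str:
    """A deterministic rule-based model: fixed rules, no randomness."""
    ratio = debt / income if income > 0 else float("inf")
    if ratio < 0.3:
        return "approve"
    if ratio < 0.5:
        return "review"
    return "decline"

# Identical inputs always yield the identical output.
assert loan_decision(80_000, 20_000) == loan_decision(80_000, 20_000) == "approve"
print(loan_decision(80_000, 20_000))
```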

Discriminative Model

Discriminative models are designed to classify data based on certain characteristics and can be used in a wide variety of applications, from image recognition to speech analysis.
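
For instance, a logistic regression learns the boundary between classes directly, modeling the label given the features rather than how the data itself is generated. A minimal scikit-learn sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data: the model learns P(label | features) directly.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))          # predicted class labels
print(clf.predict_proba(X[:1]))    # class probabilities for one sample
```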

Explainability

Explainability refers to the capacity of an AI system to deliver transparent and comprehensible explanations of its internal workings. It is a vitally important feature for earning trust and strengthening the credibility of AI systems, detecting and remedying errors and biases, and ensuring AI systems adhere to human values and ethical protocols. Different approaches, such as model-based, rule-based, and example-based techniques, can be used to improve explainability.

Evidence

Evidence is the tangible data and information that reveals a model's performance as well as its reliability, fairness, and adherence to industry regulations or standards. Evidence serves as a measure of success and is necessary to build trust and demonstrate a system's trustworthiness. With evidence, it is possible to build more sophisticated and reliable solutions that leverage the potential of AI; without it, trust in AI systems and the help they can provide would diminish.

Foundation Model

A foundation model is a powerful and versatile approach to the development of Artificial Intelligence (AI). It builds upon a paradigm in which a general model, trained at scale using self-supervision on large datasets, can be adapted to a multitude of “downstream” tasks. With this capability, a single AI system can be applied across scenarios and contexts that do not share the same primary objective, offering developers a distinct advantage in both utility and development cost.

Frameworks/Concepts

Concepts are suggested to support the design of ethical AI systems, including high-level abstract concepts; frameworks are structures of concepts that serve as a skeleton for addressing ethical aspects of an AI system often serving as a guide and delineating boundaries between different aspects of ethical systems.

Source: https://doi.org/10.1007/s43681-023-00258-9

Generative Adversarial Networks (GANs)

GANs are a powerful type of neural network capable of generating new, never-seen-before data that closely resembles the training data.
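
A compact, illustrative PyTorch sketch (toy one-dimensional data and arbitrary layer sizes): a generator and a discriminator are trained against each other until generated samples resemble the real distribution.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(4, 1.5).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # real training data
    fake = G(torch.randn(64, 8))            # generated data

    # Discriminator: tell real (label 1) from fake (label 0).
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift toward ~4.0
```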

Generative AI

The concept of Generative AI models has revolutionized the way we think about creating new content. These models have the power to analyze existing data inputs or training data and discover hidden patterns that can then be used to generate brand-new data. This means that we can now use AI to create unique pieces of content, such as original short stories, that have never been seen before. By analyzing existing published short stories, a generative AI model can generate a story that follows a similar pattern but is entirely unique in its own right.

Generative Pre-Trained Transformer (GPT)

A type of deep learning model trained on a large dataset to generate human-like text, the underlying architecture of ChatGPT.

GPT-3

GPT-3 is the third version of the GPT-n series of models. It has 175 billion parameters (knobs that can be tuned) whose weights are used to make predictions. ChatGPT uses GPT-3.5, another iteration of this model.

GPT-4

GPT-4 is the latest model addition to OpenAI's deep learning efforts and is a significant milestone in scaling deep learning. GPT-4 is also the first of the GPT models that is a large multimodal model, meaning it accepts both image and text inputs and emits text outputs.

Grounding

Grounding is the process of rooting AI systems in real-world experiences, knowledge, or data, making them more intelligent and adaptable. By doing so, machines become more context-aware, allowing them to provide more personalized responses or actions when they interact with humans. Grounding helps make AI more effective and ensures that, as technology advances, machines remain relatable and relevant to humans.

Guidelines/Codes of Practice

A set of general rules or an outline of conduct (policy) (often issued by a professional association) that lays out ethical standards for key aspects of AI design. Codes of practice do not usually carry the same force as standards but are often recommended within a community of practice.

Source: https://doi.org/10.1007/s43681-023-00258-9

Hallucination

As AI systems become more advanced, they are increasingly used in tasks that once required human expertise. However, sometimes these systems can experience what is known as a hallucination. This surreal phenomenon occurs when the AI generates a response that is completely out of touch with the input it was provided with. This can be a real nuisance, especially in areas like natural language processing. These errors stem from the training data that the AI has been exposed to, as well as a lack of understanding of the context. The challenge lies in ensuring the system knows when to generate a response and when to ask for more context, to avoid any irrelevant or nonsensical outputs.

Impact Assessment

Impact Assessment is an important type of transparency report designed to help stakeholders identify and evaluate the potential impacts of deploying Artificial Intelligence (AI)-supported systems, both beneficial and harmful. Here, everyone involved in the AI deployment is identified in advance in order to consider the specific use case and its context of implementation. It is an involved process, made even more complicated when contending with the nuances of underlying AI model use. The AI Impact Assessment hopes to achieve the ultimate goal of making sure AI systems are used ethically and responsibly so that their potential gains can be realized, while any associated risk can be successfully mitigated. By completing this kind of assessment and proper review before implementing an AI model, we can ensure the best possible outcomes across as many considerations as possible.

Inclusivity

Inclusivity in AI aims to ensure AI technologies are accessible to all members of society, taking into account factors such as gender, age, ethnicity, culture, socioeconomic status, and physical/cognitive abilities during the design, development, and deployment processes. It means taking into consideration the diverse perspectives, needs, and experiences of different groups when designing and deploying AI systems. Inclusivity helps us to combat societal bias, reduce discrimination and inequality, and promote fairness and social justice through the use of this technology. In doing so, we can break down barriers and ensure that the benefits AI technologies can offer are available to all members of society. In short, inclusive AI is a foundation we can strive toward to ensure equitable automation and ethical development of AI.

Instruction-Tuning

Instruction-tuning is a process where a pre-trained model is refined and tweaked so it can effectively perform specific tasks. This is done by providing the model with a set of guidelines or directives. Think of it as something similar to giving an employee a detailed job description: it helps the model to understand exactly what is expected of it and how to go about completing the task. As such, instruction-tuning is a powerful tool in the toolkit of those who work with AI, allowing them to quickly and effectively adapt their models to ever-changing and evolving needs.
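
A minimal sketch of the data-preparation side (the template and examples below are hypothetical): instruction/response pairs are formatted into the text a fine-tuning job would consume.

```python
# A hypothetical mini-dataset of instruction/response pairs, formatted into
# the prompt template a fine-tuning job would consume.
examples = [
    {"instruction": "Summarize: The meeting moved to Friday.",
     "response": "Meeting rescheduled to Friday."},
    {"instruction": "Translate to French: Good morning.",
     "response": "Bonjour."},
]

def to_training_text(example: dict) -> str:
    return (
        "### Instruction:\n" + example["instruction"] + "\n"
        "### Response:\n" + example["response"]
    )

for ex in examples:
    print(to_training_text(ex), end="\n\n")
```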

Interpretability

Interpretability in AI is the ability of humans to comprehend and make sense of how an AI system reaches a decision, particularly for those decisions that are complex or of great importance. By gaining insight into how the system works, when a conclusion is made, and what components and criteria contributed, our confidence in the system at hand increases and so does our trust. Furthermore, interpretability serves to enhance accountability, fairness, and ethical use for applications with an AI system. There are different methods to increase the interpretability of AI systems—visualization, modeling, and real-world application scenarios are some of the most common techniques.

Large Language Model (LLM)

These deep learning models have been trained on massive datasets to help them better understand natural language. There are several well-known LLMs out there, including BERT, PaLM, GPT-2, GPT-3, and GPT-3.5. These models are all unique in their own right, with varying sizes, tasks, and training methods.

License Model

A pattern (usually a text document) that can be used to create legally binding guidelines governing how a product, technology, or software can be used.

Source: https://doi.org/10.1007/s43681-023-00258-9

Machine Learning (ML)

Machine Learning is an application of Artificial Intelligence that enables computer systems to learn from data without explicit human instruction. This is accomplished by employing algorithms and models that can discern natural patterns, make predictions, optimize decision-making, and find correlations in a given dataset. In practice, machine learning algorithms power image recognition, natural language processing, fraud detection, predictive maintenance, and personalized recommendations. Such AI-based approaches prioritize the automation of human effort with a focus on sharpening accuracy.

Metric

A definition, system, or standard of measuring ethical aspects of a system, e.g. interpretability or explainability.

Source: https://doi.org/10.1007/s43681-023-00258-9

Multimodal Language Model

Multimodal language models are revolutionizing the way we interact with technology. They are trained on a vast array of data, ranging from traditional textual datasets to non-textual data like images, audio, and video, allowing them to understand and interpret a wide range of inputs. This means that they can generate responses in a multitude of modes, making them even more versatile than large language models. From understanding the meaning behind an image to generating responses to a spoken question, multimodal language models have the potential to fundamentally change the way we communicate with machines.

Multistakeholder Collaboration

Multistakeholder collaboration in AI entails engaging stakeholders from many sectors, backgrounds, and demographics in shaping the purpose, adoption, production, and management of AI. This could include those from academia, business, government, civil organizations, and impacted community groups. The purpose of assembling multiple stakeholders around AI is to ensure that AI development and implementation are managed responsibly and fairly and thoroughly acknowledge the diverse opinions of those who will be affected. This practice is key to upholding ethical standards, unbiased operation, and critical auditing of AI advances. Furthermore, open cooperation among members helps keep artificial intelligence aligned with social fairness and desired standards of coexistence.

N-Shot Learning

Zero-, single-, and few-shot learning are three variations of the same concept: providing a model with very little training data to make predictions about new data. In this field, a "shot" refers to a single training example, so the n in n-shot indicates how many examples the model is given. While it might seem difficult to train a model with so little data, these techniques hold great promise in domains where data is scarce, such as medical research and speech recognition.
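
A sketch of how the n in n-shot shows up in practice when prompting a language model (the task and labels are invented for illustration):

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Zero-shot when examples is empty, one-shot with one, few-shot with several."""
    lines = [task]
    for text, label in examples:
        lines.append(f"Input: {text}\nLabel: {label}")
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

shots = [("The food was amazing.", "positive"), ("Terrible service.", "negative")]
print(build_prompt("Classify the sentiment.", [], "Loved it."))       # zero-shot
print(build_prompt("Classify the sentiment.", shots, "Loved it."))    # two-shot
```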

Natural Language Ambiguity

This is a phenomenon that happens constantly, as words, phrases, and sentences can have different meanings depending on the context. This can create confusion, misunderstandings, and even errors, which can be especially problematic in fields such as law, medicine, or finance. As AI systems become more prevalent in our society, it is also crucial to address this issue, as machines rely on precise and unambiguous instructions to function properly. By developing better algorithms and tools that can recognize and resolve ambiguities, we can improve our ability to communicate and interact with the world around us.

Natural Language Generation (NLG)

A subfield of AI that produces natural written or spoken language.

Natural Language Processing (NLP)

Within AI, there is a subfield that specifically deals with language data and the processing of massive volumes of text. This subfield is focused on creating programs that can take free-form text and transform it into a more standardized structure. This is an incredibly challenging task, as natural language is fluid and often has multiple interpretations. However, the potential benefits of such technology are enormous, and scientists and programmers are hard at work trying to crack this fascinating puzzle.

Neural Network

Composed of interconnected nodes, or "neurons", these networks are capable of recognizing patterns and solving complex problems with a high level of accuracy. With the ability to learn from experience and adapt, these networks are a promising technology that can help us solve some of the toughest problems of the modern age. Whether it's recognizing faces, understanding language, or predicting complex outcomes, the potential of neural networks is truly inspiring.
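
A minimal NumPy sketch of the idea: a two-layer network of interconnected "neurons" computing a forward pass (the weights here are random rather than trained).

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0, x @ w1 + b1)   # ReLU "neurons"
    return hidden @ w2 + b2               # output layer

print(forward(np.array([[0.5, -1.0, 2.0]])))
```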

Online Communities

Groups of people such as experts or users that may help realize ethical systems, often organized as networks or online communities. This often includes online links to resources and services (e.g. for cloud data or computation) and online spaces for debate, exchange, or evaluation.

Source: https://doi.org/10.1007/s43681-023-00258-9

OpenAI

OpenAI is an organization that has made great strides in developing AI technology, particularly with their ChatGPT program. However, their overall mission sees a greater purpose for AI than just chatbots. OpenAI sets out to develop AI in a responsible and friendly manner, always prioritizing the well-being of society. As the applications and impact of AI continue to grow, OpenAI is committed to ensuring that these advancements are for the benefit of all.

Optimization

Optimization is a crucial process in building models that can accurately predict outcomes, whether it's in the field of machine learning or many others. It involves adjusting the parameters of a model to minimize the loss function, which measures the discrepancy between the predicted and actual values.
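
A minimal worked example: gradient descent adjusting a single weight to minimize a mean-squared-error loss.

```python
import numpy as np

# Fit y = w * x by gradient descent, minimizing mean squared error.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                              # true weight is 2.0

w, lr = 0.0, 0.05
for _ in range(100):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)   # d(MSE)/dw
    w -= lr * grad                       # step against the gradient
print(round(w, 4))                       # converges toward 2.0
```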

Overfitting

Overfitting is a common concern for data scientists and machine learning enthusiasts alike. It can be frustrating to have a model that performs exceedingly well in a controlled environment, only for it to fail miserably when faced with new and different data. The culprit behind this phenomenon is a model that is simply too complex. Instead of learning general patterns, the model becomes fixated on memorizing the training data, effectively creating something that is great at regurgitating what it has already seen, but incapable of adapting to new information.
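
An illustrative scikit-learn sketch: on the same noisy data, a high-degree polynomial typically fits the training set better yet generalizes worse than a simpler model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, 30)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.2, 30)     # noisy underlying curve
x_test = np.linspace(0, 3, 50).reshape(-1, 1)
y_test = np.sin(x_test).ravel()

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x, y)
    print(degree, round(model.score(x, y), 3), round(model.score(x_test, y_test), 3))
# The degree-15 model typically scores higher on training data but worse on unseen data.
```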

Policy

A policy is a concerted effort to establish parameters for decision-making and guide behavior within a particular domain. It is an intentional, calculated blueprint for desired outcomes that answers the needs of its stakeholders. It strives to keep the behavior and decisions of individuals consistent with those outlined by the creators of the policy, thereby better representing a team as a whole and increasing the likelihood of success in accordance with their values and aspirations.

Pre-training

Pre-training involves training a model on a large dataset that can be generalized to multiple tasks. This process helps the model learn general features that can be fine-tuned later for specific tasks. As a result, pre-training can save time and resources while boosting the accuracy of deep learning models.
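
A minimal PyTorch sketch of the fine-tuning side (the "pre-trained" layers below are hypothetical stand-ins): the general layers are frozen and only a small task head is trained for the specific task.

```python
import torch.nn as nn

# Stand-in for a pre-trained feature extractor (hypothetical weights).
pretrained_body = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
for param in pretrained_body.parameters():
    param.requires_grad = False              # keep the general features fixed

task_head = nn.Linear(64, 2)                 # only this part trains on the new task
model = nn.Sequential(pretrained_body, task_head)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 64*2 weights + 2 biases = 130
```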

Privacy

The United States National Institute for Standards in Technology (NIST) defines privacy as: assurance that the confidentiality of, and access to, certain information about an entity is protected; freedom from intrusion into the private life or affairs of an individual when that intrusion results from undue or illegal gathering and use of data about that individual; and, the right of a party to maintain control over and confidentiality of information about itself. The Universal Declaration of Human Rights is an international document (adopted by the United Nations General Assembly in 1948) that enshrines the rights and freedoms of all human beings - including the right to privacy. The digital age has amplified the importance of privacy to unprecedented heights. We generate immense amounts of personal data and share them across the internet without a second thought. This continuous flow of sensitive information poses major risks of data breaches, identity theft, and several other privacy transgressions. There is an increasing demand for legal frameworks to protect our data and guarantee respect for our individual privacy. Governments around the world are responding to this need with data protection laws, privacy laws, and even the inclusion of user privacy in certain human rights legislation.

Source: National Institute of Standards and Technology

Process Models

The abstract or visual description of a method or workflow to achieve or improve ethical aspects of systems. The description consists of individual, sequential steps or parts that together provide a model for action such as design, planning, assessment, or improvement.

Source: https://doi.org/10.1007/s43681-023-00258-9

Recursive Prompting

This technique involves feeding the model a series of prompts or questions that refine both the context of the task and the AI's understanding. By building upon previous responses, recursive prompting enables the AI to constantly evolve and improve, producing more accurate and nuanced results. As we continue to rely on AI for everything from language translation to medical diagnosis, recursive prompting will become an increasingly valuable tool for ensuring that these technologies operate at the highest possible standard.
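
A minimal sketch of the loop (the `generate` function below is a placeholder for a real model call, not an actual API):

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to an actual language model API."""
    return f"<model answer to: {prompt[:40]}...>"

def recursive_prompt(question: str, rounds: int = 3) -> str:
    answer = generate(question)
    for _ in range(rounds):
        # Feed the previous answer back in and ask the model to refine it.
        critique_prompt = (
            f"Question: {question}\nDraft answer: {answer}\n"
            "Improve the draft: fix errors and add missing detail."
        )
        answer = generate(critique_prompt)
    return answer

print(recursive_prompt("Explain overfitting in one paragraph."))
```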

Registry

Access-controlled repository of information pertaining to the specific concept being registered. Common Fairo registries include Use Case, AI Model, and Risk.

Regulation

Regulation refers to a set of rules, guidelines, or laws put in place by a governing body to control behavior in a given sector. Regulations are crafted to achieve a specific objective such as the defense of public interests, the maintenance of fair competition, or the advancement of safety standards, security, and moral engagements. Regulations come in all shapes and sizes — ranging from industry norms to codes of conduct, prerequisites, holding licenses, or AI policies. The process of oversight and control, sometimes in the form of inspections, audits, and legal sanctions in cases of violation, is consulted with key stakeholders to ensure their robustness and efficiency.

Reinforcement Learning

A type of machine learning that mimics the way humans learn: by taking feedback from the environment. GPT, one of the most sophisticated language model families ever created, has also embraced the power of reinforcement learning. To tune the model, human annotators played a vital role by providing examples of the desired behavior and ranking the model's outputs. By leveraging actionable feedback from humans, GPT-3 evolved into a potent AI tool that can complete complex tasks in a matter of seconds.
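
A minimal sketch of the underlying idea, classic Q-learning on a toy corridor environment; this illustrates learning from environment feedback generally, not the human-feedback tuning described above for GPT.

```python
import random

# Q-learning on a 5-cell corridor: move left/right, reward only at the far end.
n_states, actions = 5, [-1, +1]
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)                    # explore
        else:
            a = max(actions, key=lambda act: q[(s, act)]) # exploit
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0   # environment feedback
        best_next = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# State values rise toward the rewarding end of the corridor.
print([round(max(q[(s, a)] for a in actions), 2) for s in range(n_states)])
```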

Risk Tolerance

The United States National Institute of Standards and Technology (NIST) defines risk tolerance in the context of the AI Risk Management Framework (RMF) as: the organization’s or stakeholder’s readiness to bear risks in order to achieve its objectives. Risk tolerance can be influenced by legal or regulatory requirements (Adapted from: ISO Guide 73).

Source: ISO

Robustness

Robustness refers to the ability of a model to handle varying or disruptive conditions. Simply put, if an algorithm or model is considered robust, it is able to maintain its accuracy despite changes in input data or noise, as well as possible disruptions or manipulation. A strong framework for identifying and using robust models is key if we want to depend on them in real-world contexts. This is why the development of robust machine learning models is essential; they must be reliable and enduring to ensure their effectiveness in practical use cases.

Rulemaking Guidelines

Rulemaking guidelines play an important role in the development and implementation of rules or regulations set forth by a regulatory agency or governing body. These guidelines aim to ensure fairness throughout the rulemaking process by providing transparency and an inclusive approach, ultimately yielding effective, appropriate, and enforceable results. To reach such results, public notice and comment requirements, cost-benefit analysis, and consideration of alternative approaches must be included in the guidelines. In this way, the interests and perspectives of all stakeholders can be accounted for, thereby achieving the ultimate goal of preserving fairness and consistency in the rulemaking process.

Safety

The concept of safety in AI involves assuring that AI systems, and their byproducts, do not cause harm to or disturb humans or the environment. This requires both physical and digital safeguards. The goal is to make sure that all AI systems, and any data associated with them, are protected from any kind of unapproved entry or any possible malicious attack.

Sequence Modeling

This subfield of natural language processing focuses on the intricate task of modeling sequential data - data that evolves based on time or some kind of narrative order. It's easy to see how this kind of modeling can be useful for many applications - from predicting the next word in a sentence to detecting heart rate patterns over a period of time.
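
A minimal sketch of the idea: a bigram model that predicts the next word in a sequence from simple co-occurrence counts.

```python
from collections import Counter, defaultdict

# A minimal bigram model: predict the next word from the current one.
text = "the cat sat on the mat and the cat slept".split()
counts: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    counts[current][nxt] += 1

def predict_next(word: str) -> str:
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" (seen twice after "the"; "mat" only once)
```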

Software Assistant

A computer program (running code) or software agent that operates autonomously or in a dialog with the user.

Source: https://doi.org/10.1007/s43681-023-00258-9

Software Library

A collection of computer code (program code) or modules (executable code).

Source: https://doi.org/10.1007/s43681-023-00258-9

Speech to Text

The process of converting spoken words into written text.

Stacking

Stacking is a powerful technique in the world of AI that has the potential to unlock new frontiers in machine learning. Rather than relying on a single algorithm to process complex data, stacking combines multiple algorithms to create a more comprehensive and accurate output. This approach is particularly useful in fields like image recognition and natural language processing, where accuracy is key. Stacking's ability to compensate for the weaknesses of individual models and bring together the strengths of each is what makes it so valuable.
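
A minimal scikit-learn sketch: two diverse base learners whose predictions are blended by a final estimator.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two diverse base learners; a logistic regression blends their predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)), ("svm", SVC())],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 3))
```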

Standards

A generally accepted document that provides requirements, specifications, guidelines, or characteristics that can be used consistently to ensure that AI systems are designed in line with ethical considerations. Standards are often formalized and the result of broad stakeholder consultation.

Source: https://doi.org/10.1007/s43681-023-00258-9

Steerability

As AI technology continues to advance, so too does the need for increased control and guidance over the behavior and output of these intelligent systems. This is where AI steerability comes into play. By designing AI models with mechanisms that understand and adhere to human intentions, we can ensure that these systems align with our specific objectives and preferences, while also avoiding any unintended or undesirable outcomes. But improving steerability is an ongoing process that requires constant research and refinement, including innovative techniques like rule-based systems, fine-tuning, and incorporating additional human feedback loops during AI development.

Supervised Learning

At its core, this type of learning involves training a model to make predictions based on labeled data, allowing it to predict outcomes on new, unseen data with remarkable accuracy. By providing annotated training data, supervised learning algorithms can learn to recognize patterns, identify trends, and predict outcomes with incredible precision. This approach has huge implications for a wide range of applications, from computer vision and speech recognition to fraud detection and medical diagnosis.
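
A minimal scikit-learn sketch: a decision tree trained on labeled examples and then evaluated on held-out data.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled data: each flower's measurements come with its known species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))   # accuracy on unseen data
```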

Test Case

In the field of software engineering, a test case is a procedure enacted on a system or program to determine if it’s functioning correctly. This helps shed light on any issues or errors that the system is prone to and is an integral part of quality assessment. In the realm of law, a test case is an event or ruling that sets a precedent for future cases. When it comes to AI governance, both of these definitions are relevant.
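
On the software-engineering side, a minimal sketch using Python's unittest module (the function under test is a hypothetical example):

```python
import unittest

def normalize_score(raw: float) -> float:
    """Clamp a raw model score into the range [0, 1]."""
    return min(max(raw, 0.0), 1.0)

class TestNormalizeScore(unittest.TestCase):
    def test_in_range_value_unchanged(self):
        self.assertEqual(normalize_score(0.7), 0.7)

    def test_out_of_range_values_clamped(self):
        self.assertEqual(normalize_score(-0.2), 0.0)
        self.assertEqual(normalize_score(1.5), 1.0)

if __name__ == "__main__":
    unittest.main()
```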

Text to Speech (TTS)

Technology that converts written text into spoken voice output. It allows users to hear written content being read aloud, typically using synthesized speech.

Training/Tutorial

Educational material informing about ethical aspects including videos.

Source: https://doi.org/10.1007/s43681-023-00258-9

Transparency Report

The term “transparency report” refers to a broad category of artifacts created about Artificial Intelligence (AI) systems. These documents have a single goal: to inform stakeholders about the system’s capabilities and associated risks. Transparency reports can take many different forms, ranging from summaries of laws and ethics codes to semi-annual analytics breakdowns. Receiving this information helps stakeholders build a holistic understanding of how the AI system works and any potential risks they may face. By looking at the combined report, stakeholders can make more informed decisions about their dealings with the AI system.

Trust

Trust is the foundation on which Artificial Intelligence (AI) systems set out to achieve success. It reflects the confidence and belief held by individuals or entities in the reliability, integrity, and capability of a system or a person. When it comes to harnessing the power of AI, the presence of trust is paramount. The foundation of trust is established through AI system transparency, fairness, and accountability. Users should feel certain that their privacy and security remain safeguarded while using AI technologies. Hence, trust takes center stage in AI. This is because AI typically involves processes that can significantly impact people's lives and decisions, such as hiring processes and even medical diagnoses. By showing commitment to trust, organizations can prove their worthiness to customers and stakeholders. This reaffirms relationships in the long term and drives the adoption and acceptance of AI.

Trust Risk

AI trust risk is an area of serious concern for businesses leveraging the power of artificial intelligence systems. Stakeholders may lose faith in these systems as a result of bias introduced through training data, mistakes in decision-making algorithms, inadequate transparency about decision-making processes, or misuse of the technology in unethical or prohibited ways. For this reason, it is of the utmost importance to exercise caution when designing and implementing AI solutions, to maintain trust and confidence that these systems behave in morally responsible ways.

Unsupervised Learning

Unlike supervised learning where the model is trained using labeled data to predict outcomes, unsupervised learning works with unlabeled data. This type of machine learning is particularly useful when working with large datasets where manual labeling would be impractical. By finding patterns or features in the data, unsupervised learning can help us identify new trends, group similar data points, and even detect anomalies.
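
A minimal scikit-learn sketch: k-means groups unlabeled points into clusters purely by similarity.

```python
import numpy as np
from sklearn.cluster import KMeans

# No labels: the algorithm groups the points purely by similarity.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_.round(2))   # two centers, near (0,0) and (5,5)
```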
