What is Bias in AI?
When an AI system produces unfair or skewed results because of imbalances in its training data or design.
Why It Matters
AI bias can lead to discriminatory outcomes in hiring, lending, and other decisions if not identified and addressed.
Real-World Example
A hiring AI that favours certain demographics because its training data contained historical biases.
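To make that concrete, here is a minimal sketch (with entirely hypothetical numbers) of how historical bias in training data carries straight through to a model's decisions. The "model" here simply learns the historical hire rate per group, so no ML library is needed.

```python
# Hypothetical historical hiring records: (group, was_hired).
# Group A was historically hired 80% of the time, group B only 20%.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """Learn the historical hire rate for each group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Recommend a candidate if their group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(predict(rates, "A"))  # True  -- group A is favoured
print(predict(rates, "B"))  # False -- group B is penalised by past data
```

Nothing in the code looks at an individual candidate's merit: the skewed history alone decides the outcome, which is exactly the pattern a biased hiring AI reproduces.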
“Understanding terms like Bias in AI matters because it helps you have better conversations with developers and make smarter decisions about your software. You do not need to be technical. You just need to know enough to ask the right questions.”
Learn More at buildDay Melbourne
Want to understand these concepts hands-on? Join our one-day workshop and build a real web application from scratch.
Related Terms
Training Data
The dataset used to teach an AI model patterns and knowledge during its initial training.
AI Safety
The field of research focused on ensuring AI systems behave as intended and do not cause harm.
AI Alignment
The challenge of ensuring AI systems pursue goals that match human values and intentions.
Red Teaming
Deliberately trying to find flaws, vulnerabilities, or harmful outputs in an AI system before deployment.
Large Language Model (LLM)
An AI system trained on massive amounts of text that can understand and generate human language.
Transformer
A type of AI architecture that processes text by paying attention to relationships between all words at once, rather than one word at a time.