Fairness in AI Isn't a Bonus Feature. It's a Necessity with Neda Alipur – VistaTalks Ep 179
Keywords: Computer Vision, AI, Ethics, Data, Bias, Skin Tone
Run Time: 21:29
Release Date: October 1, 2025
Listen to the audio or watch the video below.
Artificial intelligence is changing everything, from healthcare and beauty tech to social media and surveillance. However, as systems powered by machine learning become increasingly embedded in our daily lives, a critical question arises: Are they working fairly for everyone?
That's precisely what Neda Alipur, a PhD researcher at TU Dublin, is tackling through her groundbreaking work on image processing and AI skin tone representation. In this episode of VistaTalks, host Simon Hodgkins speaks with Neda about her journey into this complex field and the pressing need for more inclusive AI systems. The conversation covers an important topic, one that goes right to the heart of bias in data and its real-world consequences.
From Biomedical Circuits to Ethical AI
Neda began her academic path in biomedical engineering before switching to electrical engineering, where she found her passion in image processing. It was a trusted friend who nudged her toward the field, and that shift ultimately shaped her entire career.
Her early work focused on image forgery detection, and later, she worked on flower image classification. However, it was when she began her PhD at TU Dublin in 2021, under the supervision of Dr. Jane Courtney, that she discovered her current focus: skin tone representation in AI-powered image systems.
As Neda explained, this isn't just a technical problem. It's a societal one.
"AI systems reflect the data used to train them. If that data is biased or incomplete, the model will be too. And when those systems are used in medicine or security, it can lead to real harm, especially for people with darker skin tones."
Why Skin Tone Representation Matters
AI applications in computer vision often struggle with people who have darker skin tones. The problem stems from unbalanced datasets and poor labeling. Many models are trained primarily on individuals with lighter skin, leading to performance gaps that can be hazardous in areas such as healthcare diagnostics, where accurate detection is crucial.
Take skin cancer detection, for example. Non-invasive tools that rely on image analysis could misclassify symptoms or miss them altogether if the system hasn't been trained on diverse skin types. The same risk applies in facial recognition, where errors in detection can feed into discrimination and inequality.
Neda's goal is to address the root of the issue: the data itself.
Building a Better Dataset
Neda outlined a significant step she's taking in her research: building a new, more representative dataset for skin tone analysis. Her approach focuses on capturing skin tone under varied lighting conditions, using precise measurements rather than subjective labels.
Most existing datasets rely on visual classification, which individuals with no dermatological expertise sometimes perform. That leads to inconsistencies, especially when lighting alters the appearance of skin color. Neda aims to remove this ambiguity by documenting environmental factors during image capture and applying continuous values rather than rigid categories to represent skin tone.
This work could serve as a benchmark for future researchers and developers, providing a more reliable standard for training and testing AI models.
"We want to create something that others can use to evaluate whether their models perform equally well across different skin tones and lighting conditions," Neda comments. "This hasn't been done at this level before."
Bridging Technical Innovation and Social Responsibility
One of the most powerful points Neda made during the conversation was that technical progress must go hand in hand with ethical responsibility. She's not just focused on improving accuracy; she's working to ensure that AI systems serve everyone fairly, regardless of their skin color or geographic background.
It's a goal that becomes even more crucial as AI tools reach users worldwide. Many publicly available datasets underrepresent people from the Middle East, India, and parts of Africa. That imbalance skews the models and reinforces existing disparities.
"I hope my work raises awareness," Neda said. "We can't just look at how accurate a system is overall. We need to ask, who is it accurate for? And who is it failing?"
Real-World Impact
While much of Neda's work so far has been academic, she hopes to see it applied in the real world. She has already collaborated with a Dublin-based company developing hand hygiene technology, and she sees partnerships with industry and institutions as key to scaling her impact.
Her message is clear: fairness in AI isn't a bonus feature. It's a necessity.
As AI continues to shape the way we live, work, and interact, Neda's research is a timely reminder that innovation without inclusion is incomplete. Her work is not only advancing the field of image processing, but it's also helping pave the way for a future where AI works for all of us.