Can AI really help bring in diversity and inclusion? – ETCIO

Organizations are recognizing that diversity and inclusion are a key strategy for improving the way they work. People feel valued when leaders truly create an inclusive culture: employees representing different backgrounds feel included and psychologically safe. Catering to a diverse group of employees also opens up a much wider audience and a more loyal, engaged customer base.

AI programs have been used in a number of fields. However, it wasn’t until 2017 that “AI in Hiring” became the industry buzzword, driving several conferences, blogs, papers, and podcasts that addressed the great potential of this technology in recruitment.

AI tools can help leaders manage inclusive work environments by giving managers visibility into the well-being, diversity, and inclusion health of their teams. Some tools can show an organization's leaders whether gender bias, or bias against a particular community of employees, exists. Others can sensitize leaders to manage their communication and ensure that no inadvertent disrespect toward any community is conveyed.

“While AI has the potential to enable overall diversity and inclusion in the workspace, AI as a technical function is self-enabled for diversity. AI is a multidisciplinary function that needs a workforce with varied skills of not just data collection and processing to model creation, but also the skills to understand human behaviors and business acumen for targeted solutions. We are constantly seeing an increase in diverse workforces and diverse talents across the AI solution lifecycle. When a diverse group cohesively works on a problem, it automatically results in an inclusive outcome thus helping in embracing a DEI culture,” explained Padmashree Shagrithaya, Vice President, Head – AI & Analytics, India I&D, Capgemini.

Though AI looks like a good way to bring diversity into the organization, unintentional bias is what stops it from being a perfect one.

Machine learning (ML), a subset of AI, allows a machine to learn automatically from past data without being explicitly programmed. It is well documented that when undesired biases concerning demographic groups are present in the training data, even well-trained models will reflect those biases.
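As a minimal sketch of what this means in practice, the hypothetical check below (the function names, data, and numbers are illustrative, not from the article) measures how differently a set of outcomes, such as historical hiring decisions a model would be trained on, treats different groups:

```python
# Hypothetical demographic-parity check: if historical selection
# rates differ sharply across groups, a model trained on this data
# will tend to reproduce that gap.
from collections import defaultdict


def selection_rates(records):
    """Return the fraction of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())


# Synthetic "historical hiring" data: (group, was_selected)
history = ([("A", 1)] * 60 + [("A", 0)] * 40
           + [("B", 1)] * 30 + [("B", 0)] * 70)
print(selection_rates(history))       # group A ≈ 0.6, group B ≈ 0.3
print(demographic_parity_gap(history))  # gap ≈ 0.3
```

A gap near zero would suggest the outcomes satisfy demographic parity; a large gap is a signal to investigate the data before training on it.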

Sharing an example on that note, Mohinish Sinha, Partner and Diversity, Equity, and Inclusion leader, Deloitte India, said, “The engineer building the AI also represents a certain diversity, e.g., we know that there are only between 5-15% female AI engineers, and similarly for other underrepresented communities like LGBT+. And therein lies the challenge. The inherent unconscious bias of the engineer shows up in the algorithm he or she designs, thereby potentially affecting the fairness of the design, the central tenet of diversity and inclusion.”

Hence it is imperative that organizations begin with an understanding of AI-based bias and work towards an ethical AI framework that upholds values of transparency, explicability, and fairness. Companies need to ensure that good AI-related data management practices are set up and followed by the AI teams. Data analysts need to incorporate “privacy-by-design” and “diversity-by-design” principles in the “design and build” phase and ensure robustness, repeatability, and auditability of the entire data cycle to be able to check for and mitigate algorithmic bias.

A recent paper from Harvard Kennedy School suggests that the collection, analysis, and disclosure of diversity data can be a powerful tool for progress when used correctly. Responsible use of AI on the collected data can help create a gender-neutral recruiting process and uncover opportunities for specially abled candidates.

From a data-management standpoint, AI practitioners also need to ensure that data is sourced ethically and in line with regulation; check it for accuracy, quality, robustness, and potential bias, including the detection of under-represented minorities or events and patterns; build adequate data-labelling practices and review them periodically; and store data responsibly so that it is available for audits and repeatability assessments. Results produced by models, including their accuracy, should then be constantly monitored and tested for bias or accuracy degradation.
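One of the checks described above, detecting under-represented minorities in a dataset, can be sketched as follows (the function name, threshold, and sample data are illustrative assumptions, not something the article prescribes):

```python
# Hypothetical under-representation check: flag any group whose
# share of the training data falls below a chosen minimum share.
from collections import Counter


def underrepresented_groups(labels, min_share=0.2):
    """Return groups whose share of the dataset is below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(g for g, c in counts.items() if c / total < min_share)


sample = ["male"] * 80 + ["female"] * 15 + ["nonbinary"] * 5
print(underrepresented_groups(sample))  # ['female', 'nonbinary']
```

Flagged groups would prompt the team to collect more data, re-weight samples, or at minimum document the limitation before training.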

But once again, can AI really help in bringing diversity?

It sure can, but companies have to be very careful about bias.

And as S. Pasupathi, Chief Operating Officer, Hirepro puts it, “After all, artificial intelligence is created by humans and developed on human data. In the long run, such biases, whether based on gender, age, or experience, can lead to systematic discrimination. Ultimately, we are responsible for bridging these gaps. Recruitment specialists should consider this as a careful concern and monitor what is being fed into the system.”
