Introduction

We are in an era of unprecedented development, growth, and adoption of artificial intelligence (AI). Companies across industries and sectors are increasingly integrating AI to gain a competitive advantage. As of 2024, 72% of organizations reported adopting AI in at least one business function, and the AI market is projected to grow at an annual rate of 37.3% from 2023 to 2030.

However, this unchecked growth poses significant risks. Because AI systems rely on human input for datasets, training, and algorithmic design, they can inadvertently perpetuate human biases and historical patterns of discrimination. With growth outpacing the regulatory and policy frameworks designed to ensure ethical standards and safety, there is an emergent need for best practices for developing and deploying AI systems in a manner that is ethical, transparent, and beneficial: one that minimizes bias, ensures privacy, and promotes fairness and accountability.

Mila, a Quebec-based AI research institute, is at the forefront of such AI development. With a community of 1,400+ professors, researchers, trainees, and students (in both professional master’s and graduate diploma programs), Mila boasts the world’s largest concentration of deep learning academic researchers. Mila engages with a range of stakeholders and multidisciplinary teams of specialists to promote the responsible use of AI across sectors and industries, and has created distinct initiatives with the potential to serve as open-source resources and best-practice guides for building socially responsible AI. For instance, its project Biasly is an expert-annotated dataset that can be used to train models to detect, categorize, assign a severity score to, and mitigate subtle misogyny in English text, drawing on detailed annotations that identify and rewrite misogynistic language. Biasly cannot debias datasets itself, but it can be used to teach models how to help people identify their own biased language. What is distinctive about Biasly in the space of AI development are the processes behind its creation, which serve as a potential model of best practice for incorporating socially responsible methods into AI creation processes.


This case was written by Zachary Meager. The author prepared this case under the supervision of Professors Sonia Kang and Hyeun Lee.

The development of this case study was supported by the Latner GATE MBA Internships program.