Using AI without human oversight. Is that a good idea? In some cases, maybe…
I recently presented a keynote at the South African Board of People Practices, Eastern Cape chapter, on the impact of AI on HR and the workplace. We had very interesting discussions in the Q&A. One question that sparked my interest was: “how ethical is it to use AI without human involvement, e.g. in the case of screening job applicant CVs?” The discussion revolved around how blindly applying AI decision-making without human oversight may be unethical, since AI may not be able to understand and apply the subtleties of our complex human nature and ecosystem, and how it is therefore crucial for humans to stay in the loop. Keeping a human eye on AI seems sensible given the known challenges we currently face with AI, including lack of algorithmic transparency, bias in training data and lack of accountability, to name a few of the pertinent ones.
But this discussion made me wonder: is the “human-in-the-loop” notion always the best option?
My current research focus on AI chatbot coaching looks at all aspects of stand-alone AI coach emulators – AI coaches that can operate without human oversight. My reason for supporting this direction is that I think we could limit the scale of democratizing AI coaching if we insist on active human oversight as a prerequisite for using AI coaches. In my talks I use the example of Malawi, a country of 20 million people in Africa. Based on a cursory Internet search, the best figure I could find suggests there are only 5 psychiatrists in the entire country! And there are fewer than 1,700 ICF-registered coaches in all of Africa (population 1.4 billion). Imagine the potential to provide basic, stand-alone coaching services to such under-served communities. And of course a similar situation exists in organisations that already use human coaches: coaching is reserved for the few.
So while there are potential challenges with the application of stand-alone AI in coaching, I think that, all in all, the benefits outweigh the potential risks. If… there is an if… if the AI coaches are designed with the relevant and appropriate human oversight built into them. I have voiced my concerns before about the proliferation of AI coaches that do not adhere to basic coaching principles and standards. If we want to truly scale coaching, we need to have mechanisms in place to ensure that what is called an AI coach does in fact do coaching.
There are promising signs of coaching bodies, individuals and other organisations creating AI coaching standards and guidelines. I encourage continued work in this direction so that soon we can have certified and credentialed AI coaches that we can trust to deliver coaching without human oversight. All in the service of serving the under-served.