Thoughts on Responsible AI
- Midhat Shahid
- Jul 5, 2023
- 4 min read
Updated: Jul 6, 2023
Preparing for an AI Revolution
On the brink of an AI revolution, the world faces pivotal choices. Will AI be our trusted sidekick or an unchecked nemesis?

The world is on the cusp of an artificial intelligence revolution. AI has the potential to transform every aspect of our lives, from the way we work to the way we interact with the world around us. And if you peel back the curtain, it's plain to see AI is already pulling the strings. Machine learning models are deciding the interest rate on your mortgage and flagging potential threats at the airport.
Before we cozy up to our new overlords, we must answer some hard questions, lest we risk having tyrants as puppet masters. If not developed responsibly, AI could be used to harm people, discriminate against them, or, according to Scientific American, undermine democracy. We must acknowledge that technology ultimately reflects its authors. Are we creating a JARVIS or a HAL 9000?
What Is Responsible AI?
Responsible AI is the practice of developing and deploying AI in a way that is safe, fair, and accountable. It means ensuring that AI systems are not biased, do not harm people, and can be understood and controlled by humans. In other words, are we creating trustworthy AI? What sets trustworthy AI apart from responsible AI is that the former is an ethical framework, while the latter is about putting that framework into practice.
The conversation around responsible AI is familiar. It has featured many illustrious voices, but none as memorable in popular culture as Isaac Asimov, whose three hierarchical laws of robotics still ring true. The first law states that, above all else, a robot shall not harm a human or, by inaction, allow a human to come to harm. The second law states that a robot shall obey any instruction given to it by a human, unless doing so violates the first law. The third law states that a robot shall avoid actions or situations that could cause it to harm itself, unless doing so conflicts with the first or second law.
An Existential Threat?
As AI systems become increasingly powerful and pervasive, we cannot allow responsible AI to become an afterthought. In May 2023, hundreds of AI executives, experts, and researchers, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
I pay little attention to the oracles of the singularity. Is the AI apocalypse narrative little more than a red herring, diverting attention from a problem that must be addressed through regulation? We cannot pass the buck on responsible AI. Said simply, AI has the potential to be used for good or for evil, and it's up to us to ensure that it's used for the former.
Responsible AI in Practice
It's essential to start thinking about the economic implications of AI now. By working together, we can ensure that AI benefits everyone, not just a select few. That said, step innovations such as railways and the steam engine, a category generative AI is positioned to join, tend to disrupt the economic, social, and political balance. I will revisit this in a subsequent post.
We can do several things to ensure that AI is developed responsibly. First, we must ensure that AI systems are fair and unbiased. This means avoiding discrimination on the basis of attributes such as race, gender, or sexual orientation. Society (and business) cannot and will not tolerate different standards for AI than we expect for the public commons.
Second, we need to make sure that AI systems are transparent. This means that people should understand how AI systems work and how they make decisions. Think crystal palaces, not black boxes. Third, we need to make sure that AI systems are accountable. This means that people should be able to hold those who develop and use AI systems responsible for their actions.
By responsibly developing AI, we can ensure that it benefits everyone. We can create a world where AI is used to improve our lives, not to harm us.
The Business of Responsible AI
Businesses are responsible for being transparent about their use of AI and data collection: let people know how you are using AI and what data you are collecting. Build AI systems that are designed to be fair and unbiased, then continually test them for bias to make sure they do not discriminate against certain groups of people.
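The bias testing described above can be sketched as a simple demographic parity check: compare a model's approval rates across groups defined by a sensitive attribute and flag large gaps for human review. The function name, the sample data, and the 0.1 threshold below are illustrative assumptions, not part of any standard or toolkit.

```python
# Minimal demographic parity check, assuming binary decisions (1 = approved)
# and a single sensitive attribute per record. Illustrative only.

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups."""
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions labeled with an applicant group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
if gap > 0.1:  # the threshold is a policy choice, not a universal rule
    print("Flag for review: approval rates differ sharply across groups")
```

A real audit would use richer fairness metrics and larger samples, but the point stands: the check is cheap to run, so there is little excuse not to run it continually.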
Security and privacy are also cornerstones of responsible AI in business. Secure and protect your AI systems from cyberattacks and data breaches. Always be accountable for the actions of your AI systems. Finally, have a contingency plan for dealing with any harmful consequences of using your AI systems. This is no different from being the owner/operator of a railroad or a hydroelectric dam.
The Role of Government and Regulation
Governments have a responsibility to regulate the use of AI. Over time, the leviathan appropriates the power to regulate, govern, and moderate anything that affects the plurality of society.
First, societies will need to create laws and regulations that govern the use of AI, designed to protect people from its potential harms. Second, we will get many laws wrong on the first try. Finally, a prerequisite to effective regulation is investing in research on AI's ethical and social implications.
Some will hide under a rock. Others will tend towards laissez-faire. Either way, a fascinating journey awaits us that will keep law and technology scholars gainfully employed for a while.
Final Remarks
Artificial intelligence is one of the most exciting and important technologies of our time. It could revolutionize how we live, work, and play. But will AI augment the best features of humanity and be the harbinger of an unprecedented renaissance? Or a reflection of our lowest nature and the doom of civilization?
We must responsibly develop AI to benefit everyone, not just a select few. We can do several things to make responsible AI a reality. First, we must educate the public about AI and its potential risks and benefits. We also need to develop ethical guidelines for the development and use of AI. And finally, we need to create new laws and regulations to govern the use of AI.
Disclaimer: The views and opinions on this site are solely those of the authors; they do not reflect or represent the views of their employers.