By Sachin Tah

AI Governance - Why Is It Difficult to Manage?



Last week I read an article published by Bill Gates in which he highlighted the risks associated with AI and argued that they are all manageable. I understand that I am no one to challenge a post by the big brother of technology, but I disagree. I believe it was his way of comforting everyone and calming them down about the disruption AI is going to cause over the next five years.


One thing I am sure about is that AI will have a huge impact on jobs, and certain jobs even within the IT sector could be replaced by AI-based automation. Every business goes through a revival phase every 25-30 years, and I believe it is time for technology companies, especially IT services firms, to review their operating models. I have seen companies sell automation solutions to their customers that are never utilized to their full potential.


If you are a technologist and understand how to stitch together the technologies already available on the internet, you know how easy it is to solve complex use cases such as audio-to-text transcription, transcript analysis, and applying AI to derive sentiment, hit rate, and so on. If we can solve such use cases quickly and with little effort, why would you need humans to listen to recordings and do assessments? And believe me, these technologies will keep getting better by the day; they may be 90% accurate today, but they will reach 99.99% very soon. The ROI on replacement is also minimal, because you are picking the technology up off the shelf.
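To make the point concrete, here is a minimal sketch of the analysis step of such a pipeline. It assumes the transcript text has already come from a speech-to-text service; the keyword lists are illustrative stand-ins for what would in practice be a trained sentiment model.

```python
# Minimal sketch: scoring a call transcript for sentiment and keyword "hit rate".
# The keyword sets below are hypothetical placeholders, not a real model.

POSITIVE = {"great", "thanks", "happy", "resolved"}
NEGATIVE = {"angry", "refund", "cancel", "complaint"}

def score_transcript(transcript: str) -> dict:
    """Return a coarse sentiment label and the fraction of scored words."""
    words = transcript.lower().split()
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return {
        "sentiment": "positive" if pos >= neg else "negative",
        "hit_rate": (pos + neg) / len(words) if words else 0.0,
    }

result = score_transcript("Thanks the issue was resolved and I am happy")
print(result["sentiment"])  # prints "positive"
```

Crude as it is, a script like this already replaces a human listening to the whole call; swapping the keyword sets for an off-the-shelf ML model is what pushes accuracy toward the 99% range.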


Another example is how test automation is carried out today and how AI-driven testing will revolutionize it, finding bugs in software more efficiently than humans can. Combine this with code generation and generative AI, and it becomes easy to produce test cases, traceability matrices, and so on. I strongly believe that tech jobs, back-office operations, and support functions will be heavily impacted by AI, causing big turnarounds and disruptions.
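As an illustration of the artifacts mentioned above, here is a small sketch of a requirements-to-test-case traceability matrix, the kind of deliverable a generative-AI testing tool could emit alongside the tests themselves. The requirement IDs and test names are hypothetical.

```python
# Sketch: building a traceability matrix from generated test cases.
# Each test case is tagged with the requirement(s) it covers.
from collections import defaultdict

test_cases = [
    ("test_login_valid_password", ["REQ-001"]),
    ("test_login_invalid_password", ["REQ-001", "REQ-002"]),
    ("test_password_reset_email", ["REQ-003"]),
]

def traceability_matrix(cases):
    """Map each requirement ID to the list of tests that cover it."""
    matrix = defaultdict(list)
    for test_name, req_ids in cases:
        for req in req_ids:
            matrix[req].append(test_name)
    return dict(matrix)

matrix = traceability_matrix(test_cases)
print(matrix["REQ-001"])  # both login tests cover REQ-001
```

Producing and maintaining this mapping is exactly the kind of rote documentation work that generative tools can take off a tester's plate.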


Coming back to AI governance: whenever we talk about governance, certain elements are controlled by a governing body, and the governing laws usually operate within that authority's jurisdiction and reach. Governance can be put in place by granting privileged access after background checks, verifications, and registrations, or by limiting and restricting the usage of a disruptive technology.


When it comes to AI and the use cases that may be derived from it, what are the raw materials required to create catastrophic tools? You need AI tools, computing power, and high-end hardware to experiment and execute, and of course you need some truly destructive brains to assemble all of this, which are unfortunately available in plenty.


Let us take the example of computer viruses, which pose threats somewhat similar to those that can arise from AI. Even today we can detect a computer virus, but we are not able to govern it properly. There are rules and regulations, yet we are still unable to curb the development of viruses or the destruction they cause. When was the last time we caught a malicious programmer before he or she committed, or even attempted, a crime? All we do is either build a defense system against a virus by learning how it works, or recover after falling victim to it.


And not so surprisingly, business models have evolved around such threats and attacks. We call them antivirus software, and it is now a billion-dollar industry.


The same will apply to AI, and with an enhanced level of attacks, threats, and misuse, businesses are waiting to flourish around it. Models like deepfake detectors are already available for commercial use. If there are no real threats, some companies will first create the threats and then sell the solution for them.


I don’t want to publish a list of use cases where AI could be potentially harmful, or give directions on how to misuse it. However, let me give a glimpse of how conventional viruses could be enhanced into a new breed of AI-based viruses. Current computer viruses are dumb: they have a defined set of features and functionalities contained within the virus itself. An AI virus, by contrast, could carry a trained model embedded within itself, change its signature depending on the scenario, or mutate after being detected. It could also act deceptively, pretending to be an antivirus and causing fake detections or false alarms.


AI has already embedded itself deep inside everyone's day-to-day life: maps, facial recognition, smart assistants (Siri, Alexa), self-driving cars, social media, and more. AI is everywhere. Your cellphone listens to your conversations, your camera watches you all day, apps track your every movement, and your usage patterns are used to profile your personality. I feel we have already fallen victim to these AI technologies.


Overall, it will be difficult, if not impossible, to control the adverse impact of AI, and it may be misused either by individuals for personal gain or by corporations to win over their competitors.
