Transparency and Explainability:
Discussing the Importance of Making AI Systems Transparent and Understandable to Users and Stakeholders
By
Ed Watal, Founder, Intellibus

Although artificial intelligence (AI) has become widespread, and more people understand how to use it to fit their needs, considerable ambiguity still surrounds the technology's underlying processes. There have been several claims that the output of AI models, particularly generative AI, is unreliable or untrustworthy, not to mention inaccessible.

However, the most direct way for developers to improve trust in the reliability of their AI programs is to be more transparent about the data their models use and how that data is used.

Why AI Needs to Be More Transparent and Understandable

The primary benefit of increased transparency in AI is improved trust and accountability. Systems that give users greater visibility into their underlying processes tend to earn more trust from stakeholders and users because the decisions these models make can be scrutinized rather than taken on faith. Furthermore, users can be held more accountable for an AI's outcomes when they understand the processes it uses to reach them.

Another important effect of transparency in AI is that it enables more ethical decision-making. Users should be able to see how a model reaches its conclusions so they can assess the ethical implications of its decisions, especially in sensitive use cases. For example, if a business uses AI to sort through resumes and inform hiring decisions, it must understand the process behind those recommendations to minimize bias as much as possible.
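
To make this concrete, consider a minimal sketch of such an audit. It assumes a hypothetical resume screener built on scikit-learn's LogisticRegression, with invented feature names and data used purely for illustration. Because the model is transparent, reviewers can inspect its learned weights and flag inputs that may act as proxies for protected attributes:

```python
# Minimal sketch of auditing a transparent resume screener.
# The features and data here are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "skills_match", "zip_code_income"]

# Tiny synthetic training set: rows are candidates, columns match `features`.
X = np.array([
    [5, 0.9, 0.8],
    [2, 0.4, 0.3],
    [7, 0.8, 0.9],
    [1, 0.3, 0.2],
    [4, 0.7, 0.7],
    [3, 0.2, 0.4],
])
y = np.array([1, 0, 1, 0, 1, 0])  # past hire / no-hire decisions

model = LogisticRegression().fit(X, y)

# A transparent model lets reviewers see what drives each decision.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# A large weight on zip_code_income, a likely proxy for protected
# attributes, would be a red flag that the screener encodes bias.
```

A black-box screener offers no equivalent check; any bias would surface only in aggregate hiring outcomes, long after the harm is done.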

Transparency is also crucial to user empowerment. Some of the most powerful uses of AI have been generalized models adapted to an organization's specific needs through prompting. If an AI program's technology and processes aren't transparent and understandable, users will never be able to discover these new use cases for the technology.

Additionally, increased transparency in AI systems empowers users to be more responsible in how they use the technology. Knowing how an AI model uses the data it is fed for training can inform what data users are willing to share with the system, if they are willing to share any at all. Users can also take added precautions against problems rooted in a model's training data, such as unintentional plagiarism or misinformation.
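
One practical pattern here, sketched below with an invented schema rather than any established standard, is to attach provenance metadata to training records so that questionable outputs can later be traced back to their sources:

```python
# Sketch: attaching provenance metadata to training records so that
# questionable outputs can be traced back to their sources.
# The schema is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    text: str
    source_url: str      # where the text came from
    license: str         # usage terms the source was published under
    retrieved_at: str    # ISO 8601 timestamp of collection

record = TrainingRecord(
    text="Example passage used for training...",
    source_url="https://example.com/article",
    license="CC-BY-4.0",
    retrieved_at="2024-01-15T09:30:00Z",
)

# With provenance recorded, a user worried about plagiarism or
# misinformation can ask which sources shaped a given output.
print(record.source_url, record.license)
```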

How Regulation Will Affect the Need for AI to Be More Transparent and Understandable

This increased transparency may soon be required for regulatory compliance, as several jurisdictions implement new laws governing the development and use of AI. For instance, the European Union recently introduced the Artificial Intelligence Act, one of the first and most comprehensive legal frameworks governing how "trustworthy" AI must be. The law imposes strict transparency requirements on AI systems, including technical documentation, proof of compliance with EU copyright law, and detailed summaries of the content these systems use for training purposes.
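
The Act does not prescribe a single file format for these summaries, but the minimal sketch below suggests what a machine-readable training-data summary might look like; the field names and values are hypothetical, not a prescribed compliance schema:

```python
# Hypothetical machine-readable training-data summary of the kind the
# EU AI Act's transparency provisions call for. Field names and values
# are illustrative, not a prescribed compliance format.
import json

training_data_summary = {
    "model_name": "example-model-v1",
    "data_sources": [
        {"name": "Public web crawl", "share": 0.70, "copyright_review": "completed"},
        {"name": "Licensed news archive", "share": 0.20, "copyright_review": "completed"},
        {"name": "Synthetic data", "share": 0.10, "copyright_review": "not_applicable"},
    ],
    "personal_data_included": False,
    "documentation_url": "https://example.com/model-card",
}

print(json.dumps(training_data_summary, indent=2))
```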

However, there is a fine line to tread when it comes to the transparency of artificial intelligence. Some AI developers have expressed concern about sharing their underlying technology so openly, fearing that wrongdoers could use this information to create cybersecurity issues or that their intellectual property could be compromised. While these concerns are valid, they must be weighed against the importance of mitigating the potentially harmful effects of irresponsible AI use.

Although AI is an exciting technology with the potential to revolutionize the world, responsible use is necessary for it to deliver the full breadth of its potential positive impact. The good news is that developers can encourage responsible use of their systems by being more transparent about the underlying data and processes. Soon, they may be required to do so anyway, so now is the time to make this technology more transparent and understandable to users.


Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser (an ethical AI platform and data commons for the world). He is also the founder of Intellibus, an Inc. 5000 "Top 100 Fastest Growing Software Firm" in the USA, and the lead faculty of AI Masterclass, a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Ed on a seminal book on our AI future. Board members and C-level executives at the world's largest financial institutions rely on him for strategic transformational advice. Mr. Watal has been featured on Fox News, QR Calgary Radio, and Medical Device News.