
Artificial Intelligence (AI) and the Future of Chartered Accountancy

 

“I DON’T BELIEVE AI WILL REPLACE CHARTERED ACCOUNTANTS, BUT I DO FIRMLY BELIEVE THAT THOSE WHO UNDERSTAND AND LEVERAGE AI WILL REPLACE THOSE WHO DON’T.”

Some perceive AI as a big threat to the profession, while others perceive it as a big opportunity. Is it like seeing a glass half full or half empty, or does it have some deep nuances? What is in store for the CA Profession with the advent of AI? Can we ignore it, or do we have to embrace it? CA Ninad Karpe answers these and several other questions in an interview with BCAS.

Ninad Karpe is the Founder of Karpe Diem Ventures, which invests in early stage startups in India. He is also the Founder & Partner at 100X.VC, India’s pioneering early-stage VC firm that has invested in 180 startups through the innovative iSAFE note model. Widely known as a “startup whisperer” for his sharp insights and no-nonsense advice, Karpe earlier served as MD & CEO of Aptech Ltd. and as MD of CA Technologies India. Karpe has authored the business strategy book “BOND to BABA” and served as Chairman of CII Western Region (2017-18). Passionate about storytelling and creativity, he has also produced four Marathi plays, seamlessly blending boardroom strategy with the magic of the stage.

Ninad Karpe is a leading technology voice from the CA fraternity, and his insights on the AI revolution impacting the profession carry weight. Considering his time constraints, BCAJ sent him questions and received written answers from him. We hope this interview will enrich readers.

Q. Mr. Karpe, thank you for sparing your valuable time. Let’s begin by discussing the future. How do you see the role of a chartered accountant evolving over the next five years, especially given the rise of AI?

A. Ninad Karpe: Thank you, it’s a pleasure to discuss AI. We are currently witnessing a profound shift in the accounting profession. I don’t believe AI will replace chartered accountants, but I do firmly believe that those who understand and leverage AI will replace those who don’t.

Five years from now, the CA’s role will move away from being execution-heavy and compliance-focused toward something far more strategic and analytical. Much of the routine work, like data entry, reconciliation, standard reporting, etc., will be completely automated. But that only opens up space for CAs to deliver real value through insights, interpretation, and decision-support. Human judgment won’t become irrelevant. In fact, it will become more important, because it will be applied to higher-order problems. The AI-assisted CA will be the norm, not the exception.

Q. That’s a powerful vision. In your view, what’s the most underrated opportunity that AI presents to accounting professionals right now?

A. Ninad Karpe: That would be the ability of AI to make sense of unstructured data.

CAs are used to working with structured ledgers and financial statements. But what about the mountains of unstructured data, like emails, WhatsApp chats, handwritten notes, scanned invoices, or boardroom transcripts? AI can now process, analyse, and even summarise such data. That’s a goldmine.

Most firms are just scratching the surface by using AI for automating data entry or filling out forms. But the real breakthrough lies in using AI for strategic insights like flagging hidden risks, spotting patterns, and even predicting client behaviour. This capacity to derive intelligence from chaos is what can transform how CAs add value.

Q. Which AI tools do you find most effective for day-to-day accounting tasks? And how safe is it to use free versions of these tools?

A. Ninad Karpe: For everyday use, tools like ChatGPT, Microsoft 365 Copilot, and AI-enhanced Google Sheets are quite useful. You can use them for summarising tax policies, preparing checklists, analysing trends, or even drafting emails and reports.

That said, I must stress that data sensitivity is paramount. For anything involving client data, free versions should be avoided. Use enterprise-grade tools that offer robust security, encryption, and compliance controls. Experimentation is great, and free tools are ideal for learning and prototyping. But when it comes to real-world applications, especially involving confidential financial information, always prioritise data privacy.

Q. How should mid-sized firms approach AI adoption? Should they prioritise investing in technology or focus on building talent?

A. Ninad Karpe: Definitely start with talent.

Technology can be bought, but talent needs to be nurtured. I always recommend identifying an “AI Champion” within the firm: someone who is naturally curious, digitally savvy, and willing to experiment. They don’t need to be a coder or a data scientist. But they do need to be open-minded and passionate about exploring new tools.

Start with one small use case, like automating invoice classification or generating audit checklists. Allocate a modest budget, say ₹5–7 lakhs, annually. That’s more than enough for a pilot program that could yield 10x returns in productivity and insights. The key is to build a culture of experimentation. Begin small, learn fast, and scale confidently.

Q. Can AI ever replace human judgment in complex areas like auditing or tax planning?

A. Ninad Karpe: AI can assist, but not replace human judgement.

It can definitely highlight inconsistencies, flag outliers, and run complex simulations. But when it comes to interpretation, especially in areas like tax law or regulatory compliance, human experience is irreplaceable. A CA understands nuance, ethics, and business context, all of which are beyond the capabilities of even the most sophisticated AI models today.

AI might be able to tell you what can be done. But only a human can determine what should be done. The “why” behind a financial recommendation, or the strategic judgment behind audit materiality, still lies in the human domain.

Q. That brings us to a critical concern. What are the biggest risks of placing blind trust in AI?

A. Ninad Karpe: One word. Hallucinations.

AI tools sometimes generate answers that are completely wrong, but sound perfectly plausible. That’s incredibly dangerous in our field, where accuracy is non-negotiable. If those hallucinated results make their way into a tax filing or an audit report, it’s not the AI that is held responsible; it’s the CA who signed off.

Another risk is outdated or irrelevant data. Many AI models are trained on publicly available data, which may not be current or jurisdiction-specific. So yes, AI is a wonderful assistant. But it needs constant supervision, especially in high-stakes accounting environments.

Q. How should firms maintain client trust while increasingly using AI in their advisory processes?

A. Ninad Karpe: Be transparent. Always.

Tell your clients how you’re using AI. Let them know it’s being used to support, not to replace, your professional judgment. For example, explain that the AI tool is helping cross-verify financial entries, scan for anomalies, or summarise reports, but the final call is always yours.

Clients appreciate honesty. When they see that AI enables better, faster, and more accurate service from you, they consider it a value addition. But if they suspect that you’re hiding behind the technology, that’s when trust breaks down. Transparency is not just ethical, it is strategic.

Q. Could you share a real-world example where AI truly made a difference?

A. Ninad Karpe: Absolutely. There’s a retail business I know of that was using an AI-based GST reconciliation tool. This tool flagged a recurring mismatch in filing entries, a pattern that manual checks had missed for months.

Because of that early detection, the company avoided a ₹15 lakh penalty. That one instance alone justified their investment in the tool several times over. It wasn’t just about speed, it was about precision, and about averting a regulatory crisis. That’s the real power of AI, when it turns data into actionable insight.

Q. Before implementing an AI tool, how should a firm assess whether the tool is reliable?

A. Ninad Karpe: Start with internal testing. Feed the AI dummy data and evaluate its outputs. Ask yourself: Do the results make sense? Are they consistent with domain knowledge? More importantly, can the AI explain how it arrived at those conclusions?

Any model that functions like a black box, where you can’t understand or trace the logic, is a red flag. In accounting and auditing, transparency is everything. Reliable AI doesn’t just give you answers, it gives you justifications. That’s what you want to look for.

Q. Is AI adoption creating a divide in the profession between tech-savvy CAs and traditional practitioners?

A. Ninad Karpe: Yes. And that divide is growing. But let me clarify, it’s not an age issue. It’s an attitude issue.

I’ve seen 50-year-old senior partners embrace AI with more enthusiasm than 25-year-old associates. The real difference is mindset. Those who see AI as a threat will struggle. Those who see it as a tool will thrive.

Being tech-fluent is no longer optional. Just like knowing Tally was essential 20 years ago, understanding AI tools is now part of the core skill set. If you’re not learning, you’re lagging.

Q. From a policy standpoint, what framework do you believe India should adopt to ensure ethical AI in finance?

A. Ninad Karpe: We need a national “Finance-AI Code of Conduct.” And this should be co-created by ICAI, regulatory authorities, industry leaders, and clients.

This framework should rest on four key pillars:

  1.  Data Protection: Client information must be encrypted and access-controlled.
  2.  Transparent Algorithms: Firms should understand and disclose the logic behind AI decisions.
  3.  Usage Disclosure: Clients should be aware of how AI tools are used in service delivery.
  4.  Audit Trails: Every AI-assisted output must be traceable and verifiable.

As AI advances, so must our ethical standards. We can’t afford to be reactive – we must be proactive in shaping responsible adoption.

Q. Finally, if you were a young CA starting your career today, how would you prepare for this AI-powered future?

A. Ninad Karpe: I would double down on two things: strong financial acumen and digital fluency.

Master the fundamentals of accounting standards, tax laws, and regulatory frameworks. That’s your core. But alongside that, become proficient with AI tools. Learn to prompt effectively, analyse outputs critically, and integrate these tools into your daily workflow.

Think of yourself as an “augmented accountant”, which is a blend of strategist, analyst, and tech interpreter. That’s not a futuristic fantasy. That’s the reality already unfolding around us. And those who are ready will lead the profession into its most exciting era.

Q. Any final concluding thoughts?

A. Ninad Karpe: As Chartered Accountants, embracing AI isn’t optional — it’s essential. But what sets us apart isn’t the ability to crunch numbers faster — it’s our judgment, ethics, and human context. AI may offer intelligence, but we offer wisdom.

So, the next time your audit file closes at the speed of light, just remember — behind every great AI is a greater CA… quietly debugging the logic, one ledger at a time.

Q. Mr. Karpe, thank you for this insightful and inspiring knowledge sharing. Your perspectives provide a roadmap for firms and professionals navigating the AI transition.

A. Ninad Karpe: Thank you. It’s been a pleasure to connect with BCAS Readers and share these thoughts. The future is not just coming. It is already here. Let’s embrace it.

The AI Revolution in Indian Accounting: A Landscape Analysis and Future Trends

Authors’ note: Reference has been made to certain software/tools/websites in this article only to highlight what is happening in the world in the context of AI. We have no intention of marketing or promoting any of these software/tools/websites.

INTRODUCTION

The world is on the brink of an AI revolution, with artificial intelligence reshaping industries by automating decisions, optimising workflows, and learning from data more effectively than ever before. From healthcare and logistics to finance and education, AI is transforming traditional systems, and accounting is no exception. What was once a field dominated by meticulous, manual work is now being rapidly redefined by AI-driven automation and real-time insights. And it’s not just in big firms or flashy start-ups. From CA offices in Mumbai and Delhi to practitioners in Surat or Bhopal, AI is becoming a part of daily life.

Also, this isn’t just about using a new tool. It’s about learning a new way to think, work, and grow as professionals.

Accounting, by its very nature, is rule-based, repetitive, and highly structured, making it uniquely suited for AI disruption. Tasks like ledger reconciliations, invoice processing, and compliance checks, which once took hours, are now completed in seconds. Modern AI systems can not only automate these functions but also interpret complex data, flag anomalies, and provide strategic insights. India, with initiatives like Digital India, GSTN, and MCA 21 V3, is uniquely poised to lead this AI-driven transformation in accounting.

In the last 10 years, we have already seen the government take giant leaps in digitising various services. With AI, all of this will be taken to a completely different level in the days to come.

With the increasing role of AI in our daily professional and personal lives, we Chartered Accountants need to understand the disruption that is taking place, accept it, and adopt it in our practices. All of us must understand that AI is here to stay and that merely knowing this fact is not enough. We need not only knowledge about AI but also the ability to use it in our daily professional practice.

The other articles in this special issue of BCAJ deal with specific topics covered by their respective authors. In this article, we look at the ways in which AI is impacting accounting and accountants in general and how, as a result, our traditional CA practice areas will also be affected.

INSTITUTIONAL PUSH AND EMERGENCE OF CA GPT

Recognising this shift, the Institute of Chartered Accountants of India (ICAI) has actively supported the integration of AI in accounting. From recommending platforms like Quadratic AI and EasyRecon to supporting the Smart GST AI Summarizer, ICAI is paving the way for AI adoption in practice. A landmark development is the emergence of CA GPT, a generative AI model tailored for the Indian Chartered Accountancy domain. It can interpret tax laws, generate audit documentation, and provide client-friendly summaries, showcasing the transformative potential of AI for professionals. At the same time, like any other AI tool, CA GPT will need to be used with moderation and care. Data privacy of our clients must be protected at all costs. It may also be appropriate and/or necessary to disclose to our client(s) that we have used an AI tool while rendering a particular service to that client.
Further, the ICAI is also conducting certificate courses on AI. It is only a matter of time before other professional bodies follow suit and start offering such courses to their members.

AI-POWERED ACCOUNTING PLATFORMS

Platforms like Zoho Books and TallyPrime are revolutionising financial management. They learn patterns, spot errors, and keep your ledgers neat.

Zoho Books, for example, offers powerful automation features:

  •  Automates recurring tasks such as expense entries, invoice generation, and payment reminders.
  •  Enables custom workflows to update, notify, or validate data, improving day-to-day operational efficiency.
  •  Enhances payment collection through auto-charging mechanisms and smart follow-ups.

TallyPrime is evolving with smart capabilities:
  •  Automates routine processes like invoice generation, bank reconciliation, and compliance reporting.
  •  Supports integration with procurement systems and e-commerce platforms for seamless data flow.
  •  Offers built-in smart assistants and extensibility via TDL (Tally Definition Language) code generation, empowering businesses to tailor workflows efficiently.

One compelling example lies in the reimagining of data entry within TallyPrime. Traditional manual data input, especially from invoices, is being phased out in favour of AI-powered automation. Whether invoices are received digitally or as paper copies, intelligent systems can now extract, validate, and enter data directly into Tally, eliminating human error and saving time.

Other software reads data from bank statements and provides ready-made entries, complete with narrations, that can be imported into Tally. An edit facility is, of course, available before the actual import, and the software’s accuracy improves as it learns from the edits you make. Thus, mundane and repetitive tasks like accounting are slowly but steadily being taken over by intuitive AI tools.

AI is also transforming compliance. Tools for e-invoicing and GST reconciliation now automate invoice validation, data matching, and error detection, minimising compliance risks and enhancing accuracy. These systems are not just making tax filing easier; they are fundamentally redefining the role of financial professionals by shifting their focus from data handling to strategic decision-making.

AI-DRIVEN ANALYTICS AND SaaS INNOVATIONS

Beyond automation, AI helps us not only to predict what might happen in the future but also to suggest the best actions to take.

These tools help businesses anticipate cash flow needs, detect fraud, and make proactive financial decisions. SaaS-based platforms like RazorpayX, Credgenics, and ClearTax are pushing the boundaries even further:

  •  RazorpayX offers integrations with Zoho Books and Tally, enabling seamless syncing of accounting data.
  •  ClearTax has launched AI-assisted tax filing tools that provide real-time insights and automate compliance.
  •  Credgenics leverages AI for credit risk analysis and intelligent collections, streamlining financial operations.

GOVERNMENT AND REGULATORY DEVELOPMENTS

The CBDT has embraced data analytics to enhance tax enforcement and compliance. A notable initiative includes a comprehensive review of approximately 40,000 taxpayers to identify discrepancies in Tax Deducted at Source (TDS) filings for the financial years 2022-23 and 2023-24. This effort involves a detailed 16-step strategy leveraging data analytics to pinpoint irregularities and ensure tax compliance.

The MCA’s rollout of MCA21 Version 3.0 marks a significant step towards leveraging AI for corporate compliance and fraud detection. This upgraded portal incorporates advanced features such as e-Adjudication, e-Consultation, and Compliance Management, all aimed at strengthening enforcement and promoting ease of doing business. By integrating AI and machine learning capabilities, MCA21 Version 3.0 enhances the ministry’s ability to detect anomalies, monitor compliance, and facilitate real-time data analysis.

THE FUTURE OF AI IN ACCOUNTING

AI-Powered Virtual CFOs are reshaping SME finance by offering intelligent financial planning, budgeting, cash flow optimisation, and real-time forecasting—services once exclusive to large firms with full-time teams. Integrated with platforms like Zoho Books and TallyPrime, they provide live dashboards, alerts, and compliance updates, helping Indian SMEs make informed decisions at a fraction of the traditional cost.

Building on this, AI and Blockchain-enabled Smart Contracts are transforming financial transactions and audits. These contracts self-execute terms, reduce errors and fraud, and, with AI, can learn from past data, detect anomalies, and adapt dynamically—streamlining compliance and taxation workflows.

Meanwhile, predictive and prescriptive analytics are enabling precise forecasting of cash flows, tax risks, and fraud while recommending strategic actions. This shift is moving accountants from record-keepers to real-time advisors.

Finally, AI-powered audit tools like MindBridge AI and Deloitte’s Argus are revolutionising risk detection, using machine learning to uncover anomalies and fraud, fundamentally changing how audits are conducted.

IMPLICATIONS FOR CHARTERED ACCOUNTANTS

As Artificial Intelligence (AI) continues to automate repetitive tasks such as data entry, reconciliations, and compliance checks, the role of Chartered Accountants (CAs) is undergoing a fundamental transformation. Traditional responsibilities are increasingly being handled by machines, compelling CAs to evolve from transactional number crunchers to strategic, tech-savvy professionals. Every traditional practice area of a CA is already and would be further impacted by the use of AI.

GST and compliances made easier

Whether it’s checking for ITC mismatches or sending reminders for upcoming filings, AI tools from platforms like ClearTax have become silent assistants for many mid-sized firms—even in smaller cities like Indore and Pune.

Audits are getting an upgrade

Instead of relying only on sampling, tools like MindBridge scan all the data, flagging unusual entries and helping auditors focus where it really matters. It’s like having a microscope for your audit file.

Tax filing with a twist

Some platforms now auto-read your Form 26AS, AIS, and bank statements—and even suggest what deductions might apply. And yes, some can draft replies to scrutiny notices based on past cases. Scary or smart? Maybe both.

Smarter client conversations

Firms are building chatbots trained on their own advice and old case files. These bots answer common queries so that the team can focus on complex, value-added work.

In these very interesting and challenging times, to remain relevant, CAs must upskill in emerging areas such as Python, data analytics, and visualisation tools like Power BI, while also developing a working knowledge of AI and machine learning concepts.

This technological shift brings with it a new set of ethical challenges, including concerns around data privacy, algorithmic bias, and accountability for decisions made by AI systems. As a result, CAs will not only need to navigate these complexities but also advise clients on the responsible use of AI. In this regard, readers may recall the recent news item about an ITAT order being recalled because it was passed on the basis of submissions made by the DR, who had relied on AI tools that produced case laws that never existed. Anyone who relies on AI must take proper care to recheck the facts and figures and verify whether what the AI tool suggests is factually correct.

Moreover, the profession is seeing the rise of new hybrid roles such as AI implementation consultants, forensic auditors using machine learning, and cyber risk advisors that combine financial expertise with technological fluency. Client expectations are also changing, with a growing demand for real-time insights, predictive analytics, and strategic financial advice. In this evolving landscape, CAs must adopt a forward-thinking mindset, repositioning themselves as financial strategists and trusted advisors who can bridge the gap between finance and technology.

Rise of Strategic Roles

CAs are moving from being ‘compliance experts’ to ‘financial interpreters’—drawing insights, foreseeing risks, and helping clients navigate financial futures rather than just recording the past.

Faster Turnarounds

With AI-enabled data entry and verification, turnaround time is dropping. Clients now expect real-time insights, not month-end reconciliations.

Democratisation of Expertise

AI tools are empowering even solo practitioners in small towns to offer insights once limited to Big 4 firms.

Cultural Shift: How Indian CAs Are Responding

The adoption of AI is uneven—but growing.

  •  Gen Z Articles and Young Partners are embracing tools like ChatGPT, Notion AI, Python scripts, and Airtable automation to optimise their workflows.
  •  Senior Partners are cautiously optimistic. While some see it as an opportunity, others worry about quality control, liability, and client trust.
  •  Training and ICAI Curriculum need to evolve faster. AI literacy must now be as foundational as Ind AS.

Interestingly, the firms leading this revolution are those that build cross-functional teams—pairing accountants with data scientists or assigning articles to innovation pods.

FUTURE TRENDS: WHAT THE NEXT 5 YEARS MAY HOLD

The AI wave is not cresting—it is still rising. Here’s what the future might look like:

1. Real-Time AI-Powered Audits

Blockchains and integrated ERP-AI models could enable continuous auditing—where anomalies are flagged the moment they occur.

2. Client-facing AI Tax Assistants

Imagine a WhatsApp bot that helps a small trader plan taxes, track invoices, and even file returns—all trained by a CA firm.

3. Algorithm Assurance Services

As businesses start relying on AI for decision-making, they will need CAs to audit the AI itself—ensuring it is fair, compliant, and explainable.

4. AI Co-pilots in Litigation & Representation

Drafting responses to show-cause notices or appeal memos with AI support will soon become standard.

5. Compliance-as-a-Service

Entire back offices for SMEs and start-ups may be run on AI-backed systems, with CAs providing periodic strategic oversight.

Ethical and Regulatory Considerations

This transformation must be accompanied by responsibility.

  •  Who is liable if AI makes a mistake?
  •  Should clients be informed when AI is used in their work?
  •  What regulatory framework is needed for AI audit tools?

As guardians of ethical practice, CAs must shape—not just follow—this debate. The ICAI should lead with a Code of Conduct for AI usage in the profession.

Conclusion

The AI revolution in Indian accounting is not a distant prospect; it is unfolding in real time. While automation is changing the operational core of accounting, the real shift is strategic: from compliance to insight, from recording history to predicting the future. CAs who embrace this shift and reinvent themselves will not just remain relevant; they will lead.

AI is not the end of our profession. It is the rebirth of its most powerful version yet. This is not about man versus machine. It is about a man with a machine, serving better, faster, and with deeper insight.

Firms that embrace AI will not just survive—they will lead. CAs who upskill and reimagine their roles will not be replaced—they will redefine the profession.

And as we stand here, at this incredible intersection of tradition and transformation, we must ask ourselves:

“What kind of CA do I want to be by 2030?”


1 Assisted by Chaitanya Vora and Pranav Nargale, Articled Students

LLMs in Audit – A Double-Edged Algorithm

INTRODUCTION

The exuberance associated with artificial intelligence (“AI”) has seamlessly extended to the practice of auditing. Large Language Models (“LLMs”) are heralded as a transformative solution due to their apparent ability to infer from and reason over both structured and unstructured data. Traditional auditing applications, constrained by rules and structures, are inherently rigid and complex, requiring intricate coding skills to derive substantive insights. In contrast, LLMs appear almost sentient in their ability to interpret simple natural language instructions. Their ability to perform various tasks, from complex data analysis to code generation, makes them a versatile, unified tool. A simple instruction can now accomplish what previously required multiple applications and data analysis expertise.

This apparent ease of use and accessibility has made LLMs attractive to auditors seeking efficiency and potentially offers smaller audit firms an economical means to bridge their technology gap with larger competitors. As such, it is not surprising that most auditors intend to use LLMs1. However, the use of LLMs for audits may be fraught with risks, particularly when they are used in relation to matters that involve professional judgement. This article seeks to explore these issues.


1 “Audit Survey 2024”, Thomson Reuters Institute, https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2024/06/2024-Audit-Survey.pdf, Last Accessed on April 7, 2025.

BEYOND RULES: THE PROBABILISTIC NATURE OF LLMs

AI encompasses a wide range of technologies, including robotic process automation and machine learning (“RPA/ML”), which auditors have long leveraged. However, LLMs represent a fundamental shift in this landscape. Unlike RPA/ML systems, which are deterministic and bound by rules programmed by humans, LLMs are probabilistic – a feature enabling them to generate unique content. To use an analogy, RPA/ML is comparable to agreed-upon procedures where specific predetermined steps are undertaken within a tightly structured framework. LLMs function more like a statutory audit by operating within a broad framework with significant discretion in execution.

Unlike human auditors, who rely on professional judgment developed through education, experience, and reasoning, LLMs operate fundamentally as sophisticated pattern recognition systems. At their core, LLMs are probabilistic prediction engines that determine the most statistically likely response based on patterns observed in their training data rather than genuine understanding or reasoning.

When an auditor prompts an LLM with a question or instruction, it calculates probability distributions across its vocabulary, essentially “guessing” which words should follow based on the observed statistical patterns. This process fundamentally differs from human cognitive thinking, which involves causal reasoning, domain expertise, professional skepticism, and ethical judgment. Their ability to produce coherent text arises from identifying and encoding textual patterns as numerical “weights,” parameters reflecting statistical relationships among words, sentences, and broader textual contexts. Think of a parameter as something that demonstrates a connection between two facets of a word, concept, or idea. Recent LLMs have hundreds of billions of parameters. For example, the DeepSeek V3 model has 671 billion parameters2.
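To make the idea of a “probabilistic prediction engine” concrete, here is a toy sketch in Python. The four candidate words and their probabilities are invented purely for illustration; a real model computes such a distribution over tens of thousands of tokens using its billions of learned parameters.

```python
import random

# Toy next-token distribution a model might assign after a prompt such as
# "The audit opinion issued was ..." -- words and probabilities are invented.
next_token_probs = {
    "unqualified": 0.55,
    "qualified": 0.25,
    "adverse": 0.12,
    "disclaimed": 0.08,
}

def sample_next_token(probs):
    """Pick the next token by sampling from the probability distribution."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different continuations on different runs --
# this is the probabilistic behaviour described above.
for _ in range(3):
    print(sample_next_token(next_token_probs))
```

Because the continuation is sampled rather than looked up from a rule, the same prompt can produce different output on different runs, a point that becomes important later in this article.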


 2 “DeepSeek explained: Everything you need to know”, February 6, 2025, 
https://www.techtarget.com/whatis/feature/DeepSeek-explained-Everything-you-need-to-know, 
Last Accessed on April 7, 2025.

LLMs derive their knowledge from the data on which they have been trained. General purpose LLMs like ChatGPT and DeepSeek are trained on generalised information (primarily sourced from the Internet) and possess broad knowledge across various topics. Specialised LLMs, in contrast, are trained on specific data sets, making them more reliable in those particular domains. For instance, LLMs trained on legal material demonstrate greater accuracy on legal topics compared to general-purpose models like ChatGPT3. This distinction holds critical implications for auditing, where domain-specific knowledge of accounting standards, regulatory requirements, and industry practices is essential for practical professional judgement.


3  “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries”, May 23, 2024, 
https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries, 
Last Accessed on April 8, 2025

CONVERGENCE OF LLMs AND AUDIT PROCEDURES

The foundation of auditing rests on the pillars of professional judgement4 and skepticism5, where auditors are required to apply requisite skills and knowledge in decisions related to an audit while being wary of factors that could lead to misstatement. Standards on Auditing (“SA”) mandate the application of these principles throughout the audit process6, with particular emphasis on critical stages such as risk assessment7, determining materiality8 and conducting substantive audit procedures9. Contrary to the widespread notion that auditors primarily focus on financial metrics, the SAs require consideration of non-financial elements, such as governance structures, economic conditions, enterprise risks, and internal controls, as may be relevant while applying professional judgement.


4  Paragraph 13(k) of SA 200 - Overall Objectives of the Independent Auditor and 
the Conduct of an Audit in Accordance with Standards on Auditing (“SA 200”)

5 Paragraph 13(j) of SA 200 - Overall Objectives of the Independent Auditor and 
the Conduct of an Audit in Accordance with Standards on Auditing (“SA 200”)

6 Paragraph 15 and 16 of SA 200 - Overall Objectives of the Independent Auditor and 
the Conduct of an Audit in Accordance with Standards on Auditing (“SA 200”)

7 Paragraph A1 of SA 315 - Identifying and Assessing the Risks of Material Misstatement 
Through Understanding the Entity and its Environment (“SA 315”)

8 Paragraph 4 read with Paragraph A2 of SA 320 – Materiality in Planning
 and Performing an Audit (“SA 320”)

9 Paragraph 4 SA 520 – Analytical Procedures (“SA 520”)

LLMs seem attractive in this context, as they can process and analyse numeric and textual data, potentially enabling auditors to adopt a more rigorous and comprehensive approach. ICAI-led initiatives10 and use cases hosted on the ICAI website suggest that LLMs can be utilised for tasks such as risk assessments11, formulating audit procedures12, analytical procedures, fraud detection, and reporting (“LLM Use Cases”), where professional judgment and skepticism are crucial.


10 “Inviting AI Research Paper Submission at AI Innovation Summit 2025,”
 https://ai.icai.org/ais2025/research_paper.php, Last Accessed on April 7, 2025.

11 “Grand Finale AI Hackathon (S1) UC-5 | AI in Auditing”, September 23, 2024,
 https://ai.icai.org/video_details.php?id=348, Last Accessed on April 7, 2025.

12 “Enhancing Auditing Through AI: A Comprehensive Use Case of AI, Audit and
 Governance with ChatGPT Plus (4o)”, https://ai.icai.org/usecases_details.php?id=4, Last Accessed on April 7, 2025.

CONFIDENTIALITY IN LLMs: A MIRAGE

However, an LLM’s output that is not informed by confidential and/or unpublished information (“Classified Data”) risks being irrelevant, as SAs effectively require auditors to consider such Classified Data. For instance, decisions relating to risk assessment, materiality, and corresponding audit procedures must be made in conjunction with analysing unpublished financials. Yet providing Classified Data to LLMs could potentially breach the auditor’s confidentiality obligations under the ICAI’s Code of Ethics13 and SEBI’s Prohibition of Insider Trading Regulations14 (“PIT”).


13  Refer Section 100.4(d) of ICAI’s Code of Ethics.

14  Refer Clause 3(1) of SEBI’s Prohibition of Insider Trading Regulations, 2015,

This risk is accentuated as the Classified Data may be accessible to other users by design15 (i.e., used by the LLM to train itself) or inadvertently16 (e.g., through data breaches), thereby broadening the exposure. Notably, Samsung has banned the use of LLMs after its employees uploaded sensitive data17. While these risks can be mitigated by instituting curated access controls or using a secure offline LLM, such solutions are costly and complex18, and would be infeasible for smaller audit firms, who may default to general-purpose LLMs like ChatGPT.


15 “How your data is used to improve model performance”, Open AI, https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance, Last Accessed on April 8, 2025.

16 “Hundreds of LLM Servers Expose Corporate, Health & Other Online Data”, August 28, 2024, https://www.darkreading.com/application-security/hundreds-of-llm-servers-expose-corporate-health-and-other-online-data, Last Accessed on April 4, 2025.

17 "Samsung bans staff’s AI use after spotting ChatGPT data leak”, November 21, 2024,
 https://www.straitstimes.com/asia/east-asia/samsung-bans-staff-s-ai-use-after-spotting-chatgpt-data-leak, 
Last Accessed on April 8, 2025

18 “Should You Use a Local LLM? 9 Pros and Cons”, October 24, 2023, 
https://www.makeuseof.com/should-you-use-local-llms/, Last Accessed on April 8, 2025

Consequently, auditors face an untenable choice: rely on generic and formulaic LLM outputs that exclude critical Classified Data or risk violating professional and regulatory standards by sharing Classified Data with LLMs.

EXPLAINING LLMs’ DECISIONS: A SISYPHEAN TASK

Assuming an auditor has instituted sufficient guardrails to negate the risk of leakage of Classified Data, LLMs pose another challenge. With their billions of parameters, LLMs lack explainability. Unlike traditional audit methodologies, where each step can be documented and justified, it is impossible to analyse the computational steps of an LLM and, therefore, to understand the underlying correlation, accuracy, and relevance between a prompt and the output. For example, an LLM cannot explain why it recommended a particular work procedure or course of action. While one can gauge the logical accuracy of a response through one’s own knowledge and experience, this approach becomes infeasible for intricate problems that involve consideration of multiple complex factors.

SA 230 – Audit Documentation underscores the importance of articulating the basis for professional judgment, which requires auditors to document the rationale and basis for significant audit matters19.


19 Refer Paragraph 8(c.) of SA 230 – Audit Documentation

Their probabilistic nature compounds this issue. LLMs can give different responses to the same instruction, exhibit bias, and their propensity to “hallucinate,” i.e., generate incorrect responses, is well documented. To illustrate these fallacies in an audit context, we queried20 ICAI’s AASB GPT regarding an auditor’s obligations when informed about an established fraud exceeding ₹1 crore in a “limited company”. While superficially accurate, the response contained critical errors:

  •  It universally mandated reporting the fraud to the Central Government under Section 143(12) of the Companies Act, 2013, directly contradicting ICAI’s guidance21 that reporting obligations do not arise when management identifies the fraud. This recommendation would only be correct for listed companies (per NFRA’s 2023 circular22), but the query did not specify the company type. By failing to reference the NFRA circular while recommending universal reporting, AASB GPT effectively contradicted ICAI’s official position.
  •  The response incorrectly cited the “Guidance Note on Audit of Banks (2025 Edition)” as the source document.

This combination of limited explainability and output inconsistency creates a fundamental conflict with audit standards that demand transparency, consistency, and justifiable professional judgment. Both ICAI23 and providers of general-purpose LLMs like ChatGPT24 explicitly disclaim any responsibility for the accuracy or correctness of the LLM’s output or the consequences arising therefrom, underscoring this technology’s inherent frailty. As such, attributing an audit error to an LLM would amplify the grounds for professional negligence, as this would be akin to a surgeon blaming their scalpel for a surgical error, or more precisely, blaming an untested experimental medical device that came with explicit warnings against relying on it for critical procedures. The auditor’s decision to delegate professional judgment to a technology explicitly designed without accountability mechanisms represents not merely an error in professional practice but a conscious circumvention of established standards designed to protect the integrity of the audit process.


20 https://chatgpt.com/g/g-QpYe5htDG-icai-aasb-gpt/c/67f91b6b-82bc-8008-abbd-b82ea27a8a43

21 Paragraph V of Part A of ICAI’s Guidance Note on Reporting on Fraud 
under Section 143(12) of the Companies Act, 2013 (Revised 2016),
 https://resource.cdn.icai.org/41297aasb-gn-fraud-revised.pdf,

22  NFRA’s circular dated June 26, 2023, 
https://cdnbbsr.s3waas.gov.in/s3e2ad76f2326fbc6b56a45a56c59fafdb/uploads/2023/06/2023062673.pdf,
 Last Accessed on April 8, 2025

23 Disclaimer on ICAI’s GPT, https://ai.icai.org/cagpt/gptlist.php, 
Last Accessed on April 8, 2025.

24 Open AI – Terms of Use, December 11, 2024, https://openai.com/policies/row-terms-of-use/, 
Last Accessed on April 8, 2025.

LLM DEPENDENCY – A SLIPPERY SLOPE

While technology has ushered in a range of benefits, overuse of and overreliance on technology are common outcomes, leading to issues such as a decline in cognitive abilities25. This cognitive offloading, where we increasingly rely on technology to perform mental tasks, has become so pervasive that many can no longer function without it. Consider how few people today can recall phone numbers from memory, having delegated this cognitive function entirely to their devices. The dependency builds gradually and becomes unconsciously self-reinforcing.


25 “The impact of digital technology, 
social media, and artificial intelligence on cognitive functions: 
a review”. November 24, 2023, 
https://www.frontiersin.org/journals/cognition/articles/10.3389/fcogn.2023.1203077/full, 
Last Accessed on April 8, 2025.

The risk of over-reliance on LLMs is significantly higher, as humans may subconsciously defer to them. Compared to conventional technology tools based on data analytics or RPA/ML, which are bound by rules and need human oversight, LLMs provide a comprehensive solution for nearly any query or task through a simple interface. This ease of use and all-round functionality amplify the risk of cognitive offloading, and research supports this assertion26. A study conducted across different age groups suggests that increased AI usage is correlated with a decline in critical thinking skills, and that this decline is markedly greater in younger participants. In a recent case, a bench of the Income Tax Appellate Tribunal passed an order based on cases that did not exist, suggesting that the underlying submissions were generated using ChatGPT27.


26 “Increased AI use linked to eroding critical thinking skills”, January 13, 2025,
https://phys.org/news/2025-01-ai-linked-eroding-critical-skills.html, Last Accessed on April 8, 2025.

27 “Bengaluru Tax Tribunal issued order based on ChatGPT research on cases that didn’t exist, 
recalls after finding out”, February 26, 2025, 
https://www.opindia.com/2025/02/bengaluru-tax-tribunal-issued-order-based-on-chatgpt-research-on-cases-that-didnt-exist/, 
Last Accessed on April 8, 2025.

While LLMs project an aura of omniscience, their responses, particularly from general-purpose models like ChatGPT, are inherently generalised answers derived primarily from publicly available data. This statistical approach fundamentally differs from the targeted domain expertise that SAs require auditors to apply28. For instance, an auditor evaluating a pharmaceutical company’s research and development capitalisation policy needs specialised knowledge of industry practices and applicable accounting standards. LLMs may generate plausible-sounding responses that miss crucial industry-specific considerations or regulatory nuances that would be evident to a seasoned professional.


28 Refer Paragraph A24 of SA 200 - Overall Objectives of the Independent Auditor 
and the Conduct of an Audit in Accordance with Standards on Auditing

This has profound implications as auditors may become complacent and overly dependent on LLM-generated insights without applying their professional judgment. When auditors rely on an LLM’s output without understanding its derivation, they effectively delegate their professional judgment to an opaque system that cannot be interrogated about its methodology or assumptions. This delegation potentially undermines the very essence of SAs. In other words, blindly relying on an LLM’s output without critically assessing its relevance, reliability, and appropriateness for the specific audit context would be a failure to exercise professional judgment.

CONCLUSION

It is undisputed that LLMs can enhance and supplement auditing. Their demonstrated use across different specialised domains, such as finance and medicine, suggests that LLMs can be equally deployed for auditing. However, the emergence of LLMs in auditing represents a double-edged sword that demands careful consideration.

While they offer unprecedented capabilities in processing diverse data, their usage in this context may be fundamentally inconsistent with core auditing principles. The inability to incorporate Classified Data without confidentiality risks, together with their inherent lack of explainability and consistency, creates significant tensions with professional standards requiring documented, transparent judgment. Auditors who over-rely on LLMs risk compromising audit quality and potentially breaching their professional obligations under SAs and regulatory frameworks. The distinction between leveraging LLMs as supplementary tools and delegating professional judgment to them will likely become a critical benchmark in determining professional negligence.

While regulators strive to define rules and guidelines on this vexing issue, maintaining and demonstrating the primacy of human judgement, particularly at critical junctures requiring skepticism and professional expertise, is paramount. Auditors must approach LLM adoption with clear guardrails that preserve their ultimate judgement, documentation, and compliance with SAs.

Challenges and Considerations of AI Adoption (Issues in Ethics, Privacy, Dependency)

AI tools are gradually finding a place in audits, tax work, and compliance reviews. Their appeal lies in speed and automation — but the risks, if ignored, can be operationally and reputationally damaging. This article examines the real-world challenges of AI in professional practice and argues for a disciplined, evidence-based adoption strategy — emphasising human supervision and strong procedural checks.

In June 2024, the ICAI published the results of an online survey on the use of AI within CA firms. Results showed that adoption is still limited, with most respondents expressing concerns about the cost of tools, unclear benefits, and a lack of AI knowledge. The response trend clearly indicates that the profession remains cautious—not because of resistance to technology, but due to practical concerns about reliability and control.

Consider this: an AI tool can scan and index over 1,000 judicial tax rulings in under five minutes. But if it confidently misapplies a case law and uses it in the wrong context for a client matter, the repercussions are real and potentially damaging. This is not just a technical flaw—it’s a professional liability.

The idea isn’t to avoid using AI, but to use it with clear limits and constant human oversight. It shouldn’t be a trial-and-error approach—AI must be handled like any high-risk tool, with proper checks and controls in place.

A January 2025 study by Wolters Kluwer1, based on insights from 2,300 global participants, revealed that:

  •  57% of accounting professionals view AI advancements as a significant industry influencer.
  •  27% of firms have integrated generative AI into their workflows, with an additional 22% planning adoption within the next 12 to 18 months.
  •  Only 25% have established AI policies, and concerns about data security, accuracy, and implementation costs persist.

1 https://www.theaccountant-online.com/news/wolters-kluwer-releases-study/?cf-view

The survey indicates that although there is significant global interest in AI technology, its adoption remains limited, with most taking a cautious approach.

Against this backdrop, the article delves into the primary ethical, privacy, and dependency challenges of AI—and highlights what every forward-thinking CA should consider before embracing it.

HALLUCINATION CHALLENGE

AI hallucinations—where AI tools produce seemingly credible but false information—present serious risks in our work. These errors can lead to incorrect financial analyses, misguided tax advice, and flawed audit conclusions.

Case Study: ITAT Bengaluru’s Erroneous Order2

In December 2024, the Bengaluru bench of the Income Tax Appellate Tribunal (ITAT) issued an order in the case of Buckeye Trust vs. PCIT, which cited three Supreme Court judgments and one Madras High Court ruling. Subsequent scrutiny revealed that these citations / judgements were non-existent, raising concerns about the possible use of AI tools like ChatGPT in drafting the order. The ITAT revoked the order within a week, citing “inadvertent errors,” and scheduled a fresh hearing.

This incident highlights the biggest challenge of AI adoption: accuracy and reliability. AI tools can hallucinate information, generating details or facts that seem convincing but are entirely fabricated.

However, despite these inherent limitations, several practical approaches can significantly reduce hallucination: for example, using well-crafted prompts, or connecting the model to verified external information sources, i.e., Retrieval-Augmented Generation (RAG). Additionally, custom-trained models can be developed for specific domains to improve performance in specialised areas.


2 https://counselvise.com/blogs/ai-hallucination-itat-buckeye-trust
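To illustrate the RAG idea mentioned above, here is a minimal sketch. The `knowledge_base`, the naive keyword retrieval, and the `call_llm` callable are placeholders; a production system would use a vetted document store, embedding-based search, and the firm’s chosen LLM API.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch: retrieve passages from a
# verified source first, then instruct the model to answer only from them.

knowledge_base = [
    {"id": "gn-fraud-2016", "text": "Guidance Note on Reporting on Fraud under Section 143(12)."},
    {"id": "circ-nfra-2023", "text": "NFRA circular dated June 26, 2023 on fraud reporting."},
]

def retrieve(query, top_k=2):
    """Naive keyword-overlap retrieval; real systems use embeddings or search indexes."""
    words = query.lower().split()
    scored = [(sum(w in doc["text"].lower() for w in words), doc) for doc in knowledge_base]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def answer_with_rag(query, call_llm):
    """Build a grounded prompt and delegate the final wording to the LLM."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc['id']}] {doc['text']}" for doc in sources)
    prompt = (
        "Answer ONLY from the sources below and cite their ids. "
        "If the sources do not cover the question, say you cannot answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

Grounding answers in retrieved, citable sources does not eliminate hallucination, but it makes the output verifiable against documents the firm itself controls.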

ACCURACY CHALLENGE

When we use traditional accounting or tax software, the results are predictable. The same input always gives the same output—this is called a deterministic system. For example, if you enter income and deductions into a trusted tax filing software, it will compute the same tax every single time.

But AI systems don’t work like that. Most large language models (LLMs), like those used in AI assistants, are probabilistic. This means the output can vary slightly each time, even for the same question, depending on how it interprets the context. This makes it difficult to guarantee accuracy—especially for tasks like tax calculations, legal interpretations, or audit reporting.
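The difference can be seen in a few lines of Python. The threshold and rate below are placeholders, not actual tax slabs; the point is only that the function is deterministic, while sampled LLM output is not.

```python
def compute_tax(taxable_income):
    """Deterministic: the same input always produces the same output.
    The threshold and rate are illustrative placeholders, not real slabs."""
    return round(taxable_income * 0.10, 2) if taxable_income > 300_000 else 0.0

# Two calls with the same input can never disagree.
assert compute_tax(500_000) == compute_tax(500_000)

# An LLM queried twice with the same prompt (at a non-zero sampling
# "temperature") draws each response from a probability distribution, so the
# two responses may differ in wording -- and occasionally in substance.
```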

So, how do we know if an AI model is reliable enough to be used in CA practice?

How AI Accuracy is Measured: Benchmarking

AI benchmarking is like a test or exam for AI models. Experts feed the model a large set of carefully designed questions and see how well it performs. These tests help us compare different models and understand where they are strong—or weak.

One of the most relevant benchmarks for our profession is Tax Eval V2, released in May 2025. It includes over 1,500 questions prepared by tax and law experts, covering:

  •  Tax compliance,
  •  Case law reasoning,
  •  Critical thinking in tax scenarios,
  •  Interpretation of tax statutes.

Each model is scored based on whether the final answer is correct and whether the reasoning steps are sound. Here’s how the top AI models performed:

These are top-tier models—and yet, they still get about 20% of tax questions wrong. That’s not acceptable if you’re relying on them for filings, opinions, or representations before authorities.

Source: https://www.vals.ai/benchmarks/tax_eval_v2-05-30-2025
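For readers curious about what scoring a model on a benchmark involves mechanically, here is a simplified sketch: run the model over a question set and compute exact-match accuracy. The two sample items and the `ask_model` callable are placeholders, not questions from Tax Eval V2, and real benchmarks also grade the reasoning steps, not just the final answer.

```python
def evaluate(benchmark, ask_model):
    """Return the fraction of benchmark questions answered correctly.
    `ask_model` is any callable mapping a question string to an answer string."""
    correct = 0
    for item in benchmark:
        predicted = ask_model(item["question"]).strip().lower()
        if predicted == item["expected"].strip().lower():
            correct += 1
    return correct / len(benchmark)

# Placeholder items purely for illustration.
sample_benchmark = [
    {"question": "Minimum holding period (in months) for listed equity shares "
                 "to qualify as long-term capital assets?", "expected": "12"},
    {"question": "Is indexation available under Section 112A? (yes/no)", "expected": "no"},
]

# Example usage with a trivial stand-in "model":
print(evaluate(sample_benchmark, lambda q: "12" if "months" in q else "no"))  # 1.0
```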

How AI Stacks Up Against Humans

Another interesting study3 compared AI tools with human lawyers across seven real-world legal tasks. The findings help us understand where AI shines and where it still struggles.

This tells us something important: AI is very good at fast, structured tasks. But when precision, legal nuance, or contextual interpretation is needed, human judgment still outperforms.


3 https://www.vals.ai/vlair

What This Means for CAs

Accuracy is not optional in our profession—it’s mandatory. Whether it’s for audit work, return filings, or drafting submissions, we are responsible for what goes out with our name on it. If you’re using AI, ask:

  •  Has this tool been tested for CA use cases?
  •  Is the output verifiable and consistent? What is the source of output?
  •  Is a CA reviewing the output before it’s final?

AI can assist—but not replace—your professional judgment. Not until it can consistently hit 100% accuracy. And as of now, no model comes close.

ETHICAL CHALLENGES

Transparency and Explainability

AI systems must earn the trust of professionals. That starts with transparency—knowing what data the model learned from and how it arrives at a particular conclusion. But transparency alone isn’t enough. What we also need is explainability: the ability of the AI to justify its answer in a way that’s understandable and logical.

If an AI model behaves like a black box—giving you an answer without showing how it got there—it becomes impossible to audit or rely on it. This is especially dangerous in regulated fields like tax, audit, and financial reporting.

Example: Capital Gains Query

Query:

What is the rate of capital gains tax on equity shares purchased 15 months ago and sold on NSE?

Answer A (Black Box Response):

“The rate is 12.5% on gains exceeding ₹1,25,000 per financial year.”

No section is cited. No explanation of other relevant conditions.

Answer B (Explainable Response):

“As per the Income Tax Act, listed equity shares held for more than 12 months qualify as long-term capital assets. Since these were held for 15 months, gains are long-term. Under Section 112A, if STT was paid on both purchase and sale, LTCG on such equity shares is taxed at 12.5% for transfers on or after July 23, 2024 (Finance Act 2024). LTCG up to ₹1,25,000 is exempt. Indexation under Section 48 is not allowed. Surcharge and 4% HEC are additionally applicable.”

In Answer A, the LLM behaved as a black box, i.e., it did not disclose its sources or provide an explanation for the output. Answer B, on the other hand, gave a step-by-step explanation and cited the relevant sections. So, Answer B passes the test of explainable AI.

This is the kind of explanation a CA would expect. Step-by-step reasoning, proper references, conditionality, and full scope of implications.
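To make Answer B’s reasoning concrete, here is a small worked computation under the rules it states: long-term gains on listed equity shares (STT paid) taxed at 12.5% beyond the ₹1,25,000 exemption, plus 4% health and education cess, with surcharge ignored for simplicity. The purchase and sale figures are invented for illustration.

```python
def ltcg_tax_112a(sale_value, cost, exemption=125_000, rate=0.125, cess=0.04):
    """Tax on long-term capital gains on listed equity shares (STT paid),
    per the rules described in Answer B; surcharge ignored for simplicity."""
    gain = sale_value - cost
    taxable_gain = max(gain - exemption, 0)   # no indexation under Section 112A
    basic_tax = taxable_gain * rate           # 12.5% beyond the exempt threshold
    return round(basic_tax * (1 + cess), 2)   # add 4% health and education cess

# Illustrative figures: shares bought for Rs. 8,00,000 and sold 15 months later
# for Rs. 11,00,000 -> gain Rs. 3,00,000, taxable Rs. 1,75,000, tax Rs. 22,750.
print(ltcg_tax_112a(sale_value=1_100_000, cost=800_000))  # 22750.0
```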

Bias and Fairness in Algorithms

AI bias is not always visible—but it can have real-world consequences. If the data used to train an AI model reflects past discrimination or blind spots, the model will carry that forward. This is especially dangerous when used for decisions involving people—like fraud detection or internal audit flags.

Example

A company created an AI system to detect fraudulent expense claims by employees. The model was trained on past instances of such claims.

Same expense. Different scores. Why?

The model had learned from a biased audit history, where scrutiny was disproportionately applied to junior employees from Tier 2/3 cities. The result: the AI repeated and amplified that bias.

Such systemic errors aren’t just unfair; they can damage employee trust, expose firms to HR and legal risk, and weaken the credibility of internal control systems.

Professional Integrity

Professionals are trusted and respected for their high standards of accountability, independence and judgement. This is the result of their intensive training and knowledge. However, when AI tools are used for generating advice, interpreting laws or drafting legal submissions without sufficient oversight, there is a risk of diluting this trust by delegating the core tasks to machines.

A Utah lawyer was sanctioned by the state Court of Appeals after filing a legal brief containing false case citations that were fabricated by ChatGPT. The brief had been drafted by one of his law clerks, but the lawyer took full responsibility, acknowledging that he had neglected his duty to verify the AI-generated research before submission. This serves as a reminder that professional accountability in law remains human.5


5 https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-chatgpt-ai-court-brief

PRIVACY CHALLENGES

Privacy with AI tools is a major worry, especially for jobs that deal with private client information, financial records, or legal documents. When you use AI, your sensitive data often gets processed or stored on internet servers, which creates risks of hackers accessing it, misuse, or information leaks. Many AI tools—particularly free ones—might keep and use your data to improve their systems unless you specifically tell them not to. Organisations need to make sure any AI tool they use follows privacy laws like GDPR in Europe, India’s data protection rules, or specific confidentiality requirements for their industry.

Case Study: Sage Group’s AI Assistant Mishap4

In early 2025, Sage Group, a UK-based accounting software provider, faced issues with its AI assistant, Sage Copilot. The tool inadvertently disclosed business information related to other clients during routine invoice lookups. Although no sensitive data was exposed, the incident highlighted deficiencies in access controls and data isolation, emphasising the need for robust safeguards in AI deployments within accounting systems.


4 https://www.theregister.com/2025/01/20/sage_copilot_data_issue/

Case Study: DeepSeek AI

DeepSeek, a Chinese AI company, rose to sudden fame when it launched its DeepSeek-R1 model in January 2025. The company claimed the model was about 95% cheaper than OpenAI's ChatGPT and needed roughly a tenth of the computing power of comparable models from Meta, while offering a similar quality of response. However, within a short period, governments and corporations in several countries (Italy, South Korea, the US and Australia) blocked, prohibited or advised against using DeepSeek. These restrictions were based on the data privacy and security risks associated with the model's origin and usage.

Data Collection and Consent

Before you upload a file or data to an AI tool, you must clearly understand:

  •  Is your data stored permanently on the provider's servers? If not, what is the retention period?
  •  Is the uploaded data accessible to support staff within the provider's organisation?
  •  Is your data used for training the model?

Here is a comparison of two commonly used AI Chatbots

ORGANISATIONAL CHALLENGES

Accountability and Professional Liability

AI technologies serve as valuable support tools for tasks like drafting and analysis, but ultimate accountability belongs to the qualified professional who validates and endorses the results. AI technologies cannot face legal consequences, leaving humans fully responsible for mistakes and omissions.

In November 2022, Jake Moffatt used the chatbot on the Air Canada website to seek information about bereavement fares for a last-minute trip to attend his grandmother's funeral. The chatbot informed him that he could apply for a bereavement fare refund within 90 days of ticket issuance, even after travel had occurred. The airline later rejected the claim, citing the actual rule on its website, which requires bereavement fare requests to be made prior to travel. The British Columbia Civil Resolution Tribunal rejected the airline's arguments, holding that it is responsible for all information given by its chatbot.6


6 https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

This and many such cases emphasise that companies and professionals are responsible for the output given by their AI systems.

HUMAN CHALLENGES

Skills Gap and Upskilling Needs

Adopting AI requires a basic understanding of:

  •  How LLMs are created and how they generate responses
  •  Selecting the right AI tools
  •  Identifying and mitigating risks associated with AI responses
  •  Adhering to data privacy regulations

Generative AI tools have now been widely available for more than two years. Even though chartered accountants and their teams are broadly aware of this technology and the many tools available, significant upskilling is still needed. ICAI has been regularly conducting a Certificate Course on AI (AICA Level-1). As per estimates, about 20,000 CAs have taken this course so far, which is just 5% of the total membership.

Addressing these skill gaps through structured training, certification programs, workshops, and continuous professional education can significantly enhance AI adoption.

Resistance to Change and Fear of Job Displacement

Leaders across the world are divided about the impact of AI on jobs. While some warn that AI could eliminate substantial white-collar jobs in the near future, others are optimistic about the technology transforming current jobs rather than eliminating them.

The World Economic Forum’s Future of Jobs Report 2025 indicates that 40% of employers anticipate workforce reductions in areas where AI can automate tasks. This trend is particularly affecting entry-level positions, as AI increasingly handles tasks traditionally assigned to junior staff, potentially limiting early career opportunities.7


7 https://www.weforum.org/publications/the-future-of-jobs-report-2025/

There are regular news stories about lay-offs by tech companies across the world, partly driven by AI adoption. This fuels anxiety among professionals, some of whom respond by avoiding AI tools altogether.

DEPENDENCY CHALLENGES

As AI tools become more powerful and user-friendly, there is a danger that professionals will depend on them too heavily. This could lead to machines handling critical thinking and ethical choices that humans should make, gradually weakening professional abilities.

Several taxpayers in Ontario received tax demands and penalties from the Canada Revenue Agency for incorrectly claimed child care tax credits. They had relied on TurboTax software to prepare and file their returns and accepted its computation; no professional verified the calculations.8


8 https://globalnews.ca/news/11128974/turbotax-ontario-cra-audits/

Skill Atrophy

This refers to the gradual loss of human skills due to over-reliance on automation and, now, on AI tools. There are fears that professionals will stop practising key tasks requiring analytical or decision-making skills, thereby deferring human judgement to machines.

A pertinent example of skill atrophy comes from commercial aviation, where pilots rely heavily on autopilot systems. It has been repeatedly reported that over-reliance on automation leads to atrophy in manual flying skills, and regulators now emphasise the importance of maintaining them.

In the context of tax practice, drafting is considered an intellectual craft. Several lawyers and CAs are known for their distinguished style of legal drafting, where each clause reflects careful anticipation of risk, future disputes and the nuanced intent of the parties involved.

As AI tools make drafting a routine automated task, younger professionals may never be able to develop the instinct and depth required for sophisticated legal drafting.

Loss of Institutional Memory

Even now, senior legal counsel pass down case strategies, negotiation skills and interpretations of complex legal clauses through hands-on mentorship and formal or informal internal notes. This process forms the backbone of a firm's consistent standing in the market across years and leadership changes. Over-reliance on AI tools may break this chain and erode a firm's institutional memory and legal heritage.

AI ADOPTION

With all the challenges outlined above, should a CA firm stay away from AI tools altogether or embrace them? Staying away is not an option. As AI technology evolves and makes strides, it will be impossible to avoid it and remain competitive.

Balanced adoption: Human + AI = Augmented Intelligence

The ideal approach for any firm is to strike a balance between AI capabilities and human judgment. AI tools should be treated as a means of augmenting human expertise. For example, during an audit, an AI tool may flag unusual journal entries or patterns in financial data across multiple subsidiaries within seconds, but it takes an experienced auditor to determine whether those anomalies are due to fraud, error, or legitimate business reasons.
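
As an illustration only, and not a prescribed audit procedure, the sketch below uses the open-source scikit-learn library to flag statistically unusual journal entries by amount and posting hour; each flag still needs an auditor's follow-up.

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical journal entries: amount and hour of posting
entries = pd.DataFrame({
    "amount":       [12_500, 9_800, 11_200, 950_000, 10_400, 13_100],
    "posting_hour": [11, 14, 10, 23, 15, 12],   # a large amount posted at 23:00 stands out
})

# Isolation Forest assigns -1 to observations it considers anomalous
model = IsolationForest(contamination=0.2, random_state=42)
entries["anomaly"] = model.fit_predict(entries[["amount", "posting_hour"]])

print(entries[entries["anomaly"] == -1])       # candidates for auditor follow-up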

Phased Implementation and Clear Objectives

Jumping into full-scale automation without a defined purpose often leads to inefficiencies, employee resistance, and misaligned outcomes.

There are certain areas where AI tools can deliver speed with a reasonable degree of precision; these should be implemented first, with more complex areas taken up later. Each phase should have measurable goals, such as reducing turnaround time, and must include feedback loops for refinement. This approach not only builds internal confidence and capability but also allows teams to adapt culturally and technically.

Investing in People and Culture

For AI adoption to succeed sustainably, investing in people and culture is as important as investing in technology. Even the most advanced AI tools will fail to deliver value if the workforce is not prepared, engaged, and aligned with the transformation.

Employees should be encouraged to upskill so that they can harness the power, and understand the limitations, of AI technology.

Strategic Tool Selection

Selecting the right tool is critical for smooth AI adoption. Evaluate candidate tools against the following criteria:

  1.  Functional fit: ensure that the tool meets the functional requirements and performs accurately on real-world test cases. Example: a tax research tool should be able to present a comprehensive note on a given question, considering all relevant legal provisions, case laws and expert commentaries.
  2.  Explainability: verify that the tool offers clear reasoning and citations for its output, i.e. it follows explainable AI principles. In the above example, the response must contain specific references to the sections, rules, notifications and citations used to generate it.
  3.  Data protection and privacy: check that the tool provides strong encryption during data transmission and does not use your data for model training or other purposes without consent and compliance with data protection laws.
  4.  Return on investment: the ROI should be justified against measurable success criteria, such as time saved (a simple illustrative computation follows this list).
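
A minimal, back-of-the-envelope ROI computation, with all figures purely hypothetical, might look like this in Python:

# Hypothetical annual ROI for an AI drafting tool (every figure below is an assumption)
licence_cost  = 150_000          # annual subscription, in rupees
hours_saved   = 8 * 40           # 8 staff members saving roughly 40 hours each per year
billing_rate  = 1_500            # blended hourly billing rate, in rupees

value_of_time = hours_saved * billing_rate
roi_percent   = (value_of_time - licence_cost) / licence_cost * 100
print(f"Value of time saved: Rs. {value_of_time:,}; ROI: {roi_percent:.0f}%")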

CONCLUSION

Looking at this comprehensive analysis of AI adoption challenges in professional practice, the path forward is clear: cautious optimism paired with strategic implementation. While AI tools present significant risks around accuracy, bias, privacy, and over-dependence, completely avoiding them is not a viable competitive strategy.

The key lies in treating AI as an intelligent assistant rather than a replacement for professional judgment, maintaining human oversight at every critical decision point, and investing equally in technology and people.

Success requires a phased approach that begins with lower-risk applications, establishes robust verification processes, and builds organisational capability through continuous learning and cultural adaptation.

Paradigm Shift in Drafting of Various Documents in Chartered Accountants’ Office Using Artificial Intelligence

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.” Thomas Carlyle, in Sartor Resartus, circa 1834

INTRODUCTION

In recent years, tools equipped with generative and assistive AI technologies have moved from the fringes into mainstream professional services. Chartered Accountants (CAs) are increasingly leveraging AI to transform how they draft, review, and finalise critical documents—from audit reports to tax opinions. This shift is not merely technological; it represents a fundamental change in workflows, skill sets, and value propositions for CA firms. This article explores that paradigm shift, drawing on industry surveys, flagship initiatives by major firms, and practical implementation guidance. Most importantly, it also identifies the AI edge and shortcomings when such AI technologies are used as tools in drafting, reinventing the drafting process flow.

CONVENTIONAL APPROACH TO DOCUMENT DRAFTING

Traditionally, drafting financial statements, audit opinions, limited review reports, tax submissions, board minutes and the like has been a largely manual, labour- and skill-intensive process. CAs and their teams spent countless hours researching regulations, formatting disclosures, ensuring consistency, and tailoring wording to each client's facts. Key steps included:

  •  Manual Template Updates: Maintaining Word/Excel templates with standardised language.
  •  Regulation Research: Manually searching for the latest standards or tax provisions.
  •  Drafting and Review: Repeated back-and-forth between juniors and seniors for completeness, accuracy and tone.
  •  Compliance Checks: Ensuring all disclosures meet statutory and professional requirements.

While this approach has served the profession for decades, it often led to bottlenecks, inconsistencies, mistakes and high costs—particularly during peak season. Enter AI.

OVERVIEW OF AI TECHNOLOGIES

Modern drafting tools have evolved to address the challenges and limitations of the conventional approach. These tools are built on one or more key AI technologies, including:

  •  Natural Language Processing (NLP)
    Enables machines to understand and generate human-like text, improving grammar, tone, and context.
  •  Generative AI / Large Language Models (LLMs)
    Models such as GPT-4 can produce full-length narratives—like audit report sections—based on prompts.
  •  Machine Learning (ML)
    Learns from past document versions to suggest consistent phrasing and identify anomalies.
  •  Advanced Search & Knowledge Graphs
    Allow quick retrieval of relevant regulations or precedent documents.
  •  Conversational AI / Chatbots
    Provide on-demand assistance, summarise complex guidance, and automate routine queries.

With these capabilities, AI can produce first drafts, propose edits, extract key data, and even format entire documents, all under human oversight. Besides popular general-purpose AI tools like ChatGPT and Perplexity, AI tools that have found particular adoption for drafting include Claude, Gemini, Legalfly, and Gavel. Although most of these offer both free and subscribed versions, readers are encouraged to use the subscribed versions in order to harness their full capabilities.

AI IN DRAFTING: CORE APPLICATIONS

Audit Proposals, Observations and Reports

AI tools can generate complete proposals for internal or special purpose audits, given the financial statements or other relevant documents as inputs. They can also suggest fees for the proposed engagement by identifying and comparing fees for similar engagements that may have featured in their training data.

Feed an AI tool data from a purchase or sales register and it can identify a list of high-risk transactions along with possible deficiencies in the client's internal control system. Your audit observations are ready for management comments!

AI tools can generate sections of audit reports—such as Qualified Opinion, Emphasis of Matter, and Key Audit Matters—by analysing trial balance data, risk assessments, and fixed-asset registers. Similarly, AI tools can draft the “Basis for Opinion” section, reducing manual write-ups by up to 50%.
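
As a sketch of how such drafting can be wired into a workflow, the snippet below calls a general-purpose LLM through the OpenAI Python SDK (v1.x is assumed); the model name, prompt and inputs are illustrative, and the output is a first draft for partner review, not a finished report section.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

key_audit_matter = "Revenue recognition on long-term construction contracts"
notes = "Significant estimation in stage-of-completion; management override risk considered."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You draft audit report sections in formal language aligned with SA 701."},
        {"role": "user",
         "content": f"Draft a Key Audit Matter paragraph on: {key_audit_matter}. Notes: {notes}"},
    ],
)
print(response.choices[0].message.content)  # first draft only; partner review is mandatory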

Financial Statements and Notes

Disclosures (e.g., related party transactions, impending litigation, asset impairments) often require standardised wording. AI can fill templates with client-specific numbers, adjust narratives based on materiality thresholds, and update references when accounting standards change.
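
A minimal template-filling sketch in Python; the disclosure wording and figures below are placeholders, not prescribed standard text:

from string import Template

related_party_note = Template(
    "During the year, the Company entered into transactions aggregating "
    "Rs. $amount lakhs with $party, a $relationship, in the ordinary course "
    "of business and at arm's length."
)

print(related_party_note.substitute(
    amount="125.40", party="ABC Services LLP", relationship="entity under common control"
))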

Tax Returns and Schedules

From populating Schedule AL (Asset/Liability) of income tax returns to drafting TDS certificates, AI can extract figures from ERP systems, apply relevant sections (e.g., 194H, 44AD), and flag inconsistencies such as missing Form 16 entries.
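
Such consistency checks can be scripted. The sketch below, with hypothetical column names, lists employees for whom salary TDS appears in the payroll extract but no Form 16 record has been captured:

import pandas as pd

payroll = pd.DataFrame({"pan": ["AAAPA1111A", "BBBPB2222B", "CCCPC3333C"],
                        "tds_deducted": [84_000, 36_500, 0]})
form16  = pd.DataFrame({"pan": ["AAAPA1111A"]})    # Form 16s captured so far

merged  = payroll.merge(form16.assign(has_form16=True), on="pan", how="left")
missing = merged[(merged["tds_deducted"] > 0) & (merged["has_form16"].isna())]
print(missing[["pan", "tds_deducted"]])            # follow up with these employees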

Management Letters and Client Memos

Writing management letters after audit findings may also involve drafting recommendations to address each observed deficiency. AI-driven summarisation can convert bullet points—like control deficiencies—into coherent corrective action points. Chatbots can draft reminder emails or follow-ups, akin to the mail templates used by your firm in data-submission reminders.

Board Minutes and Corporate Filings

AI-generated templates can be aligned with Companies Act requirements for board resolutions, share allotments, and annual filings. A few prompts (e.g., “record today’s meeting approval of financial statements”) generate complete minutes, ready for partner review.

Routine Correspondence

Letters for engagement terms, appointment letters, and client onboarding can be drafted with minimal edits. AI helps maintain a consistent tone and up-to-date compliance references, reportedly saving administrative staff over two hours per letter on average.

Tax scrutiny submissions, Grounds of Appeal, Statement of facts, Affidavits, Application to keep penalty proceedings under abeyance, etc.

CAs use AI to generate first drafts of all of the above, and more, with remarkable speed. Need a tax opinion on a complex matter, supported with citations, in a jiffy? With a few well-structured prompts, the first draft is ready within seconds, literally, although every citation still needs to be verified before it leaves the office.

Interpreting Regulatory Notifications and Circulars and Assessing Their Impact on Clients’ Operations

CAs are increasingly using AI tools to interpret regulatory changes (e.g., changes in TCS provisions and FEMA regulations) and to help clients understand the impact on their operations.

THE AI IMPACT – AI’S HITS AND MISSES

At this stage, the reader may ask how this shift to AI-based tools for document drafting has fared for the profession so far. Below is a concise comparison of where AI has outperformed even an experienced Chartered Accountant in drafting, and where it still lags behind.

For each aspect, AI's hit is listed first, followed by the corresponding miss.

  1.  Speed & Throughput
      Hit: Generates first drafts (e.g., audit report sections, board minutes) in seconds versus hours or days of manual work. Studies show up to a 75% reduction in drafting time for standard documents.
      Miss: Cannot autonomously verify the factual accuracy of source data; human review remains essential to catch mis-pulls or misalignments with client-specific facts.

  2.  Consistency & Standardisation
      Hit: Always applies the latest approved wording and formatting, eliminating fatigue-induced inconsistencies across multiple documents.
      Miss: Lacks the ability to subtly tailor tone, emphasis, or “voice” to long-standing client relationships or firm culture, often resulting in language that feels generic or impersonal.

  3.  Regulation & Template Updates
      Hit: Instantly integrates new tax rulings or accounting-standard changes from a centralised knowledge base, with no lag between enactment and template update.
      Miss: May “hallucinate” or misquote regulations if its underlying model is not rigorously fine-tuned and constantly validated, risking non-compliance without close human oversight.

  4.  Scalability
      Hit: Can draft hundreds or thousands of similar documents (e.g., TDS certificates, engagement letters) in parallel, with zero incremental fatigue or margin for human error.
      Miss: Cannot exercise professional judgement in distinguishing which items truly warrant emphasis in complex, non-standard cases; AI treats every file as a cold “data dump” unless explicitly guided.

  5.  Availability
      Hit: Operates 24/7 without downtime or shift constraints, enabling off-hours drafting and on-demand updates for global teams.
      Miss: No ethical responsibility or accountability. If a draft contains errors that lead to regulatory penalties, AI cannot be held liable; only the human practitioner can certify and assume professional risk.

  6.  Cost Efficiency
      Hit: Virtually zero marginal cost for each additional draft once deployed, driving down per-document costs significantly for high-volume tasks.
      Miss: Requires substantial upfront investment in secure, compliant infrastructure, model licensing, and ongoing retraining, often out of reach for smaller practices without clear ROI.

  7.  Multilingual & Formatting
      Hit: Quickly localises documents into multiple languages (e.g., English to Marathi) with minimal post-editing, and auto-formats tables, footnotes, and numbering.
      Miss: Struggles with idiomatic expressions or culturally nuanced phrasing; post-translation editing by a native speaker remains necessary to ensure readability and avoid misinterpretation.

  8.  Data Extraction & Linking
      Hit: Automatically pulls figures from ERP/GL and populates schedules or disclosures, linking cross-references accurately across a firm’s documents.
      Miss: Cannot detect missing disclosures or interpret ambiguous data without clear rules; in complex scenarios (e.g., unusual related-party transactions), the AI may omit or misclassify items, requiring a CA’s domain insight to catch and correct.

IMPLEMENTATION ROADMAP

Are you tempted to embark on this paradigm shift in document drafting at your firm? Super! Here are the steps:

Assessing Readiness

Conduct an internal assessment to gauge your firm’s AI maturity, identify areas with high drafting volumes, and evaluate how well your systems and teams can adapt to AI-driven workflows.

Pilot Projects

Select one document type, such as tax scrutiny submissions or internal audit observations, for a pilot project to assess the AI tool’s ability to generate accurate drafts, track time savings, and measure user satisfaction with the process.

Training and Change Management

Provide targeted training to your teams on how to effectively use AI tools, focusing on prompt engineering, managing AI output, and integrating these tools into daily workflows. Also ensure that continuous support and resources are available to teams, such as AI usage workshops and a dedicated support team, to help with the transition and encourage adoption across the firm.

Governance and Controls

Establish clear governance policies to oversee AI usage, ensuring proper privacy and confidentiality of clients’ data, validation of AI outputs, compliance with applicable regulations, maintaining audit trails, and implementing change management procedures to monitor and adjust AI models as necessary.

In conclusion, while AI offers significant advantages in efficiency, scalability, and standardisation, it remains essential that Chartered Accountants oversee and guide AI-driven drafts to ensure compliance, judgement, and ethical considerations are consistently met.

And yes, good luck to you in the journey ahead!

Leveraging AI for Enhanced CA Practice: A Practical Guide to Publicly Available Models

The post-pandemic digital transformation has accelerated professional adoption of AI-enabled tools across industries. For chartered accountants, the emergence of sophisticated AI models presents opportunities to enhance practice efficiency, analytical capabilities, and client service delivery. This guide explores how Indian CAs can strategically leverage publicly available AI models whilst maintaining professional standards and ethical obligations.

THE AI REVOLUTION IN PROFESSIONAL PRACTICE

The launch of ChatGPT in late 2022 marked a turning point in AI accessibility. What began as curiosity-driven experimentation has evolved into practical business applications across audit, taxation, advisory services, and compliance functions. By 2025, AI integration has become crucial for maintaining a competitive advantage and meeting evolving client expectations.

This transformation requires CAs to understand not merely what AI can do, but how to use it responsibly and effectively within professional frameworks. The approach involves viewing AI as an augmentation tool that enhances human expertise rather than replacing professional judgment.

CHATGPT BY OPENAI: THE FOUNDATIONAL TOOL

Core Features and Customisation

ChatGPT remains the most accessible entry point for AI adoption in professional practice. However, effective utilisation requires proper configuration and an understanding of its capabilities. The list below gives specific suggestions on how to get more out of it:

a. Custom Instructions Setup

Users should begin by personalising ChatGPT through Settings > Personalisation > Custom Instructions. This feature allows practitioners to provide context about their professional role, preferred communication style, and specific requirements. For instance, specifying that one is a chartered accountant in India ensures responses consider relevant regulatory frameworks and professional standards.
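
For instance, a practitioner might enter instructions along these lines (purely illustrative wording): “I am a practising chartered accountant in India advising on income tax, GST and audit matters. Cite the relevant section, rule or notification in every answer, state any assumptions explicitly, and keep the tone formal and client-ready.”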

Figure 1 – Customise ChatGPT

Figure 2 – Set Custom Instructions

b. Leveraging Custom GPTs

The Custom GPTs feature (available to all users) provides pre-built specialisations that can enhance productivity. Notable options include “Data Analyst” by ChatGPT, YouTube summarisers, and Whimsical Diagrams.

Practitioners can also create bespoke GPTs tailored to their practice needs, such as proposal generation, minute formatting, or specific compliance checklists.

c. ICAI’s CA-GPT Integration

The Institute of Chartered Accountants of India has developed CA-GPT (accessible at https://ai.icai.org/cagpt/), which provides authenticated access to specialised GPTs with ICAI publication repositories. This resource offers multiple domain-specific GPTs, including Direct and Indirect Tax GPTs, as well as industry-specific GPTs with annual report data for comparative analysis of FY 2023-24.

Figure 3 – CAGPT

Figure 4 – Industry GPT

d. Model Selection Strategy

Users with paid accounts can access a range of models, such as GPT-4o and o3, which are quite powerful. A model, put simply, is like a thinking hat that the AI puts on every time you ask a question: some models answer with advanced reasoning (like o3), while others give quick answers for general-purpose use (GPT-4o).

COMPARATIVE INSIGHTS: GPT-4O VS GPT-O3

Prompt Used in Both Models: “Clarify if input tax credit is available on RCM paid for legal services.” The prompt was kept simple and to the point to see how both models respond to a compliance-based GST question.

Using the GPT 4o Model

Figure 5 – Using CAGPT – Indirect Taxes – in GPT 4-o

RESPONSE FROM GPT-4O: QUICK, CONCISE, AND BUSINESS-FOCUSED

Figure 6 – Response from CAGPT – Indirect Taxes – in GPT 4-o

GPT-4o answered promptly within 2–5 seconds and offered a well-structured, client-ready response.

RESPONSE FROM GPT-O3: DETAILED AND RESEARCH-FOCUSED

Figure 7 – Using CAGPT – Indirect Taxes – in GPT o3 with reasoning

GPT-o3 takes much longer to process the same question, which is indicative of its more analytical nature. Although the screenshot captures it only as “thinking”, this model typically probes questions in greater depth.

PERPLEXITY AI: RESEARCH AND COMPLIANCE INTELLIGENCE

Perplexity AI distinguishes itself as a research-focused tool that prioritises accuracy through source verification. Unlike traditional generative AI, it combines conversational intelligence with real-time web access, making it valuable for regulatory research and compliance updates.

Figure 8 – Perplexity giving reference to sources and linkages for further reference.

KEY FEATURES

  •  Source Verification: Every response includes citations from government websites, regulatory agencies, and official databases, enabling users to verify information independently.
  •  Real-Time Updates: Live connectivity ensures access to the latest amendments, notifications, and regulatory changes necessary for tax and compliance professionals.
  •  Factual Focus: Perplexity concentrates on factual information rather than interpretative content, making it suitable for compliance-sensitive work.

PRACTICAL APPLICATIONS

  •  Regulatory Monitoring: Track RBI, SEBI, and ministry announcements for weekly compliance digests
  •  Research Support: Fetch current provisions and notifications with source links for verification
  •  Due Diligence: Compile recent regulatory changes affecting specific sectors or transactions

The tool’s emphasis on source attribution makes it particularly useful when preparing regulatory updates or compliance memoranda where citation accuracy is critical.

CLAUDE BY ANTHROPIC: PROFESSIONAL COMMUNICATION EXCELLENCE

Claude excels in contextual understanding and ethical alignment, making it particularly valuable for professional environments requiring nuanced communication and balanced analysis. In addition, its ability to write code, from VBA scripts and Python programs to simple interactive artefacts, is compelling.

Figure 9 – Illustrative Valuation Forecasting Model created using Claude
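
The kind of model shown in Figure 9 can be approximated in a few lines. The sketch below is a deliberately simplified discounted cash flow, with all growth and discount assumptions invented for illustration:

# Simplified DCF: project free cash flows at a constant growth rate and discount them
base_fcf   = 10.0    # current free cash flow, in Rs. crore (hypothetical)
growth     = 0.08    # assumed annual growth over the explicit period
discount   = 0.12    # assumed cost of capital
years      = 5
terminal_g = 0.04    # assumed perpetual growth after year 5

fcfs = [base_fcf * (1 + growth) ** t for t in range(1, years + 1)]
pv_explicit = sum(fcf / (1 + discount) ** t for t, fcf in enumerate(fcfs, start=1))

terminal_value = fcfs[-1] * (1 + terminal_g) / (discount - terminal_g)
pv_terminal    = terminal_value / (1 + discount) ** years

print(f"Indicative enterprise value: Rs. {pv_explicit + pv_terminal:.1f} crore")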

DISTINCTIVE CHARACTERISTICS

  •  Contextual Reasoning: Claude interprets queries within broader professional and regulatory contexts, providing more relevant responses than literal text interpretation.
  •  Risk Sensitivity: Responses regularly include appropriate caveats and highlight potential exceptions, supporting balanced professional advice.
  •  Coding Proficiency: Strong capabilities in automation, macro development, and process scripting for practice efficiency improvements.
  •  Professional Tone: Maintains formal, legally prudent communication suitable for both internal and client-facing documentation.

ILLUSTRATIVE PRACTICAL APPLICATIONS FOR CAs

Claude proves particularly effective for:

  •  Draft preparation requiring professional language and structure
  •  Complex regulatory interpretation requiring balanced analysis
  •  Automation scripts for repetitive tasks
  •  Client communication requiring diplomatic language

The tool’s emphasis on ethical considerations and balanced responses aligns well with professional requirements for objective advice.

GEMINI: GOOGLE WORKSPACE INTEGRATION

Gemini represents Google’s integration of AI capabilities throughout its Workspace environment, including Docs, Sheets, Gmail, Slides, Meet, and Drive. This integration enables professionals to access AI assistance within their existing workflow.

KEY FEATURES

  •  Contextual Integration: Gemini analyses current documents, emails, or spreadsheets to provide contextually relevant suggestions and content.
  •  High Context Window: Capability to process approximately 500,000+ words or 25,000+ lines of code, enabling analysis of large documents or datasets.
  •  Collaborative Features: Functions as a co-author or co-analyst, proposing edits, formatting tables, and summarising meeting content.
  •  Clean Formatting: Outputs are structured with appropriate headings, bullet points, and tables for immediate use in professional documents.

ILLUSTRATIVE PRACTICAL APPLICATIONS FOR CAs

  •  Google Sheets Financial Analysis: Automated margin analysis, ratio report creation, and variance identification for management information systems and board presentations.
  •  Google Docs Compliance Drafting: Formatted tax summaries, CSR applicability notices, and FEMA checklists with appropriate formatting and legal clarity.
  •  Gmail Client Communication: Professional update drafting, audit query clarification, and reminder generation through prompt-based email composition.
The tool’s integration within Google’s ecosystem makes it particularly valuable for practices already using Google Workspace for collaboration and document management.

MICROSOFT COPILOT: OFFICE 365 ENHANCEMENT

Microsoft Copilot integrates across Microsoft 365 applications (Word, Excel, PowerPoint, Outlook, Teams), providing AI assistance within existing workflows rather than requiring platform changes.

Figure 10 – Microsoft Copilot Integration and Use Cases

Features and Capabilities

  •  Context-Aware Support: Copilot understands file formats and content context, providing appropriate responses whether working in Excel, Word, or Outlook.
  •  Task-Specific Commands: Users can request email summarisation, financial report creation, audit schedule building, or client message refinement with appropriate tone adjustments.
  •  Data Integration: Leverages existing spreadsheets, documents, calendars, and Teams messages to produce accurate outputs without repetitive input requirements.
  •  Professional Standards: Employs polished and consistent formatting that adheres to business conventions across all applications.

Applications in Practice

  •  Excel – Financial Modelling: Natural language input for pivot table creation, GST summary automation, cash flow forecasts, and working capital ratio analysis.
  •  Word – Document Preparation: Professional memo drafting, report formatting, and compliance documentation with appropriate structure and language.
  •  Teams – Collaboration: Meeting note recording, action item management, and team onboarding with a checklist and SOP-based briefings.
  •  Outlook – Communication: Email composition assistance, meeting scheduling optimisation, and client communication management.

ADDITIONAL SPECIALISED TOOLS

Several other AI applications serve specific professional needs:

Meeting and Documentation Tools

  •  Fireflies, Otter, Spinach.ai: Meeting transcription and minute preparation
  •  Guidde: Process documentation and flowchart creation

Content Creation

  •  Gamma.App, AIPPT.com: Professional presentation development
  •  Grammarly, Quillbot, Rytr: Writing enhancement and grammar correction

Custom Solutions

  •  Dante.ai, BotPress.com: Knowledge-based chatbot development for client service
  •  Loveable.dev, Cursor, Replit: Custom application development through natural language programming

Analysis and Summarisation

  •  Summarise.ing, TLDR, Google Notebook LM: Article and video summarisation for research.
  •  Midjourney: Professional infographic and visual content creation

CRITICAL CONSIDERATIONS FOR ETHICAL AI USAGE

The implementation of AI tools in CA practice must align with professional standards, regulatory requirements, and ethical obligations. Several considerations are essential for responsible adoption:

Data Privacy and Confidentiality

  •  Client Data Protection: Never input confidential client information, including financial statements, PAN numbers, or sensitive business details, into public AI tools.
  •  Enterprise Solutions: Use enterprise-grade AI solutions that comply with GDPR, Indian Data Protection Laws, and ICAI data security guidelines.
  •  Implementation Protocols: Establish strict data handling protocols when using cloud-based AI services, and consider local deployment options for highly sensitive information processing.

Professional Judgement Maintenance

  •  Independent Analysis: AI outputs must never replace professional scepticism and independent judgement in audit or advisory work.
  •  Validation Requirements: Always validate AI-generated content before incorporating it into reports, filings, or client deliverables.
  •  Professional Responsibility: Maintain full responsibility for all professional opinions regardless of AI assistance utilised.

ICAI Code of Ethics Compliance

  •  Fundamental Principles: Ensure all AI usage aligns with ICAI’s principles of integrity, objectivity, professional competence, and due care.
  •  Independence Considerations: Avoid situations where AI usage could compromise independence or create conflicts of interest.
  •  Ethical Standards: Maintain consistent ethical standards when using AI tools, as with traditional practice methods.

Transparency and Documentation

  •  Stakeholder Disclosure: Disclose to stakeholders when AI has been used in analysis, reports, or audit procedures that are material to their understanding.
  •  Record Maintenance: Maintain detailed records of AI tool usage in decision-making processes and report generation.
  •  Audit Trail: Document the extent and nature of AI assistance in audit working papers and client files.

Regulatory Compliance

  •  Legal Adherence: Verify that AI usage complies with the Income Tax Act, Companies Act 2013, SEBI guidelines, and relevant audit standards.
  •  Regulatory Updates: Stay current with regulatory guidance on AI usage in professional services.
  •  Jurisdictional Considerations: Consider jurisdictional differences when serving clients across multiple regulatory environments.

Continuous Professional Development

  •  ICAI Guidance: Stay informed about ICAI’s evolving guidance on AI and digital tools in professional practice.
  •  Education Participation: Engage in continuing education programmes focused on AI ethics and responsible usage.
  •  Policy Updates: Regularly review and update firm policies on AI usage based on emerging best practices and regulatory developments.

ILLUSTRATIVE IMPLEMENTATION STRATEGY

Successful AI adoption in CA practice requires a structured approach:

Phase 1: Foundation Building

  •  Begin with ChatGPT customisation and Custom GPT exploration
  •  Establish data handling protocols and ethical guidelines
  •  Train team members on basic AI tool usage and limitations

Phase 2: Workflow Integration

  •  Implement Perplexity AI for research and compliance monitoring
  •  Integrate Gemini or Copilot based on the existing software ecosystem
  •  Develop standard operating procedures for AI tool usage

Phase 3: Advanced Applications

  •  Create custom GPTs for specific practice needs
  •  Implement specialised tools for meeting management and documentation
  •  Establish quality control processes for AI-assisted work

Phase 4: Continuous Improvement

  •  Monitor AI tool developments and updates
  •  Regularly assess effectiveness and adjust usage patterns
  •  Stay current with professional guidance and regulatory requirements

CONCLUSION

The strategic integration of AI in chartered accountancy practice represents both an opportunity and a responsibility. AI tools offer substantial capabilities for enhancing efficiency, analytical depth, and client service quality, but professional judgement, ethical considerations, and regulatory compliance must guide their implementation.

Success in AI adoption requires understanding each tool’s strengths and limitations, implementing appropriate safeguards and validation protocols, and maintaining the professional scepticism and independent judgement that define chartered accountancy practice. By thoughtfully integrating AI as an augmentation tool rather than a replacement for professional expertise, chartered accountants can enhance their practice capabilities while preserving the trust and integrity that are fundamental to the profession.

The future of chartered accountancy lies not in choosing between human expertise and artificial intelligence, but in strategically combining both to deliver enhanced value to clients whilst maintaining the highest standards of professional practice. Practitioners who master this integration will be well-positioned to serve their clients effectively and contribute to the profession’s continued evolution in an increasingly digital landscape.