Artificial Intelligence (AI) is increasingly becoming a central tool for businesses, particularly those managing large volumes of contracts. Its capabilities extend far beyond mere automation: it can draft contracts, ensure compliance with corporate policies, provide reminders for renewals, and organise vast amounts of data. The efficiency and speed AI brings to contract management are transformative. Traditionally, drafting and reviewing contracts could take weeks, if not months; AI tools such as LawGeex and Kira Systems have revolutionised this process, reducing the time required to mere minutes. For corporations handling thousands of contracts, this means significant savings in both time and resources. If trained properly, an AI model can ensure that contract language is consistent and complies with relevant laws and company policies, mitigating the risk of non-compliance and potential legal disputes.
In addition to drafting and compliance, AI has the potential to excel at managing contract renewals. Automated reminder systems ensure that no critical dates are missed, which is particularly beneficial for companies with extensive contract portfolios. AI's ability to organise and retrieve large amounts of contract data swiftly also enhances operational efficiency, allowing businesses to locate specific contracts or clauses quickly and facilitating better decision-making and responsiveness. Another considerable benefit is AI's capability to extract important provisions from numerous contracts at once. The potential for AI to streamline and enhance contract management processes is immense, offering a level of efficiency and accuracy that is difficult to achieve manually.
However, these benefits come with substantial challenges, primarily related to biases within AI systems. These biases often stem from the historical data used to train AI models. Historical data can reflect existing societal biases, and if the AI is trained on such data, it will replicate those biases in its outputs. A prominent example is Amazon's AI recruiting tool, which developed a bias against female candidates. The tool was trained on resumes submitted over a decade, most of which came from men. Consequently, the AI began to favour male candidates, penalising resumes that included terms associated with women.
Past examples show that such biases are not limited to corporate environments. When governments in New York and California adopted AI for awarding contracts, the systems demonstrated a bias against minority-owned businesses. The AI tools were not sufficiently attuned to diversity and inclusion considerations, leading to unfair contract awards and perpetuating existing disparities. This highlights a significant risk: if not carefully managed, AI systems can inadvertently reinforce and amplify societal biases.
Corrective strategies
Human-induced biases can also play a role in the functioning of AI systems. Developers, consciously or unconsciously, may embed their stereotypes into AI systems, affecting the fairness of AI outputs. Additionally, AI can create feedback loops where biased decisions reinforce the biases in the training data. For example, if initial contracts preferred vendors based on certain demographic characteristics, the AI would continue to favour those characteristics in future contracts, perpetuating and amplifying the bias over time.
Ensuring that AI training data is diverse and representative can help mitigate biases, reducing the replication of historical biases in AI outputs and potentially leading to a significant decrease in biased decision-making. Diverse development teams are equally important: drawing on a broader range of cultural backgrounds, life experiences, and perspectives, they are better equipped to recognise potential biases and to correct them effectively.
Another essential strategy for addressing biases is retaining human oversight in AI-driven processes. Independent human reviewers can monitor AI decisions, identify biases, and make necessary adjustments. Regular audits and interventions ensure that biases are addressed promptly and that the AI system continuously improves. Developing transparent AI systems, whose decision-making processes can be scrutinised, could also help in identifying and correcting biases. Establishing such accountability measures for AI-driven decisions makes it more likely that biases are detected and corrected systematically.
AI holds the promise of revolutionising the contracting process, providing unmatched efficiency and precision. Nonetheless, it is imperative to acknowledge and rectify the biases ingrained within AI systems to ensure just and impartial outcomes. By implementing strategies such as using diverse training data, fostering inclusive development teams, maintaining human oversight, and conducting regular audits, the risks of bias can be mitigated, if not eliminated. In doing so, AI's full potential can be harnessed with greater efficiency and fairness, driving better contracting outcomes and fostering trust in AI technologies.
(The writer is a lawyer)