Highlights:
- Using distributed ledgers and cryptography, blockchain technology offers a data tracking layer that can produce a tamper-proof historical record of modifications made to an AI model.
- With IBM’s watsonx.governance toolbox, users can automate and manage their AI models, handle the legal and ethical issues around generative AI and machine learning models, and meet requirements for training and running models.
The flexible blockchain solutions provider Casper Labs announced that it is collaborating with IBM Corp. to develop an artificial intelligence governance tool that uses decentralized ledger technology to provide access controls, version control and monitoring for AI models.
According to Mrinal Manohar, the CEO of Casper Labs, the company began to recognize the need to address the confluence of blockchain and AI technologies in May of last year. That need has become more pressing, he said, as more businesses realize how important it is to control their models’ behavior and adhere to government-established rules.
“There’s been a lot of cynical approaches we think like sprinkle some AI ‘fairy dust’ on blockchain with AI chatbots to understand the ecosystem and so on. But really the two big insights were, one, AI governance will become a really big thing because of government and regulatory needs, and everyone feels AI must become more responsible. And second, this is one of the rare instances where an industry is emerging and the tech stack isn’t well-defined,” said Manohar.
To implement this, Casper Labs began working with IBM to develop a software-as-a-service product that integrates IBM’s watsonx.governance with the Casper blockchain platform, offering companies an analytical solution for AI model control.
Using distributed ledgers and cryptography, blockchain technology offers a data tracking layer that can produce a tamper-proof historical record of modifications made to an AI model. This means a transaction can be added to a blockchain record each time an AI model version changes, and that transaction can serve as a “reset point” if something goes wrong with the model later.
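To make the idea concrete, here is a minimal sketch of how a hash-chained version log for model changes might look. It is illustrative only: the class, field names and hashing scheme are assumptions made for this example, not Casper Labs’ or IBM’s actual implementation.

```python
import hashlib
import json
import time

class ModelVersionLedger:
    """Toy hash-chained ledger: each entry records one model update."""

    def __init__(self):
        self.entries = []

    def record_version(self, model_id, artifact_hash, metadata):
        # Link each new entry to the previous one so silent edits are detectable.
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "model_id": model_id,
            "artifact_hash": artifact_hash,   # hash of the model weights / data snapshot
            "metadata": metadata,             # e.g. training-data manifest, author
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["entry_hash"]

    def verify(self):
        """Recompute hashes to confirm no entry was altered after the fact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True
```

On a real blockchain the chaining and timestamping are handled by the network itself; the sketch simply shows why an append-only, hash-linked record makes each model change auditable.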
“Given that blockchain is tamper-resistant, completely time-synched and can be completely automated, AI governance can be done right, be highly automatable and highly certifiable right out of the gate,” Manohar said.
With IBM’s watsonx.governance toolbox, users can automate and manage their AI models, handle the legal and ethical issues around generative AI and machine learning models, and meet requirements for training and running models. Combining those capabilities with an immutable historical record of every change made to a model over its lifespan can reduce risks and problems.
Because AI models are prone to drift as new data is introduced between versions, transparency about exactly when an event occurred and which change triggered it is especially important. Equally valuable is the ability to travel back in time to the precise instant before the issue arose.
Manohar said, “It gives you real version control. Since your blockchain gives you state at every moment in time, it’s very easy to just reboot or revert back to a previous state.”
This is particularly crucial in the case of anomalies such as hallucinations, in which a large language model or conversational chatbot may start making erroneous claims with confidence far more frequently than before. Developers might take similar action if they found that the model had been trained on private, confidential or copyrighted material that needed to be removed. Reverting may be as simple as finding the transaction recorded just before the problematic data was introduced and rolling back to that earlier version.
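As a rough illustration of that rollback idea, the helper below (building on the hypothetical ledger sketched earlier) finds the last version recorded before a given point in time, which could then serve as the restore target. The function name and the restore step are assumptions for the example, not part of any announced API.

```python
def find_reset_point(ledger, model_id, before_timestamp):
    """Return the last recorded version of a model prior to a given time.

    This is the 'reset point' idea: if bad data or drift was introduced at a
    known time, the entry just before it identifies the version to restore.
    """
    candidates = [
        e for e in ledger.entries
        if e["model_id"] == model_id and e["timestamp"] < before_timestamp
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda e: e["timestamp"])

# Illustrative usage: roll back to the version before a bad ingestion event.
# entry = find_reset_point(ledger, "support-chatbot", bad_ingestion_time)
# restore_model_artifact(entry["artifact_hash"])   # hypothetical restore step
```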
Large companies can also use the same blockchain technology to apply access restrictions and permissions governing who can update or train AI models. This feature will let users choose which individuals or groups must sign off before training data can be added. According to Manohar, controlling sensitive datasets for AI will be crucial for large organizations.
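One way to picture such a permissioning layer is a quorum gate that only admits new training data once enough authorized approvers have signed off. The sketch below is a plain in-memory illustration of that concept, not the product’s actual access-control mechanism.

```python
class TrainingDataGate:
    """Toy multi-party approval gate for training-data updates."""

    def __init__(self, approvers, required_signoffs):
        self.approvers = set(approvers)        # identities allowed to approve
        self.required_signoffs = required_signoffs
        self.pending = {}                      # dataset_id -> set of approvals

    def approve(self, dataset_id, approver):
        if approver not in self.approvers:
            raise PermissionError(f"{approver} is not an authorized approver")
        self.pending.setdefault(dataset_id, set()).add(approver)

    def can_ingest(self, dataset_id):
        # Data is only admitted once enough distinct approvers have signed off.
        return len(self.pending.get(dataset_id, set())) >= self.required_signoffs
```

In a blockchain-backed system the sign-offs themselves would be recorded as ledger transactions, so the approval trail is as auditable as the model changes it authorizes.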
According to Manohar, the first users of this product will likely come from highly regulated and sensitive industries, particularly those dealing with financial products, insurance and other fields with stringent restrictions and potential legal repercussions from hallucinations. Supply chain firms, like any other industry that uses artificial intelligence for “track and trace,” require a robust audit trail to show why they relied on a particular model’s approach.
Shyam Nagarajan, Global Partner, blockchain and responsible AI leader at IBM Consulting, said, “An AI system’s efficacy is ultimately as good as an organization’s ability to govern it. Companies need solutions that foster trust, enhance explainability, and mitigate risk. We’re proud to bring IBM Consulting and technology to support Casper Labs in creating a new solution offering an important layer to drive transparency and risk mitigation for companies deploying AI at scale.”
The product is currently in an active beta phase, which Casper Labs is using to gather enterprise user feedback and shape the most practical solution possible. According to the company, the service is expected to launch in the third quarter of this year.