AI Model Standards Are Currently Not FAIR, Posing Many Challenges!



FAIR AI models will set new standards for the implementation of artificial intelligence and machine learning

Implementing AI is the latest route to efficiency and productivity across the development pipeline. However, even with recent advances in artificial intelligence, deploying an AI model in new environments remains complex and difficult. Computational scientists struggle to adapt and reproduce the algorithms they create across environments ranging from climate analysis to brain research.

So, scientists have formulated a data-driven approach to building FAIR AI models from the ground up. They argue that AI models are currently not FAIR (findable, accessible, interoperable, and reusable) and that this poses serious challenges to scientific and technical discovery. Creating FAIR AI models can reduce duplicated effort and spread best practices for using these models to enable great science. Furthermore, to meet the growing demands of the AI community, the scientists also combined unique data management and high-performance computing platforms to establish a FAIR protocol for these models and to quantify their ‘FAIR-ness’.
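One way to picture quantifying ‘FAIR-ness’ is as a checklist score over the four FAIR principles. The sketch below is purely illustrative: the criteria names and the equal weighting are assumptions for this example, not the metrics used in the study.

```python
# Hypothetical sketch: scoring an AI model against FAIR criteria.
# The criteria and equal weights below are illustrative assumptions,
# not the measures defined in the researchers' protocol.

from dataclasses import dataclass, fields


@dataclass
class FairChecklist:
    """One boolean per illustrative FAIR criterion."""
    has_persistent_identifier: bool = False     # Findable
    metadata_is_searchable: bool = False        # Findable
    open_access_weights: bool = False           # Accessible
    standard_model_format: bool = False         # Interoperable
    documented_dependencies: bool = False       # Reusable
    reproducible_training_recipe: bool = False  # Reusable


def fairness_score(checklist: FairChecklist) -> float:
    """Fraction of criteria satisfied, in [0, 1]."""
    values = [getattr(checklist, f.name) for f in fields(checklist)]
    return sum(values) / len(values)


model = FairChecklist(has_persistent_identifier=True,
                      standard_model_format=True,
                      documented_dependencies=True)
print(f"FAIR-ness: {fairness_score(model):.2f}")  # 3 of 6 criteria met -> 0.50
```

A real protocol would weight criteria differently and verify each one automatically (e.g. resolving the identifier, loading the model format), but the idea of reducing FAIR compliance to a comparable score is the same.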

Through these methods, the researchers created a computational framework that helps bridge hardware and software, producing AI models that run similarly across platforms and yield reproducible results. The keys to this framework were funcX and Globus, which enabled researchers to access high-performance computing resources straight from their laptops.
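funcX (now part of Globus Compute) follows a function-as-a-service pattern: a function is submitted with its arguments and the result comes back through a future, regardless of where the computation actually runs. The sketch below illustrates that submit/result flow using Python's standard `concurrent.futures` as a local stand-in; with funcX, the same pattern dispatches work to registered remote HPC endpoints instead of local threads.

```python
# Illustration of the function-as-a-service pattern funcX provides:
# submit a function with arguments, get a future, collect the result.
# A local ThreadPoolExecutor stands in here for a remote HPC endpoint.

from concurrent.futures import ThreadPoolExecutor


def run_inference(batch):
    """Stand-in for an AI task dispatched to a compute endpoint."""
    return [x * 2 for x in batch]  # trivial placeholder computation


with ThreadPoolExecutor() as executor:  # stand-in for a funcX endpoint
    future = executor.submit(run_inference, [1, 2, 3])
    print(future.result())  # [2, 4, 6]
```

Because the caller only sees the future, swapping the local executor for a remote endpoint does not change the surrounding code, which is what lets the same laptop-side script drive large HPC resources.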

In this study, the researchers used example datasets from an AI model applied to diffraction data from Argonne’s Advanced Photon Source, a DOE Office of Science user facility. The team also used the ALCF AI Testbed’s SambaNova system and the NVIDIA GPUs (graphics processing units) of the Theta supercomputer.