AI Model Validation with Zero-Knowledge Proof: Trust and Transparency Without Data Access

dc.contributor.authorTaş, Özge
dc.date.accessioned2025-07-23T06:21:41Z
dc.date.available2025-07-23T06:21:41Z
dc.date.issued2025
dc.departmentKapadokya Üniversitesi
dc.description.abstractSummary Zero-Knowledge Proof Machine Learning (ZKML) is a new approach that combines zero-knowledge proofs (ZKPs) with machine learning (ML) to develop privacy-focused and secure artificial intelligence systems. ZKPs are cryptographic techniques that enable one party to prove the validity of a statement without disclosing any additional information. This mechanism is particularly important in fields that require high levels of privacy, such as finance, healthcare, and identity verification. ZKML enables the cryptographic verification of the correctness of model inferences or training processes without disclosing model parameters or user data. In the context of federated learning, the correctness of each participant's contribution to the model training process can be verified using proof systems such as zk-SNARKs, thereby enabling a secure collaboration environment without the risk of data leakage. Similarly, during the inference phase, it can be verified that a specific output was indeed produced by the claimed model, which builds trust in fields such as medicine and finance where sensitive decisions are made. Currently, generating ZK proofs requires substantial computing power, but advances in hardware, distributed systems, and cryptography have made proof generation feasible even for larger and more complex models. Startups such as Modulus Labs and tools such as the ezkl library enable the generation of ZK proofs for models in ONNX format, offering practical solutions to developers, and systems such as Plonky2 have reduced proof generation for models with millions of parameters to minutes. ZKML has a broad range of use cases, including on-chain ML verification (e.g., in DeFi protocols), transparency of machine learning as a service (MLaaS), fraud detection, and private inference. For example, in decentralized Kaggle-like competitions, the accuracy of a model can be proven without revealing its details, and in healthcare, patients can receive verified diagnostic results without disclosing their data. In conclusion, ZKML combines privacy protection with verifiable computation, enabling the development of more ethical and reliable artificial intelligence systems. Lying at the intersection of cryptography and machine learning, this approach has the potential to increase the transparency and security of AI systems at both technical and societal levels.
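
For readers who want to see the shape of the workflow the abstract describes, the sketch below shows a prover who holds the model privately, a public commitment to that model, and a verifier who checks a claimed output without ever seeing the weights. It is a hypothetical, self-contained Python illustration of the roles only: the "proof" here is issued by a mock oracle that holds a shared secret, which is precisely what a real zero-knowledge proof system (for example, a zk-SNARK generated with a tool such as ezkl from an ONNX model, as mentioned in the abstract) makes unnecessary. None of the names below come from a real library.

```python
"""Hypothetical sketch of the ZKML roles described above (not a real ZKP).

The cryptography is replaced by a mock "proof oracle" holding a shared
secret, so the script stays self-contained and runnable. A genuine system
would use a zk-SNARK over the model's computation and need no shared secret.
"""
import hashlib
import json

ORACLE_SECRET = b"known-only-to-the-mock-oracle"  # a real ZKP removes this


def commit(weights):
    # Public commitment to the private model weights.
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()


def model_forward(weights, x):
    # Toy linear model standing in for an ONNX network; known only to the prover.
    return sum(w * xi for w, xi in zip(weights["w"], x)) + weights["b"]


def oracle_prove(weights, x, y, commitment):
    # Re-runs the model and, only if the claim is true, issues a tag binding
    # (commitment, x, y). This models the soundness a zk-SNARK prover provides.
    if commit(weights) != commitment or model_forward(weights, x) != y:
        raise ValueError("statement is false: no proof can be issued")
    msg = json.dumps([commitment, x, y]).encode()
    return hashlib.sha256(ORACLE_SECRET + msg).hexdigest()


def oracle_verify(commitment, x, y, proof):
    # Checks the claim using only public values; the weights never appear here.
    msg = json.dumps([commitment, x, y]).encode()
    return proof == hashlib.sha256(ORACLE_SECRET + msg).hexdigest()


# Prover side: owns the weights, publishes only the commitment and the proof.
weights = {"w": [0.5, -1.25, 2.0], "b": 0.1}
commitment = commit(weights)
x = [1.0, 2.0, 3.0]
y = model_forward(weights, x)
proof = oracle_prove(weights, x, y, commitment)

# Verifier side: sees (commitment, x, y, proof) but never the weights.
assert oracle_verify(commitment, x, y, proof)
assert not oracle_verify(commitment, x, y + 1.0, proof)  # tampered output rejected
print(f"output {y} verified against committed model {commitment[:16]}...")
```

In a real deployment, oracle_prove and oracle_verify would be replaced by the proving and verification routines of a ZKP toolchain, and the commitment to the weights would be enforced inside the proof circuit itself rather than checked by a trusted party.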
dc.identifier.endpage28
dc.identifier.startpage16
dc.identifier.urihttps://www.duvaryayinlari.com/Webkontrol/IcerikYonetimi/Dosyalar/theoretical-and-applied-approaches-in-engineering_icerik_g4633_bGBzHgd7.pdf
dc.identifier.urihttps://hdl.handle.net/20.500.12695/3672
dc.identifier.volume10
dc.institutionauthorTaş, Özge
dc.institutionauthorid0000-0001-7220-5054
dc.publisherDuvar Yayınları
dc.relation.ispartofTHEORETICAL AND APPLIED APPROACHES IN ENGINEERING
dc.relation.publicationcategoryBook Chapter - International
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.subjectMachine Learning
dc.subjectZero-Knowledge
dc.subjectArtificial Intelligence
dc.titleAI Model Validation with Zero-Knowledge Proof: Trust and Transparency Without Data Access
dc.typeBook Chapter

Files

Original bundle
Name: content.pdf
Size: 1.78 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.17 KB
Format: Item-specific license agreed upon to submission