Abstract Nonsense

Posts tagged with "LLMs"

Talk: Implementing Efficient Language Models under Homomorphic Encryption

Today (11/11, Pepero Day in Korea), I had the pleasure of attending a fascinating talk on Implementing Efficient Language Models under Homomorphic Encryption by Donghwan Rho (노동환) at the Seoul National University Research Institute of Mathematics’ 2025 11.11 Symposium.

I am on holiday in Korea and didn’t want to miss this opportunity to learn more about homomorphic encryption - an area I’ve been increasingly fascinated by lately!

# Talk Abstract

As large language models (LLMs) become ubiquitous, users routinely share sensitive information with them, raising pressing privacy concerns. Homomorphic encryption (HE) is a promising approach to privacy-preserving machine learning (PPML), enabling computation on encrypted data. However, many core components of LLMs are not HE-friendly, limiting practical deployments. In this talk, we investigate the main bottlenecks - softmax, matrix multiplications, and next-token prediction - and how we address them, moving toward the practical implementation of LLMs under HE.
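To see why softmax is a bottleneck: schemes like CKKS natively support only additions and multiplications on ciphertexts, so softmax's exponential and division must be replaced by polynomial operations. Below is a minimal plaintext sketch of the standard workaround - a low-degree Taylor polynomial for exp plus Newton iteration for the reciprocal. The polynomial degree, input range, and initial guess here are my own illustrative assumptions, not the parameters from the talk.

```python
import math
import numpy as np

def softmax(x):
    # Reference softmax: uses exp and division, neither of which is a
    # native operation under CKKS-style homomorphic encryption.
    e = np.exp(x - x.max())
    return e / e.sum()

def poly_exp(x, degree=8):
    # HE-friendly stand-in for exp: a Taylor polynomial, accurate only
    # on a bounded input range (roughly [-1, 1] at this degree).
    return sum(x**k / math.factorial(k) for k in range(degree + 1))

def newton_reciprocal(s, y0=0.1, iters=10):
    # Division-free reciprocal via Newton iteration y <- y * (2 - s*y),
    # which uses only additions and multiplications. Converges when
    # 0 < s * y0 < 2, so y0 must be chosen for the expected range of s.
    y = y0
    for _ in range(iters):
        y = y * (2.0 - s * y)
    return y

def he_friendly_softmax(x):
    e = poly_exp(x)
    return e * newton_reciprocal(e.sum())

x = np.array([0.5, -0.3, 0.8, 0.0])  # assumed pre-scaled into [-1, 1]
approx = he_friendly_softmax(x)
```

The catch, and part of why this is a research problem, is that every polynomial multiplication consumes ciphertext "depth", so the degree of these approximations directly drives the cost of evaluating a transformer under HE.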