By Office of Communications and Public Affairs
Dr. Sein Minn has joined the Department of Information and Communications Technologies (ICT) at the AIT School of Engineering and Technology as an Assistant Professor in the Data Science and Artificial Intelligence program and the Computer Science program.

He is an AI researcher focusing on the development of adaptive and interpretable systems that bridge machine learning, user modeling, and cognitive modeling. His research advances human-centered and explainable AI, particularly in education, to build transparent and trustworthy AI applications.
Dr. Sein Minn earned his Ph.D. in Computer Engineering from Université de Montréal and has held research positions at Inria, École Polytechnique, CNRS, and the Université du Québec à Montréal. His work has appeared in top-tier venues such as AAAI, AIED, ICDM, and PAKDD, and he actively contributes as a reviewer and editorial board member for leading AI journals.
Supported by research grants from NSERC (Canada), Inria (France), and partners in Singapore and Japan, his career reflects a strong commitment to advancing trustworthy, human-centered AI for adaptive learning and beyond. In an interview, Dr. Sein Minn shared insights on his research and the future of explainable AI in education.
1. What inspired you to join AIT, and how do you see yourself contributing to the growth of the Data Science and Artificial Intelligence programs?
AIT’s mission of fostering innovation through interdisciplinary education and regional collaboration aligns deeply with my vision of human-centered and interpretable AI. Throughout my academic and research journey at Université de Montréal, Inria, and École Polytechnique, my focus has been on applying AI to human-centered contexts, particularly in education.
At AIT, I hope to contribute by developing interpretable AI frameworks that not only advance research but also empower students and researchers to design ethical, transparent, and socially impactful AI systems. I also envision introducing new courses and collaborative projects on causal AI and explainable ML, which will strengthen and expand the capabilities of AIT’s Data Science and AI programs.
2. Your research focuses on adaptive and interpretable AI systems. Could you explain how your work bridges human-centered design and machine learning, particularly in the context of education?
My research aims to make AI understandable and adaptive to human needs. For example, during my postdoctoral work at UQAM and Inria, I developed probabilistic and interpretable knowledge tracing models that analyze how students learn and how their knowledge evolves over time.
By integrating psychometric principles and cognitive modeling with machine learning, I design systems that can explain their reasoning in human-understandable terms. These models provide knowledge assessments that help teachers and learners gain actionable insights into skill mastery and learning processes.
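To give a flavor of what an interpretable knowledge tracing model looks like, the sketch below implements classic Bayesian Knowledge Tracing (BKT), a standard probabilistic approach in this literature. It is an illustrative example only, not Dr. Sein Minn’s own formulation, and the parameter values (slip, guess, learning rate, prior) are hypothetical.

```python
# Illustrative sketch of Bayesian Knowledge Tracing (BKT), a classic
# interpretable knowledge-tracing model. All parameter values here are
# hypothetical, chosen for demonstration only.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Update the estimated probability that a student has mastered a
    skill after observing one correct or incorrect response."""
    if correct:
        # Bayes rule: P(mastered | correct response)
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # Bayes rule: P(mastered | incorrect response)
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # Account for the chance of learning the skill on this practice step.
    return posterior + (1 - posterior) * p_transit

# Trace a student's estimated mastery over a short response sequence.
p = 0.3  # hypothetical prior probability of initial mastery
for answered_correctly in [True, True, False, True]:
    p = bkt_update(p, answered_correctly)
    print(f"estimated mastery: {p:.3f}")
```

Because every quantity in the update is a named, human-readable probability (slip, guess, learning rate), a teacher can inspect why the model raised or lowered its mastery estimate after each response, which is the kind of transparency the interview describes.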
3. Having worked at institutions like INRIA, École Polytechnique, and Université de Montréal, how have these experiences shaped your research philosophy and collaborative approach?
Working across these leading research institutions has profoundly shaped my collaborative and interdisciplinary mindset. At Inria and École Polytechnique, I engaged in projects combining deep learning, data mining and cognitive science, emphasizing rigorous methodology and explainability. At Université de Montréal, I learned the importance of combining theory and data-driven modeling with empirical validation.
These experiences taught me to value open collaboration and reproducible research, principles that I bring into every project. My approach is to create inclusive, cross-institutional teams where expertise in AI, education, and data ethics can converge to address complex societal challenges.
4. AI transparency and interpretability are increasingly vital in today’s world. How do you envision trustworthy AI shaping sensitive applications such as education, healthcare, or public policy?
Trustworthy AI is essential wherever human lives or opportunities are directly affected. AI must be interpretable, fair, and accountable, especially in sensitive domains. In education, for instance, students and teachers should understand why a system recommends a specific intervention or assessment. Similarly, in healthcare or policy, decisions informed by AI must be explainable to both professionals and the public.
I envision trustworthy AI as a partnership between human judgment and machine reasoning. Through causal and probabilistic approaches that I am currently working on, we can design systems that provide transparent cause-and-effect reasoning, supporting ethical decision-making while maintaining performance and reliability.
5. What trends or emerging directions in Artificial Intelligence and ICT excite you the most, and how can students at AIT prepare to engage with these frontiers?
I’m very excited about the rise of causal AI, generative modeling and human-AI collaboration. These emerging directions are driving us toward intelligent systems that can reason, explain, and create, rather than simply predict. They also raise important questions about ethics, bias and the dynamics of human interaction with intelligent systems, which have become central themes in AI research and application.
For AIT students, preparation involves building strong foundations in mathematics, statistics, and programming, but also developing interdisciplinary thinking. Understanding human factors (such as cognition, fairness, and ethics) will be key to shaping responsible and human-aligned AI systems. I strongly encourage students to take part in open-source research, interdisciplinary collaborations, and leading international conferences (such as AAAI, NeurIPS, and KDD), where they can engage directly with the global AI community.
In this spirit, I am establishing a research group called Human Insight+AI, which explores the synergy between human understanding and AI. The lab’s goal is to design AI systems that not only learn from data but also amplify human insight, creativity, and decision making, fostering a new generation of trustworthy and collaborative AI technologies.
6. What advice would you give to aspiring researchers and engineers who wish to pursue a career in AI and data science?
My advice is to focus on depth, curiosity, and ethical awareness, and to develop a solid grounding in both theoretical foundations (probability, statistics, optimization) and practical implementation (TensorFlow, PyTorch, scikit-learn). Always ask not just “Can we build it?” but also “Should we build it, and how will it help people?”
Engage in research that balances technical rigor with societal impact, and never underestimate the value of collaboration. The future of AI belongs to those who can integrate machine intelligence with human understanding. That is a philosophy that I have carried throughout my career.