Research

Nitid AI actively participates in cutting-edge R&D projects. Born out of the MIT research ecosystem, we tackle foundational scientific and technological challenges through advanced artificial intelligence.

LensNet: Enhancing Real-time Microlensing Event Discovery with Recurrent Neural Networks in the Korea Microlensing Telescope Network

Astrophysical Journal

Traditional microlensing event vetting methods require highly trained human experts, and the process is both complex and time-consuming.
This reliance on manual inspection often leads to inefficiencies and constrains the ability to scale for widespread exoplanet detection, ultimately hindering discovery rates. To address the limits of traditional microlensing event vetting, we have developed LensNet, a machine learning pipeline specifically designed to distinguish legitimate microlensing events from false positives caused by instrumental artifacts, such as pixel bleed trails and diffraction spikes. Our system operates in conjunction with a preliminary algorithm that detects increasing trends in flux. These flagged instances are then passed to LensNet for further classification, allowing for timely alerts and follow-up observations. Tailored for the multi-observatory setup of the Korea Microlensing Telescope Network (KMTNet) and trained on a rich dataset of manually classified events, LensNet is optimized for early detection and warning of microlensing occurrences, enabling astronomers to organize follow-up observations promptly. The internal model of the pipeline employs a multi-branch Recurrent Neural Network (RNN) architecture that evaluates time-series flux data with contextual information, including sky background, the full width at half maximum of the target star, flux errors, PSF quality flags, and air mass for each observation. We demonstrate a classification accuracy above 87.5%, and anticipate further improvements as we expand our training set and continue to refine the algorithm.
Viaña, J. et al.
2025
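
To illustrate the kind of architecture the LensNet abstract describes, here is a minimal sketch of a multi-branch recurrent classifier: one recurrent branch per observed channel (flux plus contextual series such as sky background, FWHM, flux error, PSF quality flag, and air mass), fused into a single "real event vs. artifact" score. The channel count, layer sizes, and fusion head are illustrative assumptions, not the published model.

```python
# Hypothetical multi-branch RNN sketch in the spirit of LensNet.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiBranchRNN(nn.Module):
    def __init__(self, n_channels: int = 6, hidden: int = 32):
        super().__init__()
        # One GRU branch per observed channel (each channel is a 1-D time series).
        self.branches = nn.ModuleList(
            [nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
             for _ in range(n_channels)]
        )
        # Fuse the final hidden states of all branches and classify.
        self.head = nn.Sequential(
            nn.Linear(n_channels * hidden, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); process each channel in its own branch.
        feats = []
        for i, gru in enumerate(self.branches):
            _, h = gru(x[:, :, i:i + 1])   # h: (1, batch, hidden)
            feats.append(h.squeeze(0))
        logits = self.head(torch.cat(feats, dim=-1))
        return logits.squeeze(-1)          # raw score; apply sigmoid for probability

# Example: a batch of 8 light curves, 200 epochs each, 6 channels.
model = MultiBranchRNN()
scores = model(torch.randn(8, 200, 6))
```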

cecilia: A Machine Learning-Based Pipeline for Measuring Metal Abundances of Helium-rich Polluted White Dwarfs

MNRAS

Over the past several decades, conventional spectral analysis techniques of polluted white dwarfs have become powerful tools to learn about the geology and chemistry of extrasolar bodies.
Despite their proven capabilities and extensive legacy of scientific discoveries, these techniques are still limited by their manual, time-intensive, and iterative nature. As a result, they are susceptible to human errors and are difficult to scale up to population-wide studies of metal pollution. This paper seeks to address this problem by presenting cecilia, the first Machine Learning (ML)-powered spectral modeling code designed to measure the metal abundances of intermediate-temperature (10,000≤Teff≤20,000 K), Helium-rich polluted white dwarfs. Trained with more than 22,000 randomly drawn atmosphere models and stellar parameters, our pipeline aims to overcome the limitations of classical methods by replacing the generation of synthetic spectra from computationally expensive codes and uniformly spaced model grids with a fast, automated, and efficient neural-network-based interpolator. More specifically, cecilia combines state-of-the-art atmosphere models, powerful artificial intelligence tools, and robust statistical techniques to rapidly generate synthetic spectra of polluted white dwarfs in high-dimensional space, and enable accurate (≲0.1 dex) and simultaneous measurements of 14 stellar parameters (including 11 elemental abundances) from real spectroscopic observations. As massively multiplexed astronomical surveys begin scientific operations, cecilia’s performance has the potential to unlock large-scale studies of extrasolar geochemistry and propel the field of white dwarf science into the era of Big Data. In doing so, we aspire to uncover new statistical insights that were previously impractical with traditional white dwarf characterisation techniques.
Badenas-Agusti, M., Viaña, J., et al.
2024
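
The core idea of an emulator-based pipeline like cecilia can be sketched in a few lines: a small neural network maps stellar parameters to a synthetic spectrum, replacing calls to an expensive atmosphere code inside a fitting loop. The layer sizes, wavelength grid length, and Gaussian likelihood below are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch of a neural-network spectral emulator used inside a fit.
import torch
import torch.nn as nn

N_PARAMS = 14        # e.g. Teff, log g, and elemental abundances (assumed layout)
N_PIXELS = 4000      # length of the modeled wavelength grid (assumed)

emulator = nn.Sequential(
    nn.Linear(N_PARAMS, 256), nn.SiLU(),
    nn.Linear(256, 256), nn.SiLU(),
    nn.Linear(256, N_PIXELS),     # predicted flux at each wavelength pixel
)

def log_likelihood(theta: torch.Tensor,
                   flux_obs: torch.Tensor,
                   flux_err: torch.Tensor) -> torch.Tensor:
    """Gaussian log-likelihood of the observed flux given parameters theta."""
    flux_model = emulator(theta)
    resid = (flux_obs - flux_model) / flux_err
    return -0.5 * torch.sum(resid ** 2)

# A sampler or optimizer can now query log_likelihood many times at negligible
# cost compared to a full radiative-transfer calculation.
theta0 = torch.zeros(N_PARAMS)
ll = log_likelihood(theta0, torch.zeros(N_PIXELS), torch.ones(N_PIXELS))
```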

Front-propagation Algorithm: Explainable AI Technique for Extracting Linear Function Approximations from Neural Networks

NAFIPS

This paper introduces the front-propagation algorithm, a novel eXplainable AI (XAI) technique designed to elucidate the decision-making logic of deep neural networks.
Unlike other popular explainability algorithms such as Integrated Gradients or Shapley Values, the proposed algorithm extracts an accurate and consistent linear-function explanation of the network in a single forward pass of the trained model. This distinction sets its time complexity apart: front-propagation can run in real time and in parallel with deployed models. We packaged this algorithm in a software package called front-prop and demonstrate its efficacy in providing accurate linear functions with three different neural network architectures trained on publicly available benchmark datasets.
Viaña, J. et al.
2024
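
To give a feel for what a single-pass linear explanation can look like, here is a minimal sketch for a ReLU MLP: the active ReLU pattern at an input defines an exact locally linear map f(z) = W_eff z + b_eff, which can be accumulated during one forward pass. This illustrates the flavor of the approach only; it is not the published front-prop implementation.

```python
# Hypothetical single-pass extraction of the locally linear map of a ReLU MLP.
import numpy as np

def local_linear_map(layers, x):
    """layers: list of (W, b) for a ReLU MLP; returns (W_eff, b_eff) at x."""
    W_eff = np.eye(x.size)
    b_eff = np.zeros(x.size)
    a = x.copy()
    for i, (W, b) in enumerate(layers):
        a = W @ a + b
        W_eff = W @ W_eff
        b_eff = W @ b_eff + b
        if i < len(layers) - 1:            # ReLU on all hidden layers
            mask = (a > 0).astype(a.dtype)
            a *= mask
            W_eff = mask[:, None] * W_eff
            b_eff = mask * b_eff
    return W_eff, b_eff                    # f(z) = W_eff @ z + b_eff near x

# Example with a tiny random 3-layer network.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(8, 8)), rng.normal(size=8)),
          (rng.normal(size=(1, 8)), rng.normal(size=1))]
x = rng.normal(size=4)
W_eff, b_eff = local_linear_map(layers, x)
```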

Forcing the Network to use Human Explanations in its Inference Process

NAFIPS

We introduce the concept of ForcedNet, a neural network that has been trained to generate a simplified version of human-like explanations in its hidden layers.
The main difference from a regular network is that a ForcedNet has been educated such that its inner reasoning reproduces certain patterns that can, to some extent, be considered human-understandable explanations. If designed appropriately, a ForcedNet can increase the model’s transparency and explainability. We also propose the use of support features, hidden variables that complement the explanations and contain additional information to achieve high performance while the explanation contains the most important features of the layer. We define the optimal value of support features and what analysis can be performed to select this parameter. We demonstrate a simple ForcedNet case for image reconstruction using as explanation the composite image of the saliency map that is intended to mimic the focus of the human eye. The primary objective of this work is to promote the use of intermediate explanations in neural networks and encourage deep learning development modules to integrate the possibility of creating networks like the proposed ForcedNets.
Viaña, J., Vanderburg, A.
2022
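
A minimal sketch of the ForcedNet idea, under our own illustrative assumptions: an image-reconstruction network whose hidden representation is split into an "explanation" part, supervised to match a human-interpretable target (e.g. a saliency-like map), and free "support features" that carry whatever else is needed for reconstruction. The shapes, loss weighting, and stand-in targets below are hypothetical.

```python
# Hypothetical ForcedNet-style sketch: supervised explanation features plus
# free support features in the bottleneck of an image-reconstruction network.
import torch
import torch.nn as nn

class ForcedNet(nn.Module):
    def __init__(self, n_pixels=784, n_expl=64, n_support=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, n_expl + n_support))
        self.decoder = nn.Sequential(nn.Linear(n_expl + n_support, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels))
        self.n_expl = n_expl

    def forward(self, x):
        z = self.encoder(x)
        explanation, support = z[:, :self.n_expl], z[:, self.n_expl:]
        return self.decoder(z), explanation, support

def forcednet_loss(x, expl_target, model, alpha=1.0):
    recon, explanation, _ = model(x)
    # Reconstruction term plus a term forcing the hidden explanation to
    # match the provided human-understandable target.
    recon_term = nn.functional.mse_loss(recon, x)
    expl_term = nn.functional.mse_loss(explanation, expl_target)
    return recon_term + alpha * expl_term

# One training step on random stand-in data.
model = ForcedNet()
x = torch.rand(32, 784)
expl_target = torch.rand(32, 64)       # stand-in for a downsampled saliency map
loss = forcednet_loss(x, expl_target, model)
loss.backward()
```

The split bottleneck is the key design choice in this sketch: the explanation features are pushed toward the interpretable target while the support features absorb the remaining information needed for accurate reconstruction.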

Leading the Way in Explainable AI Innovation

We share our latest scientific contributions in Explainable AI and applied machine learning. Explore published papers and projects developed through collaborations with academic and industry partners.