Transparent, Robust and Ultra-Sparse Trees (TRUST)

TRUST is my flagship Ph.D. project in Trustworthy AI. It achieves accuracy comparable to state-of-the-art machine learning algorithms, including black-box models such as Random Forest, while remaining fully interpretable. Scroll down for a short demo of TRUST. The current version solves regression problems (variants such as time series are still experimental). Extensions to multiclass classification and beta regression are already under development, and I will make them available soon.

Free version (launching in June 2025)

Premium version (launching in August 2025)

Below is a demo of the LLM capabilities integrated into TRUST. The video begins with a call to the .explain() method, included in the free version of the model, where a user wants to know more about the model's prediction for a specific instance (a target house). The default output lists the key features influencing the prediction and their direction, followed by a summary explanation. The user then asks Gemini a completely custom question (a premium feature): what minimum changes to the house's attributes would make the model predict a cheaper price instead? This demonstrates the actionable insights and counterfactual analysis offered by the premium LLM integration.
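
The workflow shown in the video might look roughly like the following. This is only an illustrative sketch: aside from the .explain() method mentioned above, the class name, constructor, and parameter names here are assumptions, not the actual TRUST API.

```python
# Hypothetical usage sketch; names other than .explain() are assumed,
# since the library is not yet released.
from trust import TrustRegressor  # assumed import path

model = TrustRegressor()          # assumed class name
model.fit(X_train, y_train)       # standard fit on housing data

# Free version: default explanation for one instance (a target house),
# listing the key features, their direction, and a summary explanation.
model.explain(target_house)

# Premium version: pass a custom question, answered via Gemini,
# e.g. a counterfactual query about the prediction.
model.explain(
    target_house,
    question="What minimum changes to this house's attributes would "
             "make the model predict a cheaper price?",
)
```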