Transparent, Robust and Ultra-Sparse Trees (TRUST)
TRUST is my flagship Ph.D. project in Trustworthy AI. It achieves accuracy comparable to state-of-the-art machine learning algorithms, including black-box models such as Random Forest, while remaining fully interpretable. Scroll down for a short demo of TRUST. The current version solves regression problems (variants such as time series are still experimental). Extensions to multiclass classification and beta regression are already under development, and I will make them available soon.
Free version (launching in June 2025)
- Complete core functionality, including the main visualization and explainability tools
- Explainability tools: state-of-the-art variable importance scoring and ALE plots, plus instance-level explanations including SHAP analysis (see the sketch after this list)
- Perfect for small to medium datasets where both accuracy and interpretability are essential
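As a rough illustration, the free-tier workflow might look like the sketch below. The `trust` package name, the `TrustRegressor` class, and the `variable_importance`, `ale_plot`, and `explain` methods are hypothetical placeholders, since the public API has not been released yet; treat this as a sketch of the intended workflow, not the actual interface.

```python
# Hypothetical sketch of the free-tier workflow; the package name,
# class, and method names below are placeholders, not a released API.
import pandas as pd
from trust import TrustRegressor  # hypothetical import

# Fit an ultra-sparse, interpretable tree on tabular housing data
data = pd.read_csv("housing.csv")
X, y = data.drop(columns="price"), data["price"]

model = TrustRegressor()
model.fit(X, y)

# Global explainability: variable importance scores and ALE plots
model.variable_importance()      # hypothetical: ranked importance scores
model.ale_plot("living_area")    # hypothetical: one-feature ALE curve

# Instance-level explainability, including SHAP-style attributions
model.explain(X.iloc[0])         # hypothetical: key features + summary
```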
Premium version (launching in August 2025)
- All the core functionality included in the free version
- Faster training times (can handle bigger datasets)
- Guaranteed support
- Added functionality (sketched after this list):
  - LLM integration for enhanced explainability
  - Signed (+/-) variable importance plots
  - 2-way interaction ALE plots
  - Prediction confidence intervals
  - Out-of-distribution (OOD) detection
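Continuing the hypothetical sketch above (same caveats apply: every method name and argument is a placeholder for the unreleased API), the premium additions might surface like this:

```python
# Hypothetical sketch of premium-tier calls, reusing `model` and `X`
# from the free-tier sketch; all names are placeholders.
pred, lo, hi = model.predict(X, interval=0.95)  # prediction confidence intervals
model.variable_importance(signed=True)          # signed (+/-) importance plot
model.ale_plot("living_area", "year_built")     # 2-way interaction ALE plot
is_ood = model.detect_ood(X)                    # out-of-distribution flags
```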
Below is a demo of the integrated LLM capabilities within TRUST. The video starts with a call to the `.explain()` method included in the free version of the model, where a user wants to know more about the model's prediction for a specific instance (a target house). After the default output appears, including the key features influencing the prediction, their direction, and a final summary explanation, the user asks Gemini a fully custom question (a premium feature): what minimum changes to the house's attributes would make the model predict a cheaper price instead? This demonstrates the actionable insights and counterfactual analysis offered by the premium LLM integration.
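For concreteness, the interaction shown in the video might look something like the following. The method signature and the `ask=` argument are hypothetical, as the premium API is not yet public; only the `.explain()` method name comes from the demo itself.

```python
# Hypothetical sketch of the demo interaction; the signature and the
# ask= argument are placeholders for the unreleased premium API.
target_house = X.iloc[42]  # the instance explained in the video

# Free version: default output with the key features influencing the
# prediction, their direction, and a final summary explanation.
model.explain(target_house)

# Premium version: a fully custom counterfactual question routed to
# the integrated LLM (Gemini in the demo).
model.explain(
    target_house,
    ask=(
        "What minimum changes to this house's attributes would make "
        "the model predict a cheaper price instead?"
    ),
)
```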