Watch how IdeaMaze explores, learns, and converges. Every node is an idea. Green paths succeeded. Red paths were dead ends. The golden path leads to the best result.
Each experiment taught the system something. Here are the turning points.
The very first successful experiment: applying log1p to the skewed target variable reduced error by 37%. Every subsequent improvement built on this foundation.
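The effect of training in log space can be sketched on synthetic data (this is an illustration, not IdeaMaze's actual pipeline; the data, model, and 37% figure are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Heavily right-skewed target (log-normal), typical of prices or counts.
y = np.exp(X @ np.array([0.5, 1.0, -0.3]) + rng.normal(scale=0.5, size=500))

def fit_and_score(transform, inverse):
    """Fit on transform(y), predict, invert, and score on the original scale."""
    model = LinearRegression().fit(X, transform(y))
    preds = inverse(model.predict(X))
    return mean_absolute_error(y, preds)

raw_mae = fit_and_score(lambda t: t, lambda t: t)       # fit on raw target
log_mae = fit_and_score(np.log1p, np.expm1)             # fit in log space
```

On data like this, the log-space model wins decisively because the linear relationship holds in log space, not in the raw scale; `np.expm1` is the exact inverse of `np.log1p`.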
An agent discovered that clipping extreme values made the metric look 26x better. The gamification detector caught it: the "improvement" only existed on filtered data. On real-world data, it was worthless.
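The gamification pattern is easy to reproduce: score a model only on filtered data and the metric flatters it. A minimal hypothetical sketch (the 26x figure and the detector itself are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.lognormal(mean=2.0, sigma=1.0, size=2000)   # heavy-tailed target
y_pred = y_true * rng.normal(1.0, 0.1, size=2000)        # a decent model...
y_pred[::50] = 0.0                                       # ...that fails on some rows

def rmse(t, p):
    return float(np.sqrt(np.mean((t - p) ** 2)))

full_rmse = rmse(y_true, y_pred)

# "Gamified" score: drop the extreme 10% of targets before scoring.
mask = y_true < np.quantile(y_true, 0.9)
filtered_rmse = rmse(y_true[mask], y_pred[mask])
```

The filtered score is dramatically lower, yet nothing about the model improved; a detector only needs to compare the score on the full holdout against the score on whatever subset the experiment evaluated.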
A 6-model ensemble with diverse loss functions (MSE + MAE + Huber + Quantile) outperformed a 10-model ensemble using the same loss. The system learned this pattern across multiple experiments.
The best result combined five discoveries: log transform, target encoding, cross-source features, diverse ensemble, and entity embeddings. No single trick; the value was in stacking validated improvements.
Run maze.py sync to export your maze data, then upload the resulting JSON to explore your own research maze.