The Nobel Prize background for Hopfield and Hinton's work on neural networks is pure gold. It's a masterclass in explaining AI basics.
Key takeaways from the conclusion:
- ML applications are expanding rapidly. We're still figuring out which will stick.
- Ethical discussions are crucial as the tech develops.
- Physics 🤝 AI: A two-way street of innovation.
Some mind-blowing AI applications in physics:
- Discovering the Higgs particle
- Cleaning up gravitational wave data
- Hunting exoplanets
- Predicting molecular structures
- Designing better solar cells
We're just scratching the surface. The interplay between AI and physics is reshaping both fields.
Bonus: The illustrations accompanying the background document are really neat. (Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences)
Simplified implementation of “Neural Networks are Decision Trees”.
Showing that any neural network with any activation function can be represented as a decision tree. Since decision trees are inherently interpretable, this equivalence helps us understand how the network makes its decisions.
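To make the equivalence concrete, here is a minimal NumPy sketch of the easiest case, a one-hidden-layer ReLU net: branching on the sign of each pre-activation plays the role of the tree's internal nodes, and each activation pattern (each "leaf") reduces to an effective linear rule. This toy example only illustrates the piecewise-linear case, not the paper's general construction for arbitrary activations.

```python
import numpy as np

# Toy one-hidden-layer ReLU net: y = W2 @ relu(W1 @ x + b1) + b2.
# For any input, the sign pattern of (W1 @ x + b1) selects a linear region;
# branching on those signs is a decision tree whose leaves apply an
# effective linear rule W2 @ diag(pattern) @ W1.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # hidden layer: 2 -> 3
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)   # output layer: 3 -> 1

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def tree(x):
    # Internal nodes: branch on the sign of each pre-activation.
    pattern = (W1 @ x + b1 > 0).astype(float)
    # Leaf: apply the effective linear rule for that activation pattern.
    W_eff = W2 @ np.diag(pattern) @ W1
    b_eff = W2 @ (pattern * b1) + b2
    return W_eff @ x + b_eff

x = rng.normal(size=2)
assert np.allclose(net(x), tree(x))  # identical outputs, path by path
```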
In this implementation, we trained a simple neural network for 1k epochs on make_moons, saved the trained weights (state dicts), extracted the decision-tree equivalent from the trained weights, and then visualized and evaluated it.
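A rough sketch of that pipeline, assuming PyTorch and scikit-learn (the names below are illustrative, not the repo's actual code). One simplification to flag: instead of the paper's exact weight-to-tree extraction, the last step here fits a surrogate decision tree to the trained network's predictions, which approximates the extraction rather than reproducing it.

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier, export_text

# Data: the two-moons toy dataset used in the description above.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_t, y_t = torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.long)

# A simple network: 2 -> 16 -> 2 with ReLU.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(1000):            # "1k epochs" of full-batch training
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "moons_net.pt")   # save the trained weights

# Stand-in for the exact extraction: fit a tree to the network's own
# predictions so its splits mirror the decision boundary the net learned.
with torch.no_grad():
    net_preds = model(X_t).argmax(dim=1).numpy()
surrogate = DecisionTreeClassifier(max_depth=5).fit(X, net_preds)
print(export_text(surrogate, feature_names=["x1", "x2"]))        # visualize
print("agreement with the net:", surrogate.score(X, net_preds))  # evaluate
```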