Our brains use only about 30-40 watts of power, yet they are more capable than artificial neural networks that consume enormous amounts of energy to train and run. So what can we learn from the brain to help us build better neural networks? Join Michael McCourt as he interviews Subutai Ahmad, VP of Research at Numenta, about his latest work.
In this episode, they discuss sparsity, bioinspiration, and how Numenta is using SigOpt to build better neural networks and cut training costs.
1:31 - Background on Numenta
2:31 - Bioinspiration
3:47 - Numenta's three research areas
4:06 - What is sparsity and how does it function in the brain?
7:15 - Training costs, Moore's Law, and how deep learning systems are on a different curve
9:58 - Mismatch between hardware and algorithms today in deep learning
11:04 - Improving energy usage and speed with sparse networks
14:10 - Sparse networks work with different hyperparameter regimes than dense networks
14:18 - How Numenta uses SigOpt Multimetric optimization (see the sketch after these timestamps)
15:48 - How Numenta uses SigOpt Multitask to constrain costs
18:06 - How Numenta chose their hyperparameters
19:40 - What's next from Numenta
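For a concrete sense of the SigOpt Multimetric feature discussed at 14:18, here is a minimal sketch using SigOpt's Python client. The parameter names, bounds, metric names, and the train_and_evaluate function are illustrative placeholders, not Numenta's actual configuration; the Multitask feature mentioned at 15:48 is not shown here.

```python
# Minimal sketch of a SigOpt Multimetric experiment.
# All names, bounds, and the train_and_evaluate() helper are placeholders,
# not Numenta's actual setup.
from sigopt import Connection

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")

experiment = conn.experiments().create(
    name="Sparse network tuning (illustrative)",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="weight_sparsity", type="double", bounds=dict(min=0.50, max=0.95)),
    ],
    # Multimetric: optimize two metrics jointly to trace out a Pareto frontier.
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_cost", objective="minimize"),
    ],
    observation_budget=60,
)

for _ in range(experiment.observation_budget):
    # Ask SigOpt for the next hyperparameter configuration to try.
    suggestion = conn.experiments(experiment.id).suggestions().create()
    params = suggestion.assignments  # dict-like: params["learning_rate"], ...

    # Placeholder for your own training and evaluation loop.
    accuracy, inference_cost = train_and_evaluate(params)

    # Report both metric values back so SigOpt can update its model.
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[
            dict(name="accuracy", value=accuracy),
            dict(name="inference_cost", value=inference_cost),
        ],
    )
```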
Subscribe to Experiment Exchange on your favorite podcast platform, available now on Spotify, Apple Podcasts, and more!
✔️ Learn more about SigOpt: https://sigopt.com
✔️ Learn more about Numenta: https://numenta.com
✔️ Follow us on Twitter: @sigopt