This article is part of our “How AI is changing the world” event series, held in San Francisco, New York, and Tel Aviv from June to November 2019, featuring insights from leading scientists and entrepreneurs on how AI will change healthcare, communication, agriculture, travel, and other industries. Check out all 12 talks here.
* * * * *
At Uber AI, the lab's co-founder Jason Yosinski — whose background includes stints at NASA's Jet Propulsion Laboratory and Google DeepMind — is working to uncover how machine learning algorithms actually, well, learn. "Today, we build models and machines and AI that are more complicated than we can understand," Jason said. "It's really a new field — AI neuroscience."
Jason's research focuses on how the neuron layers in neural nets, a subset of machine learning, help or hinder the learning process. Much like the neurons in our brains, these layers learn to find patterns in data, allowing them to perform tasks.
The initial results from his research are surprising: on average, neuron layers help learning only about half the time, and actually hinder it the other half. Using a new technique called LCA (Loss Change Allocation), Jason presented a visualization of how different parts of a neural network contribute to the learning process. This research sheds light on how neural nets actually learn, and could help data scientists build better AI models. You can read more in the team’s blog post, or at NeurIPS, where this work will be presented.
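To make the idea concrete, here is a minimal sketch of the intuition behind LCA: attribute each training step's change in loss to individual parameters via a first-order estimate, gradient times parameter update. This is an illustrative assumption, not the paper's exact method (the published technique uses a more careful path-integral approximation), and the toy quadratic loss and variable names here are invented for the example.

```python
import numpy as np

def loss(theta):
    """Toy loss: L(theta) = 0.5 * ||theta||^2 (illustrative stand-in)."""
    return 0.5 * float(theta @ theta)

def grad(theta):
    """Gradient of the toy loss."""
    return theta

rng = np.random.default_rng(0)
theta0 = rng.normal(size=5)   # initial parameters
theta = theta0.copy()
lr = 0.1

lca = np.zeros_like(theta)    # per-parameter loss-change allocation
for _ in range(100):          # plain SGD on the toy loss
    g = grad(theta)
    step = -lr * g
    lca += g * step           # first-order share of this step's loss change
    theta = theta + step

# Negative LCA means the parameter helped (drove the loss down);
# positive LCA means it hurt. Summed over parameters and steps, LCA
# approximates the total loss change over training.
print(lca.sum(), loss(theta) - loss(theta0))
```

Tallying how often each parameter's per-step allocation comes out negative versus positive is, roughly, how one arrives at "helps about half the time" style statistics.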