Rather than automating people out of the equation, our current golden age of artificial intelligence offers new opportunities to realize Douglas Engelbart’s 1960s vision of “Augmenting Human Intellect.” To collaborate effectively, people and machines must share a representation of tasks — one that systems can tractably reason about and that people can easily manipulate.
In this talk, I present two approaches to developing such domain-specific representations. The first, the Vega project, comprises new declarative visualization languages that serve as platforms for novel visualization design tools and chart recommender engines. The second uses visualization to uncover the abstractions learned by GoogLeNet, a neural network trained for image classification.
Arvind Satyanarayan is an assistant professor of computer science at MIT CSAIL, where he leads the Visualization Group. His research uses visualization as a lens to explore how software systems can enhance our creativity and cognition while respecting our agency. Visualization systems he has built are in use on Wikipedia and have been broadly adopted within data science (Jupyter and Observable) as well as industry (including at Apple, Google, Microsoft, and Netflix).