Fei-Fei Li's new book is the story of her journey from China to the U.S., from small business to Big Tech, and from academic research to corporate life, and back again.
But more than that, it’s the story of artificial intelligence, as told through her experience as one of the people summoning this new day and standing there awestruck, excited, and concerned about what it will mean for humanity.
Dr. Li joins us on the GeekWire Podcast to discuss her book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, from Moment of Lift Books, an imprint from Melinda French Gates and Flatiron Books.
“Every time a technology is powerful enough to become products, to become applications, to become horizontally impactful to so many people … it is messy. It is very messy,” she says. “And this is where I think we should take a breath and recognize this messiness, embrace this messiness.”
Ultimately, she says, it’s most important to ensure that technology operates in service to people. On balance, she says the rise of generative artificial intelligence seems to have moved us further toward that goal by spurring the conversation about the need for human-centered AI.
“There’s noise, there’s a lot of hyperbole, and it’s a necessary phase, but I’m hopeful that the human-centeredness is much more front and center now,” she says.
Known for her foundational contributions to AI and computer vision, Dr. Li is the creator of ImageNet, a large-scale image dataset that enabled rapid advances in deep learning for visual recognition. She is a professor of computer science at Stanford University and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and she served as Google Cloud's chief scientist for AI/ML during a sabbatical from 2017 to 2018.
I’ll be speaking further with Dr. Li about her book, The Worlds I See, on Monday evening, Nov. 13, at Town Hall in Seattle. See this site for details and tickets.
Listen above, or subscribe to GeekWire in any podcast app. Continue reading for edited highlights.
Perseverance and scientific discovery: “Just like any other thing that’s hard in life, you have to grind. It’s teamwork. It’s both creativity and perseverance. It’s grit and passion. And this is why I deliberately chose to give the kind of details of the journey that this book does.
“And my message to anyone, especially young people who are going through this phase of their journey is, don’t give up. Be resourceful, be creative. Be willing to know when your tool is wrong, or your first idea, your second idea, your 15th idea is wrong. But don’t give up that North Star. Don’t give up that dream, and you persevere if you believe in it and are passionate and work for it.”
Private companies vs. public institutions in AI: “This is actually the challenge of our time. I think this is a critical question. I cannot emphasize enough that we have a terrible imbalance right now. The imbalance is not only on the resource front, it’s also the voice, the megaphone. …
“Policymakers are meeting with business leaders left and right, which is fine, it’s good they meet with them, but they also need to hear from academia and the public sector. AI is a very powerful technology. It can help us to actually discover more critical science, a cure for cancer, fusion, there are many things, but it also can be used to optimize advertisement placements and revenue. The public wants the former, some companies want the latter, but we definitely want both to be healthy in our society. And right now, we only have the companies using AI; we don’t have enough resources for the public institutions.
“On top of that, we are worried about the catastrophic or existential risks of AI. Who has the resources to open the hood and examine what’s going on? You need trusted public sector partners. You cannot just completely rely on self-reporting by these companies. In order to do that, you need a healthy academia and public sector.
“Last but not least, who’s benchmarking? Who is assessing? Who is evaluating? Not only evaluating on the speed and performance, but also fairness, privacy, hallucination, alignment, all these issues that we are seeing in today’s technology. And again, we need public sector and academia’s thought leadership in this.
“So for all these reasons, I think we are really in a bit of a crisis if we overlook the public sector at this moment in AI’s development.”
The origins of her human-centered approach, in her experience as a small business owner with her parents: “When I was building the technology, especially when I was seeing the link between AI and healthcare and also other industries, it [was] so easy for me to understand people from the other side, because as a small business owner for a dry cleaner shop, everything you’re trained on is to understand your customers and make sure they’re happy.
“And it really made me understand the struggle of someone as an immigrant, as well as on the receiving end of a product or service, the customers and users. So when I was working in healthcare with AI or other businesses at Google, it was second nature to me to try to ground technology in human perspective. So that was helpful.
“And besides, I live in Silicon Valley. I can tell people, I had a startup when I was 19, and it was a dry cleaning shop.”
Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.