Everything in AI is in flux. In fact, a lack of predictability will define this new era.
That was one of the overarching takeaways from the second annual Intelligent Applications Summit, hosted by Madrona Venture Group in Seattle on Wednesday.
“We’ve moved from a world of structured data and deterministic applications to a world that includes many forms of data, including many types of unstructured and semi-structured data, and apps that are, at some level, non-deterministic,” said Matt McIlwain, Madrona managing director, opening the conference.
McIlwain described a future world “where every single person, hundreds of millions to billions of people, have their own personal and customized apps, because that input mechanism is shaping the app.”
A few other highlights from the day-long event:
Flexible and open AI models: Ali Farhadi, the new CEO of the Allen Institute for Artificial Intelligence (AI2), discussed the changing nature of the foundation models that power AI, and the need to keep them open and accessible.
“We’re actually moving the notion of what the model is, from a static creature that we train, freeze, and deploy, to a fluid structure that we need to mess with all the time, continuously, forever,” he said.
Farhadi explained, “Because now a model is not a model anymore. A model is a whole family of models. From these kinds of models, you get thousands of models at your disposal.”
Before returning to AI2 as CEO this year, Farhadi was a leader of Apple’s machine learning projects. He is also a faculty member at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.
Projects at AI2 include OLMo (Open Language Model), a generative language model created by the institute, and Dolma, the open dataset used to train it.
“AI was born and raised in the open,” Farhadi said. “Putting it behind closed doors will only slow down, if not hinder, the progress.”
Did AI write that email? The unsettled nature of AI norms was illustrated by an exchange between Sumit Chauhan, a corporate vice president in the Microsoft Office product group, and journalist Dina Bass of Bloomberg News, about the new Microsoft Outlook “sounds like me” feature that learns and matches the sender’s style and tone of voice.
Bass pointed out that recipients of these messages aren’t alerted to the fact that AI is involved in writing them, and asked Chauhan where the line should be drawn on transparency and disclosure.
Chauhan pointed out that, in many cases, the company’s products cite the sources of AI-generated information and the use of AI in the creation of content.
However, she said, “the email one is a little bit tricky,” because the sender may not want the recipient to know that AI played a role in writing the message.
“Why not?” Bass asked.
“Maybe you want the other person to know, maybe you don’t,” Chauhan said. “As AI is doing work on my behalf, and the work is up to my standards … the lines start to blur a little bit.”
Impact of tech regulation: Brad Gerstner, CEO and founder of the Altimeter investment firm, spoke bluntly about the implications of the current U.S. regulatory environment for tech in response to an audience question at the end of a session on public and private market perspectives on generative AI.
“I think Lina Khan is an unmitigated disaster for American capitalism,” he said, referring to the Federal Trade Commission chair. “But that’s going to change. And then the great thing about this system that we have is, she will come and go, and sanity will prevail.”
He continued, “The fact of the matter is, founders who are risking everything deserve the exits to bigger companies. It’s good for consumers. It’s good for the capital system. It’s good for the entrepreneurial ecosystem. And where did we lose our minds that actually doing something that’s beneficial to consumers should be blocked by Washington?”
“So that’s going to change. But in the interim, it’s caused a moratorium, really, in boardrooms and among CEOs focused on buying things. And it’s caused me as a capital provider to look at companies I would have otherwise funded and said, ‘Well, my downside protection is Meta buys it, or Google buys it, or Amazon buys it.’ And now I have to discount that at a much higher rate. … So it’s not a good thing for the funding environment today.”
Internal efficiencies and AI safeguards: For all the buzz about artificial intelligence, most companies are still taking a cautious approach, focusing initially on internal applications rather than external products.
“The plurality of use cases right now are people that are trying to make their own teams more efficient and effective,” said Matt Garman, the Amazon Web Services senior vice president for sales, marketing and global services.
Big companies want to mitigate the risk of AI hallucinations, and in cases where AI is used in products, companies are making sure there’s a human layer between generative AI systems and end users, Garman said.
Ian Cook, the president of Slalom Build, offered a similar take. He said his group has about 20 AI-related projects currently in production for customers, and about three-quarters of them are for internal use by those customers.