The great decoupling continues – Qualcomm sees application structures evolving as AI instantiates intent beyond app constraints
Editor’s note: Qualcomm provided travel, lodging and other accommodations related to Snapdragon Summit.
MAUI — Qualcomm CEO Cristiano Amon closed the Snapdragon Summit with some interesting (and important) commentary that wasn’t about the latest feeds and speeds offered by the connected computing powerhouse. Rather, he took the long view of how agentic AI systems decouple the intent of device users from the constraints of applications as we know them today.
First, let’s briefly review how the pre-AI separation of hardware and software profoundly changed the way technology was developed and used. In the past, and in some cases still today, hardware and software were designed together to perform specific tasks with little or no flexibility or portability. Remember the early mainframes? Then came virtualization, which allows multiple applications and operating systems to run simultaneously on a single hardware platform. Hardware and software were decoupled, with the latter existing independently of the former.
So what did that decoupling do? It enabled the rise of cloud and infrastructure-as-a-service models of computing resource consumption. Instead of running a specific software application on a dedicated hardware platform, companies can deploy software on virtual machines that come and go amorphously in the cloud. Beyond the economic revolution of the cloud, it gave users the flexibility to upgrade hardware without rewriting software and to deploy applications across different environments, optimizing how, when and where resources are used. It also allowed software innovation to flourish by letting developers focus on delivering software-based features without worrying about hardware limitations. Similarly, hardware vendors can iterate on their products at a pace not necessarily tied to the pace of software development.
The era of AI has, of course, brought a rise in specialized hardware, particularly in silicon, but the decoupling has continued: software can leverage this specialized hardware without being tied to it. In the real world, this means a continuum from the edge to the cloud, with AI workloads running when, where and how you want. This hybrid AI architecture, and the orchestration that accompanies it, supports the same things the earlier separation of hardware and software did: more flexibility, more efficiency and optimization, and more innovation.
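To make the orchestration idea concrete, here is a minimal sketch in Python of how a hybrid router might decide where a workload runs. Everything here is hypothetical — the names `route_inference`, `Target.DEVICE_NPU` and so on are invented for illustration and do not reflect Qualcomm’s actual AI stack or any real orchestration API:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical execution targets along the edge-to-cloud continuum.
class Target(Enum):
    DEVICE_NPU = "on-device NPU"
    CLOUD = "cloud"

@dataclass
class Workload:
    name: str
    model_size_gb: float      # rough memory footprint of the model
    privacy_sensitive: bool   # e.g. personal financial data
    latency_budget_ms: int    # how long the user will tolerate waiting

# Illustrative policy only: a real orchestrator weighs many more signals
# (battery, thermals, connectivity, cost) than this sketch does.
def route_inference(w: Workload, device_capacity_gb: float = 8.0) -> Target:
    if w.privacy_sensitive:
        return Target.DEVICE_NPU   # keep personal data local
    if w.model_size_gb > device_capacity_gb:
        return Target.CLOUD        # model too big for the device
    if w.latency_budget_ms < 100:
        return Target.DEVICE_NPU   # avoid network round trips
    return Target.CLOUD            # default to elastic cloud capacity

if __name__ == "__main__":
    jobs = [
        Workload("voice assistant turn", 3.5, privacy_sensitive=True, latency_budget_ms=50),
        Workload("photo batch upscaling", 14.0, privacy_sensitive=False, latency_budget_ms=5000),
    ]
    for job in jobs:
        print(f"{job.name} -> {route_inference(job).value}")
```

The point of the sketch is simply that once software is no longer bound to a specific piece of hardware, the decision about where a given AI workload runs becomes a policy question rather than a design constraint.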
Now, back to Amon for a reflection on the present and a look toward the future. “Computers can now understand human language. Computers can now communicate just like humans,” he said. “We have this structure that we’re so used to now…it’s an app-centric experience…but that’s changing…this isn’t about one killer app. The experience is changing.”
He gave the example of opening your bank app to check your cash balance or your savings. With gen AI, you can simply ask your AI-enabled device a question and receive the information. If you want to interact visually with an app and see its information displayed, the AI model can render the screen to your liking. Amon then expanded his thinking to multimodal AI: show a bill to your device and it will review it; instruct the assistant to pay the bill and it will notify you when the payment is complete. The possibilities are endless when you tell an agentic AI your intent and it composes an “app” that perfectly matches that intent.
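One way to picture what “composing an app from intent” might mean is the following Python sketch. It is a toy under loudly stated assumptions: `compose_app`, `EphemeralApp` and the UI element kinds are all hypothetical names invented here, a real agent would use a language model plus authenticated connectors to the user’s actual accounts, and nothing below reflects an actual Qualcomm or banking API:

```python
from dataclasses import dataclass, field

# Hypothetical building blocks an agent might assemble on demand.
@dataclass
class UIElement:
    kind: str                 # e.g. "balance_card", "confirm_button"
    payload: dict = field(default_factory=dict)

@dataclass
class EphemeralApp:
    """A throwaway 'app' composed to satisfy one user intent."""
    intent: str
    elements: list[UIElement]

def compose_app(intent: str) -> EphemeralApp:
    # Toy keyword routing stands in for an LLM's intent understanding.
    if "balance" in intent:
        elements = [UIElement("balance_card", {"account": "checking"})]
    elif "pay" in intent and "bill" in intent:
        elements = [
            UIElement("bill_summary", {"source": "camera_capture"}),
            UIElement("confirm_button", {"action": "pay_bill"}),
        ]
    else:
        elements = [UIElement("text_answer", {"prompt": intent})]
    return EphemeralApp(intent, elements)

if __name__ == "__main__":
    app = compose_app("pay this bill")
    print(app.intent, "->", [e.kind for e in app.elements])
```

The inversion the sketch tries to capture is the one Amon described: instead of the user navigating to a pre-built app, the interface is assembled around the stated intent and discarded afterward.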
“I’m convinced it’s the future,” Amon said. “The timing is completely unpredictable…We’re going to see a real revolution enabled by AI-first experiences, and I think that’s a really exciting future ahead of us.”