Amazon last week made an unusual deal with a company called Adept in which Amazon will license the company’s technology and also poach members of its team, including the company’s co-founders.
The e-commerce, cloud computing, online advertising, digital streaming and artificial intelligence (AI) giant is no doubt hoping the deal will propel Amazon, which is lagging behind companies like Microsoft, Google and Meta in the all-important area of AI. (In fact, Adept had previously been in acquisition talks with both Microsoft and Meta.)
Adept specializes in agentic AI — the hottest area of AI that hardly anyone is talking about, and one that some credibly claim is the next leap forward for AI technology.
But wait, what exactly is agentic AI? The easiest way to understand it is by comparison to LLM-based chatbots.
How agentic AI differs from LLM chatbots
We know all about LLM-based chatbots like ChatGPT. Agentic AI systems are based on the same kind of large language models, but with important additions. While LLM-based chatbots respond to specific prompts, trying to deliver what’s asked for literally, agentic systems take that further by incorporating autonomous goal-setting, reasoning, and dynamic planning. They’re also designed to integrate with applications, systems, and platforms.
LLMs such as ChatGPT draw on huge quantities of training data, and hybrid systems like Perplexity AI combine that with real-time web searches. Agentic systems go further still: they factor in changing circumstances and contexts as they pursue a goal, reprioritizing tasks and switching methods along the way.
While LLM chatbots have no ability to make actual decisions, agentic systems are characterized by advanced contextual reasoning and decision-making. Agentic systems can plan, “understand” intent, and more fully integrate with a much wider range of third-party systems and platforms.
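The distinction described above can be sketched in code: a chatbot maps one prompt to one response, while an agentic system runs a loop — plan a step, act through a tool, observe the result, repeat until the goal is met. This is a minimal illustrative sketch; every function name here is hypothetical, not any vendor's actual API:

```python
# Illustrative sketch of chatbot vs. agent. All names are hypothetical.

def chatbot_reply(prompt: str) -> str:
    # A chatbot is a single pass: one prompt in, one answer out.
    return f"LLM answer to: {prompt}"

def plan_next_step(goal: str, history: list):
    # Stand-in planner. In a real system, an LLM would choose the
    # next action based on the goal and the observations so far.
    if not history:
        return "search", goal
    return "done", None

def agent_run(goal: str, tools: dict, max_steps: int = 5) -> list:
    """An agent loops: plan a step, act via a tool, observe, repeat."""
    history = []
    for _ in range(max_steps):
        action, arg = plan_next_step(goal, history)
        if action == "done":
            break
        observation = tools[action](arg)       # act on the world
        history.append((action, observation))  # observe and remember
    return history

tools = {"search": lambda q: f"results for '{q}'"}
steps = agent_run("find a meeting slot", tools)
```

The loop, not the model, is what makes the system "agentic": the same LLM that powers a chatbot becomes an agent once it is allowed to choose actions and see their results.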
What’s it good for?
One obvious use for agentic AI is as a personal assistant. Such a tool could — based on natural-language requests — schedule meetings and manage a calendar, change times based on others’ and your availability, and remind you of the meetings. And it could be useful in the meetings themselves, gathering data in advance, creating an agenda, taking notes and assigning action items, then sending follow-up reminders. All this could theoretically begin with a single plain-language, but vague, request.
It could read, categorize and answer emails on your behalf, deciding which to answer and which to leave for you to respond to.
You could tell your agentic AI assistant to fill out forms for you or subscribe to services, entering the requested information and even processing any payment. It could even theoretically surf the web for you, gathering information and creating a report.
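Under the hood, assistant tasks like these typically rely on a "tool calling" pattern: the model emits a structured request, and glue code dispatches it to a real integration (calendar, email, payments). A rough sketch, with hypothetical tool names and a toy triage rule standing in for the model's judgment:

```python
# Sketch of the tool-calling pattern an agentic assistant relies on.
# Tool names and the triage rule are illustrative, not a real API.
import json

def schedule_meeting(attendee: str, time: str) -> str:
    # Stand-in for a real calendar integration.
    return f"Meeting with {attendee} booked for {time}"

def triage_email(subject: str) -> str:
    # Toy rule standing in for a model's decision to answer or defer.
    return "answer" if "invoice" in subject.lower() else "leave for human"

TOOLS = {"schedule_meeting": schedule_meeting, "triage_email": triage_email}

def dispatch(tool_call_json: str) -> str:
    """Route a model-emitted tool call to the matching function."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["args"])

result = dispatch(json.dumps({
    "name": "schedule_meeting",
    "args": {"attendee": "Alex", "time": "Tue 3pm"},
}))
```

The same dispatch layer is also where safety controls would live — which tools the agent may call, with what arguments, and when a human must approve.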
Like today’s LLM chatbots, agentic AI assistants could use multimodal input and could receive verbal instructions along with audio, text, and video inputs harvested by cameras and microphones in your glasses.
Another obvious application for agentic AI is customer service. Today’s interactive voice response (IVR) systems seem like a good idea in theory: they spare companies the cost of paying humans to interface with customers. In practice they fail, forcing customers to navigate complex decision trees while struggling with inadequate speech recognition.
Agentic AI promises to transform automated customer service. Such technology should be able to function as if it not only understands the words but also the problems and goals of a customer on the phone, then perform multi-step actions to arrive at a solution.
Agentic AI systems could do all kinds of things a lower-level employee might do: qualify sales leads, make initial outreach for sales calls, automate fraud detection and loan-application processing at a bank, autonomously screen job applicants, and even conduct initial interviews.
Agentic AI should be able to achieve very large-scale goals as well — optimize supply chains and distribution networks, manage inventory, optimize delivery routes, reduce operating costs, and more.
The risk of agentic AI
Let’s start with the basics. The idea of AI that can operate “creatively” and autonomously — capable of doing things across sites, platforms and networks, directed by a human-created prompt with limited human oversight — is obviously problematic.
Let’s say a salesperson directs agentic AI to set up a meeting with a hard-to-reach potential client. The AI understands the goal and has vast information about how actual humans do things, but no moral compass and no explicit direction to conduct itself ethically.
One way to reach that client (based on the behavior of real humans in the real world) could be to send an email that tricks the person into clicking on self-executing malware, installing a trojan on the target’s system. The AI could then exfiltrate personal data, use it to learn where that person will be at a certain time, and place a call to that location claiming there’s an emergency. When the target takes the call, the AI would try to set up the meeting.
This is just one small example of how an agentic AI without coded or prompted ethics could do the wrong thing. The possibilities for problems are endless.
Agentic AI could be so powerful and capable that there’s no way this ends well without a major effort to develop and maintain AI governance frameworks: guidelines, safety measures, and constant oversight by well-trained people.
Note: The rise of LLMs, starting with ChatGPT, engendered fears that AI could take jobs away from people; agentic AI is the technology that could really do that at scale.
The worst-case scenario is that millions of people are let go and replaced by agentic AI. The best case is that the technology remains inferior to a human working in partnership with it: with such a tool, human work could become far more efficient and less error-prone.
I’m pessimistic that agentic AI can benefit humanity if the ethical considerations remain completely in the hands of Silicon Valley tech bros, investors, and AI technologists. We’ll need to combine expertise from AI, ethics, law, academia, and specific industry domains and move cautiously into the era of agentic AI.
It’s reasonable to feel both thrilled by the promise of agentic AI and terrified about the potential negative effects. One thing is certain: It’s time to pay attention to this emerging technology.
With a giant, ambitious, capable, and aggressive company like Amazon making moves to lead in agentic AI, there’s no ignoring it any longer.
More by Mike Elgan:
The rise of AI-powered killer robot drones