Andrew Ng
·
2025-10-17
Module 5: Patterns for Highly Autonomous Agents
5.1 Planning workflows
0:03
Welcome to this final module where you learn about design patterns that let you build
0:05
highly autonomous agents, where you don't need to hard code in advance the sequence of steps to take,
0:11
but the agent can be more flexible and decide for itself what steps to take to accomplish a task.
0:16
We'll talk about the planning design pattern and then later in this module,
0:20
how to build multi-agent systems. Let's dive in.
0:23
Suppose you run a sunglasses retail store and have information on what sunglasses are in your
0:30
inventory stored in a database. You might want a customer service agent to be able to answer
0:35
questions like, do you have any round sunglasses in stock that are under $100? This is a fairly
0:40
complex query because you have to look through the product descriptions to see what sunglasses
0:45
are round, then look at what is in stock, and then finally see what's under $100 in order to tell the
0:52
customer, yes, we have classic sunglasses. How do you build an agent to answer a broad range of
0:59
customer queries like this and many others? In order to do so, we're going to give the LLM
1:05
a set of tools to let it get item descriptions, such as to look up whether different glasses are round,
1:11
check inventory, maybe process item returns, which is not needed for this query,
1:15
but we need it for other queries, get item price, check past transactions, process item sale,
1:22
and so on. In order to let an LLM figure out what's the right sequence of tools to use to respond to
1:29
the customer request, you might then write a prompt like this. You have access to the following tools
1:35
and give it a description of each of the, say, six tools or even more tools that the LLM has,
1:40
and then tell it to return a step-by-step plan to carry out the user's request. In this case,
1:47
to answer this particular query, a reasonable plan that an LLM might output might be to first
1:53
use get item descriptions to check the different descriptions to find the round sunglasses,
1:59
and then use check inventory to see if they're in stock, and use get item price to see if the
2:03
in-stock results are less than $100. After an LLM outputs this plan with three steps, what we can do
2:11
is then take the step one text, that is this text written here in red, and pass that to an LLM,
2:18
maybe with additional context about what the tools are, along with your user query and additional
2:22
background context, and ask the LLM to carry out step one. And in this case, hopefully the LLM will
2:29
choose to call the get item descriptions to get the appropriate descriptions of items,
2:34
and the output of that first step can let it select which are the round sunglasses,
2:39
and that output of step one is then passed together with the step two instructions, that
2:45
would be these instructions that I have here in blue, to an LLM to then execute the second step
2:49
of the plan. Hopefully it will then take the two pairs of round sunglasses we found on the previous
2:55
slide and check the inventory, and the output of that second step is then used in another LLM call,
3:02
where you have the output of the second step as well as the instructions of what to do for step
3:06
three. Pass that to the LLM to have it get the item price, and finally this output is fed back to
3:12
the LLM one last time to generate the final answer for the user. In this slide, I've simplified a lot
3:17
of the details. The actual plan typically written by the LLM is more detailed than these
3:22
simple one-line instructions, but the basic workflow is to have an LLM write out multiple
3:27
steps of a plan, and then task it to execute each step of the plan in turn with some appropriate
3:33
surrounding context about what is the task, what are the tools available, and so on. And the
3:38
exciting thing about using an LLM to plan this way is that we did not have to decide in advance
3:45
what is the sequence in which to call tools in order to answer a fairly complex customer request.
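To make this workflow concrete, here is a minimal sketch of the plan-then-execute loop, assuming a hypothetical `llm(prompt)` helper for whatever model API you use and stubbed-out tool functions; it is illustrative, not the exact implementation from the course.

```python
# Minimal sketch of the plan-then-execute loop described above. The llm()
# helper and the tool implementations are placeholders (assumptions), not a
# specific API from the course.

TOOLS = {
    "get_item_descriptions": lambda: "...descriptions of every item in the catalog...",
    "check_inventory": lambda: "...current stock levels...",
    "get_item_price": lambda: "...prices for the items found so far...",
}

def llm(prompt: str) -> str:
    """Placeholder: call whatever LLM API you are using and return its text."""
    raise NotImplementedError

def answer_query(user_query: str) -> str:
    # 1. Ask the LLM to write a step-by-step plan given the available tools.
    plan_prompt = (
        "You have access to the following tools:\n"
        + "\n".join(f"- {name}" for name in TOOLS)
        + "\n\nReturn a step-by-step plan, one step per line, to carry out "
        + f"this request:\n{user_query}"
    )
    steps = [s for s in llm(plan_prompt).splitlines() if s.strip()]

    # 2. Execute each step in turn, carrying the accumulated context forward.
    context = f"User query: {user_query}\n"
    for step in steps:
        step_prompt = (
            f"{context}\nCarry out this step of the plan: {step}\n"
            "If a tool is needed, reply with just the tool's name."
        )
        reply = llm(step_prompt).strip()
        result = TOOLS[reply]() if reply in TOOLS else reply  # call the chosen tool
        context += f"\nStep: {step}\nResult: {result}\n"

    # 3. One last call to compose the final answer for the user.
    return llm(f"{context}\nWrite the final answer to the user's query.")
```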
3:52
If a customer were to make a different request, such as I would like to return the gold frame
3:58
glasses that I had purchased but not the metal frame ones, then you can imagine an LLM similarly
4:05
being able to come up with a different plan: figure out from past transactions what the customer had purchased,
4:09
use get item descriptions to identify which of those are the gold frame ones they want
4:14
to return, and then maybe call process item return. So with an agent that can plan like this, it can
4:19
carry out a much wider range of tasks that can require calling many different tools in many
4:25
different orders. One more example of planning, let's take a look at an email assistant. If you
4:30
want to tell your assistant, please reply to that email invitation from Bob in New York,
4:34
tell him I'll attend, and archive his email. Then an email assistant may be given tools like this to
4:40
search email, move an email, delete an email, and send an email. And you might write an assistant
4:44
prompt saying you have access to the following tools, and again please return the step-by-step plan.
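As a rough sketch, the planning prompt for such an email assistant might be assembled like the snippet below (the tool names follow this example; the exact wording is an assumption), and the returned plan would then be executed step by step just as in the earlier customer-service sketch.

```python
# Hypothetical planning prompt for the email assistant; the tool names come
# from this example, but the exact wording is an assumption.

EMAIL_TOOLS = ["search_email", "move_email", "delete_email", "send_email"]

def build_plan_prompt(user_request: str) -> str:
    tool_list = "\n".join(f"- {t}" for t in EMAIL_TOOLS)
    return (
        "You are an email assistant. You have access to the following tools:\n"
        f"{tool_list}\n\n"
        "Return a step-by-step plan, one step per line, to carry out this "
        f"request:\n{user_request}"
    )

# Example usage:
# print(build_plan_prompt("Reply to Bob's dinner invitation from New York, "
#                         "say I'll attend, then archive his email."))
```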
4:48
In this case, maybe the LLM will say the steps for this are to use search email to find the email from
4:54
Bob that mentioned dinner and New York, and then generate and send an email to confirm attendance,
4:59
and then lastly move that email to the archive folder. Given this plan, which looks like a reasonable
5:04
one, you would then again task an LLM step-by-step to carry out this plan. So the text from the first
5:11
step, shown here in red, will be fed to the LLM with additional background context, and hopefully
5:16
it'll trigger search email. Then the output of that can be given to an LLM again with the step
5:22
two instructions to send an appropriate response. And then finally, assuming the email was sent
5:27
successfully, you can take that output and have the LLM execute the third step of moving the email
5:33
from Bob into the archive folder. The planning design pattern is already used successfully
5:39
in many highly agentic coding systems, where if you ask it to write a piece of software to build
5:45
some fairly complex application, it might actually come up with a plan to build this component, build
5:51
that component, almost forming a checklist, and then do those steps one at a time to build a
5:56
decently complex piece of software. For many other applications, the use of planning is still
6:02
maybe more experimental. It's not in very widespread use. And one of the challenges of
6:07
planning is it makes the system sometimes a little bit hard to control, because as a developer,
6:13
you don't really know at runtime what plan it will come up with. And so I think outside highly
6:19
agentic coding systems, where it actually works really well, adoption of planning is still growing
6:24
in other sectors. But this is exciting technology, and I think it will keep getting better and we'll
6:29
see it in more and more applications. The cool thing about building agents that can plan for
6:34
themselves is you don't need to hard code in advance the exact sequence of steps an LLM may
6:39
take to carry out a complex task. Now, I know that in this video, I've gone over the planning process
6:45
at a fairly high level, with the LLM outputting a list of steps and then being tasked to carry
6:51
out the steps of the plan one step at a time. But how does this actually work? In the next video,
6:57
we'll take a deeper dive to look further into the guts of what these plans actually look like,
7:03
and how it all strings together to have an LLM plan and execute the plan for you.
7:07
Let's take a look at that in the next video.
5.2 Creating and executing LLM plans
0:04
In this video, we'll look in detail at how to prompt an LLM to generate a plan,
0:04
and how to read, interpret, and execute that plan. Let's dive in.
0:08
This is the plan that you saw in the previous video for the customer service agent,
0:13
and I have presented this plan at a high level using simple text descriptions.
0:17
Let's take a look at how you can get an LLM to write very clear plans that go a little bit
0:23
beyond these simple high-level text descriptions. It turns out that many developers will ask an LLM
0:30
to format the plan it wants executed in JSON format, because this allows downstream code
0:36
to parse what exactly are the steps of the plan in relatively clear and unambiguous ways,
0:42
and all of the leading LLMs are pretty good at generating JSON outputs at this point.
0:46
So the system prompt might say something like this. You have access to the following tools,
0:51
and then create a step-by-step plan in JSON format, and you might describe the JSON format
0:56
in enough detail with the goal of getting it to output a plan like that shown here on the right.
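For instance, a plan of the kind described here might come back looking something like the JSON below; the field names (step, description, tool, args) are illustrative assumptions rather than a fixed schema, but any consistent schema lets downstream code parse the plan cleanly.

```python
import json

# An illustrative JSON plan; the field names are assumptions, not a standard.
plan_text = """
[
  {"step": 1,
   "description": "Find round sunglasses in the catalog",
   "tool": "get_item_descriptions",
   "args": {"style": "round"}},
  {"step": 2,
   "description": "Check which of those are in stock",
   "tool": "check_inventory",
   "args": {"items": "<output of step 1>"}},
  {"step": 3,
   "description": "Keep only the in-stock items priced under $100",
   "tool": "get_item_price",
   "args": {"items": "<output of step 2>"}}
]
"""

# Because the plan is valid JSON, downstream code can parse it and dispatch
# each step to the named tool without fragile text scraping.
for step in json.loads(plan_text):
    print(step["step"], step["tool"], step["args"])
```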
1:02
So in this JSON output, it creates a list where the first list item has clear keys and values
1:09
that say step one of the plan has the following description, and it should use the following tool
1:15
with the following arguments passed to that tool. Then after that, step two of the plan
1:20
is to carry out this task, and then use this tool, and so on. So this JSON format, as opposed to
1:26
writing the plan in English, allows downstream code to more clearly parse out exactly what are
1:32
the steps of the plan so that it can be reliably executed one step at a time. Instead of JSON,
1:38
I also see some developers use XML, where you can use XML delimiters. You use XML tags
1:45
to clearly specify what are the steps of the plan and what step number it is.
1:50
Some developers, though I think fewer of them, will use markdown, which is just sometimes
1:55
slightly more ambiguous in terms of how to parse it, and I think plain text is maybe the least
2:00
reliable of these options. But I think either JSON, which I'm showing here, or XML would be
2:05
good options for how to ask the LLM to format a plan unambiguously. So that's it. By outputting
2:12
plans in JSON, you can then parse them and have downstream workflows execute different steps of
2:18
the plan more systematically. Now, in terms of getting LLMs to plan, it turns out there's one
2:24
other really neat idea that lets an LLM output very complex plans and get them executed reliably,
2:31
and that's to let them write code and to have code express the plan.
2:35
Let's take a look at this in the next video.
5.3 Planning with code execution
0:03
Planning with code execution is the idea that, instead of asking an LLM to output a plan in,
0:06
say, JSON format to execute one step at a time, why not have the LLM just try to write code and
0:12
that code can capture multiple steps of the plan, like call this function, then call this function,
0:17
then call this function, and by executing code generated by the LLM, we can actually carry out
0:22
fairly complex plans. Let's take a look at when you might want to use this technique.
0:27
Let's say you want to build a system to answer questions about coffee machine sales based on
0:33
a spreadsheet with data like this of previous sales. You might have an LLM with a set of tools
0:38
like these to get column max, to look at a certain column and get the maximum value,
0:44
so that it could answer, what's the most expensive coffee, or get column mean, filter
0:49
rows, get column min, get column median, sum rows, and so on. So these are examples of a range of
0:55
tools you might give an LLM to process this spreadsheet or these rows and columns of data
1:00
in different ways. Now, if a user were to ask which month had the highest sales of hot chocolate,
1:06
it turns out that you can answer this query using these tools, but it's pretty complicated.
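To give a sense of what these narrow tools look like in code, here is a rough sketch built on pandas; the file name, column names, and function signatures are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical sales spreadsheet; the file and column names are assumptions.
df = pd.read_csv("coffee_sales.csv")

def get_column_max(column: str):
    return df[column].max()

def get_column_mean(column: str):
    return df[column].mean()

def filter_rows(column: str, value):
    return df[df[column] == value]

# Answering "which month had the highest sales of hot chocolate?" with tools
# like these means filtering month by month, aggregating each result, and
# comparing them yourself, which is exactly the brittleness described above.
```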
1:11
You'd have to use filter rows to extract transactions in January for hot chocolate,
1:16
then do stats on that, and then repeat for February, figure out stats on that,
1:20
then repeat for March, repeat for April, repeat for May, all the way through December,
1:24
and then take the max, and so you can actually string it together with a pretty complicated
1:28
process using these tools, but it's not such a great solution. But worse, were someone to
1:35
ask how many unique transactions were there last week, well, these tools are insufficient to get
1:39
that answer, so you may end up creating a new tool, get unique entries, or you may run into
1:45
another query, what were the amounts of the last five transactions, and then you have to create yet another
1:49
tool to get the data to answer that query. And in practice, I've seen teams, when they run across
1:56
more and more queries, end up creating more and more tools to try to give the
2:01
LLM enough tools to cover the whole range of things someone may ask about a dataset like this.
2:06
So this approach is brittle, inefficient, and I've seen teams continuously dealing with edge cases
2:13
and trying to create more tools, but it turns out there is a better way, which is if you were
2:18
to prompt the LLM to say, please write code to solve the user's query and return your answer as Python
2:24
code, maybe delimited with these beginning and ending execute Python XML tags, then the LLM can just
2:31
write code to load the spreadsheet into a data processing library, here it's using the pandas
2:38
library, and then here it actually is coming up with a plan. The plan is, after loading the CSV,
2:44
first it has to ensure the date column is parsed a certain way, then sort by the date, select the
2:49
last five transactions, show just the price column, and so on. But these are the steps one, two, three,
2:55
and four, and five, say, of the plan. A programming language like Python, in this
3:01
example with the pandas data processing library imported, has many built-in
3:07
functions, hundreds or even thousands of them, and moreover, these are functions that the LLM has
3:14
seen a lot of data on how and when to call. By letting your LLM write code, it can choose from
3:21
these hundreds or thousands of relevant functions that it's already seen a lot of data on when to
3:26
use, so this lets it string together different choices of functions to call from this very large
3:33
library in order to come up with a plan for answering a fairly complex query like this.
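As an example, the code the LLM writes for "what were the amounts of the last five transactions?" might look roughly like this; the column names ("date", "price") and file name are assumptions about the spreadsheet, and the numbered comments are the plan.

```python
import pandas as pd

# Step 1: load the spreadsheet (file and column names are assumptions).
df = pd.read_csv("coffee_sales.csv")

# Step 2: make sure the date column is parsed as real dates.
df["date"] = pd.to_datetime(df["date"])

# Step 3: sort the transactions by date.
df = df.sort_values("date")

# Step 4: select the last five transactions.
last_five = df.tail(5)

# Step 5: show just the price column.
print(last_five["price"])
```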
3:39
Just one more example. If someone were to ask, how many unique transactions last week?
3:44
Well, you can come up with a plan to read the CSV file, parse the date column, define the time
3:50
window, filter rows, drop duplicate rows, and count. The details of this aren't important, but hopefully
3:55
what you can see is, if you read the comments here, the LLM is roughly coming up with a four-step plan
4:02
and is expressing each of the steps in code that you can then just execute, and this will get the
4:08
user their answer. So for applications where the task can plausibly be done by writing code, letting
4:15
an LLM express its plan in software code that you can just execute on its behalf can be a very powerful
4:22
way to let it write rich plans. And of course, the caveat that I mentioned in the module on tool use
4:29
to consider if you need to find a safe execution environment like a sandbox to run the code, that
4:35
also applies. Even though it's probably not the best practice, I also know a lot of
4:40
developers that don't use a sandbox. Lastly, it turns out that planning with code works well.
4:46
From this diagram, adapted from a research paper by Xingyao Wang and others, you can see that for many
4:53
different models for the tasks that they examined, code as action in which the LLM is invited to write
5:00
code and take actions through code, that is superior to having it write JSON and then translate
5:07
JSON into action or text. And you also see a trend that writing code outperforms having the LLM write
5:15
a plan in JSON, and writing a plan in JSON is also a bit better than writing a plan in just plain text.
5:21
Now, of course, there are applications where you might want to give your custom tools to an LLM
5:26
to use, and so writing code isn't for every single application. But when it does apply, it can be a
5:32
very powerful way for an LLM to express a plan. So that wraps up the section on planning. Today, one of
5:39
the most powerful uses of Agentic AI that plans is highly agentic software coders. It turns out that
5:47
if you ask one of the highly agentic software coding assistants to write a complex piece
5:52
of software for you, it may come up with a detailed plan to build this component of software first,
5:58
then build a second component, build a third, maybe even plan to test out the components as
6:02
it goes along. And then it forms a checklist that it then goes through to execute one step at a time.
6:08
And so it actually works really well for building increasingly complex pieces of software. For other
6:14
applications, I think the use of planning is still growing and developing. One of the disadvantages
6:19
of planning is that because the developer doesn't tell the system what exactly to do, it's a little
6:25
bit harder to control it, and you don't really know in advance what will happen at runtime. But giving
6:30
up some of this control does significantly increase the range of things that the model may
6:35
decide to try out. So this important technology is kind of cutting edge and doesn't feel completely
6:42
mature outside of maybe agentic coding where it works well, although I'm sure there's still a lot
6:47
of room to grow. But hopefully you'll enjoy using it in some of your applications someday.
6:54
That wraps up planning. There's one last design pattern I hope to share with you in this module,
6:59
which is how to build multi-agent systems, where we have not just one agent, but many of them
7:05
working in collaboration to complete the task for you. Let's take a look at that in the next video.
5.4 Multi-agent workflows
0:00
We've talked a lot about how to build a single agent to complete tasks for you.
0:04
In a multi-agent or multi-agentic workflow, we instead have a collection of multiple agents
0:09
collaborate to do things for you.
0:12
When some people hear for the first time about multi-agent systems, they wonder, why do I
0:16
need multiple agents?
0:18
It's just the same LLM that I'm prompting over and over, or just one computer.
0:22
Why do I need multiple agents?
0:25
I find that one useful analogy is, even though I may do things on a single computer, we do
0:31
decompose work in a single computer into maybe multiple processes or multiple threads.
0:37
And as a developer, even though it's one CPU on a computer, say, thinking
0:42
about how to take work and decompose it into multiple processes or multiple computer programs
0:47
to run makes it easier for me as a developer to write code.
0:52
And in a similar way too, if you have a complex task to carry out, sometimes, instead of thinking
0:59
about how to hire one person to do it for you, you might think about hiring a team of
1:04
a few people to do different pieces of the task for you.
1:08
And so in practice, I found that for many developers of agentic systems, having this
1:12
mental framework of not asking, what's the one person I might hire to do something, but
1:17
instead, would it make sense to hire people with three or four different roles to do this
1:22
overall task for me, that helps give another way to take a complex thing and decompose
1:28
it into sub-tasks and to build agents for those individual sub-tasks one at a time.
1:33
Let's take a look at some examples of how this works.
1:36
Take the task of creating marketing assets, say you want to market sunglasses.
1:40
Can you come up with a marketing brochure for that?
1:43
You might need a researcher on your team to look at trends on sunglasses and what competitors
1:48
are offering.
1:49
You might also have a graphic designer on your team to render charts or nice-looking
1:54
graphics of your sunglasses.
1:56
And then also a writer to take the research, take the graphic assets and put it all together
2:00
into a nice-looking brochure.
2:02
Or to write a research article, you might want a researcher to do online research, a
2:06
statistician to calculate statistics, a lead writer, and then an editor to come up with
2:10
a polished report.
2:11
Or to prepare a legal case, real law firms will often have associates, paralegals, maybe
2:16
an investigator.
2:18
And we naturally, because of the way human teams do work, can think of different ways
2:24
that complex tasks can be broken down across different individuals with different roles.
2:31
So these are examples where a complex task is already naturally decomposed into sub-tasks
2:38
that different people with different skills can carry out.
2:41
Take the example of creating marketing assets.
2:44
Let's look in detail at what a researcher, graphic designer, and writer might do.
2:49
A researcher might have the task of analyzing market trends and researching competitors.
2:55
And when designing the research agents, one question to keep in mind is what are the tools
3:01
that the researcher may need in order to come up with a research report on market trends
3:06
and what competitors are doing.
3:07
So one natural tool that an agentic researcher might need to use would be web search.
3:14
Because a human researcher asked to do these tasks might need to search online in
3:18
order to come up with their report.
3:20
Or for a graphic designer agent, they might be tasked with creating visualizations and
3:25
artwork.
3:26
And so what are the tools that an agentic software graphic designer might need?
3:31
Well, they may need image generation and manipulation APIs.
3:36
Or maybe, similar to what you saw with the coffee machine example, maybe it needs code
3:41
execution to generate charts.
3:44
And lastly, the writer is tasked with transforming the research into report text and marketing copy.
3:49
And in this case, they don't need any tools other than what an LLM can already do to generate
3:54
text.
3:55
In this and the next video, I'm going to use these purple boxes to denote an agent.
4:00
And the way you build individual agents is by prompting an LLM to play the role of a
4:06
researcher or a graphic designer or a writer, depending on which agent it is playing the part of.
4:11
So for example, for the research agent, you might prompt it to say, you are a research
4:15
agent, expert at analyzing market trends and competitors, carry out online research to
4:22
analyze market trends for the sunglasses product and give a summary as well of what competitors
4:26
are doing.
4:27
So that would allow you to build a researcher agent.
4:30
And similarly, by prompting an LLM to act as a graphic designer with the appropriate
4:35
tools and to act as a writer, that's how you can build a graphic designer as well as
4:41
a writer agent.
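A minimal sketch of this, assuming a hypothetical two-argument `llm(system_prompt, task)` helper and leaving out the tool wiring (web search, image generation, code execution), might look like the following.

```python
# Each agent is just the same LLM prompted with a different role; llm() is a
# placeholder for your model API, and tool wiring is omitted for brevity.

ROLE_PROMPTS = {
    "researcher": (
        "You are a research agent, expert at analyzing market trends and "
        "competitors. Research the given product and summarize the trends "
        "and what competitors are doing."
    ),
    "graphic_designer": (
        "You are a graphic design agent. Propose data visualizations and "
        "artwork options for the material you are given."
    ),
    "writer": (
        "You are a writing agent. Turn the research and graphics you are "
        "given into a polished marketing brochure."
    ),
}

def llm(system_prompt: str, task: str) -> str:
    """Placeholder: call whatever LLM API you are using and return its text."""
    raise NotImplementedError

def run_agent(role: str, task: str) -> str:
    return llm(ROLE_PROMPTS[role], task)
```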
4:43
Having built these three agents, one way to have them work together to generate your final
4:49
reports would be to use a simple linear workflow, or a linear plan in this case.
4:56
So if you want to create a summer marketing campaign for sunglasses, you might give that
5:00
prompt to the research agent.
5:02
The research agent then writes a report that says, here are the current sunglasses trends
5:06
and competitive offerings.
5:08
This research report can then be fed to the graphic designer that looks at the data the
5:13
researcher has found and creates a few data visualizations and artwork options.
5:17
All these assets can then be passed to the writer that then takes the research and the
5:23
graphic output and writes the final marketing brochure.
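Wiring those three agents into the linear workflow just described is then only a few lines, reusing the hypothetical run_agent helper from the sketch above.

```python
# Linear workflow: researcher -> graphic designer -> writer, each step seeing
# the outputs of the steps before it.

def create_campaign(brief: str) -> str:
    research = run_agent("researcher", brief)
    graphics = run_agent(
        "graphic_designer",
        f"Brief: {brief}\n\nResearch findings:\n{research}",
    )
    return run_agent(
        "writer",
        f"Brief: {brief}\n\nResearch:\n{research}\n\nGraphics notes:\n{graphics}",
    )

# Example usage:
# print(create_campaign("Create a summer marketing campaign for sunglasses."))
```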
5:27
The advantage of building a multi-agent workflow in this case is when designing a researcher
5:32
or graphic designer or writer, you can focus on one thing at a time.
5:36
So I can spend some time building maybe the best graphic designer agent I can, while
5:41
maybe my collaborators are building research agents and writer agents.
5:45
And in the end, we string it all together to come up with this multi-agent system.
5:50
And in some cases, I'm seeing developers start to reuse some agents as well.
5:56
So having built a graphic designer for marketing brochures, maybe I'll think about if I can
6:01
build a more general graphic designer that can help me write marketing brochures as well
6:05
as social media posts, as well as help me illustrate online webpages.
6:10
So by thinking about what agents you might hire to do a task, which will sometimes
6:16
correspond to the types of human employees you might hire to do that task,
6:22
you can come up with a workflow like this, maybe even building agents that you could
6:27
choose to reuse in other applications as well.
6:30
Now, what you see here is a linear plan where one agent, the researcher, does its work, then
6:36
the graphic designer, and then the writer.
6:38
With agents, as an alternative to a linear plan, you can also have agents
6:44
interact with each other in more complex ways.
6:47
Let me illustrate with an example of planning using multiple agents.
6:51
So previously, you saw how we may give an LLM a set of tools that it can call to carry
6:56
out different tasks.
6:57
In what I want to show you, we will instead give an LLM the option to call on different
7:03
agents, and to ask those agents to help complete different tasks.
7:07
So in detail, you might write a prompt like, you're a marketing manager, you have the following
7:11
team of agents to work with, and then give a description of the agents.
7:14
And this is very similar to what we were doing with planning and using tools, except
7:19
the tools, the green boxes, are replaced with agents, these purple boxes that the LLM can
7:24
call on.
7:25
You can also ask it to return a step-by-step plan to carry out the user's request.
7:28
And in this case, the LLM may ask the researcher to research current sunglasses trends and
7:33
then report back.
7:34
Then it will ask the graphic designer to create the images and then report back, then ask
7:38
the writer to create a report, and then maybe the LLM will choose to review or reflect on
7:42
and improve the report one final time.
7:45
In executing this plan, you would then take the step one text of the researcher, carry
7:50
out research, then pass that to the graphic designer, pass it to the writer, and then
7:55
maybe do one final reflection step, and then you'd be done.
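Here is a rough sketch of that manager pattern, reusing the hypothetical llm and run_agent helpers from the earlier sketches; the "agent: task" plan format is an assumption made to keep the dispatch code simple.

```python
# The marketing-manager LLM writes a plan whose steps name agents instead of
# tools, and each step is dispatched to the named agent in turn.

def manager(user_request: str) -> str:
    system = "You are a marketing manager coordinating a team of agents."
    plan = llm(
        system,
        "You have the following team of agents to work with: "
        "researcher, graphic_designer, writer.\n"
        "Return a step-by-step plan, one step per line, in the form "
        f"'<agent>: <task>', to carry out this request:\n{user_request}",
    )

    context = f"Request: {user_request}\n"
    for line in plan.splitlines():
        if ":" not in line:
            continue
        agent, task = line.split(":", 1)
        agent = agent.strip().lower()
        if agent not in ROLE_PROMPTS:
            continue  # skip lines that don't name a known agent
        result = run_agent(agent, f"{context}\nYour task: {task.strip()}")
        context += f"\n{agent} produced:\n{result}\n"

    # One final pass to review, improve, and assemble the deliverable.
    return llm(system, f"{context}\nReview the work above and produce the final output.")
```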
7:59
One interesting view of this workflow is as if you have these three agents up here, but
8:04
this LLM on the left is actually like a fourth agent, a marketing manager, that is
8:10
the manager of a marketing team, setting direction and then delegating tasks to the
8:15
researcher, the graphic designer, and the writer agents.
8:18
So this actually becomes a collection of four agents, with a marketing manager agent coordinating
8:22
the work of the researcher, the graphic designer, and the writer.
8:26
In this video, you saw two communication patterns.
8:30
One was a linear one where your agents took actions one at a time until you got to the
8:35
end.
8:36
And the second had a marketing manager coordinating the activity of a few other agents.
8:41
It turns out that one of the key design decisions you may end up having to make when building
8:46
multi-agentic systems is what is the communication pattern between your different agents?
8:51
This is an area of active research and there are multiple patterns emerging, but in the
8:56
next video, I want to show you what are some of the most common communication patterns
8:59
for getting your agents to work with each other.
9:02
Let's go see that in the next video.
5.5 Communication patterns for multi-agent systems
0:01
When you have a team of people working together, the patterns by which they communicate can be
0:06
quite complex. And in fact, designing an organizational chart is actually pretty
0:10
complex to try to figure out what's the best way for people to communicate, to collaborate.
0:16
It turns out, designing communication patterns for multi-agent systems is also quite complex.
0:22
But let me show you some of the most common design patterns I see used by different teams today.
0:26
In a marketing team with a linear plan, where first a researcher worked, then a graphic designer,
0:31
then a writer, the communication pattern was linear. The researcher would communicate with
0:36
the graphic designer, and then both the researcher and the graphic designer maybe pass their outputs to
0:40
the writer. And so there's a very linear communication pattern. This is one of the
0:46
two most common communication patterns that I see being used today. The second of the two most
0:51
common communication patterns would be similar to what you saw in this example, with planning using
0:58
multiple agents, where there is a manager that communicates with a number of team members and
1:05
coordinates their work. So in this example, the marketing manager decides to call on the researcher
1:10
to do some work. Then if you think of the marketing manager as getting the report back, and then
1:15
sending it to the graphic designer, getting a report back, and then sending it to the writer, this would be a
1:20
hierarchical communication pattern. If you're actually implementing a hierarchical communication
1:25
pattern, it'll probably be simpler to have the researcher pass the report back to the marketing
1:29
manager, rather than have the researcher pass the results directly to the graphic designer and to
1:34
the writer. So this type of hierarchy is also a pretty common way to plan the communication
1:40
patterns, where you have one manager coordinating the work of a number of other agents. And just to
1:45
share with you some more advanced and less frequently used, but nonetheless sometimes used in
1:50
practice, communication patterns, one would be a deeper hierarchy where, same as before, you have
1:56
a marketing manager send tasks to the researcher, graphic designer, writer, but maybe the researcher has
2:01
themselves two other agents that they call on, such as a web researcher and a fact checker. Maybe the
2:07
graphic designer just works by themselves, whereas the writer has an initial style writer and a citation
2:13
checker. So this would be a hierarchical organization of agents, in which some agents
2:19
might themselves call other sub-agents. And I also see this used in some applications, but this is
2:25
much more complex than a one-level hierarchy, so it's used less often today. And then one final pattern
2:31
that is quite challenging to execute, but I see a few experimental projects use it, is the all-to-all
2:38
communication pattern. So in this pattern, anyone is allowed to talk to anyone else at any time. And
2:44
the way you implement this is you prompt all four of your agents, in this case, to tell them that
2:50
there are three other agents they could decide to call on. And whenever one of your agents decides
2:55
to send a message to another agent, that message gets added to the receiving agent's context. And
3:02
then the receiving agent can think for a while and decide when to get back to that first agent. And
3:07
so they can all collaborate as a group and talk to each other for a while until, say, each of them
3:13
declares that it is done with its task, at which point it stops talking. And maybe when everyone thinks
3:18
it's done, or maybe when the writer concludes it's good enough, that's when you generate the final
3:22
output. In practice, I find the results of all-to-all communication patterns a bit hard to predict.
3:27
So some applications don't need high control. You can run it and see what you get. If the marketing
3:32
brochure isn't good, maybe that's okay. You just run it again and see if you get a different result.
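As a very rough sketch of the plumbing behind the all-to-all pattern (the agent names and termination rule are assumptions, and the actual LLM calls are left out), each agent keeps an inbox, and any message sent to it simply becomes part of its context for the next turn.

```python
from collections import defaultdict

# Every agent has an inbox; anyone may message anyone, and received messages
# become part of the receiver's context on its next turn.
AGENTS = ["researcher", "graphic_designer", "writer", "manager"]
inboxes = defaultdict(list)

def send(sender: str, receiver: str, message: str) -> None:
    inboxes[receiver].append(f"From {sender}: {message}")

def agent_context(name: str) -> str:
    # In a real system this context, plus the agent's role prompt, would go
    # into an LLM call that decides whom to message next or declares itself
    # done; the LLM call itself is omitted here.
    return "\n".join(inboxes[name])

# Example: the manager kicks things off; agents then take turns until some
# termination rule (e.g. everyone declares it is done) is met.
send("manager", "researcher", "Research current sunglasses trends and report back.")
print(agent_context("researcher"))
```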
3:37
But I think for applications where you're willing to tolerate a little bit of chaos and
3:41
unpredictability, I do see some developers using this communication pattern. So that, I hope,
3:47
conveys some of the richness of multi-agent systems. Today, there are quite a lot of software
3:54
frameworks as well that support easily building multi-agent systems. And they also make implementing
4:00
some of these communication patterns relatively easy. So maybe if you build your own multi-agent system,
4:06
you'll find some of these frameworks helpful for exploring these different
4:09
communication patterns as well. And so that now brings us to the final video
4:17
of this module and of this course. Let's go on to the final video to wrap up.
5.6 Conclusion
0:04
Welcome to the final video of this course. It feels like we've been through a lot together,
0:04
just you and me, and we've gone through a lot of topics in Agentic AI. Let's take a look.
0:10
In the first module, we talked about what are the applications you can build with Agentic AI
0:15
that just were not possible before. And we started then to look at key design patterns,
0:22
including the reflection design pattern, which is a simple way to sometimes give your application a
0:27
nice performance boost, and then tool use or function calling, which expands what your LLM
0:33
application can do, with code execution being one important case of that. And then we spent a lot of
0:38
time talking about evaluations, as well as error analysis, and how to drive a disciplined process
0:44
of building, as well as analyzing, to be efficient in how you keep on improving the performance of
0:50
your agentic AI system. This fourth module covers some of the material that I think you will find
0:56
most useful as you keep building Agentic AI systems, I hope, for a long time. And then in
1:01
this module, we talked about planning and multi-agent systems that can let you build much more powerful,
1:07
although sometimes harder to control and harder to predict in advance, types of systems. So with
1:13
the skills you learn from this course, I think you now know how to build a lot of cool, exciting
1:19
Agentic AI applications. When my team, or other teams I see, interview people for jobs,
1:25
I find interviews often try to assess whether or not candidates have pretty much the skills you're
1:31
learning about in this course. And so I hope that this course will also open up new professional
1:36
opportunities for you, and let you just do more. Whether you're doing these things for fun, or for
1:42
professional practical settings, I think you'll enjoy this new set of things you can now build.
1:50
So, just to wrap up, I want to thank you again for spending all this time with me,
1:55
and I hope you will take these skills, use them responsibly, and just go build cool stuff.