Agent Types
Four basic types, in order of increasing generality:
1. Simple reflex agents
2. Reflex agents with state/model
3. Goal-based agents
4. Utility-based agents
Simple Reflex Agent
Instead of specifying individual mappings in an explicit table, common input-output associations are recorded.
Requires processing of percepts to achieve some abstraction
Frequent method of specification is through condition-action rules
- if condition then action
- If car-in-front-is-braking then initiate-braking
Similar to innate reflexes or learned responses in humans
Efficient implementation, but limited power
- Environment must be fully observable
- Easily runs into infinite loops
function SIMPLE-REFLEX-AGENT (percept) returns an action
- static: rules, a set of condition-action rules
- state ← INTERPRET-INPUT (percept)
- rule ← RULE-MATCH (state, rules)
- action ← RULE-ACTION [rule]
- return action
The agent works by finding a rule whose condition matches the current situation and then performing the action associated with that rule.
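To make this concrete, here is a minimal Python sketch of the pseudocode above. The dict-based percept, the interpret_input helper, and the two example rules are illustrative assumptions, not anything fixed by the pseudocode itself.

def interpret_input(percept):
    # Abstract the raw percept into a state description. Here the percept is
    # already a simple dict such as {"car_in_front_is_braking": True}.
    return percept

rules = [
    # (condition, action) pairs stand in for condition-action rules.
    (lambda state: state.get("car_in_front_is_braking"), "initiate-braking"),
    (lambda state: True, "keep-driving"),  # fallback rule
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)     # state <- INTERPRET-INPUT(percept)
    for condition, action in rules:      # rule <- RULE-MATCH(state, rules)
        if condition(state):
            return action                # action <- RULE-ACTION[rule]

print(simple_reflex_agent({"car_in_front_is_braking": True}))  # initiate-braking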
Reflex agents with state/model
Even a little bit of unobservability can cause serious trouble.
The braking rule given earlier assumes that the condition car-in-front-is-braking can be determined from the current percept, i.e., the current video image.
More advanced agents maintain some kind of internal state in order to choose an action.
An internal state maintains important information from previous percepts
- Sensors provide only a partial picture of the environment
- Helps with some partially observable environments
The internal state reflects the agent's knowledge about the world
- This knowledge is called a model
- May contain information about changes in the world
Model-based reflex agents
Required information:
- How the world evolves independently of the agent
- How the agent's own actions affect the world
An overtaking car generally will be closer behind than it was a moment ago.
The current percept is combined with old internal state to generate the updated description of the current state.
function REFLEX-AGENT-WITH-STATE (percept) returns an action
- static: state, a description of the current world state; rules, a set of condition-action rules; action, the most recent action, initially none
- state ← UPDATE-STATE (state, action, percept)
- rule ← RULE-MATCH (state, rules)
- action ← RULE-ACTION [rule]
- return action
Goal-based agents
Merely knowing about the current state of the environment is not always enough to decide what to do next.
For a taxi, for example, the right decision depends on where it is trying to get to.
So the goal information is also needed.
Goal-based agents are far more flexible.
- If it starts to rain, the agent adjusts itself to the changed circumstances, since it also looks at the way its actions would affect its goals (remember doing the right thing).
- For a reflex agent, we would instead have to rewrite a large number of condition-action rules.
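A minimal sketch of goal-based selection, assuming a hypothetical predict_result function that tells the agent where each action would lead; real goal-based agents use search and planning to find whole action sequences.

def goal_based_agent(state, goal, actions, predict_result):
    # Pick an action whose predicted outcome satisfies the goal.
    for action in actions:
        if predict_result(state, action) == goal:
            return action
    return None  # no single action reaches the goal; a planner would search deeper

# Illustrative taxi example: each turn leads to a different destination.
destinations = {"turn-left": "airport", "turn-right": "downtown", "go-straight": "suburbs"}
predict = lambda state, action: destinations[action]
print(goal_based_agent("intersection", "airport", list(destinations), predict))  # turn-left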
Utility-based agents
Goals are not really enough to generate high-quality behavior.
There are many ways to reach the destination, but some are qualitatively better than others.
- Safer
- Shorter
- Less expensive
We say that if one world state is preferred to another, then it has higher utility for the agent.
Utility is a function that maps a state onto a real number:
- utility : state → ℝ
Any rational agent can be described as possessing a utility function.
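A minimal sketch of utility-based selection under assumed, made-up weights: each candidate route's predicted outcome is scored by a utility function (a mapping from state to real number), and the agent maximizes it.

def utility(outcome):
    # Map a predicted state onto a real number: reward safety, penalize
    # travel time and monetary cost (the weights are illustration values).
    return 10 * outcome["safety"] - outcome["time"] - outcome["cost"]

def utility_based_agent(outcomes):
    # outcomes maps each candidate action to its predicted resulting state;
    # choose the action whose outcome has the highest utility.
    return max(outcomes, key=lambda action: utility(outcomes[action]))

routes = {
    "highway":   {"safety": 0.9, "time": 30, "cost": 5},
    "back-road": {"safety": 0.7, "time": 45, "cost": 2},
}
print(utility_based_agent(routes))  # highway wins: utility -26 vs -40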