
I am having a hard time understanding how to implement a rule-based decision-making approach for an agent in an agent-based model I am trying to develop.

The interface of the agent is a very simple one.

public interface IAgent
{
   public string ID { get; }

   public Action Percept(IPercept percept);
}

For the sake of the example, let's assume that the agents represent vehicles which traverse roads inside a large warehouse in order to load and unload their cargo. Their route (the sequence of roads from the start point to the agent's destination) is assigned by another agent, the Supervisor. The goal of a vehicle agent is to traverse its assigned route, unload its cargo, load a new one, receive another route from the Supervisor and repeat the process.

The vehicles must also be aware of potential collisions, for example at intersection points, and give priority based on some rules (for example, the one carrying the heaviest cargo has priority).

As far as I can understand, this is the internal structure of the agents I want to build:

[Image: diagram of the agent's intended internal structure]

So the Vehicle Agent can be something like:

public class Vehicle : IAgent
{
  // Added to satisfy IAgent.
  public string ID { get; set; }

  public VehicleStateUpdater StateUpdater { get; set; }

  public RuleSet RuleSet { get; set; }

  public VehicleState State { get; set; }

  public Action Percept(IPercept percept)
  {
    StateUpdater.UpdateState(State, percept);
    Rule validRule = RuleSet.Match(State);
    StateUpdater.UpdateState(State, validRule);
    Action nextAction = validRule.GetAction();
    return nextAction;
  }
}

For the Vehicle agent's internal state I was considering something like:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }
}

For this example, three rules must be implemented for the Vehicle agent.

  1. If another vehicle is near the agent (e.g. less than 50 meters), then the one with the heaviest cargo has priority, and the other agents must hold their position.
  2. When an agent reaches its destination, it unloads the cargo, loads a new one and waits for the Supervisor to assign a new route.
  3. At any given moment, the Supervisor, for whatever reason, might send a command, which the recipient vehicle must obey (Hold Position or Continue).

The VehicleStateUpdater must take into consideration the current state of the agent and the type of the received percept, and change the state accordingly. So, in order for the state to reflect that e.g. a command was received from the Supervisor, one can modify it as follows:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  // Additional Property
  public RadioCommand ActiveCommand { get; set; }
}

Where RadioCommand can be an enumeration with values None, Hold, Continue.
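A minimal sketch of that enumeration:

```csharp
// The Supervisor command enumeration described above.
public enum RadioCommand
{
    None,     // no active command
    Hold,     // hold position until further notice
    Continue  // resume traversing the assigned route
}
```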

But now I must also record in the agent's state whether another vehicle is approaching. So I must add another property to the VehicleState.

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  public RadioCommand ActiveCommand { get; set; }

  // Additional properties
  public bool IsAnotherVehicleApproaching { get; set; }

  public Location ApproachingVehicleLocation { get; set; }
}

This is where I have great difficulty understanding how to proceed, and I get the feeling that I am not really following the correct approach. First, I am not sure how to make the VehicleState class more modular and extensible. Second, I am not sure how to implement the rule-based part that defines the decision-making process. Should I create mutually exclusive rules (which would mean every possible state corresponds to at most one rule)? Is there a design approach that will allow me to add rules without having to go back and forth to the VehicleState class, adding or modifying properties to make sure that every possible type of Percept can be handled by the agent's internal state?

I have seen the examples demonstrated in the Artificial Intelligence: A Modern Approach textbook and other sources, but the available examples are too simple for me to "grasp" the concept in question when a more complex model must be designed.

I would be grateful if someone can point me in the right direction concerning the implementation of the rule-based part.

I am writing in C# but as far as I can tell it is not really relevant to the broader issue I am trying to solve.

UPDATE:

An example of a rule I tried to incorporate:

public class HoldPositionCommandRule : IAgentRule<VehicleState>
{
    public int Priority { get; } = 0;

    public bool ConcludesTurn { get; } = false;

    public void Fire(IAgent agent, VehicleState state, IActionScheduler actionScheduler)
    {
        state.Navigator.IsMoving = false;
        //Use action scheduler to schedule subsequent actions...
    }

    public bool IsValid(VehicleState state)
    {
        bool isValid = state.RadioCommandHandler.HasBeenOrderedToHoldPosition;
        return isValid;
    }
}

A sample of the agent decision maker that I also tried to implement.

public void Execute(IAgentMessage message,
                    IActionScheduler actionScheduler)
{
    _agentStateUpdater.Update(_state, message);
    Option<IAgentRule<TState>> validRule = _ruleMatcher.Match(_state);
    validRule.MatchSome(rule => rule.Fire(this, _state, actionScheduler));
}
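For completeness, the IAgentRule<TState> interface these rules implement looks roughly like this (a sketch; IAgent is the interface from the top of the question, and IActionScheduler is stubbed here so the shape is self-contained):

```csharp
// Stub standing in for the scheduler abstraction used above.
public interface IActionScheduler
{
    // Schedules follow-up actions produced by a fired rule.
}

// The rule abstraction implied by HoldPositionCommandRule above.
public interface IAgentRule<TState>
{
    int Priority { get; }        // conflict-resolution order
    bool ConcludesTurn { get; }  // does firing end the agent's turn?
    bool IsValid(TState state);  // precondition check against the state
    void Fire(IAgent agent, TState state, IActionScheduler actionScheduler);
}
```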
Vector Sigma

  • I have been thinking about this for a couple of days. I don't like having VehicleState contain information about other vehicles and their locations, because then every bit of info (possibly a large amount) that the vehicle might be able to use in a rule becomes part of its state. Perhaps the VehicleStateUpdater should encapsulate the world state? Then the vehicle doesn't need to know the time of day, where other vehicles are, how much cargo they are carrying, etc. - it only needs to know its own location and cargo. – Jerry Jeremiah Jul 19 '21 at 21:33
  • Then the question changes from "How does the vehicle know where all the other vehicles are located?" to "How is the VehicleStateUpdater kept up to date with real-time world state info?" – Jerry Jeremiah Jul 19 '21 at 21:34
  • @JerryJeremiah thank you for your input. It is a good idea to let the StateUpdater encapsulate the world state (the Environment class, as it is called from an agent-based perspective). Perhaps then a rule can take into consideration both the agent's internal state and the world state (Environment). – Vector Sigma Jul 20 '21 at 12:38

1 Answer


I see your question as containing two main sub-questions:

  • Modeling flexibility, particularly how to make it easier to add properties and rules to the system.
  • Rule-based modeling, that is, how to come up with the right set of rules and how to organize them so the agent works properly.

So let's take each of them in turn.

Modeling Flexibility

I think what you have now is not too bad, actually. Let me explain why.

You express the concern about there being "a design approach that will allow me to add additional rules without having to go back-and-forth the VehicleState class and add/modify properties".

I think the answer to that is "no", unless you follow the completely different path of having agents learning rules and properties autonomously (as in Deep Reinforcement Learning), which comes with its own set of difficulties.

If you are going to manually encode the agent knowledge as described in your question, then how would you avoid the need to introduce new properties as you add new rules? You could of course try to anticipate all properties you will need and not allow yourself to write rules that need new properties, but the nature of new rules is to bring new aspects of the problem, which will often require new properties. This is not unlike software engineering, which requires multiple iterations and changes.

Rule-based Modeling

There are two broad styles of writing rules: imperative and declarative.

  • In imperative style, you write the conditions required to take an action. You must also take care of choosing one action over the other when both apply (perhaps with a priority system). So you can have a rule for moving along a route, and another for stopping when a higher-priority vehicle approaches. This seems to be the approach you are currently pursuing.

  • In declarative style, you declare what the rules of your environment are, how actions affect the environment, and what you care about (assigning utilities to particular states or sub-states), and let a system process all that to compute the optimal action for you. So here you declare how taking a decision to move affects your position, you declare how collisions happen, and you declare that reaching the end of your route is good and colliding is bad. Note that here you don't have rules making a decision; the system uses the rules to determine the action with the greatest value given a particular situation.
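To make the imperative style concrete, here is a minimal sketch of a priority-based rule matcher (all names are illustrative, not from any particular library): rules are checked in priority order and the first applicable one supplies the action, which resolves conflicts when several rules apply at once.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal imperative rule engine: a rule is a condition plus an action,
// and conflicts are resolved by priority (lower value wins).
// All names here are illustrative.
public record Rule<TState>(
    string Name,
    int Priority,
    Func<TState, bool> AppliesTo,
    Func<TState, string> GetAction);

public class RuleSet<TState>
{
    private readonly List<Rule<TState>> _rules = new();

    public void Add(Rule<TState> rule) => _rules.Add(rule);

    // Returns the action of the highest-priority applicable rule,
    // or null when no rule matches the state.
    public string Match(TState state)
    {
        var rule = _rules.OrderBy(r => r.Priority)
                         .FirstOrDefault(r => r.AppliesTo(state));
        return rule == null ? null : rule.GetAction(state);
    }
}
```

In the vehicle example, a "hold on Supervisor command" rule would simply get a smaller Priority value than "follow the route", so the command wins whenever both conditions hold.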

One intuitive way to understand the difference between imperative and declarative styles is to think about writing an agent that plays chess. In an imperative style, the programmer encodes the rules of chess, but also how to play chess, how to open the game, how to choose the best movement, and so on. That is to say, the system will reflect the chess skills of the programmer. In a declarative style, the programmer simply encodes the rules of chess, and how the system can explore those rules automatically and identify the best move. In this case, the programmer doesn't need to know how to play chess well for the program to actually play a decent game of chess.

The imperative style is simpler to implement, but less flexible, and can get really messy as the complexity of your system grows. You have to start thinking about all sorts of scenarios, like what to do when three vehicles meet, for example. In the chess example, imagine if we alter a rule of chess slightly; the whole system needs to be reviewed! In a way, there is little "artificial intelligence" and "reasoning" in an imperative style system, because it is the programmer who is doing all the reasoning in advance, coming up with all the solutions and encoding them. It is just a regular program, as opposed to an artificial intelligence program. This seems to be the sort of difficulty you are talking about.

The declarative style is more elegant and extensible. You don't need to figure out how to determine the best action; the system does it for you. In the chess example, you can easily alter one rule of chess in the code, and the system will use the new rule to find the best moves in the altered game. However, it requires an inference engine, the piece of software that knows how to take in a lot of rules and utilities and decide which is the best action. Such an inference engine is the "artificial intelligence" in the system. It automatically considers all possible scenarios (not necessarily one by one, as it will typically employ smarter techniques that consider classes of scenarios) and determines the best action in each of them. However, an inference engine is complex to implement or, if you use an existing one, it is probably very limited since those are typically research packages. I believe that when it comes to real practical applications using the declarative approach people pretty much write a bespoke system for their particular needs.
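As a toy illustration of the declarative idea (all names invented here; a real inference engine does far more): the domain is declared as a set of actions, each a state transition, plus a utility function over states, and a generic chooser that knows nothing about the domain picks the action whose resulting state scores highest.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy domain-agnostic decision maker: it only reads the declared
// transitions and utilities; nothing here is specific to vehicles,
// chess, or any other domain. Illustrative sketch only.
public static class DeclarativeChooser
{
    public static string ChooseAction<TState>(
        TState state,
        IReadOnlyDictionary<string, Func<TState, TState>> actions,
        Func<TState, double> utility)
    {
        // Greedy one-step lookahead; real engines search much deeper
        // and reason over classes of states rather than one at a time.
        return actions
            .OrderByDescending(a => utility(a.Value(state)))
            .First()
            .Key;
    }
}
```

Swapping in a different domain (checkers instead of chess, or the warehouse vehicles) means replacing only the declared actions and the utility; the chooser itself is untouched.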

I found a couple of research open source projects along those lines (see below); that will give you an idea of what is available. As you can see, those are research projects and relatively limited in scope.

After all that, how to proceed? I don't know what your particular goals are. If you are developing a toy problem to practice, your current imperative style system may be enough. If you want to learn about declarative style, a deeper reading of the AIMA textbook would be good. The authors maintain an open source repository with implementations for some of the algorithms in the book, too.

https://www.jmlr.org/papers/v18/17-156.html

https://github.com/douthwja01/OpenMAS

https://smartgrid.ieee.org/newsletters/may-2021/multi-agent-opendss-an-open-source-and-scalable-distribution-grid-platform

user118967
  • Properties are generally used to record the state of the agent, intent and the general environment. Regardless of your chosen style, these properties will change at more or less the same rate. If you get the right level of abstraction in your model definitions, you may not see much change at all. It helps to have most of your pseudo-rules defined and from that identify the environmental and state properties you need to evaluate those rules. Your logic for the rules may change over time, but the environment is much less likely to do so; such a change would warrant new model changes anyway. – Chris Schaller Jul 25 '21 at 03:20
  • @user118967 I greatly appreciate your input. I wasn't aware that these two different approaches to the same problem exist, as far as a rule-based implementation is concerned. You are right that due to my limited knowledge, the imperative style was the first thing that came to my mind as the most straightforward. I can understand the limitations of this style but I still cannot comprehend how to approach the problem from a declarative perspective. I have studied some related material from AI: A Modern Approach but no luck yet :-( Could you point me to a particular chapter/example from these sources? – Vector Sigma Jul 25 '21 at 17:10
  • In a sense, the entire book is about the declarative style, because that's kind of what AI is: model the environment and the agent's possible actions, but let it figure out how to act. Then different chapters talk about different approaches to this same thing: logic, probabilistic reasoning, etc. I suggest Sections 1.1, 1.3, Chapters 2, 7, particularly Section 7.7, Chapters 11, 16 and 17 (I'm looking at the 4th edition TOC, which you can find at http://aima.cs.berkeley.edu/contents.html, but older editions have corresponding chapters). – user118967 Jul 26 '21 at 03:58
  • To wrap your head around the declarative style, I think the chess example is the best. We can write rules establishing how each piece moves, that players take turns, and how the game ends, without writing anything about how to *choose* a good move. That is what the declarative style is all about. – user118967 Jul 26 '21 at 04:00
  • To be clearer: of course we need to write code to choose a good move, but that code is done at a more abstract level that knows _nothing_ about chess, only about reading the rules defining a domain and using them to make decisions. If tomorrow I replace the rules defining chess by others defining checkers, the same decision-making code should still work and be able to play checkers. Now that's the ideal picture, and in practice things may get murkier. – user118967 Jul 26 '21 at 19:59
  • @user118967 Thank you for the additional clarifications. It seems apparent that this is an area of interest which requires deep understanding of the core concepts (rule-based systems, decision-making, artificial intelligence etc.) and thus I need more study in order to translate this knowledge to domain-specific code relevant to the example I describe in my question. Also, the question itself probably belongs more to Software Engineering Stack Exchange than Stack Overflow. I will make sure to mark your answer as accepted in order for you to get the full bounty reward. :-) – Vector Sigma Jul 26 '21 at 22:19
  • Thank you for the bounty reward. :-) Are you aware that there is an AI Stack Exchange site? That might have been the best place for the question, although it did have some software engineering aspects. – user118967 Jul 28 '21 at 08:09
  • @user118967 Thank you for the suggestion, I had no idea! *thumbs up* – Vector Sigma Jul 28 '21 at 09:42