Example 11.4 – Simple Agent to Agent Interaction
The last few agent examples looked at fixed steering points that influence an agent’s behavior. In this example, the agents themselves can influence the behavior of others. We will start with some very simple behaviors, and in future examples we will add a few more and play around with them. Bear in mind, though, that even changing the parameters of one simple behavior, without rewriting any of the logic, can have a big effect on the resulting form.
In this example, we will imagine a group of agents walking around in a square room. They walk in a straight line until they encounter an obstacle. If this obstacle is a wall, they simply reflect themselves back into the room. This behavior of reflection is probably not what a person or animal would do in real life, but it is useful to know how reflection works since it can come in handy in many applications. It will also keep our agents from running away. 😉
The second behavior is an avoidance behavior. People perform this behavior all the time, often without realizing it. If you are walking through a crowd of people, you subtly alter your path to avoid collisions. In our case, the agents will, once a neighbor comes into a sphere of effect (their “comfort zone”), alter their vector to avoid the neighbor.
Step One – Initial Setup
This is a straightforward version of a setup I have used many times before, so I won’t explain it in too much detail. The top piece produces a “Movement Increment” factor based on the overall dimensions. The other pieces produce a container of points, and another container of associated, semi-random vectors. These will go into the loop.
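The setup stage can be sketched in plain Python as follows. This is an illustrative translation of the Grasshopper containers, not the definition itself; the names (`area_size`, `num_agents`, `move_increment`) are my own assumptions.

```python
import math
import random

random.seed(1)  # fixed seed so the "semi-random" vectors are repeatable

area_size = 100.0
num_agents = 10
move_increment = area_size / 100.0  # "Movement Increment" scaled to the area

# One container of points and one of associated unit heading vectors,
# stored here as a list of dicts for clarity.
agents = []
for _ in range(num_agents):
    pos = (random.uniform(0.0, area_size), random.uniform(0.0, area_size))
    angle = random.uniform(0.0, 2.0 * math.pi)
    vec = (math.cos(angle), math.sin(angle))  # unit-length heading
    agents.append({"pos": pos, "vec": vec})
```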
Step Two – First Behavior – Reflection at Walls
As the points start moving along their vectors, eventually they will come to the area’s outer boundary. I don’t want them to run away, so I am going to simply reflect them back into the playing field before they cross the line. I do this by finding the closest point along the edge curve, and testing the distance to the edge. If the distance is too low (less than the movement interval), the agents are reflected. To do this, I use the “Sift” component to temporarily take out of the list the subset of points and vectors that are of interest. After reflecting, I will bring the two lists back together with the “Combine” component.
The math for this was a bit harder than I thought, but I found a good explanation of what was going on. Basically, you need two vectors: the incoming vector V to be reflected, and the normal N (the unit vector perpendicular to the surface of reflection). From these you compute the vector “Dot Product” V·N, which is just the sum of the products of their components, and measures how much of V points into the surface. These three things, the vector, the normal, and the dot product, are then fed into the reflection formula R = V − 2(V·N)N to get the reflected vector. If you don’t have the patience to figure out what is going on, just copy exactly what I did in the script above, using the right formula.
Note: The formula box will turn red as there are null values going in from our sift operation…but we need to keep those nulls to recombine the data later…so just ignore the red. Once the reflected vectors are produced, they are recombined with the rest of the vectors, keeping their position in the list.
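The reflection formula R = V − 2(V·N)N is easy to check in a few lines of Python. This is a minimal sketch of the math, not the Grasshopper formula box itself:

```python
def dot(a, b):
    # the vector dot product: sum of the products of the components
    return a[0] * b[0] + a[1] * b[1]

def reflect(v, n):
    # n must be a unit normal; R = V - 2(V.N)N
    d = dot(v, n)
    return (v[0] - 2.0 * d * n[0], v[1] - 2.0 * d * n[1])

# An agent heading down-right hits a horizontal wall whose normal points up:
print(reflect((1.0, -1.0), (0.0, 1.0)))  # -> (1.0, 1.0)
```

Note that the horizontal component passes through unchanged while the vertical component flips, which is exactly the bounce we want at the walls.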
Step Three – Keeping A Point History
If I run the loop as done in step two, my points move and reflect correctly, but only the latest point in the sequence is kept. I want to keep track of the whole history of points, so I am going to add some data structuring and listing elements to my script per the image above. Pay close attention to the data structure! The basic concept is to use “List Item” to get the leading point in each list of points, to move this point in some direction based on a series of behaviors, and then to insert this moved point back into an ongoing list using “Insert Item”.
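The history bookkeeping can be sketched like this: each agent keeps a list of past positions, and every iteration we read the leading point, move it, and put the result back at the front, mimicking “List Item” followed by “Insert Item”. The function and variable names are illustrative, not from the definition:

```python
def step(history, vec, increment):
    # "List Item": read the leading (newest) point at index 0
    x, y = history[0]
    moved = (x + vec[0] * increment, y + vec[1] * increment)
    # "Insert Item": put the moved point back at the front of the list
    history.insert(0, moved)
    return history

# One agent walking east in steps of 2 units, starting at the origin:
trail = [(0.0, 0.0)]
for _ in range(3):
    step(trail, (1.0, 0.0), 2.0)
print(trail)  # -> [(6.0, 0.0), (4.0, 0.0), (2.0, 0.0), (0.0, 0.0)]
```

The newest point always sits at index 0, so the rest of the list is the agent’s footprint history, oldest last.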
Step Four – Avoidance Behavior
This is the most important concept in this example, introducing a behavior to steer the agents based on proximity to a neighbor. I am calling this the second behavior, but the way I set up the overall loop, you will notice it is actually happening first (before the reflection routine).
The image below explains graphically what is going on.
Basically, each Agent has an invisible bubble around it, a Comfort Zone. If no other agent is in the comfort zone, the current vector continues unaltered. But when another agent is in the bubble…watch out! In this case, the Agent will try to move away from whichever neighbor is closest inside the bubble. The way this is done is through simple vector addition: the current vector is added to the vector pointing from the neighbor back to the agent, multiplied by a scaling factor. If the scaling factor is low, the avoidance will be slight and slow. If the factor is high, the current heading will have less importance and the Agent will try to get away from the neighbor as quickly as possible.
Note, if you were to reverse the vector between the agent and its neighbor, it would become an attraction behavior instead of avoidance behavior. Below is a detail of the “Avoidance” behavior for reference.
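The avoidance rule boils down to a few lines of vector math. Here is a hedged Python sketch of the idea (the names `radius` for the comfort zone and `strength` for the turn factor are my own; they stand in for the “Area of Effect” and “Strength of Turn” sliders):

```python
import math

def avoid(pos, vec, neighbors, radius, strength):
    # collect any neighbors inside the comfort-zone bubble
    inside = [n for n in neighbors
              if math.hypot(n[0] - pos[0], n[1] - pos[1]) < radius]
    if not inside:
        return vec  # nobody in the bubble: keep the current heading
    # react only to the closest neighbor in the bubble
    nearest = min(inside, key=lambda n: math.hypot(n[0] - pos[0], n[1] - pos[1]))
    # vector from the neighbor back to the agent; flip its sign for attraction
    away = (pos[0] - nearest[0], pos[1] - nearest[1])
    return (vec[0] + strength * away[0], vec[1] + strength * away[1])

# An agent heading east with a neighbor two units above it:
print(avoid((0.0, 0.0), (1.0, 0.0), [(0.0, 2.0)], 5.0, 0.5))
# -> (1.0, -1.0): the heading bends away from the neighbor
```

With `strength` at zero the `away` term vanishes and the behavior is effectively switched off, which matches the “zero” variations shown later.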
Step Five – Track Agent History through circle sizes
That was the basic agent process. This last step is just something I did after the loop to show the agent’s current location with a big circle, and its history with exponentially smaller circles. I won’t explain the logic, but you should try your own ways of representing the agents and their movement over time (fading circles?). You can decide how many rounds you want to show: showing footprints from the last 50, 100, or 200 iterations will influence the pattern.
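One simple way to get the “exponentially smaller circles” effect is to scale each footprint’s radius by a constant decay factor per step back in time. This is a sketch of that idea with made-up parameter names, not the actual post-loop definition:

```python
def footprint_radii(history_length, base_radius=3.0, decay=0.8):
    # index 0 = current position (largest circle); each older footprint
    # shrinks by the same factor, giving exponential decay
    return [base_radius * decay ** i for i in range(history_length)]

radii = footprint_radii(5)
print([round(r, 3) for r in radii])  # -> [3.0, 2.4, 1.92, 1.536, 1.229]
```

Swapping the radius decay for an alpha/transparency decay would give the “fading circles” variation suggested above.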
In the variations below, I used a 20° starting angle for all examples except the first. I showed a history of 200 iterations. The only factors I played with were the “Area of Effect” and the “Strength of Turn”. Note in the first two examples both of these are “zero” which essentially turns the avoidance behavior off.
Note: Some interesting patterns and personalities emerge by changing the two factors. When “Strength of Turn” is high, the agents seem anxious and nervous, but when it is low, they seem cool, calm, and collected.
When the radius of effect is high, combined with a high strength of turn, the agents crawl off into their own personal corners and cry, afraid of the world and everything around them. The most interesting and dynamic patterns emerge when the two factors are balanced somewhere in the middle.