A few days back I published an introduction to a series of ‘space colonization’ algorithms. The term sounds sexier than it is, evoking images of bases on Mars and interstellar travel, but it really just means connecting a cloud of random points into a network using one or a combination of specific strategies. As mentioned in the previous post, Peter Stevens proposed in 1974 that there are essentially only four possible strategies for connecting a population of points in 2D space with a non-redundant network, i.e. a network where only one path exists between any two given points in the network.

The goal of the current algorithm is to connect the points with a **meander** network, i.e. a network where a single line connects all points without crossing over itself, and with frequent and unpredictable shifts in direction, unlike the spiral which similarly connects all points with one line, but using a consistent rotational direction.

The algorithm proposed here is *not* perfect, but the results are generally good; if perfection is called for, some manual tweaking may be needed at the end. I have some ideas for fixes, but these may come in a future post.

**Step One – Populate an Area, Calculate Cloud Density, and determine a Search Radius**

In this first step, we will draw a closed curve. The algorithm works well with rectangles and circles, and even some irregular shapes like trapezoids, but won’t give good results with shapes that are *too* irregular. We then turn this curve into a surface with the **Boundary Surface** component and populate it with a cloud of points using the **PopGeo** component. Next we want to find the density of the point cloud by computing the average distance between neighboring points. I do this with the **Closest Points** component searching for four points, using **Cull Index** on item 0 since this is always the point itself (the closest person to you is yourself, technically, but usually you don’t want to know this). I then used the **Average** component twice: first to find the average distance from a single point to its three closest neighbors, and then to average these distances over all points in the set. I then multiply this ‘cloud density’ factor by a parameter to determine the meander’s search radius, which will affect the meander algorithm’s behavior in the next steps. Depending on cloud density, values between roughly 1.3 and 2.5 seem to give the best results.
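As a rough sketch of this density calculation, here is plain Python standing in for the Closest Points and Average components (the function name is my own, not part of the Grasshopper definition):

```python
import math

def cloud_density(points, k=3):
    """Average distance from each point to its k nearest neighbours,
    averaged over the whole cloud. Skipping index i mirrors the
    Cull Index on item 0 (a point's closest point is itself)."""
    per_point = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        per_point.append(sum(dists[:k]) / k)
    return sum(per_point) / len(per_point)

# search radius = cloud density * user factor (values of 1.3-2.5 work well)
corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
search_radius = cloud_density(corners) * 1.3
```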

**Step Two – Set up the Loop**

For this algorithm we will be using the Anemone plugin. See Algorithm 8.1 for the basics of loop setup if you’re not familiar with it. We will be using three Data Ports in the loop: D0, D1, and D2. We input the initial point cloud into D0 and will iteratively remove one point from this cloud at every step of the algorithm. D1 tracks the current ‘active’ point, which is initially a user-input point from Rhino, but which at each subsequent step will be the last point removed from the point cloud in D0. Finally, D2 collects our meander line segments. Initially there is no data in the D2 stream, but by the end there will be a list of segments equal to the initial population of points input into D0.

**Step Three – Find the Next Point for the Growing Meander**

The loop performs three operations in each round: first, it finds the next point in the meander; second, it draws a line between the end of the last segment and the identified point; third, it removes the identified point from the set of unused points and makes it the new ‘active point.’

In each round, to determine which point the meander will grow to next, the algorithm searches all potential points in the point cloud within a given search radius from the end of the current meander. It does this by drawing a **Circle** with a radius equal to the Point Cloud Density * Search Radius Factor determined in the previous step. Next, using the **Point in Curve** component together with an **Equal**ity test, candidate points are identified: in this case all points in or on the curve are candidates, so points with an R value of either 2 or 1 can be selected, hence the use of the R not equal to 0 output. Candidate points are then gathered into a set with the **Dispatch** component, after which **Deconstruct Point** extracts the Y value of each point. Since in this case we want a meander moving from the bottom of our point population to the top, the point with the lowest Y value is identified as the next point in the meander. Note that for this to work, the meander start point needs to be at the bottom of the point set. You can start to the left or right of the set as well, but then you need to reconfigure the script slightly in each case: a meander moving from left to right will use the point with the lowest X value, a meander moving from right to left will use the point with the highest X value, while a meander moving from top to bottom will use the point with the highest Y value.
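The selection logic can be sketched as follows (a Python stand-in for the Circle / Point in Curve / Dispatch / Deconstruct Point chain, assuming a bottom-to-top meander):

```python
import math

def next_meander_point(active, unused, radius):
    """Return the unused point inside the search circle with the
    lowest Y value, or None when no candidate is in range."""
    candidates = [p for p in unused if math.dist(active, p) <= radius]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p[1])

nxt = next_meander_point((0.0, 0.0), [(0.0, 2.0), (0.5, 0.5), (3.0, 0.2)], 1.0)
# nxt is (0.5, 0.5), the only point within the radius
```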

There are cases, however, when no potential points are found within the search radius. What then? To keep the script from getting stuck, my provisional solution is simply to move to the **Closest Point** in the overall set. This can create some funny behaviors and is not an ideal solution, but for now it is an acceptable short-term hack to keep things moving along. Future iterations of the script will hopefully have an improved ‘escape’ case.

To choose which of the behaviors is activated, the algorithm uses a simple if-then expression: <<if (x = 0, y, z)>>. In this case *x* is determined through a **Mass Addition** of the relation output of the **Point in Curve** component. If no points are in or on the curve, the result will be 0. If x = 0, the point input into y is chosen (the output of the fallback behavior); *else* the standard behavior, with a point input into z, is chosen.

**Step Four – Draw Meander Geometry, Replace Active Point, Remove Identified Point from the Unused Point Set**

The final part of the loop does a few simple things. First, a **Line** is drawn from the active point at the beginning of the round (the end of the growing meander) to the identified next point; this line is then added to the set of lines being fed through the D2 port of the Anemone loop. The identified point is then fed into the D1 input on the loop, effectively replacing the active point for the next round. Finally, the identified point needs to be removed from the set of unused points. This is done by using the **Closest Point** component to obtain the point’s own index, and then using **Cull Index** to remove it from the set.
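Put together, one pass of the whole loop can be sketched like this (plain Python in place of the Anemone ports; `grow_meander` is my own name, and ties on the Y value are broken by list order here, which the Grasshopper definition does not specify):

```python
import math

def grow_meander(cloud, start, radius):
    """Sketch of the full Anemone loop: D0 = unused points, D1 = the
    active point, D2 = the accumulated line segments."""
    unused = list(cloud)
    active = start
    segments = []                                    # the D2 stream
    while unused:
        in_range = [p for p in unused if math.dist(active, p) <= radius]
        if in_range:                                 # standard behaviour
            nxt = min(in_range, key=lambda p: p[1])  # lowest Y value
        else:                                        # the 'escape' case
            nxt = min(unused, key=lambda p: math.dist(active, p))
        segments.append((active, nxt))               # the new Line
        unused.remove(nxt)                           # cull the used point
        active = nxt                                 # new active point (D1)
    return segments

path = grow_meander([(0, 1), (1, 1), (0, 2), (1, 2)], (0, 0), radius=1.5)
```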

**Variations**

Once everything is up and running, it’s time to see the effect of changing various parameters. The results of changing the cloud density are shown in the top row, the results of changing the search radius factor in the second row, and the results of changing the random seed in the third row. The most important of these is the search radius. Where this value is low, the meander is, well, a bit more meandering; making it high tends to create a much more striated series of meanders. If it is *too* low, however, the results will be unacceptable, as the escape condition gets invoked too many times and the meander will jump around a lot more.

Click Here to Download the GH File of the script

If this all makes sense, an added challenge might be to adapt the script to work through a cloud of varying densities. Here the point cloud density needs to be recalculated with every round of the loop, so that the search radius adapts to the local density near the current active point. I was quite happy with the results, with the line in the image below used to generate the rendering at the top of this post.


*This post reproduces content from Chapter 8 of my doctoral thesis and serves as an introduction to several forthcoming ‘space colonization’ algorithms. The first of these, the meander, is now available, with others coming soon!*

In Harvard-trained architect Peter Stevens’s 1974 book *Patterns in Nature*, the author proposed that in 2- and 3-dimensional physical space there are only a very limited number of possible configurations of elements, and that consequently nature is forced to reuse the same basic formal strategies in diverse contexts. The variety and diversity we perceive in nature comes not from innovative formal structures, but from topological deformations and hybrid configurations of the limited palette of possible spatial arrangements. (Stevens, 3-4) For example, according to Stevens, in 2D space there are only four strategies to connect a random group of points to each other without any overlap between points and without any redundant connections, i.e. where there is only one possible path between any two random points. He also proposed that these graphs correspond to four archetypes for pattern formation in nature, each with certain efficiencies and drawbacks. (Stevens, 37-48) Some of the images from Stevens’s book can be found at Annette Millington’s blog here.

My own interpretation of Stevens’s patterns is reflected in the introductory image to this post, but in summary, Stevens’s four strategies are as follows:

**Explosions:** An explosion connects a single point directly to each other point in the set. (See Image Top, First Row)

**Spirals:** The spiral starts at a central point, and progressing in one of two directions, connects points in a rotating fashion. There is no branching in a spiral, with each point connected to only two other points. (See Image Top, Second Row) In the mathematical terms of graph theory, all points in a spiral have a degree structure of 2 (connected to only two other points), except for the start and end points, which have a degree structure of 1.

**Meanders:** A meander is similar to a spiral, having an identical degree structure (1-2-2….2-2-1), but it connects points in a much more flexible manner, with frequent changes in the direction of connections. Meanders and spirals share many similar properties.

**Branching Structures**: In a branching structure, nodes can have a degree higher than 2, and points are typically connected to their nearest neighbor regardless of whether other connections come into this point already. Branching structures usually start at a single point, and “grow” from this single point incrementally, adding connections to the nearest neighboring point in an iterative manner.

Stevens recognized that the limitations of space would cause similar organizational structures to appear even when different processes of formation were in play, but like György Kepes in his book *The New Landscape in Art and Science*, he observed general trends and commonalities in the behavior of these four networks, and proposed that processes of formation and function were closely related to the associated form. (Stevens, 37)

The simplest of the four patterns proposed by Stevens was the explosion, which also has the shortest average distance between any two points. This network prioritizes speed, but its cost is extremely high in terms of overall length. The spiral, on the other hand, is generally very compact, having a low overall length, and can develop in a very ordered fashion, but as overall size increases, the distance between any two random points increases as a function of its length. The spiral can be seen as an orderly transfer of energy or material from one point to another. The meander is similar to the spiral in its properties, but in contrast to an ordered development, it can develop in a number of dynamic and chaotic ways. Depending on how it is formed, the meander can in some instances be even more compact than the spiral, but in others it has a far greater length. This network forms in spaces with energy transfer from one point to another, but where flows of matter or energy are highly variable. The last of the networks has perhaps the most surprising properties. The structure of a branching network, like the explosion, connects one point to every other point in the network in a fairly direct path; it is generally only slightly less efficient than the explosion in this way. It does this, however, in a very economical way, with a low overall length, even compared with spirals or meanders. Because of its economy and efficiency, it is no surprise that this type of network is the basis of the body plan of many organisms, including all vertebrates, some invertebrates (the others having a fundamentally spiral body plan), as well as most plants.

Most networks are not purely of one type or another, and hybrids exist between them. River basins, for example, generally follow a branching pattern connecting all points in the river basin to a single river mouth, but between nodes in the overall branching structure, meander patterns tend to form. Likewise, plants might have an overall branching structure, but this is deployed in a spiral fashion. (Stevens, 37-48, Bell, 20-27)

**Algorithms for Non-redundant Networks**

To test Stevens’s propositions about these four basic non-redundant graph structures or patterns, I created a series of algorithms, one for each strategy, to test their properties for space colonisation and the overall efficiency of the networks created. The goal of this post is to summarise the findings of the four algorithms, each of which will be covered in a bit more detail later.

The simplest of the networks proposed by Stevens is the explosion. Here the algorithm 1) simply finds the point closest to the center of the point cloud, and then 2) connects this point to each other point.
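A minimal sketch of the explosion, assuming Python in place of the Grasshopper definition:

```python
import math

def explosion(points):
    """Connect the point nearest the cloud's centroid to every other
    point in the set."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    hub = min(points, key=lambda p: math.dist(p, (cx, cy)))  # step 1
    return [(hub, p) for p in points if p != hub]            # step 2

edges = explosion([(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)])
```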

The spiral also starts with a single point. Conceptually, the spiral can be said to start in the center of the cloud, but in order to avoid certain problems with the outer points, in this case the formation is reversed and the outer points are added to the spiral first. To do this the algorithm 1) Finds the point furthest from the center of the spiral and draws a circle through this point, with its center at the center of the spiral. 2) Next a line is drawn from the center of the spiral, through the last point added to the spiral, and continuing to the circle drawn in step one. 3) The intersection point is moved a small amount equal to the average distance between points in the point cloud, in a direction tangent to the circle in either a clockwise or counter-clockwise direction based on the vector of the last segment added (or randomly selected in the first round). 4) The closest point not yet in the network to the translated point from step 3 is added to the spiral.
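The spiral steps can be sketched as follows (again a Python stand-in; the rotation sense is fixed here rather than derived from the last segment, so this is a simplification of step 3):

```python
import math

def spiral(points):
    """Outside-in spiral sketch: extend a ray from the center through
    the last added point out to the bounding circle, nudge the
    intersection tangentially, and add the closest unused point."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    unused = list(points)
    # start with the point furthest from the center
    current = max(unused, key=lambda p: math.dist(p, (cx, cy)))
    unused.remove(current)
    # tangential step ~ average nearest-neighbour distance in the cloud
    step = sum(min(math.dist(p, q) for j, q in enumerate(points) if j != i)
               for i, p in enumerate(points)) / len(points)
    order = [current]
    ccw = 1  # fixed rotation sense (a simplification)
    while unused:
        # 1) circle through the furthest remaining point
        r = max(math.dist(p, (cx, cy)) for p in unused)
        dx, dy = current[0] - cx, current[1] - cy
        d = math.hypot(dx, dy) or 1.0
        # 2) ray from the center through the current point to the circle
        ix, iy = cx + dx / d * r, cy + dy / d * r
        # 3) move the intersection tangentially along the circle
        tx, ty = ix - ccw * dy / d * step, iy + ccw * dx / d * step
        # 4) the closest unused point to the shifted target joins the spiral
        current = min(unused, key=lambda p: math.dist(p, (tx, ty)))
        unused.remove(current)
        order.append(current)
    return order

order = spiral([(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)])
```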

Similar to the spiral, the meander represents a more chaotic flow of matter or energy, and for me it was the most difficult of the four basic patterns to model. I tried various algorithmic strategies, but all had defects. The most effective strategy, demonstrated here, 1) starts at the point with the smallest Y value. 2) A circle of a parametrically variable radius is drawn at this point, and all points within this radius are considered as candidates. In this example the circle has a radius 1.3 times the average distance between points in the point cloud. The final decision on which point to add is 3) the point within this radius with the smallest Y value. The algorithm then 4) repeats this process, with the circle always centered on the last point added. In general, the algorithm produces non-redundant meanders, but errors do appear, especially towards the algorithm’s final steps.

The final of the four non-redundant networks, the branching network, is in contrast to the meander very easy to model and error-free. The algorithm 1) orders all the points in the point cloud by their distance from the center of the cloud. 2) It then selects the point closest to the center (the first item in the ordered list) and adds it to the network. 3) The algorithm then gradually works through the list, taking the next ordered point and drawing a line from this point to the closest node already in the network. Once a node is added to the network, it is removed from the ordered list from step one and moved to a second list of nodes already in the network.
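The branching steps can be sketched like so (Python stand-in; distance ties are broken by list order here):

```python
import math

def branching(points):
    """Nearest-node branching: order points by distance from the
    centroid, then attach each one to the closest node already in
    the growing network."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ordered = sorted(points, key=lambda p: math.dist(p, (cx, cy)))  # step 1
    network = [ordered[0]]                 # step 2: seed closest to center
    edges = []
    for p in ordered[1:]:                  # step 3: attach to nearest node
        nearest = min(network, key=lambda n: math.dist(n, p))
        edges.append((nearest, p))
        network.append(p)
    return edges

edges = branching([(0, 0), (1, 0), (2, 0), (3, 0)])
```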

**Tests on Network Properties**

Once all of these networks were created, I ran a series of tests using a shortest path algorithm on each network to determine the average distance between any two random nodes, using only the associated edges for each network strategy. The results of these tests, comparing average network distance between nodes with the cumulative length of all the network’s edges, i.e. the total network length, are summarized in the image below. Among the non-redundant networks, the explosion always maintains a lead in minimizing the average distance between any two random points, but its overall length increases much faster than the other networks’ as points are added. On the other hand, branching networks, while having only slightly higher average distances than explosion networks, are very economical in minimizing total network length. The efficiency of this type of network hints at why it is found in so many natural systems.
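The test itself can be sketched with a small Dijkstra routine (my own stand-in for the shortest-path component; it assumes the network is connected):

```python
import math
from heapq import heappush, heappop

def avg_network_distance(points, edges):
    """Mean shortest-path length over all unordered node pairs,
    travelling only along the network's edges (Dijkstra from each
    node; assumes the network is connected)."""
    graph = {p: [] for p in points}
    for a, b in edges:
        w = math.dist(a, b)
        graph[a].append((b, w))
        graph[b].append((a, w))
    total, pairs = 0.0, 0
    for i, src in enumerate(points):
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heappop(heap)
            if d > dist[u]:
                continue  # stale heap entry
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heappush(heap, (nd, v))
        for dst in points[i + 1:]:
            total += dist[dst]
            pairs += 1
    return total / pairs

# three collinear points joined in a chain: pair distances 1, 2, 1
pts = [(0, 0), (1, 0), (2, 0)]
chain = [((0, 0), (1, 0)), ((1, 0), (2, 0))]
avg = avg_network_distance(pts, chain)
```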

In the coming days, I will be posting some scripts relating to colonizing space with these general strategies, starting with the meander. In the meantime, however, you could try writing an algorithm based on the logic explained from the images above, since your solutions may be better than mine!

**Sources**

Joseph Claghorn. *Algorithmic Landscapes: Computational Methods for the Mediation of Form, Information, and Performance in Landscape Architecture. *(doctoral thesis Leibniz University Hannover, 2018), 178, 184-189.

Peter Stevens. *Patterns in Nature. *(Boston: Little, Brown and Company, 1974), 3-4.

Simon Bell, *Landscape: Pattern Perception and Process. 2nd ed. *(London: Routledge, 2012), 20-27.

I’ve already posted a few examples of Cellular Automata, but in hindsight some of them were a bit complicated, especially for those who don’t have any prior experience with this computational paradigm. I have a few more *even more* complicated ones I want to highlight in future blog postings, but I thought it might be useful to post an example of a much simpler one for those just encountering the topic for the first time. This particular example comes from the posting “The Cellular Automaton Method for Cave Generation” on Jeremy Kun’s blog Math ∩ Programming and is perhaps the simplest example I have encountered. It is worth popping over there to read his description of the method before proceeding, since he explains it quite well and there is no use rewriting what has already been well written.

In short, though, this CA resolves within a few rounds a random interior ‘cave-like’ structure from an initial random distribution of occupied, live cells (state ‘1’), and vacant, dead ones (state ‘0’). To do this, in each round, each cell is checked in relation to its neighbours (usually 8, but 5 or 3 if on the edges or corners) to determine if its state remains the same or if it changes. The conditions for a change are as follows:

Born – If the cell is ‘dead’ (state ‘0’), and 6 or more of the neighbours are ‘alive’ (state ‘1’), the state becomes ‘alive’ (state changed from ‘0’ to ‘1’)

Die – If the cell is ‘alive’ (state ‘1’) and fewer than 3 of its neighbours are ‘alive’ (state ‘1’), the state becomes ‘dead’ (state changed from ‘1’ to ‘0’)

Jeremy Kun uses the shorthand *B678/S345678* to describe this ruleset (B = Born, S = Survive, i.e. not Die). So if there are 6, 7, or 8 ‘live’ neighbours, a ‘dead’ cell is born, and if there are 3, 4, 5, 6, 7, or 8 ‘live’ neighbours, a ‘live’ cell ‘survives.’ The image below shows a few examples of this ruleset in action in a very simple 5×5 CA with an initial 50/50 distribution of live (grey) cells and dead (white) cells. The first three images show the initial pattern and highlight 3 examples of the ruleset in action. The two images in the second row then show the first and second (and final) evolutions of this particular CA. It is not particularly interesting, but make sure the ruleset is clear before proceeding to set up the simulation. Also try to answer the question: why does the CA stop evolving after the second round?
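One evolution round of this ruleset can be sketched in Python (cells stored in a flat row-major list, with edge and corner cells simply having fewer neighbours; the function names are my own):

```python
def ca_step(states, rows, cols):
    """One B678/S345678 round on a rows x cols grid (row-major list).
    Dead cells with 6+ live neighbours are born; live cells with
    fewer than 3 live neighbours die."""
    def live_neighbours(r, c):
        count = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue  # skip the cell itself
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    count += states[rr * cols + cc]
        return count

    out = []
    for r in range(rows):
        for c in range(cols):
            n = live_neighbours(r, c)
            s = states[r * cols + c]
            if s == 0 and n >= 6:
                out.append(1)      # Born
            elif s == 1 and n < 3:
                out.append(0)      # Die
            else:
                out.append(s)      # unchanged / survives
    return out
```

For example, an all-live 3×3 grid is stable under this rule, while a dead centre cell surrounded by 8 live neighbours is born in the next round.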

Got it? Good! Let’s move forward!

Before working on the core logic of this script, we just need to complete a few basic steps to setup our game board or playing field. As always, we want to start small until everything is working well, at which point we can expand the size of our simulation.

Here I used the **Square Grid** component with 20 x 20 cells, with the *(C)ell* output being flattened. I then measure this output with **List Length** to determine how many values I need to generate with my **Random** number generator, which by default will output numbers to six decimal places between 0 and 1. I then want to ‘convert’ these random values to one of two states, ‘1’ being ‘alive’ or ‘0’ being ‘dead.’ I do this by adding an **Expression** comparing the random values to a parameter I created with a slider called ‘Percentage Live vs. Dead Start.’ This slider has values between .40 and .60, since much more or less than that tends to generate an uninteresting, homogenous field in the end. The ‘x’ value input into the Expression is the list of random values. The ‘y’ value is my Percentage Live parameter. The expression itself is *“if (x>y, 1, 0)”*, which is Grasshopper’s syntax for saying *“if x (the random value on the list) is greater than y (the percentage parameter), then output the value ‘1’, otherwise output the value ‘0’.”* The expression compares each random value against my parameter and outputs a list of 0s and 1s, which are my initial states.
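The initialization can be sketched like this (Python in place of the Random and Expression components, mirroring the *if (x>y, 1, 0)* logic; the function name and seed handling are my own):

```python
import random

def initial_states(cell_count, threshold, seed=0):
    """One random value per cell, compared against the slider
    threshold exactly as the 'if (x > y, 1, 0)' expression does."""
    rng = random.Random(seed)
    return [1 if rng.random() > threshold else 0 for _ in range(cell_count)]

states = initial_states(400, 0.5)  # one state per cell of a 20 x 20 grid
```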

The states can be previewed with a light or dark colour using the components shown in the top right of the image. Here I am using light for empty or ‘dead’ and dark for occupied or ‘live’ cells.

The next order of business in setting up our game board is to establish the topology of cell proximity. This is done here by finding the *(C)entre point* of each cell using the **Area** component and inputting these points into the *(P)oint* input of the **Proximity2D** component. In this particular case, and in contrast to the Fur Algorithm 12.1, which had fairly complicated proximity relations, the ‘neighbourhood’ is a very simple ‘Moore’ neighbourhood: the 8 cells in the compass directions N, NE, E, SE, S, SW, W, NW. To fix this neighbourhood I input the fixed value ‘8’ into the *(G)* input (which limits the number of relations), as well as the result of the **Expression** *1.5 * ‘Cell Size’* into the *(R+)* input, which establishes a maximum radius to look for relations. The result is that every internal cell, such as the one marked in red, has 8 relations, and the cells on the edges, such as the one marked in green, have 5 or 3 relations. This data is put into a data container tagged ‘Proximity Matrix Topology.’
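The resulting topology can be sketched as an index list per cell (a Python stand-in for the Proximity2D output, on a row-major grid):

```python
def moore_neighbours(rows, cols):
    """Per-cell list of Moore-neighbourhood indices on a rows x cols
    grid (row-major), mirroring the 'Proximity Matrix Topology':
    8 neighbours inside, 5 on edges, 3 in corners."""
    topo = []
    for r in range(rows):
        for c in range(cols):
            nbrs = [rr * cols + cc
                    for rr in (r - 1, r, r + 1)
                    for cc in (c - 1, c, c + 1)
                    if 0 <= rr < rows and 0 <= cc < cols
                    and (rr, cc) != (r, c)]
            topo.append(nbrs)
    return topo

topo = moore_neighbours(3, 3)
```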

In contrast to the previous two CA examples, for this particular simulation I want a ‘frame’ at the edges of the simulation where all the edge cells are always occupied or ‘live.’ To achieve this, I will find out how many neighbours each cell has by measuring the ‘Proximity Matrix Topology’ with the **List Length** component, flattening the result, and then comparing the results with the ‘Initial Cell States List’ using the **Expression** *“if (x=8, y, 1)”*. Again, in simpler English, this means *“if the value of ‘x’ (the number of neighbours for each cell) is equal to 8 (meaning the cell is internal, not at the edges or corners), keep the cell state as the input ‘y’ (the initial cell states list); otherwise set the cell state to ‘1’.”* You can preview the result of this operation with the components shown in the top right of the image above, as in step 1.

We finally get to the heart of the simulation, where we set up our looping procedure to ‘evolve’ our Cellular Automaton. You will need the Anemone plugin–please see Algorithm 8.1 for hints on setting up a loop if you need them.

First we will implement the born rule, where a cell with a state ‘0’ changes to state ‘1’ if 6 or more neighbours are active. To do this we will resort to the proximity matrix established in step two. Here for each cell index, the index of all neighbours is provided. To understand this, it helps to zoom into one cell to see what is going on.

Cell 78 has neighbours at cell index numbers 57, 58, 59, 77, 79, 97, 98, and 99. The ‘data’ package produced by the **Proximity** component has associated these values together. We now want to use these indices to *list* the cell state values (0 or 1) of cells 57, 58, 59, 77, 79, 97, 98, and 99. We do this by using **List Item**, with our evolving cell-state list plugged into the *(L)ist* input and the ‘Proximity Matrix Topology’ data container plugged into *(i)ndex*. We can then use the **Mass Addition** component to sum all the 0s and 1s from cells 57, 58, 59, 77, 79, 97, 98, and 99 to get the total number of ‘active’ neighbours. Flatten the *(R)esult* output from **Mass Addition** to get this result for each List Item. In this case there are ‘7’ active neighbours for cell ‘78,’ whose value at the start of the round was ‘0,’ or dead. According to our rule, then, this cell should change its state to ‘1.’

We achieve this by using the **Expression **component with the instruction *if (y>=6, 1, x)*. In plainer English, this means again that for each list item *“if the value of y (the sum state of the neighbours) is greater than or equal to six, then return the result ‘1’, otherwise return the result x (which is current cell state, either ‘0’ or ‘1’, i.e. the state is unchanged.)” *To examine if this is working, it might be helpful to preview the interim result and look at a sample of cells with panels before and after the expression is executed.

Hopefully the logic of the expression we just set up is clear, as we are now going to set up a very similar **Expression** for the ‘Die’ rule. Here we input the list of updated states from the ‘Born’ rule into the ‘x’ input, and the same results of the **Mass Addition** of neighbours into the ‘y’ input. The only difference is that our expression is now *“if (y>=3, x, 0)”*. Hopefully by now the syntax is becoming clear. Try to understand what the expression is doing here, and again check the panels to see if the expected results are being produced for a few sample cells.

Finally, we have one last **Expression **to add to keep our border always in a ‘live’ state. The logic here is identical to the logic employed in Step 3 so return to this step if you need to. The results of our rules being employed are then fed into the D0 input of our **Loop End** component.

Now that the hard work is done, it is time to reap the rewards (or the frustration if you made a mistake!). You can now increase the iteration count on the Anemone loop to watch the CA ‘evolve.’ As mentioned previously, this particular ruleset produces stable results very quickly, and in my initial 20×20 grid, I reached a stable state after about 5 rounds. If everything seems to be running smoothly, you can now increase the size of the simulation. Depending on your preview settings, this needs to be handled with care! Below is an example of a 100×100 grid. Note the time to achieve a stable state goes up, but it still resolves fairly quickly, in this case after about 10 rounds.

You can also play with different initial percentages at this point to see how the results change.

Depending on your design goals, you may not want to output the results of the CA in a ‘raw’ state, and a bit of post-processing can get rid of the pixelated feel. Jeremy Kun recommends on his blog running further CAs on top of the initial one to iteratively ‘smooth’ the structure, but here I used a simpler smoothing operation. First I **Dispatch** the cells into two groups based on their state, and then, using the **Region Union** Boolean operator, the ‘dead’ areas are brought together into closed polylines. I then rounded out the edges with the **Fillet** component.

Below are two examples of a near final result using various initial parameters and by changing the random number seed.

From here, you can do other operations. Extrude them into solids and send them to a 3D printer…

Or put the voids into Algorithm 4.7 to create an archipelago of Islands.

If you are having trouble getting it setup, you can Download the GH File Here.


Generative or Algorithmic Art goes back to the very earliest days of computer graphics, and some of the key pioneers of this movement produced work before computer screens were even a thing. They had to come up with a clear logic, program an algorithm, and hope for the best when the plotter spat out the results. These earliest pioneers continue to be a source of inspiration for algorithmic art, and their early experiments remain useful for those learning to code or design algorithms today.

One of these early pioneers is the French-Hungarian artist Vera Molnár. She and other generative artists took important ideas from abstract art and Minimalism, which also flourished in the 1960s, when many of these early experiments were done. For more, see her biography on Wikipedia: https://en.wikipedia.org/wiki/Vera_Molnár.

While many of her artworks were generated with algorithms, her piece *(Dés) Ordres *from 1974 stood out to me as being both very beautiful and relatively easy to code. In this case, less truly is more.

The logic is fairly simple. A regular grid of squares is offset multiple times towards the squares’ centres. Some of the squares are then randomly reduced, after which the four corner vertices are slightly jiggled in the X and Y directions.

Below is a simple script recreating for the most part the logic of the pattern.

**Step One: Draw a regular Square Grid and Offset Cells towards Centres**

First drop a **Square Grid** component. This takes a couple of parameters. The first is the ‘cell size,’ which is the size of the outermost square in our grid. Second, the ‘number of cells’ needs to be input for both the x and y directions. In this series, Molnár uses a 17 x 17 grid of cells. While setting up the script we will rely on a 5 x 5 grid for clarity, after which we can expand to 17 x 17 or larger.

Before offsetting the outermost square towards the centre, we need to know the total amount to offset, and then divide that amount by the number of offsets we would like. To do this, the ‘cell size’ is **Divided** by 2.01 to approximate the distance to the centre of the square without being exactly half. This is then **Divided** by the ‘number of offsets’ parameter. The number of offsets in the example above is ‘5,’ but there are actually only ‘4’ offsets, since the first offset amount in our series is ‘0.’ The number 5 goes into the *(C)ount* input on the **Series** component, while the result of our **Division** operation goes into the *i(N)terval* input on the Series component. Then, using **Cull Index**, I remove the first item (index 0) in the series, since I don’t want to keep the line offset by the amount ‘0.’

Since I want the offsets to go towards the inside of my square cells, I then make the series **Negative** using the appropriate component. The series is then ‘*grafted*‘ into the *(D)istance* input of the **Offset** component.
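The numbers in these two steps can be sketched quickly (Python standing in for the Division, Series, Cull Index, and Negative components; the function name is mine):

```python
def offset_distances(cell_size, num_offsets):
    """Total inset ~ half the cell (cell_size / 2.01), split into equal
    steps; the zero item is culled, then everything is negated so
    Offset moves towards the square's interior."""
    step = (cell_size / 2.01) / num_offsets
    series = [i * step for i in range(num_offsets)]  # Series, Count = 5
    series = series[1:]                              # Cull Index, item 0
    return [-d for d in series]                      # Negative

dists = offset_distances(10.0, 5)  # four inward offset distances
```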

**Step Two: Randomly Reduce the number of Squares. **

After setting up the initial ordered grid of squares, it is time to introduce some randomness. In this case, I simply use the **Random Reduce** component on the *flattened* list of squares. To know how many values to remove, use the **List Length** component to measure this list, and **Multiply** the list length by a decimal percentage–in the example above this is ‘.35’ removing 35% of squares generated at the end of step one. This result is input into the number of values to *(R)educe* input on the **Random Reduce **component, and a random number *(S)eed* is input as well. Note other components such as culls or dispatches could be used as well.

**Step Three: Identify Corner Vertices for Remaining Squares**

Here I use the **Control Points** component on a flattened list of the remaining squares. Note that for a four-sided square, *five* control points are produced. This is because the point where the curve starts and ends is duplicated. For our purposes, we only want this point once, so again we use **Cull Index** to remove the item at index ‘0’ (input into the *(i)ndex* parameter). We could also have culled the item at index 4 (the endpoint); either works, just choose one.

**Step Four: ‘Jiggle’ the Four Vertices**

This is a very similar procedure to the one used in Jittery Rectangles – Example 1.3, so I won’t go into detail here. You can either ‘hard’ input the amount of jitter for each domain, or you can make a domain which scales the amount of jitter as a percentage of the initial ‘cell size’ parameter, depending on whether you think you may scale the pattern up or down at a later point. I chose to do the latter in this example.

One key difference to note from example 1.3 is that the points need to be jittered independently of which square they are in–that is, vertex 1 in square 1 should have a unique jitter independent of vertex 1 in square 5, for example–but later the vertices need to be restructured so as to ‘remember’ which square they originally were in. To do this, you need to **Flatten** the list of vertices at the beginning of this step, move the vertices randomly in the X and Y directions, and then **Unflatten** the list using the original list of points as a guide to restore the data structure.

**Step Five – Reconstitute the Squares – Adjust Parameters**

If everything is structured correctly up to this point, the squares can be reconstituted by simply inputting the **unflattened** list of points into the *(V)ertices* input of the **Polyline** component. It is also important that the **Polyline** be closed, and this will only happen if we input the Boolean ‘True’ into the *(C)losed* input. Now is the time to adjust parameters to achieve the desired results. In the image above, the initial jitter amount was very strong, so I decided to tone this back to get something closer to what Vera Molnár showed in her work. Now is also the time to play with scaling up the number of cells in the grid, trying various percentage values for **Random Reduce**, etc. A few examples of results can be seen below.

Below is an image of the completed GH script for reference.


I have received a lot of positive feedback on the blog, but as it became more popular, it became increasingly hard to manage, especially since I was getting deeper into my doctoral research and the algorithms I was looking at didn’t lend themselves very well to quick blog posts. I had a lot of ideas, but never the time to formulate them into simple tutorials, and all my writing efforts had to be directed elsewhere.

Anyways, I am happy to report the thesis is done, I have started a new position at the University of Sheffield in Northern England, and now that I am getting my other responsibilities under control, I will have a bit more time to dedicate to adding new content. I will probably clean up the index and restructure a lot of the sections at some point later this summer, and will add posts on more complicated algorithms I looked at for my thesis that aren’t full ‘tutorials’ in the coming months, but I will try and add some new easy tutorials as well. Regardless, I have committed to updating the blog on average once per month. I am also going to start an Instagram account for the blog soon. Updates will follow. Finally, I will try and slowly add the *.gh files to some of the more popular and complex scripts, since many have requested them but I wasn’t able to provide them. Anyways, there will be at least two new posts in April, hopefully with more regular postings after that!

A recent source of inspiration has been some of the work done by Miguel Cepero of Voxel Farm/Voxel Studio, developer of a procedural world generator, and documented at his blog Procedural World. I’ve recently experimented with a few different Grasshopper scripts based on some of the concepts he discusses, and I wanted to show a couple of these here on this blog. The first is a script based on an extremely well-known fractal, the Cantor Set, which on Procedural World is translated into 3D as “Cantor Dust.”

**Step One – Setup a Basic Cantor Set Script**

Setting up a 2D Cantor set is a very straightforward process if you’ve already tried setting up a few of your own recursive loops in Grasshopper using Anemone. If you haven’t done so, I would refer you to a few of the earlier examples in this blog under sections 8 and 9. Here I’m showing the entire script for a 2D Cantor set from which we will build our 3D script.

All we are doing here is taking a single line segment, imported from Rhino, and then using the “Shatter” component to break it into 3 equal segments. The middle segment is discarded, and the other two segments, retrieved through the “List Item” component, are then moved a small distance upwards. They are also looped back to be shattered again (and again). Like many recursive fractals, even this small script will crash your computer if you let it run for too long, but after 4 or 5 rounds the geometry gets so small as to almost disappear into “dust” anyway. I also have a second process looping through channel D1 to save all of my old geometry. This step can be eliminated if you use the “record” function of Anemone, but I like to keep the geometry around in containers for future use.
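The looping logic can be sketched outside of Grasshopper as well. This minimal Python version stands in for the Shatter/List Item/Move components, with segments reduced to simple tuples:

```python
# Minimal sketch of the 2D Cantor set loop described above, using segments
# as (start_x, end_x, y) tuples instead of Rhino geometry.
def cantor_step(segments, lift):
    """Shatter each segment into thirds, discard the middle, move the rest up."""
    out = []
    for x0, x1, y in segments:
        third = (x1 - x0) / 3.0
        out.append((x0, x0 + third, y + lift))   # left third, moved up
        out.append((x1 - third, x1, y + lift))   # right third, moved up
    return out

history = [[(0.0, 27.0, 0.0)]]    # keep the old geometry, like channel D1
for _ in range(4):                # 4 recursions before things turn to dust
    history.append(cantor_step(history[-1], lift=1.0))

print(len(history[-1]))           # 2**4 = 16 segments after 4 rounds
```

Keeping every round in `history` mirrors the second loop channel; dropping it would correspond to using Anemone’s record function instead.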

Even at this early stage, you’ll notice that if we change where the line is shattered, the script will give different results. Below are tests showing different potential shatter patterns by changing the values in the panel and the results after 4 recursions.

**Step Two – Adding Randomness to the Standard Cantor Script**

Before going into the 3D version, we are going to make just a couple more variations to show the principles we will be using going forward. Here two random number generators are introduced, one to randomize the division points, and one to randomize the vertical distance moved.

The first random number generator, pictured above, generates a value between .15 and .49 to determine the first division point, and then subtracts this value from 1 to determine the second division point. This will always lead to a symmetrical division. The generator is tied to the counter (to which I add a small value to avoid a constant “0” seed) and a number slider.

A second random number generator can be used to determine the amount of movement. Simple enough.
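A minimal sketch of that symmetric random division, assuming Python’s `random` module in place of the Grasshopper generator:

```python
import random

# Sketch of the randomized division described above: one random value per
# round picks the first cut between .15 and .49, and symmetry gives the
# second cut as 1 minus that value.
def random_cuts(seed):
    rng = random.Random(seed)        # seed tied to the loop counter
    t = rng.uniform(0.15, 0.49)      # first division point
    return t, 1.0 - t                # symmetric pair of division points

t1, t2 = random_cuts(seed=7)
assert 0.15 <= t1 <= 0.49 and abs((t1 + t2) - 1.0) < 1e-12
```

Adding a small value to the counter before seeding, as in the script, just avoids reusing a constant “0” seed on the first round.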

**Step Three – Standard 3D Cantor Set**

We will forget the random number generator for a minute and just try to modify our script to do a standard 3D Cantor set. The first modification is that we will start by inputting a surface into our loop instead of a line. For now we will input a simple square surface. Next, instead of using the “Shatter” component to split a line, we will use Isotrim together with Divide Domain2, splitting our surface into 9 subsurfaces (3×3). Finally, we use “List Item” to keep the four corner surfaces (0, 2, 6, 8) for further subdivision. When these surfaces are moved, we should also go ahead and change this to a “Z” vector instead of the “Y” we used in the previous script. By now the script should look something like what is shown below.
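The corner-picking logic is easy to sketch; this assumes the row-major 0..8 indexing Divide Domain2 gives a 3×3 grid:

```python
# Sketch of one 3D Cantor round: Divide Domain2 splits a surface into an
# n x n grid of subsurfaces indexed row by row, and only the four corner
# cells survive to the next recursion.
def corner_cells(grid_size=3):
    n = grid_size
    last = n - 1
    # row-major indices of the four corners of an n x n grid
    return [0, last, n * last, n * n - 1]

print(corner_cells())   # [0, 2, 6, 8]
```

For the 3×3 case this reproduces the (0, 2, 6, 8) list fed to List Item above.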

One further addition to our script will be an “Extrude” component to give us solid geometry, extruding our geometry an amount equal to the amount moved in the vertical direction. But we still need to keep the un-extruded, moved surfaces, as these will be recursively looped and subdivided, not the extruded geometry.

**Step Four – Irregular Surface Divisions**

It was pretty easy in our 2D version to use our random number generator to produce values for shattering our line. It is much, much, MUCH more complicated in this 3D example, as there isn’t any kind of simple component for irregularly dividing surfaces. Furthermore, at each recursion we want to assign different random values to each surface, so that they are each acting independently of each other. This will require careful structuring of data. In short, instead of our simple Surface => Divide Domain2 => Isotrim routine, we are replacing it with spaghetti salad. :0

This will not be easy, but don’t panic. I will try and explain. OK, maybe you can panic now and just download the completed script at the end of this post, but if you want to walk through it, I’ll do my best.

We’ll start by dividing our surface into 4 sections using the standard Isotrim before the looping starts. I am creating the surface in a bit of an awkward way, exploding the curve, then using the first and third segments of my rectangle to create my surface using the “Edge Surface” component.

You could use boundary surface at the beginning and it will work at first, but to increase the script’s flexibility for running the Cantor set on *multiple irregular polygons*, which I will do at the very end, you need to construct your surface in a way that will produce what is called an “untrimmed surface”. The boundary surface component creates a “trimmed surface” which can cause problems in some instances. I’m only telling you this because I was hitting my head against the desk for several hours trying to figure out why my script wasn’t working with *multiple irregular shapes* until I stumbled upon a solution to the problem.

OK, moving on. You can use your own rectangle for now, but I am using just one 10 x 12 unit rectangle for this example. Once the four initial subsurfaces pass into the loop, you need to make sure each is *grafted* into its own branch so that each subsurface can be treated independently and get its own random number set. Next, we use Deconstruct Domain2 (not Divide Domain2) to get the “U” and “V” values for each surface. U in this case corresponds to the Y axis and V to the X, but this has to do with how I created my surface, nothing to do with X/Y coordinates. Rotate the shape and you will see the U and V values remain the same regardless of the orientation of the rectangle.

The Deconstruct Domain2 component gives U0 and U1 values, as well as V0 and V1 values, for each surface. These can be seen as the start and end values for the domains, *relative to the surface*. I then want to create some new U and V values, two to be precise, at random values *between* each U0/U1 and V0/V1 pairing. This will be similar to how we created the random values in the 2D Cantor set. First, we find the bounds of each pair by subtracting the start value from the end value. This value is then multiplied by one of a set of random numbers. You need as many random numbers as you have items, and then the random numbers need to be grafted to match the data structure of the surfaces. I used a lot of panels here to show what is going on.

In the next part of this step, we are going to collate our numbers and construct new domains corresponding to each of our individual subsurfaces.

Below is the top half of this construct, just for the U values. We are using the “Merge” component to merge first the U start value (U0), then the location of the 1st cut (U0 + random number), then the location of the 2nd cut (U1 – random number), and finally the U end value (U1). This will create a small sublist corresponding to each subsurface from the previous part of this step. While you won’t see the surface divisions yet, hopefully you can see how the values in the panel correspond to the U divisions we are looking for, shown in the image to the left.

These sublists now just need to be converted to domains. To do this, use Shift List, followed by Construct Domain, to get a domain spanning between each value in our list, and then cull the last item, using Cull Index, since this is “junk” that we don’t need (the domain between the last value and the first value). To get the right index, I used a formula, but it might be safe to just cull index 3.
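The Merge → Shift List → Construct Domain → Cull Index chain can be sketched in Python for one subsurface. The U values and random offsets here are made up for illustration:

```python
# Sketch of building the three U domains for one subsurface.
u0, u1 = 0.0, 5.0
r1, r2 = 1.2, 0.9                     # the two random cut offsets

values = [u0, u0 + r1, u1 - r2, u1]   # Merge: [start, cut1, cut2, end]
shifted = values[1:] + values[:1]     # Shift List by 1 (wraps around)

# Construct Domain pairs each value with its shifted partner...
domains = list(zip(values, shifted))
# ...and Cull Index removes the junk wrap-around domain (end back to start).
domains = domains[:-1]
print(domains)   # three domains: (u0, cut1), (cut1, cut2), (cut2, u1)
```

The same three-domain structure is then built for the V values.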

Once this is set up, do the same for the V values, here shown without the panels.

Lastly, we need to do a bit more gymnastics to weave, so to speak, the two linear sets of domains together into one squared domain. If we simply plug the values together with the Construct Domain2 component, however, we will not get what we are looking for, since, as you will notice from the last step, we had 3 domains for each subsurface (in this case 12 domains total). This is not enough, and will only split the surface into 3 subsurfaces, one for each domain. To solve this, we need to duplicate our list of domains 3 times using the “Duplicate Data” component (which will repeat each data item 3 times, but only in its own sublist), and then use “Partition List” to get the three duplicates into their own separate lists. Then we can construct our squared domain with “Construct Domain2.”
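One way to sketch the weaving step in Python (the domain values are hypothetical; the point is simply that every U domain must be paired with every V domain):

```python
# Sketch of the Duplicate Data / Partition List weave: each of the 3 U
# domains must meet each of the 3 V domains to get 9 squared domains per
# subsurface.
u_domains = [(0.0, 1.2), (1.2, 4.1), (4.1, 5.0)]
v_domains = [(0.0, 2.0), (2.0, 3.5), (3.5, 6.0)]

# Duplicate Data repeats each U domain 3 times in order...
u_repeated = [u for u in u_domains for _ in range(len(v_domains))]
# ...while the V list simply tiles, so pairing them covers the full grid.
v_tiled = v_domains * len(u_domains)

squared = list(zip(u_repeated, v_tiled))
print(len(squared))   # 9 squared domains -> 9 subsurfaces via Isotrim
```

Plugging only the original 3+3 domains into Construct Domain2 would give just 3 pairings, which is why the duplication is needed.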

Finally, although not altogether obvious, we need to use “Trim Tree” to get rid of the outermost branch without flattening our data all the way. In the end, we want just four sublists to correspond to our original four subsurfaces. Once this is done, plug into the Isotrim component to (hopefully!) get the surface division to work.

**Step Five – Test Looping and Make Additional Modifications as Desired**

So now that the hard part is behind us, we can carefully increase our number of iterations, and if that is working we can modify the script and adjust parameters to get it to behave more like what we’ve envisioned.

This particular script doesn’t seem to bring much after about 4 loops…except system crashes. After looking at its behavior, I decided I didn’t like the really tiny pieces getting as much vertical extrusion as the bigger pieces. I decided that having a component of each shape’s size added to the move and extrusion height equations might help.

So with this minor modification, the results are a bit different.

**A few Variations**

If all is working well, you can input multiple outlines at once and it will perform the algorithm faithfully. It *should* work with any four-sided closed polygon, although you may need to “flip” the direction of the line in some cases if you are getting unexpected results. The image above is of a 4×4 starting grid.

And this is from an irregular field of polygons I drew. Each polygon has four sides.

Once rendered, it looks a little like the Death Star surface…

OK, well, if you have trouble figuring this out, click here to download the GH file

In example 4.1 I mentioned a custom VB component I had used to analyze the flow of water across a surface. I recently tried to recreate this using the Anemone looping components to use with meshes (for various reasons), and it was actually very easy to do. The logic is similar in some ways to example 8.5, which I used to find a path through the landscape, but in some ways this example is pretty simple.

I will be using meshes this time instead of surfaces, partly because I haven’t talked about them too much, but meshes do have some advantages (and disadvantages) over surfaces which I will not get into here. To create this particular mesh, I imported topographic data from SRTM using Elk, and then used the point output from Elk to create a Delaunay mesh.

**Step One – Populate Geometry and setup a loop**

To get started, we will use the Populate Geometry component. Initially I will use only one point, but we will scale this up to around 2000 points by the end. What’s important for the loop to work properly at the end is that the output of Populate Geometry be GRAFTED. While not 100% necessary, you should also simplify, otherwise you will get messy indexing at the end.

While we are at it, we will set up a basic loop using Anemone as explained in prior examples.

**Step Two – Find curve for possible movement directions**

The logic of this loop is that after each round, we want to find out what direction water would flow in if it were at a specific point on the site. Using a similar logic to the last example, we will intersect two shapes together to find a curve of possible directions of movement. We will then identify one point on this curve for the actual direction of movement.

To accomplish this, I first draw a mesh sphere with a radius equal to a “Step Size.” Decreasing the step size will increase accuracy at the expense of looping time. You will need to find an appropriate step size based on the overall size of the landscape you are analyzing. In this case I have a fairly large area (around 8 km x 8 km), so I am using an 80 m step size. I find around 1% of the overall dimensions of the landscape usually gives scale-appropriate results. This can be changed later if you want more accuracy. If you are testing this on a smaller model, you will need to adjust appropriately.

I then add a Mesh | Mesh Intersection component, which outputs a curve of where the water could possibly go if it flows 80 m in any direction. This is basically a circle sketched on the surface of the mesh.

**Step Three – Find Lowest Point on Curve to Determine Actual Water Movement Direction**

So you probably already know where the water will go, but you might not know how to get there. If there is any doubt: water is an agent, a very dumb agent, but it has one goal–to follow gravity to get to the ocean as fast as possible. So it will always flow down. Well, there are minor exceptions if you take forces like momentum, cohesion, and friction into account, but we won’t do that today.

To find this point, we need to know the “lowest point” on the curve we just drew in the last step. There is no such component in Grasshopper, but we can use “Curve Closest Point” and then use a point at sea level, or the center of the earth, as a comparison point.

In this case, I deconstruct my sphere’s center point, and reconstruct it with a “Z” value equal to zero. If I am working close to sea-level (in this case I am 1000 m up) it may make sense to set the “Z” value with “Construct Point” to a negative number, like -1000 m (or the center of the earth if you like).

I then use this point together with the Intersect curve from the last step to find the “lowest point.” This is where the water will head next.

**Step Four – Finish the loop and draw a connecting line**

So this is an image of the whole loop. I use the “Insert Item” component to reinsert the new “lowest point” into the list, which after 0 rounds is 1 item long. This is why I use the “Counter + 1” expression to determine the insertion index. Once the item is added, I can plug this list into the end of my loop. You may want to use the Simplify toggle to keep your list clean. Pay attention to where I placed these in the image. Last, I add an “Interpolate Curve” component at the end.
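The whole loop can be sketched in Python, substituting a simple analytic bowl for the mesh. The terrain function, sample count, and starting values below are my own assumptions for illustration, not part of the GH script:

```python
import math

# Sketch of the water-flow loop: at each round, sample points on a circle
# of radius step_size around the current point (the "circle sketched on
# the mesh") and move to the lowest one.
def height(x, y):
    return 0.01 * (x * x + y * y)    # hypothetical terrain: a bowl

def flow(start, step_size, rounds, samples=72):
    path = [start]                   # the growing point list
    x, y = start
    for _ in range(rounds):
        angles = (2 * math.pi * i / samples for i in range(samples))
        candidates = [(x + step_size * math.cos(a), y + step_size * math.sin(a))
                      for a in angles]
        # the "lowest point" on the circle is where the water heads next
        x, y = min(candidates, key=lambda p: height(*p))
        path.append((x, y))
    return path

path = flow(start=(100.0, 0.0), step_size=10.0, rounds=5)
# each step should go downhill toward the bottom of the bowl
assert all(height(*path[i + 1]) < height(*path[i]) for i in range(5))
```

In the GH version the circle comes from the mesh intersection and the lowest point from Curve Closest Point, but the per-round logic is the same.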

Once the loop is complete, you want to increase the looping counter gradually to see if everything is working. Run 1, then 5, then 10 rounds, etc. to get started. While it doesn’t look impressive yet, if you get a series of points and a connecting line going downhill and following the valleys, everything should be fine once you scale up!

**Step Five – Scale Up!**

So go big or go home, the boss says? Well, all we need to do is add a few more points to our Populate Geometry and we’ll have a nice stormwater runoff analysis. First try 3-5 points, not too big. If this isn’t working, maybe you forgot to graft? If it’s working, scale up quickly. Here I have 200 points, run over 20 rounds.

Looking at it from above, you will notice that even after a couple of rounds, the initially random cloud of points will find a structure. By 20 rounds, almost all the water has accumulated into resting points. This is where the script stops really working. We know the water actually keeps flowing, but in this case, our data isn’t precise enough to account for the horizontal transport of water in rivers, where water might only drop a couple of meters over the course of many many kilometers. But it IS good at showing how water moves on steeper sites.

You can speculate about where the rivers are, however, based on your data. If you have a series of still clusters or beads, bets are good that there is a river connecting them. Above I have the GH output, and below I sketched in the river lines in Photoshop.

Anyways, from this basic analysis, all sorts of further analyses can be done. More on that soon…


I wanted to take the time to show an example of using Grasshopper to work with data imported from a source outside of Rhino, such as a spreadsheet developed in Excel. Importing data from outside sources is also fundamental to more advanced interactions, such as having the program communicate with remote machines or sensors.

In this example, I wanted to make a diagram of a river’s watershed, abstracting the spatial relationship of the river’s tributaries and showing how much each tributary contributes to the overall river’s flow. The technical name for this type of diagram is a “Sankey Diagram.” I actually drew one of these initially in Illustrator, which is superior to Rhino/Grasshopper in many ways for representation, but it was a very time-consuming process, and if I wanted to create a similar diagram for another watershed, I would have to start from scratch. Another drawback of drawing this in Illustrator is that if a datapoint or datapoints change, it can be a time-consuming process to update. It is also a static representation, and as we all know, a river’s flow is dynamic and changing. Having a representation or diagram that can automatically update with changing values, in this case the flow in the individual tributaries, can be a very powerful form of representation.

There are a number of tools and plugins that can deal with importing data into Grasshopper, but for this example I will use one of the tools native to the program, and then draw some geometry based on the dataset.

**Preparation – Collect and Organize Data**

The first step is probably the most time-consuming, to actually collect data that could be useful for your diagram. In this case, I researched using Wikipedia all of the tributaries of the River Leine in central Germany, which happens to flow right behind my house. I was able to get the length and watershed area for each tributary, and measured at what river kilometer each tributary branched. Further, I noted whether it was a left- or a right-branching tributary. I was able to get the average discharge of some of the branches, but not all, so I decided I would estimate discharge based on the area of collection, for the purposes of this example.

I compiled all of this data into an Excel file. There are some plugins that can import Excel tables (e.g. Howl + Firefly), but maybe a simpler way is to export your Excel file as a *.csv file (comma-separated values), and then to save this file again using a text editor as a *.txt file.

If you would like to follow along in this example, you can copy the following and save it as a *.txt file

R,Grosse Beeke ,12,26,5,30

R,Juersenbach,18.9,26,6,49

R,Auter,24,26,10,113

L,Totes Moor,59.7,26,8,56

L,Westaue,72.2,35,38,600

L,Foesse,94.5,53,8,20

L,Ihme,99.5,48,16,110

R,Innerste,121.5,58,99.7,1264

L,Gestorfer Beeke,125.4,58,8,13

R,Roessingbach,125.5,58,14,36.3

L,Haller,132.8,70,20,124

L,Saale,138.5,73,25,202

R,Despe,142.1,74,12,47

L,Glene,153.1,74,11.7,40

R,Warnebach,156,74,8,27

L,Wispe,161.7,74,22,74

R,Gande,175.6,74,41,114

R,Aue,177.7,103,23,113

L,Ilme,186.5,105,32.6,393

L,Boelle ,191.9,110,10,21

R,Rhume,192.8,116,48,1193

L,Moore,198,118,11,43

R,Beverbach,206.9,120,14,35

L,Espolde,207.9,126,16.1,65

R,Rodebach,208.1,130,8,20

R,Weende,208.9,135,9.2,18.6

L,Harste,209.9,138,8.6,29

L,Grone,211.6,140,6,26

R,Lutter,211.7,144,8.1,38

L,Rase,219.3,150,9,23.8

R,Garte,219.4,152,23,87.2

R,Wendebach,223.4,162,16.2,36

L,Dramme,225,161,14.4,53

L,Molle,230,182,7,10

R,Schleierbach,232.4,191,6,15

R,Rustebach,236.8,210,8,13

L,Steinsbach,238.1,215,5,15

L,Lutter,244.7,233,7,21

R,Beber,245.4,237,7,30

L,Geislede,249.6,260,19,52

R,Steinbach,255.3,276,6,14

R,Etzelsbach,258.4,293,5,13

R,Liene,264.2,337,7,18

What you’ll notice is that each line has a series of values, separated by commas, corresponding to the individual “cells” in Excel. Once this is done, you can move on to the next step.

**Step One – Import Data**

To import the data, we will use three components. The first is the “File Path” parameter, which feeds into the “Read File” component, in this case set to “Per Line”. Each line will get its own Index in GH. Then we use the “Split Text” component, with a simple comma symbol as the second input, which further structures our data splitting each line at each comma. I put panels behind the components for reference.
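In plain Python, the same import-and-split logic looks like this, using two rows copied from the dataset above:

```python
# Sketch of the File Path -> Read File (per line) -> Split Text chain.
raw_lines = [
    "R,Grosse Beeke ,12,26,5,30",
    "L,Totes Moor,59.7,26,8,56",
]

# Read File "Per Line" gives each line its own index; Split Text with a
# comma breaks each line into the original spreadsheet cells.
rows = [line.split(",") for line in raw_lines]
print(rows[0])   # ['R', 'Grosse Beeke ', '12', '26', '5', '30']
```

In Grasshopper the raw lines would of course come from the *.txt file on disk rather than a hard-coded list.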

**Step Two – Sort Data**

What you do next is entirely situational, but before you start drawing geometry, you may need to reorganize and/or restructure data so it will be useful to you. In this case, there is not too much restructuring necessary; I just wanted to split my dataset into two subsets based on whether the tributaries head left or right, since we will be drawing those differently. Here, I list the first item (index 0), and then dispatch the list based on whether the data is in a left branch or a right branch. The dispatch component needs a true/false value, so to get around this problem, I simply replaced my R’s and L’s with True’s and False’s. In this case I needed to also remove empty branches using the “Remove Branch” component.
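A sketch of the dispatch logic in Python, using a few sample rows with the same column layout as the dataset above:

```python
# Sketch of the dispatch step: item 0 of each row says whether the
# tributary branches left or right, and that drives the split into two
# subsets.
rows = [
    ["R", "Grosse Beeke", "12", "26", "5", "30"],
    ["L", "Totes Moor", "59.7", "26", "8", "56"],
    ["L", "Westaue", "72.2", "35", "38", "600"],
]

# Replacing R/L with True/False gives the dispatch its boolean pattern.
pattern = [row[0] == "R" for row in rows]
right_branches = [row for row, is_r in zip(rows, pattern) if is_r]
left_branches = [row for row, is_r in zip(rows, pattern) if not is_r]

print(len(right_branches), len(left_branches))   # 1 2
```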

The general idea, however, is you may need to play around with your inputs and/or data structure to get something which is most helpful to you.

**Step Three – Draw Basic Skeleton**

Before we get too crazy, it is useful to draw only the basic skeleton of our system based on our data. Basically you will be using a lot of “List Item” components to call out your data, and then draw geometry in GH based on this. I recommend grouping your list Items and labelling them to help you keep track of what is what, otherwise you will soon be left with a confusing mass of spaghetti. Well the spaghetti might be inevitable, but labelling always helps when you need to make some changes!

**Step Four – First Refinements**

Once we have our basic skeleton, it’s time to start gradually refining the process. In this case, I eventually want to show each tributary with a varying thickness based on how much water it contributes to the river system. As mentioned previously, this will be a factor of “watershed area” as a rough approximation of water volume contributed. I first list the watershed area for each tributary, divide by a factor, and then want to progressively move the tributaries towards the right (I know, they are left tributaries, but left in the sense of a boat traveling downstream…if you are traveling upstream, which we are in this case, it would be on your right. Hope I’m not confusing you. Think Left Bank in Paris if that helps).

The Mass Addition component comes in very helpful here, both for calculating the total area of all the branches and for progressively telling you how they add up. One small thing we need to do, in order to get the branches to move correctly, is to subtract the step values from the total value so the branches will get the proper “X” vector.
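The Mass Addition arithmetic can be sketched like this (the branch widths are made-up numbers):

```python
# Sketch of the offset calculation: partial sums of the branch widths give
# each tributary's running step, and subtracting each step from the total
# gives the X vector that moves the branch into place.
widths = [3.0, 1.5, 4.0, 0.5]          # hypothetical branch thicknesses

total = sum(widths)                     # Mass Addition result
steps = []                              # Mass Addition partial results
running = 0.0
for w in widths:
    running += w
    steps.append(running)

x_vectors = [total - s for s in steps]  # offset for each branch
print(x_vectors)   # [6.0, 4.5, 0.5, 0.0]
```

The last branch ends up with a zero offset, so the branches stack up progressively toward the total width.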

**Step Five, and so on… Further Refinements**

I won’t explain everything that’s going on here…these are all simple operations to improve the graphic quality of our lines. I am doing a few arcs, but also placing text to label my diagram.

Once we get close to what we want for the Left branches, we can copy and paste for the right branches. Notice we have to change a few of the vectors (positive to negative) to get the geometry to move and draw in the correct direction.

How far you want to go is up to you. Here I gave a line thickness using the “Sweep” command, sized the text proportionally based on tributary size (with a minimum text size for the smallest streams), and also made the arc radius proportional to the branch thickness. This is all pretty simple to do, but the GH script can get a bit messy.

**Further Steps – Using the Script with a New Data Set and Changing Values**

Once you have a working process setup, you can plug in new datasets, as long as they are structured the same as the dataset you used to create your script, to do another graphic diagram. Here I researched the same values for the Ems River (on the border between Germany and Netherlands). The research took hours. Plugging the new values into GH and generating this diagram took less than five seconds.

You can also update values, and the diagram will change. Say you wanted to compare a river’s discharge at different times of the year, or even have a diagram that updated based on real time sensors. This is possible, and when the file GH is reading is re-saved, the diagram updates automatically, even without you doing anything in GH. Here I randomly changed some of the values of the tributaries of the Aller River in Germany (of which the Leine, which we previously diagrammed, is the largest tributary) and you can see how the diagram updates in real time.

Anyways, this is just meant as an introduction to the topic, but if you anticipate doing a drawing that you may need to replicate again in the future, are dealing with changing data values, or if you are simply toying around with the representation of a large dataset, a scripted environment may be a good way to approach this task.


It’s been a while since I’ve posted any new content, but I decided to finally add a bit more about agents. This is actually something I started working on a while ago, and which I alluded to in Example 8.5: a method to analyze a topographical surface to find potential corridors of movement, and also areas of inaccessibility.

The basic premise is fairly simple. Anyone who has spent any amount of time studying site design will know that you really shouldn’t have any paths steeper than 1:20. Sure, you can have paths at 1:12 with landings every 10 meters, but that just looks ugly. The reason for this 1:20 rule is to make paths that are comfortable for people in wheelchairs and older people. But these paths are also more comfortable for everyone else as well!

Based on this regulation, I decided to create a script that would send a swarm of agents–old ladies and people in wheelchairs–across a landscape, and from this analysis, a designer could then perhaps better understand potential access and barrier points.

The script will follow two rules.

1 – Agents are limited in each “step” to movement uphill and/or downhill that does not exceed a specific gradient, in this case 1:20 (although this can be changed). This is very similar again to Example 8.5 and will use some of the same techniques.

2 – Agents will tend to move in the same direction as their current direction. Nobody likes switchbacks. Unlike Example 8.5, there is no “destination” per se; the agents will just keep moving in one direction unless there are no good options in that direction, in which case they will turn to a new general direction.

In addition to analyzing sites for barrier-free movement, this logic may be useful for modeling ecosystems as well. Most animals, like most people, also don’t like super steep slopes, and will follow lower gradients when possible. Sure, it IS possible to go straight uphill, but in the interest of conserving energy, in the long term lower gradients will be followed. With a bit more scientific rigor, this method of modeling may show potential migration corridors in larger landscapes, and also pinch points, where potential predators might like to hang out! And places that are inaccessible to most animals might just be a good place for an animal without teeth to carve out a new ecological niche (mountain goats?). So enough of that, on to the script.

**Step One**

First, you will need a surface. In this case, I used Elk to create an 8.6 x 8.6 km area of an interesting landscape southeast of Alfeld, Germany. Any landscape with some topographical variation will do. I then use the “Populate Geometry” component to put some starting agents on the surface. I will keep it low for now, just two, but can increase this later.

The second important thing here is to set up a “Step Size”, the distance the agents will cover in each round. Since I want the script to work for smaller and larger sites, I use a bit of math to make the step size proportional to the overall surface dimensions. Note that for clarity I use a rather large step size at first, but I will reduce this later to get more accurate results.
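The proportional sizing might look something like the sketch below. The divisor of 33 is my own guess to reproduce a roughly 260 m step on an 8.6 km site, not a value from the definition; tune it to your own surface.

```python
# Hedged sketch: tie the step size to the surface's bounding box so the
# same script scales between small and large sites.
def step_size(width, height, divisor=33.0):
    """Step = longest bounding-box edge of the surface / divisor."""
    return max(width, height) / divisor

# An 8.6 x 8.6 km site then gives a step of roughly 260 m:
print(round(step_size(8600.0, 8600.0)))  # -> 261
```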

**Step Two**

At each random point, I draw a circle with a radius equal to the “Step Size.” I then move this circle once up and once down based on the maximum amount an agent may move either up or down in each step. This is proportional to the gradient, in this case 1:20. My step in this case is 260 m (this will later be reduced for more accurate results). That means with the 1:20 gradient I may not move up more than 13 m, or down more than 13 m. A loft is drawn between the minimum and maximum circle, and this is then intersected (BREP | BREP Intersection component) with the surface to generate a curve or set of curves of possible vectors of movement. This is again exactly like Example 8.5 which you can refer to for additional explanation.
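In vertical terms, the lofted band is just the gradient rule restated: a candidate elevation on the step circle is reachable only if it differs from the agent’s elevation by no more than step × gradient. A minimal sketch:

```python
# A candidate point on the step circle lies inside the lofted band if
# its elevation differs from the agent's by no more than step * gradient.
def reachable(z_current, z_candidate, step=260.0, gradient=1 / 20):
    return abs(z_candidate - z_current) <= step * gradient

print(reachable(100.0, 112.0))  # 12 m rise over a 260 m step -> True
print(reachable(100.0, 114.0))  # 14 m rise -> False
```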

Note that the top right agent has only one curve of possible movement, while the bottom right agent has two. Once we start looping, a point along the curve in the current direction of movement will be privileged, but for now, the agent at rest could venture off in either direction.

**Step Three**

Here I use the “List Item” component to give me only the first potential movement curve for each agent point. I then use “Curve Closest Point” to find the closest point on this curve–the agent’s destination–to the agent’s current position. I then add this new point into a list just after the current point.

Please pay attention to the data structuring, that is, the grafting and simplification. The goal is to get the initial point as point “0” on your list, while the second point becomes point “1”.

For reference, to this point the overall script should look like the image below.

**Step Four**

Now we are going to go big and make the loop all at once! It looks like a lot but it is basically just repeating much of what we did before.

First, we use the “List Item” component along with the Round Counter to extract the last two points from our list. Right now the list only has two points for each agent, but this will quickly grow!

We then do exactly as in Step Two above, drawing circles at the current agent position (Point 1 in this case) with a radius based on the step size, and then finding curves of potential movement based on the maximum allowable gradient.

Instead of using “List Item” to select the first of these potential movement curves, we are now going to do it a little differently. We first find the current vector of movement based on the vector between the next-to-last point (Point 0) and the last point (Point 1). We then draw a “Tentative” movement point, in this case at half the total movement, and then run a “Curve Closest Point” test between this “Tentative” point and the potential movement curves.

There could be one, two, three or even more potential movement curves…but there is always at least one. If all else fails, the agent will go back to where it came from. Anyways, we then use one more “Closest Point” component to find which of these closest points on the individual curves is the closest of the whole set. This is the next destination. If it doesn’t make sense, just copy EXACTLY what I did above and it should work.
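The “Tentative point, then closest-of-the-closest” selection can be sketched as follows. This is a simplification under my own assumptions: movement curves are reduced to lists of sample points, and `closest_on_curve` only checks those samples rather than doing a true curve-closest-point test.

```python
import math

def closest_on_curve(pt, curve_pts):
    """Closest sample point on one candidate movement curve."""
    return min(curve_pts, key=lambda v: math.dist(pt, v))

def pick_destination(prev, curr, movement_curves, step=260.0):
    # Current vector of movement from the last two agent positions.
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    d = math.hypot(dx, dy) or 1.0
    # "Tentative" point at half the total movement in that direction.
    tentative = (curr[0] + 0.5 * step * dx / d,
                 curr[1] + 0.5 * step * dy / d)
    # Closest point on each candidate curve, then closest of the whole set.
    candidates = [closest_on_curve(tentative, c) for c in movement_curves]
    return min(candidates, key=lambda p: math.dist(tentative, p))
```

With an eastward heading and two candidate curves, the curve lying ahead of the agent wins, which is exactly the “privileging” of the current direction described above.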

I then merge this new agent current position into the ongoing list of agent positions.

**Step Five – Running the Loop**

Once this hard work is done, it’s smooth sailing–hopefully. In the image above I am labelling the points with their index number for clarity, but you can start to see how the agents are behaving. If it is working, slowly increase the number of iterations, and also now would be a good time to go back to the start and reduce the step size in the interest of more accurate results.

**Step Six – Continue Looping and Play with Representation of Agents.**

You may also want to increase the number of starting agents, by adding a few points to the initial “PopGeo” component. If all is well, it should be able to handle a few more. Lastly, you may want to make the agent trails look a little better. You can add an “Interpolate” curve or a “Nurbs Curve” between the points in the list to track the agents without the red “X’s”. You may also consider, AFTER the loop is finished, adding a “Dash” component. Be careful with this though, and make sure to disable/delete it if you decide you want to run a few more rounds!

There are many other representation options. In the first image of this post, the agent paths are colored with a gradient based on how far ranging they are. Agents that are confined by topography to their local neighborhood are orangeish, while agents that wander far from home get colored green. This wasn’t too hard to figure out, but I’ll leave it for you to work out on your own, if you’d like.

By now, hopefully some patterns are starting to emerge. If this were a park landscape, you may start to see where pedestrian paths would be feasible, or where they could be difficult to construct. If a particular point needs to be accessed, you can also see potential ways to get there with accessible paths.

If this were an ecosystem simulation, you’ll start to see where a good place to hang out would be if you were a mountain lion looking for passing livestock, and might even see where the mountain goats would hang out. Also note that the edge boundaries have a huge effect on agent behavior towards the edges. This is a common problem with computer simulations, since the real world doesn’t have such hard boundaries, but you could imagine what the implications might be if a fence were erected around this landscape to create a protected area or such.

**Optional Step**

The script can now be fine-tuned / altered / improved in any number of ways. Here, as an example, a bit of randomness is added to the path of the agents by rotating the vector of “Tentative” movement. This frees up the agents to wander a bit more, but they still will be constrained by the gradient rules.
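The jitter can be sketched as a random rotation of the current heading before the “Tentative” point is placed. The 30 degree cap below is my own assumption, not a value from the definition.

```python
import math, random

def jittered_heading(prev, curr, max_turn_deg=30.0):
    """Rotate the current movement vector by a random angle, so the
    'Tentative' point wanders off the straight-ahead line."""
    heading = math.atan2(curr[1] - prev[1], curr[0] - prev[0])
    jitter = math.radians(random.uniform(-max_turn_deg, max_turn_deg))
    return heading + jitter
```

Because the gradient test still runs on whatever candidates the jittered heading produces, the agents wander more but never break the 1:20 rule.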

**Comparison with the Actual Landscape Condition**

Just out of curiosity, I decided to compare what I learned about the landscape from the agent modeling to the actual landscape condition.

The image to the left is taken from OpenStreetMap; the images to the right are the versions with agents strictly going to the closest point in the current direction (above) and the more wandering agents (below).

I’ll let you draw your own conclusions, but remember, topography isn’t the only thing shaping this landscape. Also, some of the information towards the edges is skewed because of the boundary problem discussed earlier.

Anyways, hope this helps as a good start to seeing how agent modeling can be useful in landscape surface analysis and design! As a last image, I just wanted to show a quick test I did of the same agents walking through the Iberian Peninsula. A more careful analysis could start to yield some insight into historical routes of movement through the Peninsula, which in turn informed Spain’s historical development.


I wasn’t sure where to put this example exactly, since it came as a follow up to Example 8.4, but the general scripting is less complex, so I decided to put it a bit earlier. The general problem and solution have many applications beyond topography as well, but for landscape architects, maybe its most ready application would be in the creation of landforms. It could also be used to generate generalized roof profiles for buildings in some cases.

If you have already looked at Example 8.4, the recursive offsetting of base curves to create a topography, you may have tried a similar process going inward. Offsetting towards the exterior sometimes, but rarely, causes problems with changes in topology (a mathematical term describing the form of a shape), but offsetting towards the inside is often a very different matter. If you are offsetting contour lines for a landform, for example, one which is somewhat irregular in form, you will probably get to a point eventually where the landform “splits” into separate contour lines, or separate “peaks”. If you have an automated process in Grasshopper going towards the inside, similar to Example 8.4, this can create problems.

Fortunately, there is a fairly simple solution for describing the topology of a shape through what is called the “medial axis,” and using this description in turn to create a landform out of any arbitrary closed shape or closed set of shapes. The logic of this script, using Voronoi cells to find the “medial axis,” is explained on the Space Symmetry Syntax blog by Daniel Piker, but here the definition is reworked to work with the latest versions of Grasshopper, and also extended a bit at the end. This definition is designed to work with any number of input curves, but you will have to pay attention to the data structure, particularly the “Grafted” elements throughout, for it to work properly.

**Step One – Use Voronoi Cells to describe topology of shape**

The script starts here with three arbitrary curves, in this case boomerangs. These curves are divided into a set of regularly spaced points, and these division points in turn are used to create a Voronoi diagram. If you look at the diagram, the boundary between the cells corresponds closely to the elements that can be described as the “ridge” and “hips” of our landform. You will have to increase the number of curve division points to make this line increasingly precise, while not overwhelming your computer. Finally, we use the “Trim Region” command to trim the Voronoi cells, and we will only go forward with the pieces of geometry that are inside our region curves.

**Step Two – Extract Medial Axis and “Veins” from Voronoi cells**

Once we have the cells inside our shapes, we can explode the cells. We now divide the remaining geometry into two classes. The pieces of geometry which touch the edge curve always run perpendicular to the slope of our landforms, and we will call these “veins” (like veins on a leaf) going forward. The pieces which do not touch the edges comprise the topological skeleton of our shape. To separate these, we will use the “Collision One” component to return a True/False value for our shapes to see if they touch the outside edge curve. These two sets of geometry are then dispatched.
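The dispatch logic can be sketched like this. It is a crude stand-in for the real collision test: the boundary is reduced to sample points, and an edge counts as touching the boundary if either endpoint coincides with one of those samples within a tolerance.

```python
import math

def touches_boundary(segment, boundary_pts, tol=1e-6):
    """Stand-in for the 'Collision One' test: does either endpoint of
    the exploded cell edge coincide with a boundary sample point?"""
    return any(math.dist(e, b) < tol for e in segment for b in boundary_pts)

def dispatch(cell_edges, boundary_pts):
    """Partition exploded cell edges into 'veins' (touching the edge
    curve) and the topological skeleton (everything else)."""
    veins, skeleton = [], []
    for seg in cell_edges:
        (veins if touches_boundary(seg, boundary_pts) else skeleton).append(seg)
    return veins, skeleton
```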

Notice also what I did with the data structure. I used the “Trim Tree” component to remove all levels of data structure except for the last one. This is because I don’t care which cell the lines used to be associated with, but I still do care which of the three starting curves each line is associated with. If I flatten all the way, it will not work properly.

**Step Three – Move Topological skeleton vertically to define landform**

In the next steps, I will use the geometry I generated to develop a landform and a mesh. I can use either the medial axis or the veins to define this mesh. In the image above, I use the veins. In the image below, I use the medial axis.

The general principle in both is the same. The endpoints of each piece of geometry are extracted, and then moved vertically based on their distance from the edge curve. The amount of movement is scalable based on the desired overall slope. Once these points are moved, the lines can be redrawn.
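The vertical move reduces to one line of arithmetic: each endpoint’s new z is its distance to the edge curve times a slope factor. In the sketch below the boundary is again reduced to sample points, and the 1/3 slope is an arbitrary example, not a value from the definition.

```python
import math

def lift_point(pt, boundary_pts, slope=1 / 3):
    """Move an endpoint vertically by (distance to the nearest boundary
    sample point) * slope, so points deeper inside the shape sit higher."""
    d = min(math.dist(pt, b) for b in boundary_pts)
    return (pt[0], pt[1], d * slope)
```

Scaling `slope` up or down is what later gives steeper or gentler landforms; once all endpoints are lifted, the lines are simply redrawn through the new points.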

**Step Four – Create Mesh and Contour Lines**

Here I am using the endpoints of each of the “veins” to define a mesh, from which I will derive contour lines.

**Step Five – Optional Lofts for the Veins**

You could also draw some geometry with the veins, but this is a totally optional step.

**Variations**

This definition *should* work with any number of closed shapes of any size and form. You will only need to adjust the number of initial curve divisions to get results that are more or less precise. You can also adjust the height scaling factor to get various landform slopes. Below are two examples of possibilities: one based on a complex, curvilinear form, and one based on simpler triangular shapes. Note it works well in both cases!