A recent source of inspiration has been the work of Miguel Cepero, the developer of the procedural world generator Voxel Farm/Voxel Studio, documented at his blog Procedural World. I’ve recently experimented with a few Grasshopper scripts based on some of the concepts he discusses, and I wanted to show a couple of them here on this blog. The first is a script based on the extremely well-known Cantor Set, which on Procedural World is translated into 3D as a fractal known as “Cantor Dust”.

**Step One – Setup a Basic Cantor Set Script**

Setting up a 2D Cantor set is a very straightforward process if you’ve already tried setting up a few of your own recursive loops in Grasshopper using Anemone. If you haven’t done so, I would refer you to a few of the earlier examples in this blog under sections 8 and 9. Here I’m showing the entire script for a 2D Cantor set from which we will build our 3D script.

All we are doing here is taking a single line segment, imported from Rhino, and using the “Shatter” component to break it into three equal segments. The middle segment is discarded, and the other two segments, retrieved through the “List Item” component, are then moved a small distance upwards. They are also looped back to be shattered again (and again). Like many recursive fractals, even this small script will crash your computer if you let it run for too long, but after 4 or 5 rounds the geometry gets so small as to almost disappear into “dust” anyway. I also have a second process looping through channel D1 to save all of my old geometry. This step can be eliminated if you use the “record” function of Anemone, but I like to keep the geometry around in containers for future use.
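For anyone who wants to see the logic outside of Grasshopper, here is a minimal sketch of the same loop in plain Python. A segment is just an (x0, x1, height) tuple; the representation and function names are my own, not Anemone’s:

```python
def cantor_step(segments, t0=1/3, t1=2/3, lift=1.0):
    """Shatter each segment at parameters t0/t1, keep the two outer
    pieces, and move them up by `lift` (the body of the Anemone loop)."""
    out = []
    for (x0, x1, y) in segments:
        a = x0 + t0 * (x1 - x0)       # first shatter point
        b = x0 + t1 * (x1 - x0)       # second shatter point
        out.append((x0, a, y + lift)) # left piece, moved up
        out.append((b, x1, y + lift)) # right piece, moved up
    return out

def cantor(segment, rounds):
    history = [[segment]]             # keep the old geometry, like channel D1
    for _ in range(rounds):
        history.append(cantor_step(history[-1]))
    return history

history = cantor((0.0, 1.0, 0.0), 4)
print(len(history[-1]))  # 16 segments after 4 rounds
```

Changing `t0` and `t1` reproduces the different shatter patterns shown below.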

Even at this early stage, you’ll notice that if we change where the line is shattered, the script will give different results. Below are tests showing different potential shatter patterns by changing the values in the panel and the results after 4 recursions.

**Step Two – Adding Randomness to the Standard Cantor Script**

Before going into the 3D version, we are going to make just a couple more variations to show the principles we will be using going forward. Here two random number generators are introduced: one to randomize the division points, and one to randomize the vertical distance moved.

The first random number generator, pictured above, generates a value between 0.15 and 0.49 to determine the first division point, and then subtracts this value from 1 to determine the second division point. This always leads to a symmetrical division. The generator is tied to the counter (to which I add a small value to avoid a constant “0” seed) and a number slider.

A second random number generator can be used to determine the amount of movement. Simple enough.
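Both generators can be sketched in plain Python. Here a seeded `random.Random` stands in for the GH Random component; the seed offsets and lift range are illustrative values of my own, not from the GH definition:

```python
import random

def random_division(counter, seed_offset=0.123):
    """One random value r in [0.15, 0.49] gives the symmetric pair (r, 1-r)."""
    rng = random.Random(counter + seed_offset)  # avoid a constant "0" seed
    r = rng.uniform(0.15, 0.49)
    return r, 1.0 - r

def random_lift(counter, lo=0.5, hi=1.5, seed_offset=0.456):
    """A second generator for the vertical move distance."""
    rng = random.Random(counter + seed_offset)
    return rng.uniform(lo, hi)

t0, t1 = random_division(3)
print(t0, t1, random_lift(3))
```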

**Step Three – Standard 3D Cantor Set**

We will forget the random number generator for a minute and just try to modify our script to do a standard 3D Cantor set. The first modification is that we will start by inputting a surface into our loop instead of a line. For now we will input a simple square surface. Next, instead of using the “Shatter” component to split the line, we will use Isotrim together with Divide Domain2, splitting our surface into 9 subsurfaces (3×3). Finally, we list the four corner surfaces (0, 2, 6, 8) for further subdivision. When these surfaces are moved, we should also go ahead and change this to a “Z” vector instead of the “Y” we used in the previous script. By now the script should look something like what is shown below.
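The standard 3D step can be sketched in plain Python as well, with a surface reduced to its (u, v) domain plus a height. This is my own translation of the Divide Domain2 + Isotrim + List Item chain, not a literal one:

```python
def split_3x3(u0, u1, v0, v1):
    """Return the 9 subdomains in row-major order, like Isotrim output."""
    us = [u0 + i * (u1 - u0) / 3 for i in range(4)]
    vs = [v0 + j * (v1 - v0) / 3 for j in range(4)]
    cells = []
    for j in range(3):
        for i in range(3):
            cells.append((us[i], us[i+1], vs[j], vs[j+1]))
    return cells

def cantor3d_step(surfaces, lift=1.0):
    out = []
    for (u0, u1, v0, v1, z) in surfaces:
        cells = split_3x3(u0, u1, v0, v1)
        for idx in (0, 2, 6, 8):          # keep the four corner surfaces
            c = cells[idx]
            out.append(c + (z + lift,))   # move along Z, not Y
    return out

gen = [(0.0, 1.0, 0.0, 1.0, 0.0)]
for _ in range(3):
    gen = cantor3d_step(gen)
print(len(gen))  # 4**3 = 64 surfaces after 3 rounds
```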

One further addition to our script is an “Extrude” component to give us solid geometry, extruding our geometry an amount equal to the amount moved in the vertical direction. But we still need to keep the un-extruded, moved surfaces, as these will be recursively looped and subdivided, not the extruded geometry.

**Step Four – Irregular Surface Divisions**

It was pretty easy in our 2D version to use our random number generator to produce values for shattering our line. It is much, much more complicated in this 3D example, as there isn’t any simple component for irregularly dividing surfaces. Furthermore, at each recursion we want to assign different random values to each surface, so that each acts independently of the others. This will require careful structuring of data. In short, our simple Surface => Divide Domain2 => Isotrim routine is being replaced with spaghetti salad. :0

This will not be easy, but don’t panic. I will try and explain. OK, maybe you can panic now and just download the completed script at the end of this post, but if you want to walk through it, I’ll do my best.

We’ll start by dividing our surface into 4 sections using the standard Isotrim before the looping starts. I am creating the surface in a bit of an awkward way: exploding the curve, then using the first and third segments of my rectangle to create my surface with the “Edge Surface” component.

You could use boundary surface at the beginning and it will work at first, but to increase the script’s flexibility for running the Cantor set on *multiple irregular polygons*, which I will do at the very end, you need to construct your surface in a way that will produce what is called an “untrimmed surface”. The boundary surface component creates a “trimmed surface” which can cause problems in some instances. I’m only telling you this because I was hitting my head against the desk for several hours trying to figure out why my script wasn’t working with *multiple irregular shapes* until I stumbled upon a solution to the problem.

OK, moving on. You can use your own rectangle for now, but I am using just one 10 x 12 unit rectangle for this example. Once the four initial subsurfaces pass into the loop, you need to make sure they are *grafted*, each into its own branch, so that each subsurface can be treated independently and get its own random number set. Next, we use Deconstruct Domain2 (not Divide Domain2) to get the “U” and “V” values for each surface. U in this case corresponds to the Y axis and V to the X, but this has to do with how I created my surface, not with X/Y coordinates at all. Rotate the shape and you will see the U and V values remain the same regardless of the orientation of the rectangle.

The Deconstruct Domain2 component gives a U0 and a U1, as well as a V0 and V1 value, for each surface. These can be seen as the start and end values of the domains, *relative to the surface*. I then want to create some new U and V values, two to be precise, at random values *between* each U0/U1 and V0/V1 pairing. This is similar to how we created the random values in the 2D Cantor set. First, we find the bounds of each pair by subtracting the start value from the end value. This value is then multiplied by one of a set of generated random numbers. You need as many random numbers as you have items, and the random numbers need to be grafted to match the data structure of the surfaces. I used a lot of panels here to show what is going on.

In the next part of this step, we are going to collate our numbers and construct new domains corresponding to each of our individual subsurfaces.

Below is the top half of this construct, just for the U values. We are using the “Merge” component to merge first the U start value (U0), then the location of the 1st cut (U0 + random number), then the location of the 2nd cut (U1 – random number), and finally the U end value (U1). This creates a small sublist corresponding to each subsurface from the previous part of this step. While you won’t see the surface divisions yet, hopefully you can see how the values in the panel correspond to the U divisions we are looking for, shown in the image to the left.

These sublists now just need to be converted to domains. To do this, use Shift List, followed by Construct Domain, to get a domain spanning between each pair of consecutive values in our list, and then cull the last item, using Cull Index, since this is “junk” we don’t need (the domain between the last value and the first value). To get the right index I used a formula, but it might be safe to just cull index 3.
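The Merge → Shift List → Construct Domain → Cull Index chain can be sketched in a few lines of plain Python. From a (u0, u1) pair and a random value r, we build the four cut values and pair consecutive values into the three domains we keep; the wrap-around pair produced by the shift is the “junk” that gets culled:

```python
def three_domains(u0, u1, r):
    """Build the three subdomains of (u0, u1) cut at r and 1-r (relative)."""
    span = u1 - u0
    values = [u0, u0 + r * span, u1 - r * span, u1]  # the merged sublist
    shifted = values[1:] + values[:1]                # Shift List by 1
    domains = list(zip(values, shifted))             # Construct Domain per pair
    return domains[:-1]                              # Cull Index: drop (u1, u0)

print(three_domains(0.0, 1.0, 0.2))
# -> [(0.0, 0.2), (0.2, 0.8), (0.8, 1.0)]
```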

Once this is set up, do the same for the V values, here shown without the panels.

Lastly, we need to do a bit more gymnastics to weave, so to speak, the two sets of linear domains together into one squared domain. If we simply plug the values into the Construct Domain2 component, however, we will not get what we are looking for: you will notice from the last step that we had 3 domains for each subsurface (in this case 12 domains total). This is not enough, and will only split the surface into 3 subsurfaces, one for each domain. To solve this, we need to duplicate our list of domains 3 times using the “Duplicate Data” component (which repeats each data item 3 times, but only within its own sublist), and then use “Partition List” to get the three duplicates into their own separate lists. Then we can construct our squared domain with “Construct Domain2”.
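The point of all this duplicating and partitioning is that 3 U-domains and 3 V-domains per subsurface must become 9 (U, V) pairs, i.e. a full cross product, not a one-to-one zip. A nested loop is the plain-Python equivalent of the Duplicate Data + Partition List + Construct Domain2 combination, not a literal translation of the components:

```python
def weave(u_domains, v_domains):
    """Pair every U domain with every V domain (Construct Domain2)."""
    pairs = []
    for v in v_domains:          # Duplicate Data: repeat per V domain...
        for u in u_domains:      # ...across all U domains, then pair up
            pairs.append((u, v))
    return pairs

u = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]
v = [(0.0, 0.5), (0.5, 0.8), (0.8, 1.0)]
print(len(weave(u, v)))  # 9 squared domains -> 9 subsurfaces
```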

Finally, although not altogether obvious, we need to use “Trim Tree” to get rid of the outermost branch without flattening our data all the way. In the end, we want just four sublists to correspond to our original four subsurfaces. Once this is done, plug into the Isotrim component to (hopefully!) get the surface division to work.

**Step Five – Test Looping and Make Additional Modifications as Desired**

So now that the hard part is behind us, we can carefully increase our number of iterations, and if that is working we can modify the script and adjust parameters to get it to behave more like what we’ve envisioned.

This particular script doesn’t seem to bring much after about 4 loops…except system crashes. After looking at its behavior, I decided I didn’t like the really tiny pieces getting as much vertical extrusion as the bigger pieces. I decided that adding a component of each shape’s size to the move and extrusion height equations might help.
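The modification amounts to making the lift a function of size rather than a constant. A one-line sketch, with hypothetical base and factor values of my own:

```python
def lift(size, base=0.2, factor=0.5):
    """Move/extrusion height with a size-proportional component.
    `base` and `factor` are illustrative values, not from the GH script."""
    return base + factor * size

# a full-size piece gets noticeably more lift than a 1/9-size piece
print(lift(1.0), lift(1.0 / 9.0))
```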

So with this minor modification, the results are a bit different.

**A Few Variations**

If all is working well, you can input multiple outlines at once and it will perform the algorithm faithfully. It *should* work with any four-sided closed polygon, although you may need to “flip” the direction of the line in some cases if you are getting unexpected results. The image above is of a 4×4 starting grid.

And this is from an irregular field of polygons I drew. Each polygon has four sides.

Once rendered, it looks a little like the Death Star surface…

OK, well, if you have trouble figuring this out, click here to download the GH file


In example 4.1 I mentioned a custom VB component I had used to analyze the flow of water across a surface. I recently tried to recreate this using the Anemone looping component to use with meshes (for various reasons) and it was actually very easy to do. The logic is similar in some ways to example 8.5 which I used to find a path through the landscape, but in some ways this example is pretty simple.

I will be using meshes this time instead of surfaces, partly because I haven’t talked about them too much, but meshes do have some advantages (and disadvantages) over surfaces which I will not get into here. To create this particular mesh, I imported topographic data from SRTM using Elk, and then used the point output from Elk to create a Delaunay mesh.

**Step One – Populate Geometry and set up a loop**

To get started, we will use the Populate Geometry component. Initially I will use only one point, but we will scale this up to around 2000 points by the end. What’s important for the loop to work properly at the end is that the output of Populate Geometry be GRAFTED. While not 100% necessary, you should also simplify, otherwise you will get messy indexing at the end.

While we are at it, we will set up a basic loop using Anemone as explained in prior examples.

**Step Two – Find curve for possible movement directions**

The logic of this loop is that after each round, we want to find out what direction water would flow in if it were at a specific point on the site. Using a logic similar to the last example, we will intersect two shapes to find a curve of possible directions of movement. We will then identify one point on this curve as the actual direction of movement.

To accomplish this, I first draw a mesh sphere with a radius equal to a “Step Size”. Decreasing the step size will increase accuracy at the expense of looping time. You will need to find an appropriate step size based on the overall size of the landscape you are analyzing. In this case I have a fairly large area (around 8 km x 8 km) so I am using an 80 m step size. I find around 1% of the overall dimensions of the landscape usually gives scale-appropriate results. This can be changed later if you want more accuracy. If you are testing this on a smaller model, you will need to adjust appropriately.

I then add a Mesh | Mesh Intersection component, which outputs a curve of where the water could possibly go if it flows 80 m in any direction. This is basically a circle sketched on the surface of the mesh.

**Step Three – Find lowest point on curve to determine actual water movement direction**

So you probably already know where the water will go, but you might not know how to get there. If there is any doubt: water is an agent, a very dumb agent, but it has one goal, to follow gravity to the ocean as fast as possible. So it will always flow down. Well, there are minor exceptions if you take forces like momentum, cohesion, and friction into account, but we won’t do that today.

To find this point, we need to know the “lowest point” on the curve we just drew in the last step. There is no such component in Grasshopper, but we can use “Curve Closest Point” with a point at sea level, or at the center of the earth, as a comparison point.

In this case, I deconstruct my sphere’s center point, and reconstruct it with a “Z” value equal to zero. If I am working close to sea-level (in this case I am 1000 m up) it may make sense to set the “Z” value with “Construct Point” to a negative number, like -1000 m (or the center of the earth if you like).

I then use this point together with the Intersect curve from the last step to find the “lowest point.” This is where the water will head next.
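The whole loop can be sketched in plain Python on an analytic heightfield: sample the “circle” of points one step away and head to the lowest one. (The GH version gets this circle from a Mesh | Mesh intersection and the lowest point from Curve Closest Point against a point at or below sea level; sampling discrete directions is my own simplification, not the GH method.)

```python
import math

def height(x, y):
    # stand-in terrain: a simple bowl, so water should run toward (0, 0)
    return x * x + y * y

def flow_step(x, y, step=0.5, samples=72):
    """Return the lowest point on the circle of radius `step` around (x, y)."""
    best = None
    for i in range(samples):
        a = 2 * math.pi * i / samples
        nx, ny = x + step * math.cos(a), y + step * math.sin(a)
        if best is None or height(nx, ny) < height(*best):
            best = (nx, ny)
    return best

path = [(3.0, 4.0)]
for _ in range(12):
    path.append(flow_step(*path[-1]))
print(path[-1])  # should have moved well down toward the bowl's bottom
```

Just as in the GH version, once the “water” reaches a local low it oscillates in place rather than stopping, which is why the points accumulate into resting clusters later on.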

**Step Four – Finish the loop and draw a connecting line**

So this is an image of the whole loop. I use the “Insert Item” component to reinsert the new “Lowest Point” into the list, which after 0 rounds is 1 item long. This is why I use the “Counter + 1” expression to determine the insertion index. Once the item is added, I can plug this list into the end of my loop. You may want to use the “simplify” toggle to keep your list clean. Pay attention to where I placed these in the image. Last, I add an “Interpolate Curve” component at the end.

Once the loop is complete, you want to increase the looping counter gradually to see if everything is working. Run 1, then 5, then 10 rounds, etc. to get started. While it doesn’t look impressive yet, if you get a series of points and a connecting line going downhill and following the valleys, everything should be fine once you scale up!

**Step Five – Scale Up!**

So go big or go home, the boss says? Well, all we need to do is add a few more points to our Populate Geometry and we’ll have a nice stormwater runoff analysis. First try 3-5 points, not too big. If this isn’t working, maybe you forgot to graft? If it’s working, scale up quickly. Here I have 200 points, run over 20 rounds.

Looking at it from above, you will notice that even after a couple of rounds, the initially random cloud of points will find a structure. By 20 rounds, almost all the water has accumulated into resting points. This is where the script stops really working. We know the water actually keeps flowing, but in this case, our data isn’t precise enough to account for the horizontal transport of water in rivers, where water might only drop a couple of meters over the course of many many kilometers. But it IS good at showing how water moves on steeper sites.

You can speculate about where the rivers are, however, based on your data. If you have a series of still clusters or beads, bets are good that there is a river connecting them. Above I have the GH output, and below I sketched in the river lines in Photoshop.

Anyways, from this basic analysis, all sorts of further analyses can be done. More on that soon…


I wanted to take the time to show an example of using Grasshopper to work with data imported from a source outside of Rhino, such as a spreadsheet developed in Excel. Importing data from outside sources is also fundamental to more advanced interactions, such as having the program communicate with remote machines or sensors.

In this example, I wanted to make a diagram of a river’s watershed abstracting the spatial relationship of the river’s tributaries and showing how much each tributary contributes to the overall river’s flow. The technical name for this type of diagram is a “Sankey Diagram”. I actually drew one of these initially in Illustrator, which is superior to Rhino/Grasshopper in many ways for representation, but it was a very time-consuming process, and if I wanted to create a similar diagram for another watershed, I would have to start from scratch. Another drawback of drawing this in Illustrator is that if a datapoint or datapoints change, it can be time-consuming to update. It is also a static representation, and as we all know, a river’s flow is dynamic and changing. Having a representation or diagram that can automatically update with changing values, in this case flow in the individual tributaries, can be a very powerful form of representation.

There are a number of tools and plugins that can deal with importing data into grasshopper, but for this example I will use one of the native tools to the program, and then draw some geometry based on the dataset.

**Preparation – Collect and Organize Data**

The first step is probably the most time-consuming: actually collecting data that could be useful for your diagram. In this case, I researched on Wikipedia all of the tributaries of the River Leine in central Germany, which happens to flow right behind my house. I was able to get the length and watershed area of each tributary, and measured at what river kilometer each tributary branches. Further, I noted whether it is a left- or right-branching tributary. I was able to get the average discharge of some of the branches, but not all, so for the purposes of this example I decided to estimate discharge based on the area of collection.

I compiled all of this data into an Excel file. There are some plugins that can import Excel tables (e.g. Howl + Firefly), but maybe a simpler way is to export your Excel file as a *.csv file (comma-separated values), and then save this file again using a text editor as a *.txt file.

If you would like to follow along with this example, you can copy the following and save it as a *.txt file:

R,Grosse Beeke ,12,26,5,30

R,Juersenbach,18.9,26,6,49

R,Auter,24,26,10,113

L,Totes Moor,59.7,26,8,56

L,Westaue,72.2,35,38,600

L,Foesse,94.5,53,8,20

L,Ihme,99.5,48,16,110

R,Innerste,121.5,58,99.7,1264

L,Gestorfer Beeke,125.4,58,8,13

R,Roessingbach,125.5,58,14,36.3

L,Haller,132.8,70,20,124

L,Saale,138.5,73,25,202

R,Despe,142.1,74,12,47

L,Glene,153.1,74,11.7,40

R,Warnebach,156,74,8,27

L,Wispe,161.7,74,22,74

R,Gande,175.6,74,41,114

R,Aue,177.7,103,23,113

L,Ilme,186.5,105,32.6,393

L,Boelle ,191.9,110,10,21

R,Rhume,192.8,116,48,1193

L,Moore,198,118,11,43

R,Beverbach,206.9,120,14,35

L,Espolde,207.9,126,16.1,65

R,Rodebach,208.1,130,8,20

R,Weende,208.9,135,9.2,18.6

L,Harste,209.9,138,8.6,29

L,Grone,211.6,140,6,26

R,Lutter,211.7,144,8.1,38

L,Rase,219.3,150,9,23.8

R,Garte,219.4,152,23,87.2

R,Wendebach,223.4,162,16.2,36

L,Dramme,225,161,14.4,53

L,Molle,230,182,7,10

R,Schleierbach,232.4,191,6,15

R,Rustebach,236.8,210,8,13

L,Steinsbach,238.1,215,5,15

L,Lutter,244.7,233,7,21

R,Beber,245.4,237,7,30

L,Geislede,249.6,260,19,52

R,Steinbach,255.3,276,6,14

R,Etzelsbach,258.4,293,5,13

R,Liene,264.2,337,7,18

What you’ll notice is that each line has a series of values separated by commas, corresponding to the individual “cells” in Excel. Once this is done, you can move on to the next step.

**Step One – Import Data**

To import the data, we will use three components. The first is the “File Path” parameter, which feeds into the “Read File” component, in this case set to “Per Line”. Each line will get its own Index in GH. Then we use the “Split Text” component, with a simple comma symbol as the second input, which further structures our data splitting each line at each comma. I put panels behind the components for reference.
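The same chain in plain Python, using two of the data lines from above (reading a list of strings here stands in for the File Path + Read File “Per Line” components):

```python
lines = [
    "R,Grosse Beeke ,12,26,5,30",
    "L,Westaue,72.2,35,38,600",
]  # Read File "Per Line": one list item per line

rows = [line.split(",") for line in lines]  # Split Text at each comma
for row in rows:
    print(row)
```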

**Step Two – Sort Data**

What you do next is entirely situational, but before you start drawing geometry, you may need to reorganize and/or restructure your data so it will be useful to you. In this case, there is not too much restructuring necessary; I just wanted to split my dataset into two subsets based on whether the tributaries head left or right, since we will be drawing those differently. Here I list the first item (0), and then dispatch the list based on whether the data is in a left branch or a right branch. The dispatch component needs a true/false value, so to get around this problem, I simply replaced my R’s and L’s with True’s and False’s. I also needed to remove empty branches using the “Remove Branch” component.
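Sketched in plain Python, the sorting step looks like this: item 0 of each row (“L” or “R”) becomes the true/false pattern that splits the set into left- and right-branching tributaries:

```python
rows = [
    ["R", "Auter", "24", "26", "10", "113"],
    ["L", "Westaue", "72.2", "35", "38", "600"],
    ["L", "Ihme", "99.5", "48", "16", "110"],
]

pattern = [row[0] == "L" for row in rows]  # replace L/R with True/False
left  = [r for r, p in zip(rows, pattern) if p]
right = [r for r, p in zip(rows, pattern) if not p]
print(len(left), len(right))  # 2 1
```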

The general idea, however, is you may need to play around with your inputs and/or data structure to get something which is most helpful to you.

**Step Three – Draw Basic Skeleton**

Before we get too crazy, it is useful to draw only the basic skeleton of our system based on our data. Basically you will be using a lot of “List Item” components to call out your data, and then drawing geometry in GH based on this. I recommend grouping your List Items and labelling them to help you keep track of what is what, otherwise you will soon be left with a confusing mass of spaghetti. Well, the spaghetti might be inevitable, but labelling always helps when you need to make some changes!

**Step Four – First Refinements**

Once we have our basic skeleton, it’s time to start gradually refining the process. In this case, I eventually want to show each tributary with a varying thickness based on how much water it contributes to the river system. As mentioned previously, this will be a factor of “watershed area” as a rough approximation of water volume contributed. I first list the watershed area for each tributary, divide by a factor, and then progressively move the tributaries towards the right (I know, they are left tributaries, but left in the sense of a boat traveling downstream…if you are traveling upstream, which we are in this case, they would be on your right. Hope I’m not confusing you. Think Left Bank in Paris if that helps).

The Mass Addition component is very helpful here, both for calculating the total area of all the branches and for progressively telling you how they add up. One small thing: in order to get the branches to move correctly, we need to subtract the step values from the total value so the branches get the proper “X” vector.
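The offset arithmetic sketched in plain Python: each branch is shifted by the total width minus the running (partial) sum, so the thicknesses stack up correctly along the main river:

```python
def branch_offsets(widths):
    """Total minus Mass Addition partial results = the X vector per branch."""
    total = sum(widths)
    offsets, running = [], 0.0
    for w in widths:
        running += w                     # Mass Addition partial results
        offsets.append(total - running)  # subtract step value from total
    return offsets

widths = [6.0, 2.0, 1.0, 1.0]            # watershed area / factor
print(branch_offsets(widths))  # [4.0, 2.0, 1.0, 0.0]
```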

**Step Five, and so on… Further Refinements…**

I won’t explain everything that’s going on here…these are all simple operations to improve the graphic quality of our lines. I am drawing a few arcs, but also placing text to label my diagram.

Once we get close to what we want for the Left branches, we can copy and paste for the right branches. Notice we have to change a few of the vectors (positive to negative) to get the geometry to move and draw in the correct direction.

How far you want to go is up to you. Here I gave a line thickness using the “Sweep” command, sized the text proportionally based on tributary size (with a minimum text size for the smallest streams), and also made the arc radius proportional to the branch thickness. This is all pretty simple to do, but the GH script can get a bit messy.

**Further Steps – Using the Script with a New Data Set and Changing Values**

Once you have a working process setup, you can plug in new datasets, as long as they are structured the same as the dataset you used to create your script, to do another graphic diagram. Here I researched the same values for the Ems River (on the border between Germany and Netherlands). The research took hours. Plugging the new values into GH and generating this diagram took less than five seconds.

You can also update values, and the diagram will change. Say you wanted to compare a river’s discharge at different times of the year, or even have a diagram that updated based on real time sensors. This is possible, and when the file GH is reading is re-saved, the diagram updates automatically, even without you doing anything in GH. Here I randomly changed some of the values of the tributaries of the Aller River in Germany (of which the Leine, which we previously diagrammed, is the largest tributary) and you can see how the diagram updates in real time.

Anyways, this is just meant as an introduction to the topic, but if you anticipate doing a drawing that you may need to replicate again in the future, are dealing with changing data values, or if you are simply toying around with the representation of a large dataset, a scripted environment may be a good way to approach this task.


It’s been a while since I’ve posted any new content, but I decided to finally add a bit more about agents. This is actually something I started working on a while ago, and which I alluded to in Example 8.5, but it is a method to analyze a topographical surface to find potential corridors of movement, and also areas of inaccessibility.

The basic premise is fairly simple. Anyone who has spent any amount of time studying site design will know that you really shouldn’t have any paths steeper than 1:20. Sure, you can have 1:12 paths with landings every 10 meters, but that just looks ugly. The reason for this 1:20 rule is to make paths that are comfortable for people in wheelchairs and older people. But these paths are more comfortable for everyone else as well!

Based on this regulation, I decided to create a script that would send a swarm of agents–old ladies and people in wheelchairs–across a landscape, and from this analysis, a designer could then perhaps better understand potential access and barrier points.

The script will follow two rules.

1 – Agents are limited in each “step” to movement uphill and/or downhill that does not exceed a specific gradient, in this case 1:20 (although this can be changed). This is again very similar to Example 8.5 and will use some of the same techniques.

2 – Agents will tend to move in the same direction as their current direction. Nobody likes switchbacks. Unlike Example 8.5, there is no “destination” per se; the agents will just keep moving in one direction unless there are no good options that way, in which case they will turn to a new general direction.

In addition to analyzing sites for barrier-free movement, this logic may be useful for modeling ecosystems as well. Most animals, like most people, also don’t like super steep slopes, and will follow lower gradients when possible. Sure, it IS possible to go straight uphill, but in the interest of conserving energy, lower gradients will be followed in the long term. With a bit more scientific rigor, this method of modeling might show potential migration corridors in larger landscapes, and also pinch points, where potential predators might like to hang out! And places that are inaccessible to most animals might just be a good place for an animal without teeth to carve out a new ecological niche (mountain goats?). So enough of that, on to the script.

**Step One**

First, you will need a surface. In this case, I used Elk to create an 8.6 x 8.6 km area of an interesting landscape southeast of Alfeld, Germany. Any landscape with some topographical variation will do. I then use the “Populate Geometry” component to put some starting agents on the surface. I will keep it low for now, just two, but can increase this later.

The second important thing here is to set up a “Step Size”, the distance the agents will cover in each round. Since I want the script to work for smaller and larger sites, I use a bit of math to make the step size proportional to the overall surface dimensions. Note that for clarity I use a rather large step size at first, but I will reduce this later to get more accurate results.

**Step Two**

At each random point, I draw a circle with a radius equal to the “Step Size”. I then move this circle once up and once down, based on the maximum amount an agent may move either up or down in each step. This is proportional to the gradient, in this case 1:20. My step in this case is 260 m (this will later be reduced for more accurate results). That means with the 1:20 gradient I may not move up or down any more than 13 m. A loft is drawn between the minimum and maximum circles, and this is then intersected (BREP | BREP Intersection component) with the surface to generate a curve or set of curves of possible vectors of movement. This is again exactly like Example 8.5, which you can refer to for additional explanation.
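A quick check of the gradient arithmetic: with a 1:n maximum gradient, the allowed vertical change per step is simply step / n:

```python
def max_rise(step, gradient_run=20.0):
    """Allowed vertical movement per step at a 1:gradient_run slope."""
    return step / gradient_run

print(max_rise(260.0))  # 13.0 m up or down per 260 m step
```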

Note that the top right agent has only one curve of possible movement, while the bottom right agent has two. Once we start looping, a point along the curve in the current direction of movement will be privileged, but for now, the agent at rest could venture off in either direction.

**Step Three**

Here I use the “List” component to give me only the first potential movement curve for each agent point. I then use “Curve Closest Point” to find the closest point on this curve–the agent’s destination–to the agent’s current position. I then add this new point into a list just after the current point.

Please pay attention to the data structuring, that is, the grafting and simplification. The goal is to get the initial point as point “0” on your list, while the second point becomes point “1”.

For reference, up to this point the overall script should look like the image below.

**Step Four**

Now we are going to go big and make the loop all at once! It looks like a lot but it is basically just repeating much of what we did before.

First, we use the “List Item” component along with the Round Counter to extract the last two points from our list. Right now the list only has two points for each agent, but this will quickly grow!

We then do exactly as in Step Two above, drawing circles at the current agent position (Point 1 in this case) with a radius based on the step size, and then finding curves of potential movement based on the maximum allowable gradient.

Instead of using list Item to select the first of these potential movement curves, we are now going to do it a little differently. We first find the current vector of movement based on the vector between the next to the last point (Point 0) and the last point (Point 1). We then draw a “Tentative” movement point, in this case at half the total movement, and then run a “Curve Closest Point” test between this “Tentative” point and the potential movement curves.

There could be one, two, three, or even more potential movement curves…but there is always at least one. If all else fails, the agent will go back the way it came. Anyway, we then do one more “Closest Point” component to find which of these closest points on the individual curves is the closest of the whole set. This is the next destination. If it doesn’t make sense, just copy EXACTLY what I did above and it should work.
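The direction-persistence logic can be sketched in plain Python: from the last two positions, project a “tentative” point at half a step in the current heading, then pick the candidate destination closest to it. (In GH the candidates come from the closest points on the BREP | BREP intersection curves; passing them in as plain points is my own simplification.)

```python
import math

def next_position(prev, cur, candidates, step=1.0):
    """Choose the candidate closest to a tentative point half a step
    ahead in the agent's current direction of travel."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    d = math.hypot(dx, dy) or 1.0
    tentative = (cur[0] + 0.5 * step * dx / d,
                 cur[1] + 0.5 * step * dy / d)   # keep the current heading
    return min(candidates,
               key=lambda p: math.hypot(p[0] - tentative[0],
                                        p[1] - tentative[1]))

# agent moving in +X; candidates ahead, to the side, and back where it came from
cands = [(2.0, 0.1), (1.0, 1.0), (0.0, 0.0)]
print(next_position((0.0, 0.0), (1.0, 0.0), cands))  # picks (2.0, 0.1)
```

Note that the start point is always in the candidate set, which is why the agent can always fall back to retracing its steps.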

I then merge this new agent current position into the ongoing list of agent positions.

**Step Five – Running the Loop**

Once this hard work is done, it’s smooth sailing–hopefully. In the image above I am labelling the points with their index number for clarity, but you can start to see how the agents are behaving. If it is working, slowly increase the number of iterations; now would also be a good time to go back to the start and reduce the step size in the interest of more accurate results.

**Step Six – Continue Looping and Play with Representation of Agents.**

You may also want to increase the number of starting agents by adding a few points to the initial “PopGeo” component. If all is well, it should be able to handle a few more. Lastly, you may want to make the agent trails look a little better. You can add an “Interpolate” curve or a “Nurbs Curve” between the points in the list to track the agents without the red “X’s”. You may also consider, AFTER the loop is finished, adding a “Dash” component. Be careful with this though, and make sure to disable/delete it if you decide you want to run a few more rounds!

There are many other representation options. In the first image of this post, the agent paths are colored with a gradient based on how far-ranging they are. Agents that are confined by topography to their local neighborhood are orangeish, while agents that wander far from home get colored green. This wasn’t too hard to figure out, but I’ll leave it for you to work out on your own, if you’d like.

By now, hopefully some patterns are starting to emerge. If this were a park landscape, you may start to see where pedestrian paths would be feasible, or where they could be difficult to construct. If a particular point needs to be accessed, you can also see potential ways to get there with accessible paths.

If this were an ecosystem simulation, you’ll start to see where would be a good place to hang out if you were a mountain lion looking for passing livestock, and might even see where the mountain goats would hang out. Also note that the edge boundaries have a huge effect on agent behavior towards the edges. This is a common problem with computer simulations, since the real world doesn’t have such hard boundaries, but you could imagine, if a fence were erected around this landscape to create a protection area or such, what the implications might be.

**Optional Step**

The script can now be fine-tuned / altered / improved in any number of ways. Here, as an example, a bit of randomness is added to the path of the agents by rotating the vector of “Tentative” movement. This frees up the agents to wander a bit more, but they still will be constrained by the gradient rules.

**Comparison with the Actual Landscape Condition**

Just out of curiosity, I decided to compare what I learned about the landscape from the agent modeling to the actual landscape condition.

The image to the left is taken from OpenStreetMap; the images to the right are the versions with agents strictly going to the closest point in the current direction (above) and the more wandering agents (below).

I’ll let you draw your own conclusions, but remember, topography isn’t the only thing shaping this landscape. Also, some of the information towards the edges is skewed because of the boundary problem discussed earlier.

Anyways, hope this helps as a good start to seeing how agent modeling can be useful in landscape surface analysis and design! As a last image, I just wanted to show a quick test I did of the same agents walking through the Iberian Peninsula. A more careful analysis could start to yield some insight into historical routes of movement through the Peninsula, which in turn informed Spain’s historical development.


I wasn’t sure where to put this example exactly, since it came as a follow-up to Example 8.4, but the general scripting is less complex, so I decided to put it a bit earlier. The general problem and solution have many applications beyond topography as well, but for landscape architects, maybe its most ready application would be in the creation of landforms. It could also be used to generate generalized roof profiles for buildings in some cases.

If you already looked at Example 8.4, the recursive offsetting of base curves to create a topography, you may have tried a similar process going inward. Offsetting towards the exterior sometimes, though rarely, causes problems with changes in topology, a mathematical term describing the form of a shape, but offsetting towards the inside is often a very different matter. If you are offsetting contour lines for a landform, for example, one which is somewhat irregular in form, you will probably get to a point eventually where the landform “splits” into separate contour lines, or separate “peaks”. If you have an automated process in Grasshopper going towards the inside, similar to Example 8.4, this can create problems.

Fortunately, there is a fairly simple solution for describing the topology of a shape through what is called the “medial axis,” and using this description in turn to create a landform out of any arbitrary closed shape or closed set of shapes. The logic of this script, using Voronoi cells to find the “medial axis,” is explained on the Space Symmetry Structure blog by Daniel Piker, but here the definition is reworked for the latest versions of Grasshopper, and also extended a bit at the end. This definition is designed to work with any number of input curves, but you will have to pay attention to the data structure, particularly the “Grafted” elements throughout, for it to work properly.

**Step One – Use Voronoi Cells to describe the topology of the shape**

The script starts here with three arbitrary curves, in this case boomerangs. These curves are divided into a regular number of points, and these division points in turn are used to create a Voronoi diagram. If you look at the diagram, the boundary between the cells corresponds closely to the elements that can be described as the “Ridge” and “Hips” of our landform. You will have to increase the number of curve division points to make this line more precise, while not overwhelming your computer. Finally, we use the “Trim Region” command to trim the Voronoi cells, and we will only go forward with the pieces of geometry that are inside our region curves.

**Step Two – Extract Medial Axis and “Veins” from Voronoi Cells**

Once we have the cells inside our shapes, we can explode the cells. We now divide the remaining geometry into two classes. The pieces of geometry which touch the edge curve always run perpendicular to the slope of our landforms, and we will call these “veins” (like the veins on a leaf) going forward. The pieces which do not touch the edges comprise the topological skeleton of our shape. To separate these, we will use the “Collision One” component to return a True/False value for our shapes, to see if they touch the outside edge curve. These two sets of geometry are then dispatched.

Notice also what I did with the data structure. I used the “Trim Tree” component to remove all levels of data structure except for the last one. This is because I don’t care what cell the lines used to be associated with, but I still do care which of the three starting lines each line is associated with. If I flatten all the way, it will not work properly.

**Step Three – Move Topological skeleton vertically to define landform**

In the next steps, I will use the geometry I generated to develop a landform and a mesh. I can use either the medial axis to define this mesh, or I could use the veins. In the image above, I use the veins. In the image below, I use the Medial Axis.

The general principle in both is the same. The endpoints of each piece of geometry are extracted, and then moved vertically based on their distance from the edge curve. The amount of movement is scalable based on the desired overall slope. Once these points are moved, the lines can be redrawn.
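As a rough sketch of this vertical move, in plain Python rather than Grasshopper (names are mine; the distance-to-edge values are assumed to be precomputed, e.g. by a “Curve Closest Point” test against the edge curve):

```python
def lift_points(points, edge_distance, slope):
    """Move each skeleton/vein endpoint vertically in proportion to its
    distance from the edge curve; `slope` is the overall scaling factor.
    `points` are (x, y) tuples; `edge_distance[i]` is the distance from
    point i to the edge curve."""
    return [(x, y, d * slope) for (x, y), d in zip(points, edge_distance)]
```

A point twice as far from the edge ends up twice as high, which is what keeps the resulting landform at a constant gradient from edge to ridge.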

**Step Four – Create Mesh and Contour Lines**

Here I am using the endpoints of each of the “veins” to define a mesh, from which I will derive contour lines.

**Step Five – Optional Lofts for the Veins**

You could also draw some geometry with the veins, but this is a totally optional step.

**Variations**

This definition *should* work with any number of closed shapes of any size and form. You will only need to adjust the number of initial curve divisions to get results that are more or less precise. You can also adjust the height scaling factor to get various landform slopes. Below are two examples of possibilities, one based on a complex, curvilinear form, and one based on simpler triangular shapes. Note it works well in both cases!


Cellular automata are used in many applications to understand and simplify complex natural phenomena. Sand dunes, braided river networks, and ecosystems are just a few of the things that authors have attempted to translate into simple rules which nonetheless generate complex results.

This script is based on a well-known cellular automaton known as the “Forest-Fire Model” and can be used to model patterns of disturbance in ecosystems. While this could be used to model a fire, the results seemed too slow-moving to be a true fire…that is, new growth sprouted up too quickly in the wake of the fire. So to me it seemed more like a slowly but relentlessly spreading disease or parasite, which can sometimes devastate natural systems much worse than a fire. By adjusting parameters such as growth rate and chance of spontaneous outbreak, lessons can be learned about how real ecosystems might function.

**Step One – Initial Setup**

The setup here will be similar to the previous script, except this time, instead of using a regular grid of cells, we will use a random population of points, scaled based on an average “area per tree.” I did a quick measurement of the spacing of trees in a mature beech forest, and determined 300 square meters per tree is a reasonable figure. This is used to determine the geometry in “Cell Centers”.

Like the previous example, we also will do a proximity test to determine the vectors along which the disease can spread. In this case, we limit the potential spread to 20 meters. You could use a higher number here later, which will impact how easily and how quickly the disease can spread. The “T” (topology) output of the “Proximity2D” component goes into a data container for later use, but you can see in the image these topological relationships generated in the “L” output.

The last thing we do here is generate a list of random cell states. For this script, we will have three states: 0 (vacant/dead), 1 (alive), and 2 (infected/dying). We start with only 0’s and 1’s.

To see these results visually, we draw a circle at each center point and use the random number data to color the cell based on its state. Like in the last script, we will move the coloring process after the loop once we create it in the next step.

**Step Two – Loop Procedures Three and Four**

Like the last script, the loop only recalculates a list of numbers called the “Cell State.” This can be 0, 1, or 2 as previously explained. Every time the loop runs, four basic operations will be performed to determine if the cell state changes, and what it changes to. The operations, in this order, are:

1-Cells that are infected die. That is, *if* the cell state is equal to 2, it will now be reset to 0.

2-Living cells that are in the “neighborhood” of a cell that was infected in the previous round become infected. That is, *if* the cell state is equal to *1* AND at least one of the cells in that cell’s neighborhood (determined by the T output of the Proximity component) was equal to 2 at the start of the round, then the cell becomes infected, going from 1 to 2.

3-A new plant has a chance to sprout in each vacant cell. This is determined by comparing a random list of values to a probability test. If this chance is 5%, then in each round, about 5% of the cells will randomly go from 0 to 1.

4-Test for “spontaneous” outbreak. There is a chance that a living cell will spontaneously become infected, despite not being near an infected cell. In nature, spontaneous outbreaks of disease can be caused by introduction of a foreign pathogen to a new environment, by mutation of a previously benign version of a disease, and other causes, but these are, by nature, very rare. For our first example, we will use an infection probability of 0.02% to see what happens. In the rare case of spontaneous infection, the cell goes from 1 to 2.
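Taken together, the four operations can be sketched as a single round-update function in Python. This is my own loose translation, not the Grasshopper definition itself (in the script the rules are built one at a time over the next steps, and the random values are pre-generated rather than drawn on the fly):

```python
import random

DEAD, ALIVE, INFECTED = 0, 1, 2

def step(states, neighbors, p_sprout=0.05, p_outbreak=0.0002, rng=random):
    """One round of the forest-fire / disease automaton.
    `states` is a list of 0/1/2 cell states; `neighbors[i]` lists the
    indices near cell i (the Proximity2D topology). The four rules are
    applied in the order described above, always reading the OLD states."""
    new = []
    for i, s in enumerate(states):
        if s == INFECTED:                        # 1: infected cells die
            new.append(DEAD)
        elif s == ALIVE and any(states[j] == INFECTED for j in neighbors[i]):
            new.append(INFECTED)                 # 2: infection spreads to neighbors
        elif s == DEAD and rng.random() < p_sprout:
            new.append(ALIVE)                    # 3: chance of a new sprout
        elif s == ALIVE and rng.random() < p_outbreak:
            new.append(INFECTED)                 # 4: rare spontaneous outbreak
        else:
            new.append(s)
    return new
```

Note that rule 2 checks the *old* state list, matching the text: neighbors infect a cell based on how things stood at the start of the round, before the infected cells die off.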

How does this translate into code? Since there are no “2’s” at the start, we will not worry about coding the first two steps quite yet. We will focus on three and four, since they are set up in almost exactly the same way. The most important thing here is to generate and structure a random list of numbers. All in all, we will need many Many MANY random numbers. The precise number is the number of rounds we will be looping, multiplied by the total number of objects or “Cells”. So if we are doing 200 rounds, and have 2000 trees, that is a whopping 400,000 random values! Don’t worry, the computer can handle it. More importantly, we only want it to have access to 2000 (or whatever the number of “Cells” is) of those random values at a time. To get this, we use the “Partition List” component with the size of the partitions based on the “List Length”, or the number of cells (2000 in our example). We then use “Flip Matrix” and “List Item” so that in round 0, we will get access to each item 0, in round 1, each item 1, etc…. This was a lot of number gymnastics!! But if we get it to work with our first list, we simply copy and paste to get a second list that will work for Procedure 4. The results of this are in the image below.
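The same pre-partitioning idea can be sketched in Python (an equivalent of the Partition List / Flip Matrix trick rather than a literal translation; the function name and seed handling are mine):

```python
import random

def per_round_values(n_rounds, n_cells, seed=0):
    """Pre-generate n_rounds * n_cells random values up front, then slice
    them so each round sees exactly one value per cell -- the same effect
    as Partition List + Flip Matrix + List Item in the text."""
    rng = random.Random(seed)
    flat = [rng.random() for _ in range(n_rounds * n_cells)]
    # Round r reads its own chunk of n_cells values
    return [flat[r * n_cells:(r + 1) * n_cells] for r in range(n_rounds)]
```

For 200 rounds and 2000 cells this builds the full 400,000-value pool once, and each round simply indexes into its own slice.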

Once we have these, we now script procedures 3 and 4 themselves. We again use if/then expressions as explained in Example 12.1. In this case the expression is “if (x>y,1,0)”, which translates into “if x is greater than y, then the result is equal to 1, or else, if not, it is equal to 0.” X will only be greater than Y 5% of the time, based on how we wrote this (see below). This is then compared to the existing list of values with the “Max” component.

Once this is done, the values go into the final test, to see if a spontaneous outbreak will occur. This is scripted in exactly the same way.

Once we let this run a couple of rounds, the empty cells will slowly start filling up with living trees (zeros becoming 1s). This could go on forever if we didn’t have disease procedure 4. Unfortunately for our forest, after a few rounds, one of the random values finally triggers the spontaneous-infection test, and a cell has now become infected!! It is now time to write a procedure for what happens if disease breaks out.

**Step Three – Refining the Loop / Procedures 1 and 2**

To see the fate and destruction of our once flourishing forest, we will script two procedures. The first is very simple; the second is a bit more complex. The first is an if/then expression, “if (x=2,0,x)”. This translates to “if x is equal to 2 (infected), then it now is equal to 0 (dead), and if not, it remains x.” So if it was 0 or 1 before, it will remain 0 or 1, but if it was 2, it is now 0. Got it?!

The second is a bit more complicated. We use our topology relationships determined at the start (finally!) and use “List Item” to list, for each cell, the value of all the cells that are in its neighborhood at the beginning of the round, before we killed them in the last step, otherwise there would be no infected cells left. Some cells have rather small neighborhoods (2 or 3 neighbors), while for others it is a bit larger. We then use the “Bounds” component to get the minimum and maximum value in each set. If there is no infected neighbor, the bounds will be between 0 and 0 or 0 and 1. In both these cases, things are looking good for the cell in question. If one neighbor is infected, the bounds will be between 0 and 2. This means things are not good, and the cell in question would now go from a 1 to a 2.

Scripting this requires a tricky if/then/and equation. The syntax is quite difficult at first. We bring in two variables, X and Y. “X” is our current cell state; “Y” is the highest value in the neighborhood. First I will write out the expression precisely, and then I will translate it.

“if (x=1, if (y=2,2,x),x)”

>:-( What that means is “if x is equal to one, AND if y is equal to 2, then the result is equal to 2, and if not, it is equal to x…in both cases.” Don’t worry if you don’t understand it exactly at first, but if you do, you are smarter than me! Remember, in an if/then expression, if the condition is true, it does whatever is after the first comma. “if (x=1**, if (y=2,2,x)**,x)” If the condition is not true, it does whatever is after the second comma. So if X is not equal to 1, it immediately skips the second if/then expression, jumping to the second comma, and producing the value X as a result.
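In Python, the same nested expression reads a little more naturally (my own translation, with x and y as in the Grasshopper expression: x is the current cell state, y the highest value in the neighborhood):

```python
def infect(x, y):
    """Python equivalent of the Grasshopper expression "if (x=1, if (y=2,2,x), x)"."""
    if x == 1:                      # only a living cell can catch the infection...
        return 2 if y == 2 else x   # ...and only if an infected neighbor exists
    return x                        # dead (0) cells pass through unchanged
```

Walking the cases: a living cell with an infected neighbor returns 2; a living cell with healthy neighbors stays 1; everything else just echoes its current state, which is exactly the “jump to the second comma” behavior described above.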

Anyways, the results of this altogether can be seen in the images below.

You can see that the cells that are yellow in each round become white in the next round, while all neighbors become yellow in that round.

If the density of living cells is very high, the infection will spread relentlessly in a big wave across the forest. If the density of living cells is low, the infection will tend to peter out, having no living neighbors to jump to.

**Playing with Variables**

By playing with the variables, growth rate and infection rate, certain patterns tend to emerge, although the landscape will always be in flux. If infections are very rare, with high growth rate, a very dense forest will tend to emerge, and when an infection does break out, the devastation is complete and catastrophic. If infection rates are high, sometimes a low growth rate will actually do better in managing this in the long term. In other words, sometimes it is better to bounce back slowly after disturbance than to bounce back too fast while the infection is still raging. Otherwise, the disease will become endemic.

Below are some images of two scenarios where the script is extended. Note that once the size gets pretty big, the patterns will be much more interesting, but the computer will also slow down quite a bit.

**Taking it Further**

I played around with this quite a bit to try and improve the results. I won’t show my coding, but a few things you can play around with include introducing the probability that an infected cell can survive and go on to live another day, and also scaling the cells down (through a second data stream) to show growth. In other words, new cells come in at size “1” and increase every round until they reach a maximum size.


I decided to come back to vector fields with one more example. First, I’ve set the goal on this blog to have six posts in each category, and Vector Fields has been at five for a long time, despite being one of my favorite things! I also wanted to come up with a new starting logic for example 11.3 where agents are steered through a field. I was quite happy with 11.3, but sometimes not pleased with the sudden change of direction when the vectors move from one cell to another.

In this script, a vector field is controlled through lines drawn in Rhino. The vectors at any given point are an average of several nearby vectors. The closer you are to a particular drawn line, the more influence that line will have over nearby conditions. The field thus changes gradually, rather than suddenly as it does in example 11.3. The field is then used at the end to draw particular geometry, in this case, egg-like shapes.

**Step One – Initial Setup**

Before going into grasshopper, a few pieces of geometry are drawn in Rhino. The first is a closed “Field Boundary Curve”. Then several lines are drawn which will be used to identify the general direction of the field in a particular region. Note, the direction or order in which these lines are drawn will be important in determining how the field works.

Once this is done, I do the basic setup of my script. The objects in the field will be anchored to a random population of 1000 points generated by PopGeo (grey X’s). A second step in the setup will be to translate the linear geometry in Rhino into vector information. This is done by using the “Endpoints” component to get the start and end of each line, and then using “Vector2Pt” to find the vector between the start and the end.

The last part of the initial setup is to “Merge” the start points and the endpoints into one point list. The vectors are merged in the same way to make a list of identical length. If you duplicate the image above, it should work, but what is important is that each item in the vector list has an item index which corresponds to the same item index of its associated point.

**Step Two – Associate Nearby Vectors with each point from PopGeo**

This step is the heart of the script, where each of the 1000 points generated by PopGeo gets a vector assigned to it. This would be very hard to show graphically, so for this step I temporarily reduced PopGeo to 40 points, and hopefully it will make graphic sense. The script uses the closest point component to find the 6 closest start and endpoints of my vector lines to each of the PopGeo points. This number doesn’t have to be six, but based on trial and error this seemed to work. Fewer than four doesn’t really generate the results I want, and more than six doesn’t seem to improve the results. This can be changed later though.

Anyways, the six closest points are found. “Closest Points” identifies the item index of these six points, but these also correspond to the item indices of the vectors associated with those six closest points (if I set it up right in the previous step). I use “List Item” to identify these six vectors, shown in these images anchored to each of the points in PopGeo.

I want to sum these vectors together to find an average, but before doing this, I am going to scale the vectors down based on their distance from my PopGeo point. In other words, far away vectors have less weight in the summation than closer vectors. To do this, I use the “VectorLength” component to get the strength of each vector, and then I divide this by the distance, which was also conveniently generated by “ClosestPoints”. I then rebuild my vectors with the “Amplitude” component, where the vector direction remains the same, but where the amplitude (vector strength) is reset with the scaled-down “Vector Length”. These much smaller scaled-down vectors are represented by the little red arrows in the second image above.

Finally, I use the “Mass Addition” component to sum the six vectors associated with each point, giving me a resultant vector (shown in black). I put the results of “Mass Addition” into a final “Vector” parameter container. Note these then need to be flattened for the next step.
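The whole vector-averaging step condenses into a short Python sketch (my own translation of the component chain, assuming 2D points and vectors as tuples; note that resetting each vector’s amplitude to length/distance is the same as simply dividing the vector by the distance):

```python
import math

def field_vector(pt, anchor_pts, anchor_vecs, k=6):
    """Resultant field vector at `pt`: find the k closest anchor points,
    scale each anchor's vector down by its distance from `pt`, then sum
    them all (the Mass Addition step)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Indices of the k nearest anchor points ("Closest Points")
    nearest = sorted(range(len(anchor_pts)),
                     key=lambda i: dist(pt, anchor_pts[i]))[:k]

    rx = ry = 0.0
    for i in nearest:
        d = dist(pt, anchor_pts[i]) or 1e-9   # avoid division by zero
        vx, vy = anchor_vecs[i]
        # Amplitude reset to VectorLength / distance == vector / distance,
        # so far-away vectors carry less weight in the summation
        rx += vx / d
        ry += vy / d
    return (rx, ry)
```

With k lowered to the actual list length, the same function also reproduces the reduced 40-point test case; in the full script it runs once per PopGeo point.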

**Step Three – Draw Geometry based on Resultant Vectors**

Note, even if you went through all these steps, you won’t *see* anything yet, since vectors are forces, not geometry. You can use “VectorPreview” to visualize what they are doing, but in the end, we want to translate them into some sort of geometric expression. There are many possibilities, but in this case, I am going to draw some eggs. The process is pretty straightforward. First, I start by drawing a line with the “Line SDL” (Start/Direction/Length) component. The start points are the points from PopGeo (which are now back up to 1000 in this image), the “D” direction is governed by my vectors, and the “L” length is determined by multiplying the “Vector Length” by a scaling factor.

The eggs are finished by the script above. I won’t explain the details, but it is using components which already should be familiar to you.

**Varying the Pattern**

There are a few parameters you can change to vary the pattern, but the most important way to change it is to go back to your curves drawn in Rhino, and to edit them by moving the control points around. You can also add or delete curves. Below are two variations of curves drawn in Rhino (shown in Blue) and the resulting field conditions.

Another way to vary the script is to change the initial point population, change the amount that geometry is scaled with “LineSDL”, etc. You can also introduce a “Cull” to get rid of geometry that is either too small or too large. Below are a few possibilities.

It wasn’t my intention while making the script, but in the end it looked a bit like one of my favorite landform phenomena, the “Drumlin Swarm.” You can read a bit more about it on this page here.


1 – A regular system of cells. The most basic cellular automata are based on a simple grid, but other systems work as well, such as hexagonal matrices, irregular grids, voronoi cells, etc.

2 – Each cell has a “state.” Many cellular automata have only two states, on/off, dead/alive, etc., although more than two states are also possible.

3 – Each cell also has a “neighborhood” that will affect its behavior in the next round. Again, the neighborhood could be only the four adjacent cells (left/right/up/down), all adjacent cells including the diagonals (8 neighbors), or other configurations.

4 – A production rule determines how a cell’s state changes each round. This change is almost always based on its relationship to its neighbors, and sometimes also affected by its current state.

In this example, we will look at a system known as an Activator/Inhibitor System. The algorithm is based, with very minor modifications, on an algorithm from the “NetLogo” program (which can be downloaded and tested for free), developed by Uri Wilensky. From the NetLogo page description: “Does a single mechanism underlie such diverse patterns as the stripes on a zebra, the spots on a leopard, and the blobs on a giraffe? This model is a possible explanation of how the patterns on animals’ skin self-organize. It was first proposed by Alan Turing. If the model is right, then even though the animals may appear to have altogether different patterns, the rules underlying the formation of these patterns are the same and only some of the parameters (the numbers that the rules work on) are slightly different.” I won’t try to explain the science behind this. It is still somewhat speculative whether this is the actual mechanism for pattern formation in animals, but it is based on a series of real-life observations. There is actually a pretty good description of the process in Philip Ball’s book “Shapes”, which I highly recommend for anyone interested in form creation in nature. Anyways, while the algorithm runs quicker and it is easier to test variations in NetLogo (and I would recommend doing so to get a feel for the system), putting it into Rhino/Grasshopper allows you to spatialize the system, adapt it to various forms, and also change the initial structure. I will show some of these at the end. But for now, onto the script!

**Step 01 – Initial Setup** The initial setup is fairly straightforward. In this case, I set up a basic grid of cells, although I will show at the end some variations where other grid sizes are used. From this grid, I extract three sets of information. The first is the list of boundary surfaces for each cell, which I will not put through the loop, but which will be assigned one of two colors at the end of the loop, based on the current “state” of the cell. The second list, which will be modified in our loop, is a simple list of “cell states”, in this case either 0 or 1, “dead” or “alive” (which will be colored black or yellow at the end of the loop). Here I am showing how this is done, but I will later move the colors and custom preview to the end of the loop. It is much faster to loop simple data, such as 0s and 1s, rather than geometry. Finally, I extract the center point of each cell, which will be used in the next step to determine the proximity to adjacent cells, the “neighborhood.”

**Step 02 – Proximity Tests** For this next step, I am using a component I just recently discovered called “Proximity 2D”. This has various inputs and settings; here I am most interested in the R- and R+ inputs (Minimum and Maximum Radius). You will note that I put the number “1000” into G, which is the number of relations to test; since I don’t want this to factor into the calculations, I put a sufficiently high number here, although with the radius tests I will never reach it. The essential logic of this script is that each cell changes its state to “alive” or “dead” based on two sets of “chemicals”, activators and inhibitors. If the number of activators close to the cell is greater than the number of inhibitors farther from the cell, the cell will be “alive” or a “1”. If inhibitors dominate, the cell will be “dead” or a “0”. Note that ONLY currently “alive” cells contribute to the calculation, that is, only alive cells secrete chemicals that contribute to the process. In the example above, the inner circle, based on the activator radius of 3.25 units, has 36 cells in its neighborhood. Of these, 19 are “alive”. The second neighborhood, the inhibitor ring, has a thickness of 2 units, with 32 cells in the neighborhood. Of these, 18 are “alive”. So in this case, cell 750, which is currently “dead”, would become “alive” in the next round, since the alive cells in the activator ring (19) outnumber those in the inhibitor ring (18). But I’m a little ahead of myself. These calculations will only happen in the next step. The Prox component tells you nothing about the state, etc. All it tells you is the “Item Index” of all the cells in the respective rings. The indices are listed in the “T” output. We will use these indices in the next step.

**Step Three – Calculate New Cell State** So now we finally get into the meat of the algorithm, and insert it into the Anemone loop. From the previous step, I have the item indices for all the cells in each neighborhood. I use “List Item” to retrieve the current cell state (either 0 or 1) from my data container, the one I am looping. I then use the “Mass Addition” component to sum up all the “1s”. This is done once for the activator region, and once again for the inhibitor ring. I then put these into a simple “Expression” component. The text in the component is “If (x>y,1,0)”. What does that mean? I thought you told us we didn’t have to do programming!! I lied. Kind of. This is Grasshopper’s way of doing an “If/then” expression. If you are somewhat familiar with programming, this should be clear, but the commas in the expression stand for “then” and “else”. So what it means essentially is “If **x** *is greater than* **y** *then the result will be* **1** *or else, if it is not greater than y, the result will be* **0**.” The syntax the computer understands, however, must be abbreviated, short, and *precise*. Hence the commas. And the parentheses. (The parentheses allow you to do IF/AND/THEN/ELSE statements, which we will see in the next example.) Anyways, once you get past that, we are home free. I put panels on the components so you can see what is going on in the first few cells in the list, but it does this for all 1225 cells in this case. If you are keeping track, the first three cells all have dominant inhibitors, so they are “Dead/0”. The next three have dominant activators, so they are “Alive/1”…etc… Now the new cell states get fed into the D0 port on Anemone, and these will be used instead of our random noise in the next few rounds. You will see the results of the script after the first 5 rounds, along with the initial state, in the image below.
Note that the initial “noise” disappears and a pattern emerges fairly quickly, soon becoming consolidated. Once consolidated, it remains fairly stable, and won’t change too much, even if you run it for 100s of rounds.
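A compact Python sketch of this update rule (my own translation of the component logic, not part of the definition; the neighborhood index lists stand in for the Proximity “T” output, and the `ratio` argument anticipates the optional weighting in Step Four):

```python
def new_states(states, activators, inhibitors, ratio=1.0):
    """Activator/inhibitor update: a cell becomes alive (1) when the alive
    cells in its inner activator ring outnumber those in its outer
    inhibitor ring; otherwise it is dead (0).
    `activators[i]` / `inhibitors[i]` are index lists for cell i."""
    out = []
    for i in range(len(states)):
        x = sum(states[j] for j in activators[i]) * ratio  # alive activators
        y = sum(states[j] for j in inhibitors[i])          # alive inhibitors
        out.append(1 if x > y else 0)                      # If (x>y,1,0)
    return out
```

Because `states` holds only 0s and 1s, summing over the ring indices counts the alive cells directly, which is exactly what the Mass Addition components do before the Expression.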

If you play with the random seed to change the initial noise, the final pattern will change, but only a little bit. The image below shows three variations using the settings just described.

If you want to get real variation at this point, you can change the radius of the activator or inhibitor rings to see how the pattern changes. In the previous example, the central Activator Region had a radius of 3.25. If I change this to 4.25, the pattern will look like the images below. Looks kind of like Hebrew letters to me…

**Step Four – Optional Weight Ratio for Activators / Inhibitors**

While the rings give you a lot of control over the pattern, one last parameter is the ratio, or relative strength, of the Activators vis-a-vis the Inhibitors. You can insert this weighting between the “Mass Addition” components and the If/Then expression.

**Variations**

Once you get it working, you can use a bigger grid, play with the Radii and the Ratios, and maybe even use a different cell proportion…

**Applying to Other Shapes**

Anyways, as said previously, the algorithm works better in a program such as NetLogo, but once it is in Rhino, you can play with other topologies rather than square grids. One theory of pattern formation in animals is that the size and shape of spots isn’t only dependent on the chemical reaction system, but also on the form of the animal and the animal’s growth process. To test this out, I applied the script to a quick sketch of a 2D animal skin… Honestly, I was hoping for more topological reaction, but you can already start to see that in very narrow areas, such as tails, only stripes can form since there isn’t enough room for spots. Also, this test doesn’t account for growth processes. The theory is that the spots or stripes are laid down when the animal is still an embryo, when the torso is relatively small, and that the torso then gets bigger, so larger spots in some areas come after a process of growth/transformation/deformation.

**So what does this have to do with Landscape???**

Apart from being an interesting form-making system and an introduction to cellular automata, the logic of the activator/inhibitor system has some uses in ecology, especially in ecosystems in semi-arid climates. If you are curious, you can read more on the Wikipedia page on “Tiger Bush”, although here is an image that also communicates the idea. This pattern also emerges in people’s lawns when you don’t water them enough. Of course, few clients will want you to design an activator/inhibitor lawn for them, but maybe people will have to start investigating this soon in California. Apart from that, it may have potential as a form-giver on projects with a more contemporary aesthetic. I’m not sure if the people at Stoss LU were thinking about cellular automata when they designed the Erie Street Plaza in Milwaukee, but the bands of pavement/vegetation certainly have the feel of a CA, and Chris Reed and company have been known to tinker with Grasshopper in the past, so it is certainly possible! Anyways, I have had some of my own experiments with this applied to some sketch projects, but I think I’ll keep those to myself for now… just in case the right competition comes along to enter them into.


This example is inspired by a project by the Dutch landscape architecture firm Karres en Brands in Copenhagen, Denmark: the pedestrian street Købmagergade. I saw the project in its initial stages during a visit in 2009 but haven’t yet seen the final results. The concept is fairly simple. The designer wanted to create an effect like walking through a field of stars. This is done through a random distribution of 3 different colors of pavers. The effect is enhanced because the distribution is not uniform throughout the pedestrian street: it gets very light in some parts, and mostly black in others. The “Kultorvet” plaza, for example, is largely black, referencing an earlier history of coal trade. In the areas described as “the Milky Way”, light paving dominates.

I’m not sure Grasshopper or a parametric process was involved in their design process, but I do know that Karres en Brands have used digital methods and parametric processes in some of their projects in the past, especially in some of their urban design projects. Anyways, this was an idea for a script I came up with while looking at their project.

This script basically combines the logic of two fairly simple processes. The first is using a random dispatch to assign pavers to different color classes, as explained in example 1.5. The second is using attractor points to change the ratios of pavers that are assigned to dark, light, and medium colors. The logic of attractor points is explained in example 2.1 and more thoroughly in example 2.2. These two processes are then brought together to create a factor that will assign a color to each paver.

**Step One – Setting up Surface Divisions**

The first step is to take a surface and divide it using the “Isotrim” component and “Divide Domain2”. Since I later want to offset every other row of pavers, I need to give the unstructured results of “Isotrim” a grid-like data structure. This is done by using the “Partition List” component, with the size of the partitions equal to the “V” value used in “Divide Domain2”.
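The same restructuring can be sketched in a couple of lines of Python, assuming a flat, row-major list of division surfaces; the function name is my own.

```python
def to_grid(cells, v):
    """Give a flat, row-major list of cells a grid-like structure,
    analogous to running "Partition List" with partition size v."""
    return [cells[i:i + v] for i in range(0, len(cells), v)]
```

Each sublist then corresponds to one row of pavers, which is what makes the every-other-row offset in the next step possible.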

**Step Two – Offset Every other Row**

I could skip this step if I wanted pavers without offset joints, but in this case I want to use them. This is very similar to what I did at the beginning of Example 2.3. If my surface is oriented directly up and down, this will work with a simple X vector plugged into the Move component. If you are using more complex surfaces, or surfaces with other orientations, however, we need to make the script a bit more intelligent. Let’s assume for a minute I have a surface rotated 30°. If I run the script above, the results will look like the image below.

This can be a problem going forward, so I am going to adapt the script a bit to solve it. The basic idea is to find the vector 90° to the surface orientation. There are a number of ways to go about this, but my solution is to divide the surface into just two parts, to find the two center points of these divisions (using “Area”), and to then base the vector on these using the “Vector2Pt” component.

The vector between the two points is then rescaled using the “Amplitude” component, with the new Amplitude equal to the amount of paver offset. The overall results look like the image below.
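For reference, the centroid-to-centroid construction can be sketched in plain Python, working in 2D; the function name and the tuple inputs are my own assumptions, with the two centroids standing in for the “Area” outputs and the rescaling standing in for “Amplitude”.

```python
import math

def offset_vector(c1, c2, offset):
    """Build the row-offset vector from two division centroids (as "Area"
    would output), rescaled to the paver offset distance, mirroring the
    "Vector2Pt" + "Amplitude" combination. c1 and c2 are (x, y) tuples."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    length = math.hypot(dx, dy)
    return (dx / length * offset, dy / length * offset)
```

Because the vector is derived from the surface itself, it stays aligned even when the surface is rotated 30° as in the example above.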

**Step Three – Cull Pavers outside Boundary**

If we had a simple rectangular surface, this step would not be necessary, but in this case I want to make it so my script can deal with irregular paved areas. Let’s assume for a minute the area I am paving is shaped based on the purple area in the image above. When I use “Divide Domain2” and “Isotrim”, the component produces a rectangular grid of surfaces equal to the “U” and “V” values. So a surface based on the purple curve above would have divisions equal to both the pink and the grey pavers. To get rid of the unnecessary pavers, I can use the “Point In Curve” component as explained in example 4.1. Here the containment curve is determined by taking the edges of a referenced surface, joining the edges together using a “Join” component, then running the containment test and culling the pavers whose center points fall outside of the edges.
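The containment-and-cull step can be sketched in plain Python, with a standard ray-casting test standing in for “Point In Curve”; the dictionary structure for the pavers and both function names are assumptions of mine.

```python
def point_in_polygon(pt, poly):
    """Ray-casting containment test, a rough stand-in for Grasshopper's
    "Point In Curve" component. poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cull_outside(pavers, boundary):
    """Keep only the pavers whose center point falls inside the boundary,
    like culling with the containment test's results."""
    return [p for p in pavers if point_in_polygon(p["center"], boundary)]
```

This is the same logic the script uses: test each paver’s center against the joined boundary curve, then cull the failures.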

**Step Four – First Dispatch**

I am going to continue the script assuming a rectangular surface for now, but with the previous two steps, this should also work with any irregular planar surface. This step, however, is at the core of what this example is trying to show: combining the logic of the attractors with that of a random dispatch. This is achieved by simply adding together two values: one from a random number generator, and one from the “Closest Point” component and its associated distance. The distances are remapped and weighted before being added to the results of the random number generator. Then a dispatch is done to separate Color 1 pavers from a second set of pavers (which will become Color 2 or Color 3).
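Here is a hedged Python sketch of that combination: a random value plus a remapped, weighted attractor distance, thresholded into two sets. The 0–1 remap, the threshold value, and all the names are my own assumptions, not the script’s exact settings.

```python
import math
import random

def assign_colors(centers, attractor, weight=1.0, threshold=1.0, seed=0):
    """Combine a random value with a remapped attractor distance to
    dispatch each paver into color set 1 or set 2 (the second set would
    be dispatched again for colors 2 and 3). centers and attractor are
    (x, y) tuples."""
    rng = random.Random(seed)
    dists = [math.hypot(c[0] - attractor[0], c[1] - attractor[1])
             for c in centers]
    d_max = max(dists) or 1.0  # remap distances into 0..1
    colors = []
    for d in dists:
        factor = rng.random() + weight * (d / d_max)  # random + attractor
        colors.append(1 if factor < threshold else 2)  # the dispatch
    return colors
```

Setting `weight` near zero makes the random distribution dominate; a large `weight` makes the attractor dominate, which is exactly the trade-off explored in the Variations below.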

**Step Five – Second Dispatch**

For colors two and three, the exact same process is run a second time. Here I just copied step four, pasted it, and did a little re-wiring.

**Variations**

Once it is up and working, I can play with the parameters to create variations. I can make the attractor dominate, as in the second image, make the random distribution dominate, as in the third image, or add new attractor points, as in the fourth image.

**Applying it to our site**

If you followed all the previous steps, you can now test it out on the “site”. Here I created a simple model of the Købmagergade using data from OpenStreetMap, explained here. One thing to be aware of: you can’t model Every. Single. Paver. There are simply too many, probably numbering in the millions. You will have to be a bit abstract. One strategy is to take small segments, as in the image above where I took just one part of the alley, and to run a test there.

Another thing you can do is take larger areas and scale appropriately, so you can explain the concept and experience the effect, but not so much that you crash your computer. The image below shows a somewhat larger part of the Købmagergade, but maintains the look and feel of the script from smaller scales of exploration. You can compare the image below to the image from the Karres en Brands site. It isn’t exact, but it is close. I already have some ideas to improve the script to get better results, but this should be enough to get you started!


A not uncommon task for landscape designers is to draw paths through the landscape. If you are working on a “flat site” (something which doesn’t exist), or if you are just deciding to ignore topography completely, you can just draw the paths anywhere, in any configuration you want. This will lead to problems.

Anyone who has basic training in site design will know that a path cannot legally exceed a slope of 1:12 for general accessibility (and then only with occasional, unsightly flat spots). A better rule of thumb is to have no path exceed 1:20, as this is the maximum slope that is generally comfortable for the largest range of the population. An unpaved trail can have a slope up to 1:10 to allow use by an off-road wheelchair, and in special cases, trails which are not accessible may exceed this gradient, but ideally not for prolonged distances, as walking then becomes difficult.

I have tried out several variations of a script that could generate a range of paths that meet these requirements, and one of the better solutions I’ve come up with so far uses a recursive process to figure out an acceptable path. I was inspired by a story I heard about ancient road builders who would determine where to put a path or road by setting an animal loose (a goat, maybe) and letting it find the ideal gradient across a pass. This script makes the process a bit more efficient and a bit less romantic, but uses a similar logic to create a possible path in incremental steps.

**Step One – Initial Setup**

Before starting the script, you will need a topographic surface. This could be a small site, or a large scale landscape. We will allow the script to scale itself to the landscape in question. In this particular case, I generated a 5km x 5km terrain using the process described in Example 20.2 of an area around Grünenplan, Niedersachsen in Germany, although this isn’t important… any area will do. I then draw two points in 2D Space in Rhino, one for the Start of the path and one for the end.

The initial Grasshopper steps are also straightforward. First I measure the dimensions of my surface and multiply them by a factor to determine my growth interval. The resulting number, “Step Size”, is put into its own container that I can copy to future steps. Having a step size that is a proportion of the overall surface size will allow the script to adapt to different sizes of terrain. The other thing I need to do in Grasshopper is to use “Project Point” to project my geometry, the Start and Destination points, onto the surface.

**Step Two – Starting the Loop and Finding a Range of Possible Path Directions**

Whenever you set up a loop, several questions need to be answered. What do I want to achieve, and how do I structure my data to achieve this? There are always multiple possibilities, and with experience you will tend to favor certain ways over others. In this particular case, I will use the D0 data stream for the “active” or leading point in my growing path, do some operations to determine the next point in my path, and will then “archive” previous points in the path in a growing list of points kept in the D1 data stream. From the beginning, however, I will insert my starting point into both the D0 and D1 streams since I want to use this point as both my active point, as well as archive it. This should make sense later.

Once I have my data streams set up, there are two basic steps the loop will execute. In this first step, the script will find a range of possible path directions based on an adjustable allowable rise/fall. This is achieved by dividing the variable “Step Size” by a factor. So for a 1:12 slope (8.3%), if the path travels 20m in the horizontal direction, it can travel only 1.67m either up or down to stay within the acceptable range. We are going to solve this geometrically. First we create a circle with its center point at our current path endpoint (the position of our wandering goat), and with a radius equal to our “Step Size”. We then copy this circle both up and down, a distance equal to our allowable rise/fall. We then create a “Loft” between the upper and lower circles. Finally, we use the Brep to Brep Intersect component (BBX) to find where the terrain surface and the lofted surface between the circles intersect. This generates one or more curves which represent acceptable locations for the next point on our path. In the particular example above, this curve is represented with a white dashed line, and nearly every point on the circle, except for a small pie slice, could produce a path with an acceptable slope.
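As a rough stand-in for the circle/loft/BBX construction, here is a Python sketch that samples the circle discretely and keeps only the directions whose rise/fall stays within the allowable range. The terrain is passed as a height function, and the sampling approach and all names are my own simplifications, not the geometric method the script actually uses.

```python
import math

def allowable_directions(terrain, pos, step_size, max_slope=1/12, samples=72):
    """Sample points on a circle of radius step_size around pos and keep
    those whose terrain rise/fall is within step_size * max_slope,
    approximating the BBX intersection curves with a point set.
    terrain is a height function z = f(x, y); pos is an (x, y) tuple."""
    x0, y0 = pos
    z0 = terrain(x0, y0)
    max_rise = step_size * max_slope  # e.g. 20m step at 1:12 -> 1.67m
    candidates = []
    for i in range(samples):
        a = 2 * math.pi * i / samples
        x = x0 + step_size * math.cos(a)
        y = y0 + step_size * math.sin(a)
        if abs(terrain(x, y) - z0) <= max_rise:
            candidates.append((x, y))
    return candidates
```

On flat ground every direction qualifies; on a steep plane only the directions running roughly along the contours survive, which is the “pie slice” behavior described above.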

**Step Three – Deciding on the “Best” possible Next Point for the Path**

If the curve output from the “Brep to Brep Intersect” component represents the range of possible or allowable next steps in the path, we need a criterion to decide which of these possible points we actually want to use. In this case, since we have a destination we are trying to get to, we will assume that the point closest to this ultimate destination is the “best”. To get this point, the script uses two closest-point components. The first of these, “Curve Closest Point”, finds the closest point on each of the curves output from “BBX”. If there is only one curve, as in the image from Step Two, only one point would be output from the component and we could move on. For this reason, I’ve advanced the loop a few steps to show an instance where BBX outputs two distinct curves (more than 2 curves are also possible). Again, the grey pie slices represent allowable path directions, the range in which the path would not rise or fall too much. When the two curves associated with these grey slices are tested with “Crv CP”, two points are produced.

We now need one more component, “Closest Point”, to decide which of these two points is indeed the closest to the final destination point. In this case, the point indicated in green, to the left, happens to be the closest point, as opposed to the red point on the right. This is a propitious moment, as the path will now go around the central mountain in a clockwise direction, as opposed to the other possibility of a counterclockwise heading.

**Step Four – Finishing the Loop**

The next point for our path is now determined and labeled “Path End Point”. I will add this point to the end of my growing list of collected points kept in the D1 data stream, and will also input it into the D0 stream to replace the initial point placed in D0 at the start. D0 will therefore always have only one point in it, while the list length of D1 will be equal to the number of steps the loop runs to get from the start to the destination. I don’t know yet how many steps this will be, but once I get there, I want my loop to stop; otherwise the list will keep senselessly growing. That is why I have an escape test, which tests the distance between the “Path End Point” and the “Destination Point Projected”. If this distance is less than our overall “Step Size”, we have arrived, or rather our goat has arrived, and everyone can celebrate. A “True” value is returned from the Boolean test and the loop ends.

After the loop, the list of points generated in D1, the overall journey of our goat, is stitched together with a polyline to create the path. The image above represents this completed path.
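Putting the whole loop together, here is a hedged Python sketch of the complete greedy pathfinder, with the D0/D1 roles noted in comments. The discrete circle sampling and all names are my own simplifications of the Grasshopper/Anemone setup, not a literal translation of it.

```python
import math

def find_path(terrain, start, goal, step_size,
              max_slope=1/12, samples=72, max_steps=10000):
    """Greedy 'goat' pathfinder: at each step, sample the circle of
    candidate directions, keep those within the allowable rise/fall, and
    move to the candidate closest to the goal. Stops when within one
    step of the goal. terrain is a height function z = f(x, y)."""
    def height(p):
        return terrain(p[0], p[1])

    path = [start]   # the D1 archive of collected points
    pos = start      # the single active point in D0
    for _ in range(max_steps):
        if math.dist(pos, goal) < step_size:  # the escape test
            path.append(goal)
            break
        z0 = height(pos)
        best = None
        for i in range(samples):
            a = 2 * math.pi * i / samples
            cand = (pos[0] + step_size * math.cos(a),
                    pos[1] + step_size * math.sin(a))
            if abs(height(cand) - z0) <= step_size * max_slope:
                if best is None or math.dist(cand, goal) < math.dist(best, goal):
                    best = cand
        if best is None:
            break  # the goat is stuck: no allowable direction exists
        path.append(best)
        pos = best
    return path  # stitch with a polyline to get the finished path
```

Like the Grasshopper version, this pathfinder is short-sighted: it only ever looks one step ahead, which is exactly why reversing start and destination can produce such different routes.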

In the test phase, I was using a rather large step size (.05), or 5% of the overall dimension of the study area to draw my path. For a smoother path, I might want to use a smaller step size. The second image above represents a path drawn between the two points with a 1% step size. While the overall vector remains similar, you’ll notice the smaller step size also creates smaller switchbacks.

**Variations and Other Possibilities**

The image above represents a few things you can now try out with the script. The first thing is to change the path’s allowable slope. The top three images represent 3 maximum slope possibilities.

The second thing to do is simply change the start and destination points. The pathfinder in each case tries to find an acceptable course. You can link several of these studies together to start to create a path network.

One last thing to note: determining where the path “starts” vs. “ends” will have an effect on the path’s final form. In image 5 above, all the paths start at a summit and work their way to a series of outer points. In image 6, the paths start at the outer points and work their way to the summit. You’ll notice in the top right path, when it starts at the summit, the pathfinder uses the natural ridge to work its way down and towards its destination. When the pathfinder is reversed, the opportunity to use this natural ridge is lost since the pathfinder was a bit short-sighted, and it now has no other choice but to approach the summit through a grueling set of switchbacks up the North Face.
