I’ve already posted a few examples of Cellular Automata, but in hindsight some of them were a bit complicated, especially for those without prior experience of this computational paradigm. I have a few *even more* complicated ones I want to highlight in future blog posts, but I thought it would be useful to post a much simpler example for those encountering the topic for the first time. This particular example comes from the post “The Cellular Automaton Method for Cave Generation” on Jeremy Kun’s blog Math ∩ Programming and is perhaps the simplest example I have encountered. It is worth popping over there to read his description of the method before proceeding, since he explains it quite well and there is no point rewriting what has already been well written.

In short, though, this CA resolves a random ‘cave-like’ interior structure within a few rounds, starting from an initial random distribution of occupied, ‘live’ cells (state ‘1’) and vacant, ‘dead’ ones (state ‘0’). In each round, every cell is checked against its neighbours (usually 8, but 5 or 3 on the edges and corners) to determine whether its state stays the same or changes. The conditions for a change are as follows:

Born – If the cell is ‘dead’ (state ‘0’), and 6 or more of the neighbours are ‘alive’ (state ‘1’), the state becomes ‘alive’ (state changed from ‘0’ to ‘1’)

Die – If the cell is ‘alive’ (state ‘1’) and fewer than 3 of its neighbours are ‘alive’ (state ‘1’), the state becomes ‘dead’ (state changed from ‘1’ to ‘0’)

Jeremy Kun uses the shorthand *B678/S345678* to describe this ruleset (B = Born, S = Survive, i.e. not Die). So a ‘dead’ cell is born if it has 6, 7, or 8 ‘live’ neighbours, and a ‘live’ cell ‘survives’ if it has 3, 4, 5, 6, 7, or 8. The image below shows a few examples of this ruleset in action on a very simple 5×5 CA with an initial 50/50 distribution of live (grey) and dead (white) cells. The first three images show the initial pattern and highlight 3 examples of the ruleset in action. The two images in the second row then show the first and second (and final) evolutions of this particular CA. It is not particularly interesting, but make sure the ruleset is clear before proceeding to set up the simulation. Also try to answer this question: why does the CA stop evolving after the second round?
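For readers more comfortable with code than with component diagrams, the two rules can be sketched in plain Python. This is only a conceptual sketch of one update round on a small grid (the actual tutorial uses Grasshopper components), with the function name my own:

```python
def step(grid):
    """One round of the B678/S345678 ruleset. 1 = 'alive', 0 = 'dead'.
    Edge and corner cells simply have fewer neighbours (5 or 3)."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours in the Moore neighbourhood (up to 8).
            live = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            if grid[r][c] == 0 and live >= 6:
                new[r][c] = 1          # Born: 6, 7, or 8 live neighbours
            elif grid[r][c] == 1 and live < 3:
                new[r][c] = 0          # Die: fewer than 3 live neighbours
    return new
```

Running `step` repeatedly on a random grid settles quickly, which mirrors the behaviour described above.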

Got it? Good! Let’s move forward!

Before working on the core logic of this script, we just need to complete a few basic steps to set up our game board or playing field. As always, we want to start small until everything is working well, at which point we can expand the size of our simulation.

Here I used the **Square Grid** component with 20 x 20 cells, with the *(C)ell* output flattened. I then measure this output with **List Length** to determine how many values I need to generate with my **Random** number generator, which by default outputs numbers between 0 and 1 to six decimal places. I then want to ‘convert’ these random values to one of two states: ‘1’ being ‘alive’ or ‘0’ being ‘dead’. I do this by adding an **Expression** comparing the random values to a parameter I created with a slider called ‘Percentage Live vs. Dead Start’. This slider has values between .40 and .60, since much more or less than that tends to generate an uninteresting, homogenous field in the end. The ‘x’ input into the Expression is the list of random values; the ‘y’ input is my percentage parameter. The expression itself is *if (x>y, 1, 0)*, which is Grasshopper’s syntax for *“if x (the random value on the list) is greater than y (the percentage parameter), then output the value ‘1’, otherwise output the value ‘0’.”* The expression compares each random value against my parameter and outputs a list of 0s and 1s, which are my initial states.
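Conceptually, the Random + Expression combination behaves like this hypothetical Python sketch (the function name and seed handling are my own, not part of the Grasshopper definition):

```python
import random

def initial_states(n_cells, threshold, seed=0):
    """Mimics the Grasshopper expression if(x > y, 1, 0) for each random value.
    'threshold' plays the role of the 'Percentage Live vs. Dead Start' slider."""
    rng = random.Random(seed)
    return [1 if rng.random() > threshold else 0 for _ in range(n_cells)]
```

For a 20 x 20 grid, `initial_states(400, 0.5)` would give one ‘0’ or ‘1’ per cell.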

The states can be previewed with a light or dark colour using the components shown in the top right of the image. Here I am using light for empty or ‘dead’ and dark for occupied or ‘live’ cells.

The next order of business in setting up our game board is to establish the topology of cell proximity. This is done by finding the *(C)entre point* of each cell using the **Area** component and inputting these points into the *(P)oint* input of the **Proximity2D** component. In this particular case, and in contrast to the Fur Algorithm 12.1, which had fairly complicated proximity relations, the ‘neighbourhood’ is a very simple ‘Moore’ neighbourhood: the 8 cells in the compass directions N, NE, E, SE, S, SW, W, NW. To fix this neighbourhood I input the fixed value ‘8’ into the *(G)* input (which limits the number of relations), as well as the result of the **Expression** *1.5 * ‘Cell Size’* into the *(R+)* input, which establishes a maximum radius to look for relations. The result is that every internal cell, such as the one marked in red, has 8 relations, and the cells on the edges, such as the one marked in green, have 5 or 3 relations. This data is put into a data container tagged ‘Proximity Matrix Topology.’

In contrast to the previous two CA examples, for this particular simulation I want a ‘frame’ at the edges of the simulation where all the edge cells are always occupied or ‘live.’ To achieve this, I find out how many neighbours each cell has by measuring the ‘Proximity Matrix Topology’ with the **List Length** component, flattening the result, and then comparing the results with the ‘Initial Cell States List’ using the **Expression** *if (x=8, y, 1)*. Again, in simpler English, this means *“if the value of ‘x’ (the number of neighbours for each cell) is equal to 8 (meaning it is internal, and not at the edges or corners), keep the cell state as the input ‘y’ (which is the initial cell states list), otherwise set the cell state to ‘1’.”* You can preview the result of this operation with the components shown in the top right of the image above, as in step 1.
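In pure Python, the border rule behaves roughly like this (a conceptual sketch, not part of the Grasshopper definition):

```python
def frame_border(neighbour_counts, states):
    """Mimics if(x = 8, y, 1): interior cells (8 neighbours) keep their state,
    edge and corner cells (fewer than 8) are forced to 'live'."""
    return [s if n == 8 else 1 for n, s in zip(neighbour_counts, states)]
```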

We finally get to the heart of the simulation, where we set up our looping procedure to ‘evolve’ our Cellular Automaton. You will need the Anemone plugin–please see Algorithm 8.1 for hints on setting up a loop if you need them.

First we will implement the ‘Born’ rule, where a cell with state ‘0’ changes to state ‘1’ if 6 or more neighbours are active. To do this we return to the proximity matrix established in step two, where for each cell index the indices of all neighbours are provided. To understand this, it helps to zoom into one cell to see what is going on.

Cell 78 has neighbours at cell index numbers 57, 58, 59, 77, 79, 97, 98, and 99. The ‘data’ package produced by the **Proximity** component has associated these values together. We now want to use these indices to *list* the cell state values (0 or 1) of those eight cells. We do this using **List Item**, with our evolving cell-state list plugged into the *(L)ist* input and the ‘Proximity Matrix Topology’ data container plugged into *(i)ndex*. We can then use the **Mass Addition** component to sum all the 0s and 1s from cells 57, 58, 59, 77, 79, 97, 98, and 99 to get the total number of ‘active’ neighbours. Flatten the *(R)esult* output from **Mass Addition** to get this result for each List Item. In this case there are 7 active neighbours for cell 78, whose value at the start of the round was ‘0’, or dead. According to our rule, then, this cell should change its state to ‘1’.

We achieve this by using the **Expression** component with the instruction *if (y>=6, 1, x)*. In plainer English, this means that for each list item, *“if the value of y (the summed state of the neighbours) is greater than or equal to six, then return the result ‘1’, otherwise return the result x (the current cell state, either ‘0’ or ‘1’, i.e. the state is unchanged).”* To check that this is working, it might be helpful to preview the interim result and look at a sample of cells with panels before and after the expression is executed.

Hopefully the logic of the expression we just set up is clear, as we are now going to set up a very similar **Expression** for the ‘Die’ rule. Here we input the list of updated states from the ‘Born’ rule into the ‘x’ input, and the same result of the **Mass Addition** of neighbours into the ‘y’ input. The only difference is that our expression is now *if (y>=3, x, 0)*. Hopefully by now the syntax is becoming clear. Try to understand what the expression is doing here, and again check the panels to see if the expected results are being produced for a few sample cells.
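The chained ‘Born’ and ‘Die’ expressions can be sketched together in Python, assuming a flat list of states and the matching list of neighbour sums from Mass Addition (the function name is my own):

```python
def apply_rules(states, neighbour_sums):
    """Chains the two Grasshopper expressions:
    Born: if(y >= 6, 1, x), then Die: if(y >= 3, x, 0)."""
    born = [1 if y >= 6 else x for x, y in zip(states, neighbour_sums)]
    return [x if y >= 3 else 0 for x, y in zip(born, neighbour_sums)]
```

For example, a dead cell with 7 live neighbours is born, and a live cell with only 2 dies.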

Finally, we have one last **Expression **to add to keep our border always in a ‘live’ state. The logic here is identical to the logic employed in Step 3 so return to this step if you need to. The results of our rules being employed are then fed into the D0 input of our **Loop End** component.

Now that the hard work is done, it is time to reap the rewards (or the frustration if you made a mistake!). You can now increase the iteration count on the Anemone loop to watch the CA ‘evolve.’ As mentioned previously, this particular ruleset produces stable results very quickly, and in my initial 20×20 grid, I reached a stable state after about 5 rounds. If everything seems to be running smoothly, you can now increase the size of the simulation. Depending on your preview settings, this needs to be handled with care! Below is an example of a 100×100 grid. Note the time to achieve a stable state goes up, but it still resolves fairly quickly, in this case after about 10 rounds.

You can also play with different initial percentages at this point to see how the results change.

Depending on your design goals, you may not want to output the results of the CA in a ‘raw’ state, and you may want to do a bit of post-processing to get rid of the pixelly feel. Jeremy Kun recommends on his blog running further CAs on top of the initial one to iteratively ‘smooth’ the structure, but here I used a simpler smoothing operation. First I **Dispatch** the cells into two groups based on their state, and then, using the **Region Union** Boolean operator, the ‘dead’ areas are brought together into closed polylines. I then rounded out the edges with the **Fillet** component.

Below are two examples of a near final result using various initial parameters and by changing the random number seed.

From here, you can do other operations. Extrude them into solids and send them to a 3D printer…

Or put the voids into Algorithm 4.7 to create an archipelago of Islands.

If you are having trouble getting it set up, you can Download the GH File Here.


Generative or Algorithmic Art goes back to the very earliest days of computer graphics, and some of the key pioneers of this movement produced work before computer screens were even a thing. It was necessary for them to come up with a clear logic, program an algorithm, and hope for the best when the plotter spat out the results. These earliest pioneers remain a source of inspiration for algorithmic art, and their early experiments continue to be useful for those learning to code or design algorithms to this day.

One of these early pioneers is the French-Hungarian artist Vera Molnár. She and other generative artists took important ideas from abstract art and Minimalism, which also flourished in the 1960s when many early experiments were done. For more, see her biography on Wikipedia: https://en.wikipedia.org/wiki/Vera_Molnár.

While many of her artworks were generated with algorithms, her piece *(Dés) Ordres *from 1974 stood out to me as being both very beautiful and relatively easy to code. In this case, less truly is more.

The logic is fairly simple. A regular grid of squares is offset multiple times towards the squares’ centres. Some of the squares are then randomly reduced, after which the four corner vertices are slightly jiggled in the X and Y directions.

Below is a simple script recreating for the most part the logic of the pattern.

**Step One: Draw a regular Square Grid and Offset Cells towards Centres**

First drop a **Square Grid** component. This takes a couple of parameters. The first is the ‘cell size’, which is the size of the outermost square in our grid. Secondly, the ‘number of cells’ needs to be input for both the x and y directions. In this series, Molnár uses a 17 x 17 grid of cells. While setting up the script we will rely on a 5 x 5 grid for clarity, after which we can expand to 17 x 17 or larger.

Before offsetting the outermost square towards the centre, we need to know the total amount to offset, and then divide that amount by the number of offsets we would like. To do this, the ‘cell size’ is **Divided** by 2.01 to approximate the distance to the centre of the square without being exactly halved. This is then **Divided** by the parameter ‘number of offsets.’ The number of offsets in the example above is 5, but there are actually only 4 offsets, since the first offset amount in our series is 0. The number 5 goes into the *(C)ount* input on the **Series** component, while the result of our **Division** operation goes into the *(N) Step* input on the Series component. Then, using **Cull Index**, I remove the first item (index 0) in the series, since I don’t want to keep the line which is offset by the amount 0.
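The offset arithmetic can be checked with a quick Python sketch (the names are mine; the real definition uses the Division, Series, Cull Index, and Negative components):

```python
def offset_distances(cell_size, num_offsets):
    """Sketch of the Series setup: the total inset is roughly cell_size / 2.01,
    divided into num_offsets steps; the zero offset is culled and the
    remaining distances are negated so the offsets go inward."""
    step = (cell_size / 2.01) / num_offsets
    series = [i * step for i in range(num_offsets)]   # 0, step, 2*step, ...
    return [-d for d in series[1:]]                    # cull index 0, negate
```

With a cell size of 10 and 5 offsets, this yields 4 inward offset distances.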

Since I want the offsets to go towards the inside of my square cells, I then make the series **Negative** using the appropriate component. The series is then ‘*grafted*‘ into the *(D)istance* input of the **Offset** component.

**Step Two: Randomly Reduce the Number of Squares**

After setting up the initial ordered grid of squares, it is time to introduce some randomness. In this case, I simply use the **Random Reduce** component on the *flattened* list of squares. To know how many values to remove, use the **List Length** component to measure this list, and **Multiply** the list length by a decimal percentage (in the example above this is .35, removing 35% of the squares generated at the end of step one). This result is input into the number of values to *(R)educe* on the **Random Reduce** component, and a random number *(S)eed* is input as well. Note that other components such as culls or dispatches could be used as well.

**Step Three: Identify Corner Vertices for Remaining Squares**

Here I use the **Control Points** component on a flattened list of the remaining squares. Note that for a four-sided square, *five* control points are produced, because the starting and ending point is duplicated. For our purposes, we only want this point once, so again we use **Cull Index** to remove everything at index 0 (input into the *(i)ndex* input). We could also have culled index item 4 (the endpoint); just choose one or the other.

**Step Four: ‘Jiggle’ the Four Vertices**

This is a very similar procedure to the one used in Jittery Rectangles – Example 1.3, so I won’t go into detail here. You can either ‘hard’ input the amount of jitter for each domain, or you can make a domain which scales the amount of jitter as a percentage of the initial ‘cell size’ parameter, depending on whether you think you may scale the pattern up or down at a later point. I chose the latter in this example.

One key difference to note from example 1.3 is that the points need to be jittered independently of which square they are in (that is, vertex 1 in square 1 should have a unique jitter independent of vertex 1 in square 5, for example), but later the vertices need to be restructured so as to ‘remember’ which square they originally belonged to. To do this, you need to **Flatten** the list of vertices at the beginning of this step, move the vertices randomly in the X and Y directions, and then **Unflatten** the list using the original list of points as a guide to restore this data structure.
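The flatten → jitter → unflatten round trip can be sketched in Python, here with squares represented as simple lists of four (x, y) vertex tuples (a conceptual stand-in for the Grasshopper data tree operations):

```python
import random

def jiggle(squares, amount, seed=0):
    """Flatten all vertices, give each its own random X/Y offset,
    then restore the original nesting (which square each vertex is in)."""
    rng = random.Random(seed)
    flat = [v for sq in squares for v in sq]                    # Flatten
    moved = [(x + rng.uniform(-amount, amount),
              y + rng.uniform(-amount, amount)) for x, y in flat]
    # Unflatten back into groups of 4, matching the original structure
    return [moved[i:i + 4] for i in range(0, len(moved), 4)]
```

Each vertex gets its own independent jitter, yet the grouping into squares is preserved.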

**Step Five: Reconstitute the Squares and Adjust Parameters**

If everything is structured correctly up to this point, the squares can be reconstituted by simply inputting the **unflattened** list of points into the *(V)ertices* input of the **Polyline** component. It is also important that the **Polyline** be closed, which will only happen if we input the Boolean ‘True’ into the *(C)losed* input. Now is the time to adjust parameters to achieve the desired results. In the image above, the initial jitter amount was very strong, so I decided to tone this back to get something closer to what Vera Molnár showed in her work. Now is also the time to play with scaling up the number of cells in the grid, trying various percentage values for Random Reduce, etc. A few examples of results can be seen below.

Below is an image of the completed GH script for reference.


I have received a lot of positive feedback on the blog, but as it became more popular, it became increasingly hard to keep it maintained, especially since I was getting deeper into my doctoral research and the algorithms I was looking at didn’t lend themselves very well to quick blog posts. I had a lot of ideas, but never the time to formulate them into simple tutorials, and all my writing efforts had to be directed elsewhere.

Anyways, I am happy to report the thesis is done, I have started a new position at the University of Sheffield in Northern England, and now that I am getting my other responsibilities under control, I will have a bit more time to dedicate to adding new content. I will probably clean up the index and restructure a lot of the sections at some point later this summer, and in the coming months will add posts on the more complicated algorithms I looked at for my thesis that aren’t full ‘tutorials’, but I will try to add some new easy tutorials as well. Regardless, I have committed to updating the blog on average once per month. I am also going to start an Instagram account for the blog soon; updates will follow. Finally, I will try to slowly add the *.gh files to some of the more popular and complex scripts, since many have requested them but I wasn’t able to provide them. In any case, there will be at least two new posts in April, hopefully with more regular postings after that!

A recent source of inspiration has been some of the work done by Miguel Cepero of Voxel Farm/Voxel Studio, the developer of a procedural world generator, documented at his blog Procedural World. I’ve recently experimented with a few different Grasshopper scripts based on some of the concepts he discusses, and I wanted to show a couple of these here on this blog. The first is a script based on an extremely well-known fractal, the Cantor Set, which on Procedural World is translated into 3D as “Cantor Dust”.

**Step One – Setup a Basic Cantor Set Script**

Setting up a 2D Cantor set is a very straightforward process if you’ve already tried setting up a few of your own recursive loops in Grasshopper using Anemone. If you haven’t done so, I would refer you to a few of the earlier examples in this blog under sections 8 and 9. Here I’m showing the entire script for a 2D Cantor set from which we will build our 3D script.

All we are doing here is taking a single line segment, imported from Rhino, and then using the “Shatter” component to break it into 3 equal segments. The middle segment is discarded, and the other two segments, retrieved through the “List Item” component, are then moved a small distance upwards. They are also looped back to be shattered again (and again). Like many recursive fractals, even this small script will crash your computer if you let it run for too long, but after 4 or 5 rounds the geometry gets so small as to almost disappear into “dust” anyway. I also have a second process looping through channel D1 to save all of my old geometry. This step can be eliminated if you use the “record” function of Anemone, but I like to keep the geometry around in containers for future use.
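For those who prefer to see the loop as code, here is a rough Python equivalent of the 2D Cantor loop, with segments represented as (start, end) pairs on a number line (the small vertical move between rounds is left out, and the names are my own):

```python
def cantor(segments, rounds, cut1=1/3, cut2=2/3):
    """Each (start, end) segment is 'shattered' at cut1 and cut2
    (as fractions of its length), the middle piece is discarded,
    and the two outer pieces are kept for the next round.
    Returns the list of segments after each round, like the D1 channel."""
    history = [list(segments)]
    for _ in range(rounds):
        nxt = []
        for a, b in segments:
            length = b - a
            nxt.append((a, a + cut1 * length))   # left piece
            nxt.append((a + cut2 * length, b))   # right piece
        segments = nxt
        history.append(segments)
    return history
```

Changing `cut1` and `cut2` gives the different shatter patterns shown below.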

Even at this early stage, you’ll notice that if we change where the line is shattered, the script will give different results. Below are tests showing different potential shatter patterns, produced by changing the values in the panel, and the results after 4 recursions.

**Step Two – Adding Randomness to the Standard Cantor Script**

Before going into the 3D version, we are going to make a couple more variations to show the principles we will use going forward. Here two random number generators are introduced: one to randomize the division points, and one to randomize the vertical distance moved.

The first random number generator, pictured above, generates a value between .15 and .49 to determine the first division point, and then subtracts this value from 1 to determine the second division point. This will always lead to a symmetrical division. The generator is tied to the counter (to which I add a small value to avoid a constant “0” seed) and a number slider.
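The mirrored cut logic is tiny when written out; this hypothetical Python sketch does the same thing the generator and subtraction do in the definition:

```python
import random

def symmetric_cuts(seed):
    """One random value in [0.15, 0.49] for the first division point;
    the second is mirrored as 1 minus that value, so the division
    is always symmetrical."""
    rng = random.Random(seed)
    t = rng.uniform(0.15, 0.49)
    return t, 1 - t
```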

A second random number generator can be used to determine the amount of movement. Simple enough.

**Step Three – Standard 3D Cantor Set**

We will forget the random number generators for a minute and just modify our script to do a standard 3D Cantor set. The first modification is that we will start by inputting a surface into our loop instead of a line; for now we will input a simple square surface. Next, instead of using the “Shatter” component to split a line, we will use Isotrim together with Divide Domain2, splitting our surface into 9 subsurfaces (3×3). Finally, we use List Item to keep the four corner surfaces (0, 2, 6, 8) for further subdivision. When these surfaces are moved, we should also change the move direction to a “Z” vector instead of the “Y” we used in the previous script. By now the script should look something like what is shown below.

One further addition to our script is an “Extrude” component to give us solid geometry, extruding our surfaces by an amount equal to the amount moved in the vertical direction. We still need to keep the un-extruded, moved surfaces, as these, not the extruded geometry, will be recursively looped and subdivided.
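The recursive 3D step can be sketched in Python with axis-aligned squares stored as (x0, y0, x1, y1, z) tuples. This is a conceptual stand-in for the Isotrim / List Item / Move chain, with names and a fixed lift amount that are my own:

```python
def cantor_dust(square, rounds, lift=1.0):
    """Each square (x0, y0, x1, y1, z) is split into a 3x3 grid, only the
    four corner cells are kept, and each kept cell is lifted in Z
    (the amount the script also uses as the extrusion height)."""
    cells = [square]
    for _ in range(rounds):
        nxt = []
        for x0, y0, x1, y1, z in cells:
            w, h = (x1 - x0) / 3, (y1 - y0) / 3
            for i, j in ((0, 0), (2, 0), (0, 2), (2, 2)):   # corner subsurfaces
                nxt.append((x0 + i * w, y0 + j * h,
                            x0 + (i + 1) * w, y0 + (j + 1) * h, z + lift))
        cells = nxt
    return cells
```

Each round multiplies the cell count by 4 while shrinking each cell to a ninth of its area, which is why the geometry so quickly becomes “dust.”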

**Step Four – Irregular Surface Divisions**

It was pretty easy in our 2D version to use our random number generator to produce values for shattering our line. It is much much MUCH more complicated in this 3D example, as there isn’t any kind of simple component for irregularly dividing surfaces. Furthermore, at each recursion we want to assign different random values to each surface, so that they each act independently of the others. This will require careful structuring of data. In short, instead of our simple Surface => Divide Domain2 => Isotrim routine, we are replacing it with spaghetti salad. :0

This will not be easy, but don’t panic. I will try and explain. OK, maybe you can panic now and just download the completed script at the end of this post, but if you want to walk through it, I’ll do my best.

We’ll start by dividing our surface into 4 sections using the standard Isotrim before the looping starts. I am creating the surface in a bit of an awkward way, exploding the curve, then using the first and third segments of my rectangle to create my surface using the “Edge Surface” component.

You could use boundary surface at the beginning and it will work at first, but to increase the script’s flexibility for running the Cantor set on *multiple irregular polygons*, which I will do at the very end, you need to construct your surface in a way that will produce what is called an “untrimmed surface”. The boundary surface component creates a “trimmed surface” which can cause problems in some instances. I’m only telling you this because I was hitting my head against the desk for several hours trying to figure out why my script wasn’t working with *multiple irregular shapes* until I stumbled upon a solution to the problem.

OK, moving on. You can use your own rectangle for now, but I am using just one 10 x 12 unit rectangle for this example. Once the four initial subsurfaces pass into the loop, you need to make sure they are *grafted*, each into its own branch, so that each subsurface can be treated independently and get its own random number set. Next, we use Deconstruct Domain2 (not Divide Domain2) to get the “U” and “V” values for each surface. U in this case corresponds to the Y axis and V to the X, but this has to do with how I created my surface, not with X/Y coordinates at all. Rotate the shape and you will see the U and V values remain the same regardless of the orientation of the rectangle.

The Deconstruct Domain2 component gives a U0 and a U1, as well as a V0 and V1 value, for each surface. These can be seen as the start and end values of the domains, *relative to the surface*. I then want to create some new U and V values, two to be precise, at random values *between* each U0/U1 and V0/V1 pairing. This is similar to how we created the random values in the 2D Cantor set. First, we find the span of each pair by subtracting the start value from the end value. This value is then multiplied by one of a set of random numbers. You need as many random numbers as you have items, and the random numbers then need to be grafted to match the data structure of the surfaces. I used a lot of panels here to show what is going on.

In the next part of this step, we are going to collate our numbers and construct new domains corresponding to each of our individual subsurfaces.

Below is the top half of this construct, just for the U values. We use the “Merge” component to merge first the U start value (U0), then the location of the 1st cut (U0 + random number), then the location of the 2nd cut (U1 – random number), and finally the U end value (U1). This creates a small sublist corresponding to each subsurface from the previous part of this step. While you won’t see the surface divisions yet, hopefully you can see how the values in the panel correspond to the U divisions we are looking for, shown in the image to the left.

These sublists now just need to be converted to domains. To do this, use Shift List, followed by Construct Domain, to get a domain spanning between each value in our list, and then cull the last item using Cull Index, since this is “junk” that we don’t need (the domain between the last value and the first value). To get the right index I used a formula, but it might be safe to just cull item 3.

Once this is set up, do the same for the V values, here shown without the panels.

Lastly, we need to do a bit more gymnastics to weave, so to speak, the two linear sets of domains together into one squared domain. If we simply plug the values into the Construct Domain2 component, however, we will not get what we are looking for, since, as you will notice from the last step, we had 3 domains for each subsurface (in this case 12 domains total). This is not enough, and will only split the surface into 3 subsurfaces, one for each domain. To solve this, we need to duplicate our list of domains 3 times using the “Duplicate Data” component (which will repeat each data item 3 times, but only in its own sublist), and then use “Partition List” to get the three duplicates into their own separate lists. Then we can construct our squared domain with “Construct Domain2”.

Finally, although not altogether obvious, we need to use “Trim Tree” to get rid of the outermost branch without flattening our data all the way. In the end, we want just four sublists to correspond to our original four subsurfaces. Once this is done, plug into the Isotrim component to (hopefully!) get the surface division to work.
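All of the merging, shifting, culling, duplicating, and partitioning above boils down to something like this Python sketch for a single subsurface (names and the row-major output order are my own assumptions; the Grasshopper version does the same work per branch with data tree components):

```python
def irregular_subdomains(u0, u1, v0, v1, ru, rv):
    """ru and rv are random fractions (roughly 0.15-0.49) placing two
    mirrored interior cuts in each direction. Returns the 9 (u, v)
    domain pairs that Isotrim would use, in row-major order."""
    ucuts = [u0, u0 + ru * (u1 - u0), u1 - ru * (u1 - u0), u1]   # Merge
    vcuts = [v0, v0 + rv * (v1 - v0), v1 - rv * (v1 - v0), v1]
    udoms = list(zip(ucuts, ucuts[1:]))   # Shift List + Construct Domain, junk culled
    vdoms = list(zip(vcuts, vcuts[1:]))
    # Pair every U domain with every V domain
    # (Duplicate Data + Partition List + Construct Domain2)
    return [(u, v) for u in udoms for v in vdoms]
```

Three U domains paired with three V domains gives the 3×3 = 9 irregular subsurfaces per parent surface.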

**Step Five – Test Looping and Make Additional Modifications as Desired**

So now that the hard part is behind us, we can carefully increase our number of iterations, and if that is working we can modify the script and adjust parameters to get it to behave more like what we’ve envisioned.

This particular script doesn’t seem to bring much after about 4 loops…except system crashes. After looking at its behavior, I decided I didn’t like the really tiny pieces getting as much vertical extrusion as the bigger pieces. I decided that adding a component of each shape’s size to the move and extrusion height equations might help.

So with this minor modification, the results are a bit different.

**A few Variations**

If all is working well, you can input multiple outlines at once and it will perform the algorithm faithfully. It *should* work with any four-sided closed polygon, although you may need to “flip” the direction of the line in some cases if you are getting unexpected results. The image above is of a 4×4 starting grid.

And this is from an irregular field of polygons I drew. Each polygon has four sides.

Once rendered, it looks a little like the Death Star surface…

OK, well, if you have trouble figuring this out, click here to download the GH file.

In example 4.1 I mentioned a custom VB component I had used to analyze the flow of water across a surface. I recently tried to recreate this using the Anemone looping components, to use with meshes (for various reasons), and it was actually very easy to do. The logic is similar in some ways to example 8.5, which I used to find a path through the landscape, but in some ways this example is simpler.

I will be using meshes this time instead of surfaces, partly because I haven’t talked about them much, but meshes do have some advantages (and disadvantages) over surfaces, which I will not get into here. To create this particular mesh, I imported topographic data from SRTM using Elk, and then used the point output from Elk to create a Delaunay mesh.

**Step One – Populate Geometry and set up a loop**

To get started, we will use the **Populate Geometry** component. Initially I will use only one point, but we will scale this up to around 2000 points by the end. What’s important for the loop to work properly at the end is that the output of Populate Geometry be GRAFTED. While not 100% necessary, you should also simplify the output, otherwise you will get messy indexing at the end.

While we are at it, we will set up a basic loop using Anemone as explained in prior examples.

**Step Two – Find curve for possible movement directions**

The logic of this loop is that after each round, we want to find out what direction water would flow if it were at a specific point on the site. Using a similar logic to the last example, we will intersect two shapes to find a curve of possible directions of movement. We will then identify one point on this curve as the actual direction of movement.

To accomplish this, I first draw a mesh sphere with a radius equal to a “Step Size”. Decreasing the step size will increase accuracy at the expense of looping time. You will need to find an appropriate step size based on the overall size of the landscape you are analyzing. In this case I have a fairly large area (around 8km x 8km) so I am using an 80m step size. I find around 1% of the overall dimensions of the landscape usually gives scale appropriate results. This can be changed later if you want more accuracy. If you are testing this on a smaller model, you will need to adjust appropriately.

I then add a Mesh | Mesh intersection component, which outputs a curve of where the water could possibly go if it flows 80 m in any direction. This is basically a circle sketched on the surface of the mesh.

**Step Three – Find lowest point on curve to determine actual water movement direction**

So you probably already know where the water will go, but you might not know how to get there. If there is any doubt: water is an agent, a very dumb agent, but it has one goal, to follow gravity to the ocean as fast as possible. So it will always flow down. Well, there are minor exceptions if you take forces like momentum, cohesion, and friction into account, but we won’t do that today.

To find this point, we need to know the “lowest point” on the curve we just drew in the last step. There is no such component in Grasshopper, but we can use “Curve Closest Point” together with a point at sea level, or at the center of the earth, as a comparison point.

In this case, I deconstruct my sphere’s center point and reconstruct it with a “Z” value equal to zero. If you are working close to sea level (in this case I am about 1000 m up, so zero works fine), it may make sense to set the “Z” value with “Construct Point” to a negative number, like -1000 m (or the center of the earth if you like).

I then use this point together with the Intersect curve from the last step to find the “lowest point.” This is where the water will head next.
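If you want to see the logic outside of Grasshopper, here is a minimal Python sketch of the “closest point to a deep reference point” trick. The terrain, the sample count, and the function names are my own stand-ins, not GH components:

```python
import math

def lowest_point_on_curve(curve_points, reference):
    """Mimic the 'Curve Closest Point' trick: the point on the curve
    closest to a deep reference point (far below the terrain) is
    effectively the lowest point on the curve."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(curve_points, key=lambda p: dist2(p, reference))

# Hypothetical intersection curve: a circle of radius 80 m draped
# on a plane sloping down towards +x (z = 100 - 0.1 * x).
samples = [(80 * math.cos(t), 80 * math.sin(t), 100 - 8 * math.cos(t))
           for t in (2 * math.pi * i / 360 for i in range(360))]

reference = (0.0, 0.0, -1000.0)   # stand-in for "the center of the earth"
low = lowest_point_on_curve(samples, reference)
print(low)  # the downhill-most sample, at roughly (80, 0, 92)
```

With the reference point far enough below, “closest to the reference” and “lowest z” pick out the same point, which is why the sea-level (or -1000 m) comparison point works.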

**Step Four – Finish the loop and draw a connecting line**

So this is an image of the whole loop. I use the “Insert Item” component to reinsert the new “Lowest Point” into the list, which after 0 rounds is 1 item long. This is why I use the “Counter + 1” expression to determine the insertion index. Once the item is added, I can plug this list into the end of my loop. You may want to use the “Simplify” toggle to keep your list clean. Pay attention to where I placed these in the image. Last, I add an “Interpolate Curve” component at the end.

Once the loop is complete, you want to increase the looping counter gradually to see if everything is working. Run 1, then 5, then 10 rounds, etc. to get started. While it doesn’t look impressive yet, if you get a series of points and a connecting line going downhill and following the valleys, everything should be fine once you scale up!
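To make the whole loop concrete, here is a hedged Python sketch of the same idea: sample a circle of points around the current position, move to the lowest one, and stop once a resting point is reached. The heightfield is an invented stand-in for the real mesh, not the author’s site:

```python
import math

def height(x, y):
    # invented stand-in terrain: a shallow bowl centred on the origin
    return 0.01 * (x * x + y * y)

def flow_path(start, step=80.0, rounds=20, samples=72):
    """Sketch of the Anemone loop: each round, sample a circle of
    radius `step` around the current point and move to its lowest point."""
    path = [(start[0], start[1], height(*start))]
    x, y = start
    for _ in range(rounds):
        candidates = [(x + step * math.cos(t), y + step * math.sin(t))
                      for t in (2 * math.pi * i / samples for i in range(samples))]
        nx, ny = min(candidates, key=lambda p: height(*p))
        if height(nx, ny) >= height(x, y):
            break  # resting point: no lower neighbour within one step
        x, y = nx, ny
        path.append((x, y, height(x, y)))
    return path

path = flow_path((400.0, 300.0))
print(len(path), path[-1])
```

Each successive point is strictly lower than the last, and the path stalls near the bottom of the bowl, the same “water accumulating into resting points” behavior described above.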

**Step Five – Scale Up!**

So go big or go home, the boss says? Well, all we need to do is add a few more points to our “Populate Geometry” and we’ll have a nice stormwater runoff analysis. First try 3-5 points, not too big. If this isn’t working, maybe you forgot to graft? If it’s working, scale up quickly. Here I have 200 points, run over 20 rounds.

Looking at it from above, you will notice that even after a couple of rounds, the initially random cloud of points will find a structure. By 20 rounds, almost all the water has accumulated into resting points. This is where the script stops really working. We know the water actually keeps flowing, but in this case, our data isn’t precise enough to account for the horizontal transport of water in rivers, where water might only drop a couple of meters over the course of many many kilometers. But it IS good at showing how water moves on steeper sites.

You can speculate about where the rivers are, however, based on your data. If you have a series of still clusters or beads, bets are good that there is a river connecting them. Above I have the GH output, and below I sketched in the river lines in Photoshop.

Anyways, from this basic analysis, all sorts of further analyses can be done. More on that soon…


I wanted to take the time to show an example of using Grasshopper to work with data imported from a source outside of Rhino, such as a spreadsheet developed in Excel. Importing data from outside sources is also fundamental to more advanced interactions, such as having the program communicate with remote machines or sensors.

In this example, I wanted to make a diagram of a river’s watershed, abstracting the spatial relationship of the river’s tributaries and showing how much each tributary contributes to the overall river’s flow. The technical name for this type of diagram is a “Sankey Diagram.” I actually drew one of these initially in Illustrator, which is superior to Rhino/Grasshopper in many ways for representation, but it was a very time-consuming process, and if I wanted to create a similar diagram for another watershed, I would have to start from scratch. Another drawback of drawing this in Illustrator is that if a datapoint or datapoints change, it can be time-consuming to update the drawing. It is also a static representation, and as we all know, a river’s flow is dynamic and changing. Having a representation or diagram that can automatically update with changing values, in this case the flow in the individual tributaries, can be a very powerful form of representation.

There are a number of tools and plugins that can deal with importing data into Grasshopper, but for this example I will use one of the program’s native tools, and then draw some geometry based on the dataset.

**Preparation – Collect and Organize Data**

The first step is probably the most time-consuming: actually collecting data that could be useful for your diagram. In this case, I researched on Wikipedia all of the tributaries of the River Leine in central Germany, which happens to flow right behind my house. I was able to get the length and watershed area for each tributary, and measured at what river kilometer each tributary branched. Further, I noted whether it was a left- or a right-branching tributary. I was able to get the average discharge of some of the branches, but not all, so for the purposes of this example I decided to estimate discharge based on the area of collection.

I compiled all of this data into an Excel file. There are some plugins that can import Excel tables (e.g. Howl + Firefly), but maybe a simpler way is to export your Excel file as a *.csv file (comma-separated values), and then to save this file again using a text editor as a *.txt file.

If you would like to follow along in this example, you can copy the following and save it as a *.txt file

R,Grosse Beeke ,12,26,5,30

R,Juersenbach,18.9,26,6,49

R,Auter,24,26,10,113

L,Totes Moor,59.7,26,8,56

L,Westaue,72.2,35,38,600

L,Foesse,94.5,53,8,20

L,Ihme,99.5,48,16,110

R,Innerste,121.5,58,99.7,1264

L,Gestorfer Beeke,125.4,58,8,13

R,Roessingbach,125.5,58,14,36.3

L,Haller,132.8,70,20,124

L,Saale,138.5,73,25,202

R,Despe,142.1,74,12,47

L,Glene,153.1,74,11.7,40

R,Warnebach,156,74,8,27

L,Wispe,161.7,74,22,74

R,Gande,175.6,74,41,114

R,Aue,177.7,103,23,113

L,Ilme,186.5,105,32.6,393

L,Boelle ,191.9,110,10,21

R,Rhume,192.8,116,48,1193

L,Moore,198,118,11,43

R,Beverbach,206.9,120,14,35

L,Espolde,207.9,126,16.1,65

R,Rodebach,208.1,130,8,20

R,Weende,208.9,135,9.2,18.6

L,Harste,209.9,138,8.6,29

L,Grone,211.6,140,6,26

R,Lutter,211.7,144,8.1,38

L,Rase,219.3,150,9,23.8

R,Garte,219.4,152,23,87.2

R,Wendebach,223.4,162,16.2,36

L,Dramme,225,161,14.4,53

L,Molle,230,182,7,10

R,Schleierbach,232.4,191,6,15

R,Rustebach,236.8,210,8,13

L,Steinsbach,238.1,215,5,15

L,Lutter,244.7,233,7,21

R,Beber,245.4,237,7,30

L,Geislede,249.6,260,19,52

R,Steinbach,255.3,276,6,14

R,Etzelsbach,258.4,293,5,13

R,Liene,264.2,337,7,18

What you’ll notice is that each line has a series of values, separated by commas, which correspond to the individual “cells” in Excel. Once this is done, you can move on to the next step.

**Step One – Import Data**

To import the data, we will use three components. The first is the “File Path” parameter, which feeds into the “Read File” component, in this case set to “Per Line.” Each line gets its own index in GH. Then we use the “Split Text” component, with a simple comma symbol as the second input, which further structures our data by splitting each line at each comma. I put panels behind the components for reference.
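As a plain-code illustration of what “Read File (Per Line)” plus “Split Text” are doing, here is a small Python sketch using a few lines of the tributary file above (the `raw` string stands in for the *.txt file on disk):

```python
import io

# A few lines copied from the tributary file above.
raw = """R,Grosse Beeke ,12,26,5,30
L,Westaue,72.2,35,38,600
R,Innerste,121.5,58,99.7,1264"""

# 'Read File' set to Per Line gives one item per line; 'Split Text'
# with "," then breaks each line into its Excel "cells".
rows = [line.split(",") for line in io.StringIO(raw)]
rows = [[field.strip() for field in row] for row in rows]

for row in rows:
    print(row)
```

Each row ends up as a list of six text fields, mirroring the data tree you get in GH, one branch per line, one item per comma-separated value.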

**Step Two – Sort Data**

What you do next is entirely situational, but before you start drawing geometry, you may need to reorganize and/or restructure your data so it will be useful to you. In this case, there is not too much restructuring necessary; I just wanted to split my dataset into two subsets based on whether the tributaries head left or right, since we will be drawing those differently. Here, I list the first item (index 0), and then dispatch the list based on whether the data is in a Left Branch or a Right Branch. The Dispatch component needs a true/false value, so to get around this problem, I simply replaced my R’s and L’s with True’s and False’s. In this case I also needed to remove empty branches using the “Remove Branch” component.
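The dispatch step can be sketched in Python like this; the rows are a hand-copied subset of the file, and the True/False pattern plays the role of the R/L replacement described above:

```python
rows = [
    ["R", "Grosse Beeke", "12", "26", "5", "30"],
    ["L", "Westaue", "72.2", "35", "38", "600"],
    ["R", "Innerste", "121.5", "58", "99.7", "1264"],
    ["L", "Ihme", "99.5", "48", "16", "110"],
]

# 'List Item 0' pulls the branch side; replacing R/L with True/False
# gives the Dispatch component its boolean pattern.
pattern = [row[0] == "R" for row in rows]
right = [row for row, is_r in zip(rows, pattern) if is_r]
left = [row for row, is_r in zip(rows, pattern) if not is_r]

print([r[1] for r in right])  # ['Grosse Beeke', 'Innerste']
print([r[1] for r in left])   # ['Westaue', 'Ihme']
```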

The general idea, however, is you may need to play around with your inputs and/or data structure to get something which is most helpful to you.

**Step Three – Draw Basic Skeleton**

Before we get too crazy, it is useful to draw only the basic skeleton of our system based on our data. Basically you will be using a lot of “List Item” components to call out your data, and then drawing geometry in GH based on this. I recommend grouping your List Items and labelling them to help you keep track of what is what, otherwise you will soon be left with a confusing mass of spaghetti. Well, the spaghetti might be inevitable, but labelling always helps when you need to make some changes!

**Step Four – First Refinements**

Once we have our basic skeleton, it’s time to start gradually refining the process. In this case, I eventually want to show each tributary with a varying thickness based on how much water it contributes to the river system. As mentioned previously, this will be a factor of “watershed area” as a rough approximation of water volume contributed. I first list the watershed area for each tributary, divide by a factor, and then want to progressively move the tributaries towards the right (I know, they are left tributaries, but left in the sense of a boat traveling downstream… if you are traveling upstream, which we are in this case, they would be on your right. Hope I’m not confusing you. Think Left Bank in Paris if that helps).

The “Mass Addition” component comes in very helpful here, both for calculating the total area of all the branches, and also for progressively telling you how they add up. One small thing: in order to get the branches to move correctly, we need to subtract the step values from the total value so the branches will get the proper “X” vector.
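Here is a small Python sketch of that arithmetic: the areas and scaling factor are made-up illustrative values, `accumulate` plays the role of Mass Addition’s partial results, and subtracting each partial sum from the total gives the X offsets:

```python
from itertools import accumulate

# Hypothetical watershed areas (km²) for four tributaries, divided
# by a scaling factor to get drawn thicknesses.
areas = [600, 110, 124, 202]
factor = 100.0
widths = [a / factor for a in areas]

total = sum(widths)                    # Mass Addition: total result
steps = list(accumulate(widths))       # Mass Addition: partial results
x_offsets = [total - s for s in steps] # shift each branch towards the trunk

print(x_offsets)
```

The last branch ends up with a zero offset and each earlier branch is pushed over by the combined thickness of everything after it, which is exactly the stacking a Sankey diagram needs.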

**Step Five, and so on…. Further Refinements…**

I won’t explain everything that’s going on here… These are all simple operations to improve the graphic quality of our lines. I am drawing a few arcs, but also placing text to label my diagram.

Once we get close to what we want for the Left branches, we can copy and paste for the right branches. Notice we have to change a few of the vectors (positive to negative) to get the geometry to move and draw in the correct direction.

How far you want to go is up to you. Here I gave a line thickness using the “Sweep” command, sized the text proportionally based on tributary size (with a minimum text size for the smallest streams), and also made the arc radius proportional to the branch thickness. This is all pretty simple to do, but the GH script can get a bit messy.

**Further Steps – Using Script with a New Data Set and Changing Values**

Once you have a working process setup, you can plug in new datasets, as long as they are structured the same as the dataset you used to create your script, to do another graphic diagram. Here I researched the same values for the Ems River (on the border between Germany and Netherlands). The research took hours. Plugging the new values into GH and generating this diagram took less than five seconds.

You can also update values, and the diagram will change. Say you wanted to compare a river’s discharge at different times of the year, or even have a diagram that updated based on real time sensors. This is possible, and when the file GH is reading is re-saved, the diagram updates automatically, even without you doing anything in GH. Here I randomly changed some of the values of the tributaries of the Aller River in Germany (of which the Leine, which we previously diagrammed, is the largest tributary) and you can see how the diagram updates in real time.

Anyways, this is just meant as an introduction to the topic, but if you anticipate doing a drawing that you may need to replicate again in the future, are dealing with changing data values, or if you are simply toying around with the representation of a large dataset, a scripted environment may be a good way to approach this task.


It’s been a while since I’ve posted any new content, but I decided to finally add a bit more about agents. This is actually something I started working on a while ago, and which I alluded to in Example 8.5, but it is a method to analyze a topographical surface to find potential corridors of movement, and also areas of inaccessibility.

The basic premise is fairly simple. Anyone who has spent any amount of time studying site design will know that you really shouldn’t have any paths steeper than 1:20. Sure, you can have paths at 1:12 with landings every 10 meters, but that just looks ugly. The reason for this 1:20 rule is to make paths that are comfortable for people in wheelchairs and older people. But these paths are also more comfortable for everyone else as well!

Based on this regulation, I decided to create a script that would send a swarm of agents–old ladies and people in wheelchairs–across a landscape, and from this analysis, a designer could then perhaps better understand potential access and barrier points.

The script will follow two rules.

1 – Agents are limited in each “step” to movement uphill and/or downhill that does not exceed a specific gradient, in this case 1:20 (although this can be changed). This is again very similar to Example 8.5 and will use some of the same techniques.

2 – Agents will tend to move in the same direction as their current direction. Nobody likes switchbacks. Unlike Example 8.5, there is no “destination” per se; the agents will just keep moving in one direction unless there are no good options in that direction, in which case they will turn to a new general direction.

In addition to analyzing sites for barrier-free movement, this logic may be useful for modeling ecosystems as well. Most animals, like most people, also don’t like super steep slopes, and will follow lower gradients when possible. Sure, it IS possible to go straight uphill, but in the interest of conserving energy, in the long term lower gradients will be followed. With a bit more scientific rigor, this method of modeling may show potential migration corridors in larger landscapes, and also pinch points, where potential predators might like to hang out! And places that are inaccessible to most animals might just be a good place for an animal without teeth to carve out a new ecological niche (mountain goats?). So enough of that, on to the script.

**Step One**

First, you will need a surface. In this case, I used Elk to create an 8.6 x 8.6 km area of an interesting landscape southeast of Alfeld, Germany. Any landscape with some topographical variation will do. I then use the “Populate Geometry” component to put some starting agents on the surface. I will keep it low for now, just two, but can increase this later.

The second important thing here is to set up a “Step Size”, the distance the agents will cover in each round. Since I want the script to work for smaller and larger sites, I use a bit of math to make the step size proportional to the overall surface dimensions. Note that for clarity I use a rather large step size at first, but I will reduce this later to get more accurate results.

**Step Two**

At each random point, I draw a circle with a radius equal to the “Step Size.” I then move this circle once up and once down, based on the maximum amount an agent may move either up or down in each step. This is proportional to the gradient, in this case 1:20. My step in this case is 260 m (this will later be reduced for more accurate results). That means with the 1:20 gradient I may not move up more than 13 m, or down more than 13 m. A loft is drawn between the minimum and maximum circles, and this is then intersected (BREP | BREP Intersection component) with the surface to generate a curve or set of curves of possible vectors of movement. This is again exactly like Example 8.5, which you can refer to for additional explanation.
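The gradient constraint can be sketched numerically. This Python stand-in uses an invented planar slope instead of the real surface, and simply tests which headings stay within the 13 m band rather than doing a true BREP intersection:

```python
import math

def height(x, y):
    # invented stand-in hillside: a plane rising at 1:10 towards +x
    return x / 10.0

def feasible_headings(x, y, step=260.0, gradient=1.0 / 20.0, samples=360):
    """Test which headings keep an agent within the allowed gradient,
    mimicking the loft-and-intersect construction numerically."""
    dz_max = step * gradient          # a 260 m step at 1:20 = 13 m up or down
    h0 = height(x, y)
    ok = []
    for i in range(samples):
        t = 2 * math.pi * i / samples
        h1 = height(x + step * math.cos(t), y + step * math.sin(t))
        if abs(h1 - h0) <= dz_max:
            ok.append(math.degrees(t))
    return dz_max, ok

dz_max, headings = feasible_headings(0.0, 0.0)
print(dz_max)         # 13.0
print(len(headings))  # only near-contour headings survive on this slope
```

On a 1:10 slope only headings roughly along the contour pass the test, which is why the intersection produces a curve (or pair of curves) rather than a full circle.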

Note that the top right agent has only one curve of possible movement, while the bottom right agent has two. Once we start looping, a point along the curve in the current direction of movement will be privileged, but for now, the agent at rest could venture off in either direction.

**Step Three**

Here I use the “List Item” component to give me only the first potential movement curve for each agent point. I then use “Curve Closest Point” to find the closest point on this curve–the agent’s destination–relative to the agent’s current position. I then add this new point into the list just after the current point.

Please pay attention to the data structuring, that is, the grafting and simplification. The goal is to get the initial point as point “0” on your list, while the second point becomes point “1.”

For reference, up to this point the overall script should look like the image below.

**Step Four**

Now we are going to go big and make the loop all at once! It looks like a lot but it is basically just repeating much of what we did before.

First, we use “List Item” along with the round counter to extract the last two points from our list. Right now the list only has two points for each agent, but this will quickly grow!

We then do exactly like step Two above, drawing circles at the current agent position (Point 1 in this case) with a radius based on the step size, and then finding curves of potential movement based on the maximum allowable gradient.

Instead of using “List Item” to select the first of these potential movement curves, we are now going to do it a little differently. We first find the current vector of movement, based on the vector between the next-to-last point (Point 0) and the last point (Point 1). We then draw a “tentative” movement point, in this case at half the total movement, and then run a “Curve Closest Point” test between this “tentative” point and the potential movement curves.

There could be one, two, three, or even more potential movement curves… but there is always at least one. If all else fails, the agent will go back to where it came from. Anyways, we then use one more “Closest Point” component to find which of these one, two, or 3+ closest points on the individual curves is the closest of the whole set. This is the next destination. If it doesn’t make sense, just copy EXACTLY what I did above and it should work.
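The selection logic of this step, reduced to points, looks something like this in Python (the candidate points are invented; in GH they would come from “Curve Closest Point” on each intersection curve):

```python
def closest_point(candidates, target):
    """Among the closest points on each potential movement curve,
    pick the one nearest the 'tentative' point ahead of the agent."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(candidates, key=lambda p: dist2(p, target))

prev, cur = (0.0, 0.0), (10.0, 0.0)
# Tentative point: half a step further along the current direction.
tentative = (cur[0] + 0.5 * (cur[0] - prev[0]),
             cur[1] + 0.5 * (cur[1] - prev[1]))

# Hypothetical closest points, one per intersection curve:
candidates = [(12.0, 8.0), (13.0, -1.0), (4.0, -9.0)]
print(closest_point(candidates, tentative))  # (13.0, -1.0)
```

Because the candidate nearest the tentative point wins, the agent favors continuing in its current direction, and only turns when nothing ahead is feasible.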

I then merge this new agent current position into the ongoing list of agent positions.

**Step Five – Running the Loop**

Once this hard work is done, it’s smooth sailing–hopefully. In the image above I am labelling the points with their index numbers for clarity, but you can start to see how the agents are behaving. If it is working, slowly increase the number of iterations; now would also be a good time to go back to the start and reduce the step size in the interest of more accurate results.

**Step Six – Continue Looping and Play with Representation of Agents.**

You may also want to increase the number of starting agents by adding a few points to the initial “PopGeo” component. If all is well, it should be able to handle a few more. Lastly, you may want to make the agent trails look a little better. You can add an “Interpolate” curve or a “Nurbs Curve” between the points in the list to track the agents without the red “X’s”. You may also consider, AFTER the loop is finished, adding a “Dash” component. Be careful with this though, and make sure to disable/delete it if you decide you want to run a few more rounds!

There are many other representation options. In the first image of this post, the agent paths are colored with a gradient based on how far-ranging they are. Agents that are confined by topography to their local neighborhood are orangeish, while agents that wander far from home are colored green. This wasn’t too hard to figure out, but I’ll leave it for you to work out on your own, if you’d like.

By now, hopefully some patterns are starting to emerge. If this were a park landscape, you may start to see where pedestrian paths would be feasible, or where they could be difficult to construct. If a particular point needs to be accessed, you can also see potential ways to get there with accessible paths.

If this were an ecosystem simulation, you’ll start to see where would be a good place to hang out if you were a mountain lion looking for passing livestock, and might even see where the mountain goats would hang out. Also note that the edge boundaries have a huge effect on agent behavior towards the edges. This is a common problem with computer simulations, since the real world doesn’t have such hard boundaries, but you could imagine, if a fence were erected around this landscape to create a protection area or such, what the implications might be.

**Optional Step**

The script can now be fine-tuned / altered / improved in any number of ways. Here, as an example, a bit of randomness is added to the path of the agents by rotating the vector of “Tentative” movement. This frees up the agents to wander a bit more, but they still will be constrained by the gradient rules.

**Comparison with the Actual Landscape Condition**

Just out of curiosity, I decided to compare what I learned about the landscape from the agent modeling to the actual landscape condition.

The image to the left is taken from OpenStreetMap; the images to the right are the versions with agents strictly going to the closest point in the current direction (above) and the more wandering agents (below).

I’ll let you draw your own conclusions, but remember, topography isn’t the only thing shaping this landscape. Also, some of the information towards the edges is skewed because of the boundary problem discussed earlier.

Anyways, hope this helps as a good start to seeing how agent modeling can be useful in landscape surface analysis and design! As a last image, I just wanted to show a quick test I did of the same agents walking through the Iberian Peninsula. A more careful analysis could start to yield some insight into historical routes of movement through the Peninsula, which in turn informed Spain’s historical development.


I wasn’t sure where to put this example exactly, since it came as a follow-up to Example 8.4, but the general scripting is less complex, so I decided to put it a bit earlier. The general problem and solution have many applications beyond topography as well, but for landscape architects, maybe the most ready application would be in the creation of landforms. It could also be used to generate generalized roof profiles for buildings in some cases.

If you have already looked at Example 8.4, the recursive offsetting of base curves to create a topography, you may have tried a similar process going inward. Offsetting towards the exterior sometimes, though rarely, causes problems with changes in topology (a mathematical term describing the form of a shape), but offsetting towards the inside is often a very different matter. If you are offsetting contour lines for a landform, for example, which is somewhat irregular in form, you will probably get to a point eventually where the landform “splits” into separate contour lines, or separate “peaks.” If you have an automated process in Grasshopper going towards the inside, similar to Example 8.4, this can create problems.

Fortunately, there is a fairly simple solution for describing the topology of a shape through what is called the “medial axis,” and using this description in turn to create a landform out of any arbitrary closed shape or closed set of shapes. The logic of this script, using Voronoi cells to find the “medial axis,” is explained on the Space Symmetry Syntax blog by Daniel Piker, but here the definition is reworked to work with the latest versions of Grasshopper, and also extended a bit at the end. This definition is designed to work with any number of input curves, but you will have to pay attention to the data structure, particularly the “Grafted” elements throughout, for it to work properly.

**Step One – Use Voronoi Cells to describe topology of shape**

The script starts here with three arbitrary curves, in this case boomerangs. These curves are divided into a regular series of points, and these division points in turn are used to create a Voronoi diagram. If you look at the diagram, the boundaries between the cells correspond closely to the elements that can be described as the “ridges” and “hips” of our landform. You will have to increase the number of curve division points to make this line increasingly precise, while not overwhelming your computer. Finally, we use the “Trim Region” command to trim the Voronoi cells, and we will only go forward with the pieces of geometry that are inside our region curves.

**Step Two – Extract Medial Axis and “Veins” from Voronoi Cells**

Once we have the cells inside our shapes, we can explode the cells. We now divide the remaining geometry into two classes. The pieces of geometry which touch the edge curve always run perpendicular to the slope of our landforms, and we will call these “veins” (like the veins on a leaf) going forward. The pieces which do not touch the edges comprise the topological skeleton of our shape. To separate these, we will use the “Collision One” component to return a true/false value for our shapes, testing whether they touch the outside edge curve. These two sets of geometry are then dispatched.

Notice also what I did with the data structure. I used the “Trim Tree” component to remove all levels of data structure except for the last one. This is because I don’t care what cell the lines used to be associated with, but I still do care which of the three starting lines each line is associated with. If I flatten all the way, it will not work properly.

**Step Three – Move Topological skeleton vertically to define landform**

In the next steps, I will use the geometry I generated to develop a landform and a mesh. I can use either the medial axis to define this mesh, or I could use the veins. In the image above, I use the veins. In the image below, I use the Medial Axis.

The general principle in both is the same. The endpoints of each piece of geometry are extracted, and then moved vertically based on their distance from the edge curve. The amount of movement is scalable based on the desired overall slope. Once these points are moved, the lines can be redrawn.
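The vertical move can be sketched as a one-liner: each endpoint rises by its distance to the edge curve times a slope factor. The points, distances, and the 1:2 slope here are arbitrary stand-ins for the measured geometry:

```python
def lift(points, dist_to_edge, slope):
    """Move each skeleton/vein endpoint up by its distance to the
    edge curve, scaled by the desired overall slope."""
    return [(x, y, d * slope) for (x, y), d in zip(points, dist_to_edge)]

pts = [(0.0, 0.0), (5.0, 2.0), (9.0, 4.0)]
dists = [0.0, 6.0, 12.0]          # hypothetical distances from the edge curve
lifted = lift(pts, dists, slope=0.5)
print(lifted)                     # edge stays at z=0, inner points rise at 1:2
```

Because the height is a pure function of distance from the edge, the edge of the shape always lands at zero and the medial axis becomes the ridge line, regardless of how irregular the input curve is.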

**Step Four – Create Mesh and Contour Lines**

Here I am using the endpoints of each of the “veins” to define a mesh, from which I will derive contour lines.

**Step Five – Optional Lofts for the Veins**

You could also draw some geometry with the veins, but this is a totally optional step.

**Variations**

This definition *should* work with any number of closed shapes of any size and form. You will only need to adjust the number of initial curve divisions to get results that are more or less precise. You can also adjust the height scaling factor to get various landform slopes. Below are just two examples of the possibilities: one based on a complex, curvilinear form, and one based on simpler triangular shapes. Note it works well in both cases!

Cellular Automata are used in many applications to understand and simplify complex natural phenomena. Sand dunes, braided river networks, and ecosystems are just a few of the things that authors have attempted to translate into simple rules which in the end generate complex results.

This script is based on a well-known cellular automaton known as the “Forest-Fire Model,” and can be used to model patterns of disturbance in ecosystems. While this could be used to model a fire, the results seemed too slow-moving to be a true fire… that is, new growth sprouted up too quickly in the wake of the fire. So to me it seemed more like a slowly but relentlessly spreading disease or parasite, which can sometimes devastate natural systems even worse than a fire. By adjusting parameters such as growth rate and the chance of spontaneous outbreak, lessons can be learned about how real ecosystems might function.

**Step One – Initial Setup**

The setup here will be similar to the previous script, except this time, instead of using a regular grid of cells, we will use a random population of points, scaled based on an average “Area per Tree.” I did a quick measurement of the spacing of trees in a mature beech forest and determined that 300 square meters per tree is a reasonable figure. This is used to determine the geometry in “Cell Centers.”

Like the previous example, we also do a proximity test to determine the vectors along which the disease can spread. In this case, we limit the potential spread to 20 meters. You could use a higher number here later, which will impact how easily and how quickly the disease can spread. The “T” (topology) output of the “Proximity 2D” component goes into a data container for later use, but you can see in the image the topological relationships generated in the “L” output.

The last thing we do here is generate a list of random cell states. For this script, we will have three states: 0 (vacant/dead), 1 (alive), and 2 (infected/dying). We start with only 0’s and 1’s.

To see these results visually, we draw a circle at each center point and use the random number data to color the cell based on its state. Like in the last script, we will move the coloring process to after the loop once we create it in the next step.

**Step Two – Loop Procedures Three and Four**

Like the last script, the loop only recalculates a list of numbers called the “Cell State.” This can be 0, 1, or 2 as previously explained. Every time the loop runs, four basic operations will be performed to determine if the cell state changes, and what it changes to. The operations, in this order, are:

1 – Cells that are infected die. That is, *if* the cell state is equal to 2, it will now be reset to 0.

2 – Living cells that are in the “neighborhood” of a cell that was infected in the previous round become infected. That is, *if* the cell state is equal to 1 AND at least one of the cells in that cell’s neighborhood (determined by the T output of the Proximity component) was equal to 2 at the start of the round, then the cell becomes infected, going from 1 to 2.

3 – A new plant has a chance to sprout in each vacant cell. This is determined by comparing a random list of values to a probability test. If this chance is 5%, then in each round, about 5% of the vacant cells will randomly go from 0 to 1.

4 – Test for “spontaneous” outbreak. There is a chance that a living cell will spontaneously become infected, despite not being near an infected neighbor. In nature, spontaneous outbreaks of disease can be caused by the introduction of a foreign pathogen to a new environment, by mutation of a previously benign version of a disease, and by other causes, but these are, by nature, very rare. For our first example, we will use an infection probability of 0.02% to see what happens. In these rare cases of spontaneous infection, the cell goes from 1 to 2.
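The four procedures above can be sketched compactly in Python on a toy neighborhood graph. Note one simplification: in this sketch each cell changes at most once per round, whereas the GH version chains the expressions over the whole list:

```python
import random

EMPTY, ALIVE, INFECTED = 0, 1, 2

def step(states, neighbours, p_grow=0.05, p_outbreak=0.0002, rng=random):
    """One round of the four procedures, in the order listed above."""
    old = states[:]                       # neighbour test uses start-of-round states
    new = []
    for i, s in enumerate(states):
        if s == INFECTED:                 # 1 - infected cells die
            new.append(EMPTY)
        elif s == ALIVE and any(old[j] == INFECTED for j in neighbours[i]):
            new.append(INFECTED)          # 2 - infection spreads to neighbours
        elif s == EMPTY and rng.random() < p_grow:
            new.append(ALIVE)             # 3 - chance of a new plant sprouting
        elif s == ALIVE and rng.random() < p_outbreak:
            new.append(INFECTED)          # 4 - rare spontaneous outbreak
        else:
            new.append(s)
    return new

# Tiny hypothetical forest: a chain of five cells.
neighbours = [[1], [0, 2], [1, 3], [2, 4], [3]]
states = [ALIVE, ALIVE, INFECTED, ALIVE, EMPTY]
nxt = step(states, neighbours, p_grow=0.0, p_outbreak=0.0)
print(nxt)  # [1, 2, 0, 2, 0]
```

With the probabilities zeroed out you can watch the deterministic part alone: the infected center cell dies while infecting both living neighbors, which is exactly the creeping front the full simulation produces.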

How does this translate into code? Since there are no 2’s at the start, we will not worry about coding the first two procedures quite yet. We will focus on three and four, since they are set up in almost exactly the same way. The most important thing here is to generate and structure a random list of numbers. All in all, we will need many, Many, MANY random numbers. The precise number is the number of rounds we will be looping multiplied by the total number of objects or “Cells.” So if we are doing 200 rounds and have 2000 trees, that is a whopping 400,000 random values! Don’t worry, the computer can handle it. More importantly, we only want it to have access to 2000 (or whatever the number of “Cells” is) of those random values at a time. To get this, we use the “Partition List” component with the size of the partitions based on the “List Length,” or the number of cells (2000 in our example). We then use “Flip Matrix” and “List Item” so that in round 0 we get access to each item 0, in round 1 each item 1, etc. This was a lot of number gymnastics!! But if we get it to work with our first list, we simply copy and paste to get a second list that will work for Procedure 4. The results of this are in the image below.
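The number gymnastics can be sketched like this in Python; indexing into the big pool per round is equivalent to Partition List + Flip Matrix + List Item:

```python
import random

rounds, cells = 200, 2000
rng = random.Random(1)                 # one big pool of random values
values = [rng.random() for _ in range(rounds * cells)]

# 'Partition List' with partition size = number of cells gives one
# slice per round; 'Flip Matrix' + 'List Item' with the round counter
# then hands each cell exactly one value per round. Equivalent here:
def value_for(round_no, cell_no):
    partition = values[round_no * cells:(round_no + 1) * cells]
    return partition[cell_no]

print(len(values))  # 400000 values, but only 2000 visible per round
```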

Once we have these, we can script procedures 3 and 4 themselves. We again use if/then expressions as explained in Example 12.1. In this case the expression is “if(x>y,1,0)”, which translates into “if x is greater than y, then the result is equal to 1, or else it is equal to 0.” X will only be greater than Y 5% of the time, based on how we wrote this (see below). This is then compared to the existing list of values with the “Max” component.

Once this is done, the values go into the final test to see if a spontaneous outbreak will occur. This is scripted in exactly the same way.
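Procedure 4 follows the same pattern, just with a much lower probability and a jump from 1 to 2 (again a sketch, with an assumed threshold standing in for the 0.02% chance):

```python
def spontaneous_outbreak(current, rnd):
    """Procedure 4: a living tree (1) spontaneously becomes infected (2)
    on a very rare random hit; empty cells (0) are unaffected."""
    if current == 1 and rnd > 0.9998:  # roughly the 0.02% chance
        return 2
    return current
```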

Once we let this run for a couple of rounds, the empty cells slowly start filling up with living trees (zeros becoming 1s). This could go on forever if we didn’t have the disease procedure 4. Unfortunately for our forest, after a few rounds one of the random values finally passes the spontaneous-outbreak test, and a tree becomes infected!! It is now time to write a procedure for what happens when disease breaks out.

**Step Three ****– Refining the Loop / Procedures 1 and 2**

To see the fate and destruction of our once-flourishing forest, we will script two procedures. The first is very simple; the second a bit more complex. The first is an if/then expression, “if (x=2,0,x)”. This translates to “if x is equal to 2 (infected), then it now becomes 0 (dead), and if not, it remains x.” So if it was 0 or 1 before, it will remain 0 or 1, but if it was 2, it is now 0. Got it?!
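The same expression, as a Python sketch (the function name is mine):

```python
def die(x):
    """Procedure 1: infected cells (2) die and become empty (0);
    everything else passes through unchanged."""
    return 0 if x == 2 else x  # the "if (x=2,0,x)" expression
```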

The second is a bit more complicated. We use our topology relationship determined at the start (finally!) and use “List Item” to list, for each cell, the values of all the cells that are in its neighborhood at the beginning of the round, before we killed them in the last step (otherwise there would be no infected cells left). Some cells have rather small neighborhoods (2 or 3 neighbors), while for others it is a bit larger. We then use the “Bounds” component to get the minimum and maximum value in each set. If there is no infected neighbor, the bounds will be between 0 and 0, or 0 and 1. In both these cases, things are looking good for the cell in question. If one neighbor is infected, the bounds will be between 0 and 2. This means things are not good, and the cell in question now goes from a 1 to a 2.
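The “Bounds” step itself is tiny when written out (a sketch; the name is mine):

```python
def neighborhood_bounds(neighbor_states):
    """The "Bounds" step: the minimum and maximum state in a cell's
    neighborhood. A maximum of 2 means at least one neighbor is infected."""
    return min(neighbor_states), max(neighbor_states)
```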

Scripting this requires a tricky nested if/then expression. The syntax is quite difficult at first. We bring in two variables, X and Y. “X” is our current cell state; “Y” is the highest value in the neighborhood. First I will write out the expression precisely, and then I will translate it.

“if (x=1, if (y=2,2,x),x)”

>:-( What that means is “if x is equal to 1, AND if y is equal to 2, then the result is equal to 2; if not, it is equal to x…in both cases.” Don’t worry if you don’t understand it exactly at first, but if you do, you are smarter than me! Remember, in an if/then expression, if the condition is true, it does whatever is after the first comma: “if (x=1**, if (y=2,2,x)**,x)”. If the condition is not true, it does whatever is after the second comma. So if X is not equal to 1, it immediately skips the inner if/then expression, jumping to the second comma and producing the value X as the result.
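Unpacking the nested expression into ordinary Python makes the logic easier to see (a sketch; the function name is mine):

```python
def infect(x, y):
    """Procedure 2, the nested "if (x=1, if (y=2,2,x), x)":
    a living tree (x = 1) becomes infected (2) only if the highest value
    in its neighborhood (y) is 2; every other cell keeps its state."""
    if x == 1:
        return 2 if y == 2 else x
    return x
```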

Anyways, the results of all this together can be seen in the images below.

You can see that the cells that are yellow in each round become white in the next round, while their living neighbors become yellow in that round.

If the density of living cells is very high, the infection will spread relentlessly in a big wave across the forest. If the density of living cells is low, the infection will tend to peter out, having no living neighbors to jump to.

**Playing with Variables**

By playing with the growth rate and infection rate variables, certain patterns tend to emerge, although the landscape will always be in flux. If infections are very rare and the growth rate is high, a very dense forest tends to emerge, and when an infection does break out, the devastation is complete and catastrophic. If infection rates are high, a low growth rate will sometimes actually do better at managing them in the long term. In other words, sometimes it is better to bounce back slowly after a disturbance than to bounce back too fast while the infection is still raging; otherwise, the disease will become endemic.

Below are some images of two scenarios where the script is extended. Note that once the size gets pretty big, the patterns will be much more interesting, but the computer will also slow down quite a bit.

**Taking it Further**

I played around with this quite a bit to try and improve the results. I won’t show my coding, but a few things you can play around with include introducing a probability that an infected cell survives and goes on to live another day, and also scaling the cells down (through a second data stream) to show growth. In other words, new cells come in at size “1” and increase every round until they reach a maximum size.

I decided to come back to vector fields with one more example. First, I’ve set the goal on this blog to have six posts in each category, and Vector Fields has been at five for a long time, despite being one of my favorite things! I also wanted to come up with a new starting logic for example 11.3 where agents are steered through a field. I was quite happy with 11.3, but sometimes not pleased with the sudden change of direction when the vectors move from one cell to another.

In this script, a vector field is controlled through lines drawn in Rhino. The vectors at any given point are an average of several nearby vectors. The closer you are to a particular drawn line, the more influence that line will have over nearby conditions. The field thus changes gradually, not suddenly as it does in example 11.3. The field is then used at the end to draw particular geometry, in this case egg-like shapes.

**Step One – Initial Setup**

Before going into Grasshopper, a few pieces of geometry are drawn in Rhino. The first is a closed “Field Boundary Curve”. Then several lines are drawn, which will be used to identify the general direction of the field in a particular region. Note that the direction and order in which these lines are drawn will be important in determining how the field works.

Once this is done, I do the basic setup of my script. The objects in the field will be anchored to a random population of 1000 points generated by PopGeo (grey X’s). A second step in the setup will be to translate the linear geometry in Rhino into vector information. This is done by using the “Endpoints” component to get the start and end of each line, and then using “Vector2Pt” to find the vector between the start and the end.

The last part of the initial setup is to “Merge” the start points and the endpoints into one point list. The vectors are merged in the same way to make a list of identical length. If you duplicate the image above, it should work, but what is important is that each item in the vector list has an item index which corresponds to the same item index of its associated point.

**Step Two – Associate Nearby Vectors with each point from PopGeo**

This step is the heart of the script, where each of the 1000 points generated by PopGeo gets a vector assigned to it. This would be very hard to show graphically, so for this step I temporarily reduced PopGeo to 40 points, and hopefully it will make graphic sense.

The script uses the “Closest Points” component to find the 6 closest start and end points of my vector lines to each of the PopGeo points. This number doesn’t have to be six, but based on trial and error this seemed to work: fewer than four doesn’t really generate the results I want, and more than six doesn’t seem to improve them. This can be changed later, though. “Closest Points” identifies the item indices of these six points, and these also correspond to the item indices of the vectors associated with those six closest points (if I set it up right in the previous step). I use “List Item” to retrieve these six vectors, shown in these images anchored to each of the PopGeo points.

I want to sum these vectors together to find an average, but before doing this, I am going to scale the vectors down based on their distance from my PopGeo point. In other words, far-away vectors have less weight in the summation than closer vectors. To do this, I use the “Vector Length” component to get the strength of each vector, and then divide this by the distance, which was also conveniently generated by “Closest Points”. Now that the lengths are scaled down, I rebuild my vectors with the “Amplitude” component, where the vector direction remains the same but the amplitude (vector strength) is reset with the scaled-down vector length. These much smaller scaled-down vectors are represented by the little red arrows in the second image above.

Finally, I use the “Mass Addition” component to sum the six vectors associated with each point, giving me a resultant vector (shown in black). I put the results of “Mass Addition” into a final “Vector” parameter container. Note that these then need to be flattened for the next step.
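The whole of this step can be sketched in a few lines of Python. One simplification worth noting: resetting a vector’s amplitude to length/d while keeping its direction is the same as dividing every component by d, so the scaled vector is just v/d (the function and variable names below are my own, not GH components):

```python
import math

def resultant_vector(point, anchors, vectors, k=6):
    """For one PopGeo point: take the k nearest anchor points, scale each
    associated vector by 1/distance (amplitude = length/d, same direction),
    and sum them, as "Mass Addition" does."""
    dist = lambda a: math.dist(point, a)
    nearest = sorted(zip(anchors, vectors), key=lambda av: dist(av[0]))[:k]
    rx = ry = 0.0
    for anchor, (vx, vy) in nearest:
        d = dist(anchor) or 1e-9  # guard against a zero distance
        rx += vx / d              # "Amplitude" reset to length/d keeps
        ry += vy / d              # the direction, so just divide by d
    return rx, ry
```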

**Step Three – Draw Geometry based on Resultant Vectors**

Note that even if you went through all these steps, you won’t *see* anything yet, since vectors are forces, not geometry. You can use “VectorPreview” to visualize what they are doing, but in the end we want to translate them into some sort of geometric expression. There are many possibilities, but in this case I am going to draw some eggs. The process is pretty straightforward. First, I start by drawing a line with the “Line SDL” (Start/Direction/Length) component. The “S” start points are the points from PopGeo (which are now back up to 1000 in this image), the “D” direction is governed by my vectors, and the “L” length is determined by multiplying the “Vector Length” by a scaling factor.
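A minimal sketch of what that Line SDL step computes, assuming 2D points and vectors (since the length is |vector| × scale along the vector’s own direction, the endpoint is simply start + vector × scale):

```python
def line_sdl(start, vector, scale):
    """Sketch of "Line SDL": a segment from `start` in the direction of
    `vector`, with length = |vector| * scale. Scaling the vector itself
    gives the same endpoint directly."""
    sx, sy = start
    vx, vy = vector
    return (sx, sy), (sx + vx * scale, sy + vy * scale)
```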

The eggs are finished by the script above. I won’t explain the details, but it is using components which already should be familiar to you.

**Varying the Pattern**

There are a few parameters you can change to vary the pattern, but the most important way to change it is to go back to your curves drawn in Rhino, and to edit them by moving the control points around. You can also add or delete curves. Below are two variations of curves drawn in Rhino (shown in Blue) and the resulting field conditions.

Another way to vary the script is to change the initial point population, change the amount that geometry is scaled with “LineSDL”, etc. You can also introduce a “Cull” to get rid of geometry that is either too small or too large. Below are a few possibilities.

It wasn’t my intention while making the script, but in the end it looked a bit like one of my favorite landform phenomena, the “Drumlin Swarm”. You can read a bit more about it on this page here.
