September 18, 2012

UAV: A New Revolution in Remote Sensing

I hate to admit it, but I have been swept up in a new technology fad.  Well, not really that new I suppose, just new to the general public and industry.  While the technology has been around for many years, the Unmanned Aerial Vehicle, or UAV, phenomenon has recently experienced an explosion of interest.  Its use has traditionally been limited, for the most part, to the military, initially with reconnaissance drones and subsequently with unmanned combat aircraft.  For years these craft were relegated to the dark recesses of covert operations; recently, however, they have moved rapidly into the public consciousness.  You may have seen one of the numerous videos showcasing the advances in UAV and drone technology, like this one.



So why am I caught up in the hype?  The big draw for me is the ability to create "ultra" high resolution imagery at a fraction of the cost of traditional manned aerial surveys.  These craft can carry a wide range of payloads, anywhere from a few hundred grams up to 5 kilograms depending on the airframe, which means the sophistication of the guidance systems and imaging sensors can be quite impressive.  Sophisticated guidance systems (software, GPS, IMU) allow these vehicles to collect highly accurate imagery and video.

The other thing that impresses me about UAV technology: it is already going open source.  I have seen a number of examples of both open source hardware and software projects (Paparazzi being an example where the two come together).  This makes the technology even more affordable and attainable for the technically capable.

UAVs are heavily restricted in the US, which currently allows only government and government contractors to operate these craft outside of the hobbyist realm.  However, this is expected to change some time before September 30, 2015, as per the FAA Modernization and Reform Act of 2012.  In the meantime, many of the advances in this technology are coming from outside the US.  Commercial UAV operation is quite legal in Canada and many other jurisdictions, for example, so there are many good examples of commercial utilization from those countries.  Some examples of commercial UAV use are:
  • Environmental Monitoring for Many Industries
  • Vegetation Health and Vigor Mapping for Viticulture
  • Mapping of Borrow Pits for Construction, Engineering or Mining
  • Providing Aerial Intelligence to Police
These are just a few of the current uses of UAVs.  For more information on this fascinating advance in Geomatics, visit any of the following industry websites:




January 3, 2011

Why Use GeoEye-1

GeoEye-1 is one of the newest ultra high resolution optical satellite platforms for earth observation (EO).  With an approximate half meter spatial resolution, it gives the user an extremely fine view of their area(s) of interest.  The sensor has five bands (red, green, blue, near-infrared and panchromatic), which makes it ideal for a number of uses including environmental monitoring, mineral exploration and urban planning.  The resolution of this sensor allows for extraction of very small features (approaching 1/10 hectare) and provides an economical alternative to air photos.

This half meter resolution GeoEye-1 image shows central Port-au-Prince,
Haiti after a 7.0-magnitude earthquake struck the area on Jan. 12, 2010

Another advantage of GeoEye-1 comes from its orbit.  With an orbital velocity of 7.5 km/sec at an altitude of 681 kilometers, it has a revisit frequency of approximately three days.  This makes it ideal for many types of change detection analysis.  For example, monitoring extreme flood conditions such as those currently being experienced in Queensland, Australia, is just one potential benefit of this satellite's revisit frequency.  Imagery from this ultra-high resolution satellite can be acquired at regular intervals to help rescuers identify escape routes, assess areas of highest damage and monitor the changing flood conditions.  This is exactly what was done during the Haitian earthquake disaster last year: GeoEye-1 imagery was used to assess the areas that suffered the most damage in order to more effectively target rescue efforts (see image above).

Other examples of the utility of this satellite platform can be seen on the GeoEye website.  Hatfield Consultants is currently using this and other high resolution sensors on a project to assess landslide potential from localized forest practices.  This work utilizes not only the spectral signature information of the imagery but also the stereoscopic capture abilities of the satellite.  Using techniques such as DEM extraction and principal components analysis (PCA), experienced Remote Sensing Analysts can determine the effects of activities such as clear cutting and road building from forestry projects.  This can be done at a very fine scale, making the output data sets, assuming availability of good ground control, very accurate and highly detailed.

There are many other uses for this imagery including vegetation health assessments, crop monitoring and even construction feasibility.  For more information contact Hatfield Consultants Partnership.

December 23, 2010

A Word About Tile Caching in ArcGIS Server 10

I have recently been involved in creating a web mapping application that displays environmental monitoring locations over a very large area (approximately 7 million hectares).  Of course one of the primary objectives in developing any application is optimization and performance and, with web maps, that usually means tile caching.  So the development team came together (two of us in this case) and quickly decided what our zoom scale requirements were.  We then tried to implement this which is where it all fell apart.

Because we were using the Bing Map Service as our base layer, we were committed to the zoom scales dictated by that service.  Originally we decided that the minimum scale could be set to 1:9,244,648, one of the Bing mid-range zoom scales; however, we felt that the maps should be tiled to the largest scale possible, 1:1,128 in this case.  Herein lies the problem.  A quick calculation of how many tiles are created across this range gives approximately 89 million, yes, 89 million, of which 98% come from the scales larger (finer) than 1:9,027.  When we tried to tile with those scales included, the process would run for a number of days without finishing.
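To see why the counts explode, here is a back-of-the-envelope sketch (not our actual caching code).  The Bing level-to-scale formula is the standard 96 dpi one, and the latitude and square cache area are my own assumptions, so treat the numbers as orders of magnitude only.  Each zoom level quarters the tile size, so the finest few levels dominate the total:

import math

def bing_scale(level):
    # Bing Maps scale denominator at 96 dpi: level 1 is roughly
    # 1:295,829,355 and each successive level halves it
    return 295829355.45 / 2 ** (level - 1)

def tiles_for_level(area_km2, level, lat_deg=50.0):
    # Ground width of one 256-pixel tile at this level and latitude
    tile_km = 40075.0 * math.cos(math.radians(lat_deg)) / 2 ** level
    return area_km2 / tile_km ** 2

area_km2 = 70000.0  # approximately 7 million hectares
total = 0
for level in range(6, 20):  # 1:9,244,648 down to 1:1,128
    n = tiles_for_level(area_km2, level)
    total += n
    print("level %2d  scale 1:%.0f  tiles approx. %.0f" % (level, bing_scale(level), n))
print("total tiles approx. %.0f" % total)

Because each level has four times the tiles of the one above it, the three finest levels account for roughly 98% of the total, which matches the share we saw in practice.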

So how did we overcome this, you might ask?  By taking a more realistic view of our requirements and removing the unneeded scales from the tile cache.  Once we removed the 3 scale values above 1:9,027, the process took less than an hour.  Before running the tile caching, we also made sure Tile On Demand was checked for each service so that tiles could be created on an as-needed basis.

Lesson Learned: Before creating your Map Service infrastructure develop a tile caching strategy which should identify the largest scale you NEED, the number of tile cache layers and which layers MUST be cached.  DO NOT include scales that you don't need and cache only the Map Layers that will be accessed FREQUENTLY.

September 3, 2010

Model to Script to Tool: the "Holy Trinity"

I have been doing some work for a junior mining company recently involving creating Volume/Elevation curves (see my last post). This work is accomplished in ArcGIS using Spatial Analyst and 3D Analyst which works very well but requires many steps. Over the course of this project, however, I have built a number of Python scripts that are now embedded as tools in a new toolbox in ArcMap. So I am going to take this opportunity to briefly discuss how easy it really is to create sophisticated Python geo-processing scripts in Arc and give a little example of how I recommend approaching this task.

First of all, I would just like to say that ArcGIS is still the most powerful GIS in my opinion (yes, I am still a fan of open source, but you can't argue with the benefits of having a large budget to develop good software). I like Arc not so much for its ability to perform geoprocessing functions but more for the ancillary tools that help you find the appropriate tools and perform these tasks. Arc has built-in search capabilities and very good documentation that help users with a wide variety of skill levels find the appropriate tools. On top of that, there is a very well developed modeling and scripting environment that gives the user the capability to reproduce these tasks relatively easily.

OK, now that that's been said, let's get on with the main purpose of this post: to show how easy it is to create a tool in ArcGIS using the modeling and scripting environments. For this example I thought I'd use the volume elevation curve tool created for the project I mentioned earlier. I will be making reference to a number of steps that may not be clear if you haven't performed this analysis before, but you can read my previous post for clarification.

The Model
So over the years I have found that the easiest way to start any Python scripting in Arc is to look at the modeling environment first and see if the task can be done there. You may ask "why worry about a Python script if you can create a model?", but if you need to add some iteration to the process (e.g. performing the analysis on successive elevations), then you need to do it with a script.

Figure 1: Surface volume model

So I started by creating the model shown in Figure 1. The key here is to notice that I have created a number of "model parameters" which will show up as argument variables (sys.argv[]) when the model is exported to Python. These arguments will allow you to attach the fields from a custom tool to them, giving the user the flexibility to add their own data, assign a range of elevations to the plane height variable and specify the output location for the delimited text file. You will also notice that there are two height variables (Min Plane Height and Max Plane Height) but only one is actually attached to the model. This is because we are planning to have an iterative model that starts at the bottom of the surface (e.g. 0 cu.m volume) and works its way up to the top. This requires that we allow the user to assign a lower value and an upper value, but because this model is only a single instance of the volume command, it can take only one value. Don't worry though, we will add the iterative behaviour when we edit the script. Once you have the model the way you like it, you can export it to a Python script (Model -> Export -> To Script -> Python). Before doing this, though, I suggest testing the model to ensure it works, to avoid spending an unreasonable amount of time on the scripting.

The Script
Now that we have a script, you can open it up in IDLE or whichever Python editing environment you like. The script will look something like the listing below when it is first exported.

# ---------------------------------------------------------------------------
# sample_model_script.py
# Created on: Thu Sep 02 2010 04:23:36 PM
# (generated by ArcGIS/ModelBuilder)
# Usage: sample_model_script
# ---------------------------------------------------------------------------

# Import system modules
import sys, string, os, arcgisscripting

# Create the Geoprocessor object
gp = arcgisscripting.create()

# Check out any necessary licenses
gp.CheckOutExtension("3D")

# Load required toolboxes...
gp.AddToolbox("..Toolboxes/3D Analyst Tools.tbx")

# Script arguments...
Output_Text_File = sys.argv[1]

Dam_Containment_Tin = sys.argv[2]
if Dam_Containment_Tin == '#':
    Dam_Containment_Tin = "default_tin" # provide a default value if unspecified

Min_Plane = sys.argv[3]
Max_Plane = sys.argv[4]

# Local variables...

# Process: Surface Volume...
gp.SurfaceVolume_3d(Dam_Containment_Tin, Output_Text_File, "BELOW", Min_Plane, "1")

Let's run through the components of this script. First you have to import the Python libraries necessary to run it: the system (sys), string, operating system (os) and ArcScripting (arcgisscripting) libraries. Right after this you create the geoprocessing object that will do the heavy lifting (gp). Next you check out the extension(s) required for the type of geoprocessing to be done; in this case we only need 3D Analyst. Then we assign the different arguments to their own reusable variables (e.g. Output_Text_File = sys.argv[1], which assigns the name of the output file, as provided by the user, to its own variable). In our example we do the same for the input dam containment TIN, the lower plane height and the upper plane height. You may find that you need to change the order in which these arguments appear so that the tool variables can be placed in a logical order. If you want the input TIN first, followed by the lowest plane height, then the highest plane height and finally the output text file name, you need to set the index number for the sys.argv[] object accordingly. Finally, we run the SurfaceVolume_3d function using the geoprocessing object we created earlier.

This script will do most of the work we need; however, we still need a loop that will iterate through the elevation changes required to create a proper VE Curve. In most cases 9 to 12 points are adequate to create a VE Curve in Excel, so we will create a loop that generates 9 points.


Height_Interval = (float(Max_Plane) - float(Min_Plane)) / 9
Height_Plane = float(Min_Plane)
for i in range(1, 10):  # 9 iterations, one volume calculation per plane height
    gp.SurfaceVolume_3d(Dam_Containment_Tin, Output_Text_File, "BELOW",
                        str(Height_Plane), "1")
    Height_Plane = float(Min_Plane) + (Height_Interval * i)


You can see here how we have set up a variable for the height interval, calculated by taking the elevation difference between the highest and lowest values and dividing by 9 (the number of points for the curve). Then we create another variable to represent the current plane height (initially equal to the lowest elevation in the containment TIN). Next we set up a loop that iterates 9 times, calculating the volume below the reference plane and increasing its height each iteration by the Height_Interval. In the end we have a script that, given user input for each of the variables, will calculate a volume elevation curve and append each value to an output text file.


Creating a Tool
Now that we have the script the way we want it, we need to create an ArcToolbox tool. We do this by simply importing the script. I recommend adding a new Toolbox first by right clicking on the root of the ArcToolbox window and selecting New Toolbox. Make sure to give the toolbox an appropriate name so that it is obvious what it contains. Next, create a new toolset by right clicking on the new toolbox and selecting New -> Toolset. Finally, add your newly created script by right clicking on the new toolset and selecting Add -> Script, after which you will navigate to the location of your script. Once the script is added you will need to assign parameters to it so that the user can update the variables when they double click on it (see Figure 2). You get to this dialog by right clicking on the script, selecting Properties and then clicking on the Parameters tab. You can add each parameter to the script in the logical order set out in your script.

Figure 2: Add parameters to the script

Remember that the order in which the parameters appear in this dialog is the order in which they are passed to your code. This means that the first parameter shown here will be accessed in your code through the sys.argv[1] system variable.

You will also need to make sure that you update all the Parameter Properties, ensuring input and output data sets are assigned accordingly and that Dependencies are set. Dependencies come into play in situations such as requiring the user to select from a list of fields.

Now when you double click on the script in the toolbox you should see a dialog something like that shown in Figure 3.


Figure 3: Tool dialog for our Script

Now you should have a workable tool that will produce a volume elevation curve from an existing containment TIN.  The intent of this post is not to show you how to do VE Curves but to give you a starting point for creating your own custom script based tools in ArcGIS.  Hope you find it useful.

August 23, 2010

ArcGIS and Volume Elevation Curves

I have been thinking about writing this post for some time and have finally found the time to get to it. I have been creating volume/elevation curves for engineers for some time and have done them in a few different applications, but ArcGIS Desktop does one of the best jobs.

The premise of this particular curve scenario is one in which a tailings pond is being considered for a mining operation (any mining operation). The geotechnical engineers or geologists have a calculated number for the metric output for the mine over a 20 year period and need to make sure that a tailings pond will accommodate this number. Here is my take on the steps involved in performing a VE curve analysis using ArcGIS and 3D Analyst. Keep in mind I am not an engineer or a geologist and, even though I have performed this analysis hundreds of times, it was always double checked by one of these professionals.

Step 1: Creating the base surface.
Assuming you have a decent DEM for the area in question, you will need to bring it into ArcGIS as a TIN surface. This can be done using one of the conversion utilities in Arc, such as Raster to TIN under the 3D Analyst toolbar. If the data is currently in an ASCII file, you will need to import it as a raster first and then perform the Raster to TIN conversion. The resultant surface will be used for determining the containment area of the tailings pond and for performing the volume calculations at a series of elevations, starting at the floor of the proposed tailings area and going to the top of the containment. This top elevation will be roughly determined by the desired containment volume and the practical height of the dam.
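For those who prefer to script this step, here is a minimal sketch using the same arcgisscripting pattern as my other posts. The file paths are hypothetical, and note that ASCII To Raster expects an ESRI ASCII grid, so a raw xyz file may need a different import route:

import arcgisscripting

gp = arcgisscripting.create()
gp.CheckOutExtension("3D")

# Import the ASCII elevation grid as a floating point raster
gp.ASCIIToRaster_conversion("C:/data/dem.asc", "C:/data/dem_grid", "FLOAT")

# Convert the raster to a TIN surface; the z tolerance ("1" here)
# controls how closely the TIN honours the raster elevations
gp.RasterTin_3d("C:/data/dem_grid", "C:/data/dem_tin", "1")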

Step 2: Create the dam surface
The dam surface can be created in a number of different ways. For the purposes of this exercise we will create the dam in ArcGIS. To do this we will start with a center-line that is placed across the opening of the tailings area as discussed in Step 1. The line can be digitized into a "clean" shapefile or geodatabase feature set. You must ensure that this line crosses well over the desired elevation contour that represents the top elevation of the tailings containment. This will ensure that the resultant dam footprint overlaps the containment area. The resultant data set will need a field for elevation because the dam will be a 3D model used to finish the containment surface.



Figure 1a: Dam Geometry

Figure 1b: Dam TIN

Once the centerline has been established, an offset will be needed to create a practical crest for the dam. A typical offset might be 5 meters to create a 10 meter crest. Next the front and back slopes of the dam must be created. This can be accomplished by offsetting the crest to both the front and back by a predetermined distance (see Figure 1a). Because the faces of the dam are usually planar surfaces, a single offset is usually adequate. The trick is figuring out how far to offset the lines. To do this you can look at the crest height and, using the slope of the dam face, determine how far to offset the line so that the outside limits fall below the lowest elevation in the containment surface. For example, if the dam height is 50 meters and the slope is proposed at 2:1, then an offset of 150 meters (a 75 meter drop, as in Figure 1) will ensure that the outside line is well below the containment surface. This will be needed both in front of and behind the dam, because the dam surface is used both for the tailings containment surface and to establish the footprint of the dam. Finally, convert these features to a new TIN surface using the 3D Analyst dropdown menu Create Tin from Features... It is a best practice to look at the new dam surface and the underlying elevation surface in ArcScene to ensure that the dam side slopes extend well below the elevation surface (see Figure 2).
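The offset arithmetic is simple enough to sanity-check in a couple of lines; this sketch just encodes the slope relationship described above, using the example values rather than project data:

def vertical_drop(offset_m, slope_h_per_v=2.0):
    # A 2:1 slope drops 1 m of elevation for every 2 m of horizontal offset
    return offset_m / slope_h_per_v

# A 150 m offset at 2:1 gives a 75 m drop, comfortably below a 50 m crest
print(vertical_drop(150.0))  # 75.0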

Figure 2: Dam Surface through Elevation Surface


Step 3: Containment Outline
The next step is fairly straightforward. You need to acquire the area outline of the tailings containment for a given "ultimate dam" height. This requires that you create a planar surface at this elevation.


Figure 3a: Features for Planar Surface
Figure 3b: Tin Difference Polygons

You can do this by creating a new data set (shapefile or geodatabase feature set) and adding it to your map. Remember, you will need to assign a valid projection to this data set or you won't be able to create the necessary TIN surface. Now do the following:
  1. Add a field to the data set for the elevation.
  2. Establish the maximum extents for the containment by looking at the dam location and any contour or elevation information available.
  3. Digitize a few perpendicular lines covering the entire area (similar to how you created the dam geometry - see Figure 3).
  4. Assign an elevation to each of the lines. This will be the maximum elevation of the containment area, which is usually the height of the dam minus an appropriate "free-board" or "draw-down".
  5. Create a TIN surface from this new data set.
  6. Run the TIN Difference function from the 3D Analyst toolbox using the planar surface as Input TIN 1 and the elevation surface created in Step 1 as Input TIN 2.
The resultant data set will contain the outline of the tailings containment but will be "open" where the dam should be (see Figure 3b).

Step 4: Create Dam Footprint
This is a very simple and short step. To create the dam footprint you simply use the TIN Difference tool and assign the dam TIN you created in Step 2 as Input TIN 1 and the underlying "DEM" TIN as Input TIN 2. This will give you a number of polygons showing the difference in volume between the two surfaces. The most important polygon will be the one showing the Greater Than area, which will include the portion of the dam above the underlying elevation surface. You may also need to dissolve some of the areas that show "No Change" values to make the polygon complete. Now simply edit this new "Difference" or "Dissolved Difference" data set and remove the polygons that do not include the dam "Toe".




Figure 4a: Dam/Surface Difference

Figure 4b: Dam Toe

Step 5: Close Containment Polygon
To close the containment polygon from Step 3, you can simply run a Merge on the new "Dam Toe" data set and the "Containment Difference" data set. You may need to stretch the "Dam Toe" data set so that it crosses the appropriate containment polygon, to ensure that the merge is successful and you end up with only one polygon for the containment. Now edit this new data set and remove all the polygons that are not your tailings containment area.

Note: the step of stretching the dam toe is likely needed because of the detail and accuracy of the underlying DEM.
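Scripted, the merge itself is a single geoprocessing call. This sketch assumes shapefile inputs with hypothetical names; in the arcgisscripting API, Merge takes a semicolon-delimited list of inputs:

import arcgisscripting

gp = arcgisscripting.create()

# Merge the dam toe polygons with the open containment outline so they
# can then be edited down to a single closed containment polygon
gp.Merge_management("dam_toe.shp;containment_diff.shp", "containment_closed.shp")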

Step 6: Create the containment TIN for volume elevation analysis
Next you will need to create a surface that describes the containment area of the underlying elevation surface with the appropriate dam face in place. This can be done as follows (a scripted sketch of the raster steps appears after the list):
  1. Using the containment area polygon created in Step 4, create a raster with the Polygon to Raster tool where each cell has a value of 1.
  2. Using the Map Algebra or Times tools in Spatial Analyst, multiply this containment raster by the underlying elevation raster to create an elevation raster of only the containment area.
  3. Convert this new elevation raster to points (Raster to Point).
  4. Create the inside face of the dam:
    • Overlay the dam outline from Step 4 and the dam geometry from Step 2.
    • Offset the inside edge of the dam crest a number of times so that several lines end up within the outline of the dam.
    • Adjust the elevation of each line so that it lies at the appropriate slope within the face of the dam (e.g. 2:1).
    • Trim each line so that it does not extend past the outline of the dam.
    • Remove all lines that are not part of the inside face of the dam.
  5. Create a new TIN surface that includes the elevation points created here and the new inside dam face.
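Steps 1 through 3 translate directly to geoprocessing calls. Here is a hedged sketch with hypothetical data and field names; the tool names come from the Conversion and Spatial Analyst toolboxes:

import arcgisscripting

gp = arcgisscripting.create()
gp.CheckOutExtension("spatial")

# 1. Containment polygon to a raster of 1s (assumes a field holding the value 1)
gp.PolygonToRaster_conversion("containment.shp", "RASTVAL", "contain_ras")

# 2. Multiply by the elevation raster to keep only the containment elevations
gp.Times_sa("contain_ras", "dem_grid", "contain_elev")

# 3. Convert the clipped elevation raster to points for TIN building
gp.RasterToPoint_conversion("contain_elev", "contain_pts.shp", "VALUE")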




Figure 5a: Containment Features
Figure 5b: TIN from Features

Step 7: Create Volume Elevation Curve
You can now create a volume elevation curve. The best way to accomplish this is by using the Area and Volume analysis tool under the 3D Analyst toolbar as follows:

  1. Open the Area and Volume tool from the 3D Analyst toolbar
  2. Assign the new containment surface as Input surface
  3. Set the Height of plane to be the same as Z min
  4. Select Calculate statistics below plane
  5. Z factor = 1
  6. Select Save/append statistics to text file and select an appropriate location for the file
  7. Click Calculate statistics
  8. Increase the Height of plane by a predetermined amount (e.g. 10 meters) and repeat. The increment will likely be determined by the number of points required to create a good curve (typically 8 to 12 points is good)
Figure 6: Area and Volume Statistics Tool

You will now need to open the text file you created and modify it so that it can be used in Excel. The Area and Volume tool writes a number of statistics we don't need, such as the 2D and 3D areas, so you will need to remove these. At the same time, create a comma delimited file that can be opened into columns in Excel. In the end you need only the Elevation and Volume items for each iteration of the previous steps. Save the file and open it in Excel.
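If you run many of these, the cleanup can be scripted. This is a rough sketch only: I am quoting the field labels from memory and the exact layout of the statistics file may differ, so adjust the patterns to match your output:

import re

pairs = []
for line in open("area_volume_stats.txt"):
    # Pull the plane height and volume off any line that carries both;
    # the label spellings in these patterns are assumptions
    z = re.search(r"Plane[_ ]Height\s*[:=,]\s*([-\d.]+)", line)
    vol = re.search(r"Volume\s*[:=,]\s*([-\d.]+)", line)
    if z and vol:
        pairs.append((float(z.group(1)), float(vol.group(1))))

# Write a two-column CSV that Excel can open directly
out = open("ve_curve.csv", "w")
out.write("Elevation,Volume\n")
for elev, volume in pairs:
    out.write("%s,%s\n" % (elev, volume))
out.close()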

You will now need to add column headers and create the curve. To do this, select Insert -> Chart -> Scatter Chart. When prompted, select your data for the chart. You will likely need to switch the columns in the appropriate series so that volume runs along the X axis and elevation along the Y. In the end you should have a decent looking VE curve.

Figure 7: Volume Elevation Curve

I hope this was helpful to those that may be new to this process. Please stand by and I will try to add other useful tutorials in the future.


March 23, 2010

How to create a multi-unit tracking application

I am currently working on an application that will allow users to track any number of GPS-enabled radios within a specified group. The key to this application is the proprietary GPWGT NMEA string, which carries not only GPS location info but also an identifier. These strings are typically transmitted from each radio in a group or polled by a central radio in the group. The string has the basic construct of:

$GPWGT,<FixStatus>,DD.MMMMM,d,DDD.MMMMM,d,<Altitude>,<COURSES>,<SPEED>,<NameId>,<SCODE>,<GroupId>,<Status>,<ICON>,<Date>,<Time>*CC<CR><LF>
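Since the sentence follows the usual NMEA framing, a first-pass parser is straightforward. This is a sketch against the field order shown above; the key names are my own labels rather than vendor documentation, and I leave the latitude/longitude fields raw because the DD.MMMMM encoding still needs to be confirmed:

def nmea_checksum_ok(sentence):
    # Standard NMEA checksum: XOR of every character between '$' and
    # '*', compared against the two hex digits that follow '*'
    body, _, tail = sentence.lstrip('$').partition('*')
    actual = 0
    for ch in body:
        actual ^= ord(ch)
    return actual == int(tail[:2], 16)

def parse_gpwgt(sentence):
    # Split the sentence into a dict using the field order shown above
    if not nmea_checksum_ok(sentence):
        return None
    fields = sentence.lstrip('$').split('*')[0].split(',')
    keys = ['type', 'fix_status', 'lat', 'lat_hemi', 'lon', 'lon_hemi',
            'altitude', 'course', 'speed', 'name_id', 'scode',
            'group_id', 'status', 'icon', 'date', 'time']
    return dict(zip(keys, fields))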

August 20, 2009

Building an Effective open source GIS application - Chapter I

Recently I was thinking about what goes into building a GOOD GIS application, open source or otherwise. I have been involved in creating a number of applications, both web based and desktop, and think I am fairly adept at planning the development of such things, but what really goes into making a useful open source GIS application? This has led me to dedicate the next four blog posts to the art of designing, developing and deploying an effective GIS using open source technologies. The first chapter touches on the pre-planning and information gathering stage, Chapter II will look at the planning stage, the third chapter is dedicated to the development phase (looking at the process rather than the technical details), and the last chapter will look at deployment and the issues surrounding this critical stage.

Chapter I: Know your audience?
There is an old adage in the newspaper industry that goes something like "Always write to the lowest common denominator". What I understand this to mean is, don't use a bunch of technical jargon which will confuse the general public. This concept is equally important when building a GIS application. You need to understand who will be using the application and what it will be used for.

I guess the first question to be asked is: what does the thing need to do? This seems like an obvious question but, in my experience, many application developers don't fully understand what their application needs to do, I mean REALLY understand what it needs to do. This is not an exercise in putting together a list of features that will go into the software. That list should spring from a true understanding of what the software needs to accomplish and who will use it. The second part is possibly the most important question that MUST be answered prior to building the application. You must understand whether the application will be used by technical practitioners, non-technical management, non-technical others and so on. Once you understand who will use the application, you can better understand what the application must do.

Example:

Your boss, a technical GIS practitioner, comes to you and says you need to build an application that will allow the user to track a set of assets by grabbing coordinates from a GPS and give the user the ability to update attribute values on the fly, now go build it. Sounds easy: you need to build an application that can connect to a GPS unit, read the NMEA strings (or equivalent), plot them on a map, allow the user to update field values on the fly and convert to a format that can be used in other GIS or web mapping applications. Simple, right? Just find out what the assets are and go build it. Now wait a minute. What kind of data collection will it be used for? Environmental monitoring? Engineering? Search and rescue? This is extremely important because it may allow the developer to limit presets to values specific to the target industry, perhaps some predefined values in pull-downs to populate field values, that sort of thing. And what is the level of GIS knowledge of the user? If they are experts you may be able to build the application as a plug-in or add-on for an existing application such as QGIS. These are just a couple of reasons why it is imperative to understand the end user's needs.

The image on the right is a screen capture of the Quantum Navigator plug-in for QGIS. This plug-in is still in development but is intended to track GPS input in real time and allow the user to update values, translate between points and tracks and output different data types. It is being written in Python, so it will integrate with any operating system. The application is intended to give quick route analysis capabilities to the user while allowing them to add their own data and integrate OpenStreetMap. It shows great promise, but it is aimed at the user who may be interested in finding an optimal route from one specified location to another; it would not provide all of the functionality that an environmental monitoring team might need, for example. Since this application is intended for a very broad audience, it probably meets the objectives of its developers, but it helps to illustrate the need to intimately understand your end users' requirements. If you wanted this application to be adequate for an environmental consultant, for example, you would need to add the capability to update attribute values and even provide a series of preset values for certain fields, capabilities that can only be added with intimate knowledge of the intended use.

What should I ask?
So after this seemingly endless diatribe, let's try to understand who the end user is and what questions should be asked of that user. So who is going to use this application? Sometimes the initiator of the project will know exactly who the end user will be and what they will use the application for. In this case you as a developer can simply ask them, "Who is the end user and what will they be using the application for?" However, quite often it is unclear who all of the users will be and, if you don't know who's using it, how can you know what they need? In these cases it requires some detective work, and some of the things you should know up front are:
  1. What information are you hoping to gather?
  2. What asset(s) is this information attached to?
  3. Traditionally, who has collected this data?
  4. Which GIS applications are already being used? (you should probably already know this information)
  5. How will the resultant data be stored (e.g. PostgreSQL, Oracle, etc.)?
These are just a few of the questions that need to be asked, but I hope they provide a good start. This chapter has focused on the pre-planning process; the next chapter will focus on the planning process and will hopefully answer questions like: how do I keep up with a client's changing needs? who should be involved in the planning process? should I create a Gantt chart? and more.

I hope you found this chapter to be informative and will come back for the next post.