# Wolfram Neural Net Repository

Immediate Computable Access to Neural Net Models

Navigate a drone in a forest environment

Released in 2017, this pair of models forms a system for autonomous path navigation in unstructured outdoor environments such as forests; the models here are trained for steering in a forest environment. The system consists of two main submodules: a navigation net (the TrailNet DNN) and an obstacle detection net. The navigation net is a two-headed classifier that estimates the rotation direction and lateral translation for an input image, with three categories per head. The two sets of output probabilities are then combined to predict a final rotation angle. It is based on ResNet-18, with batch normalization layers removed and ReLUs replaced by shifted ReLUs.
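The way the two probability heads are combined can be sketched as follows. This is an illustrative Python sketch, not the repository's actual evaluation cell; the weighting coefficients are assumptions for illustration.

```python
# Sketch: combine the two softmax heads of a TrailNet-style classifier
# into a single signed steering command. Each head outputs probabilities
# for (left, center, right). The weights w_rot and w_trans are
# illustrative assumptions, not the paper's tuned values.

def steering_angle(view_probs, side_probs, w_rot=1.0, w_trans=1.0):
    """view_probs: rotation-direction head; side_probs: lateral-offset head."""
    rot = view_probs[2] - view_probs[0]    # rotation cue: right minus left
    trans = side_probs[2] - side_probs[0]  # lateral-translation cue
    return w_rot * rot + w_trans * trans   # signed turn command

# A frame whose "view" head leans strongly right yields a positive turn:
angle = steering_angle((0.1, 0.2, 0.7), (0.3, 0.4, 0.3))
```

A positive result steers the vehicle right and a negative result left; when both heads are symmetric the command is zero.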

The obstacle detection net is an object detection model based on YOLO v1 with a few modifications, such as the removal of batch normalization and the replacement of leaky ReLUs with plain ReLUs. If a detected object occupies a large proportion of the image frame, the vehicle is forced to stop.
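The stop condition can be sketched as an area-fraction check. This is a minimal illustration of the idea, not the system's actual control logic; the threshold value is an assumption.

```python
def should_stop(box, image_size, max_fraction=0.3):
    """Decide whether to halt the vehicle for a detected obstacle.

    box: (x1, y1, x2, y2) in pixels; image_size: (width, height).
    The 0.3 area-fraction threshold is an illustrative assumption,
    not the value used by the original system.
    """
    (x1, y1, x2, y2), (w, h) = box, image_size
    area_fraction = ((x2 - x1) * (y2 - y1)) / (w * h)
    return area_fraction >= max_fraction

# A box covering ~39% of a 640x480 frame triggers a stop:
should_stop((0, 0, 400, 300), (640, 480))
```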

Number of models: 2

- The TrailNet DNN was first trained on the IDSIA trail dataset to learn rotation directions and then on a private, unreleased dataset collected in the Pacific Northwest to predict translation offsets. The IDSIA dataset is a dataset for the visual perception of forest trails, consisting of 17,119 training images and 7,355 test images divided into three view-orientation classes. The private dataset used to learn the translation offsets consists of footage of hikes recorded by three cameras, capturing a diversity of cross-seasonal landscape scenes. The object detection net was retrained on the PASCAL VOC datasets, presumably a combination of PASCAL VOC 2007 and PASCAL VOC 2012.

Get the pre-trained net:

In[1]:= |

Out[1]= |

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:= |

Out[2]= |

Pick a non-default net by specifying the parameter:

In[3]:= |

Out[3]= |

Pick a non-default uninitialized net:

In[4]:= |

Out[4]= |

Define an evaluation function to calculate the turning angle in radians:

In[5]:= |

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[6]:= |
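The post-processing step described above can be sketched in Python. This is an illustrative stand-in for the Wolfram Language cell, under the assumption that the net emits boxes in coordinates normalized to [0, 1]; the confidence threshold is an assumption.

```python
def postprocess(detections, image_size, threshold=0.15):
    """Scale normalized boxes to pixel coordinates and drop weak detections.

    detections: list of (class_label, confidence, (x, y, w, h)) tuples with
    coordinates normalized to [0, 1]. The 0.15 confidence threshold is an
    illustrative assumption, not the repository's value.
    """
    iw, ih = image_size
    kept = []
    for label, conf, (x, y, w, h) in detections:
        if conf < threshold:
            continue  # suppress the least probable detections
        kept.append((label, conf, (x * iw, y * ih, w * iw, h * ih)))
    return kept
```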

Define the label list for this model. Integers in the model's output correspond to elements in the label list:

In[7]:= |
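Since the detector was retrained on PASCAL VOC, the label list presumably contains the standard 20 VOC categories, shown here in Python for reference (the notebook cell defines the Wolfram Language equivalent):

```python
# The 20 object categories of the PASCAL VOC challenge; integer outputs
# of the model index into this list (assuming standard VOC ordering).
voc_labels = [
    "aeroplane", "bicycle", "bird", "boat", "bottle",
    "bus", "car", "cat", "chair", "cow",
    "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]
```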

Calculate the turn angle in radians given a test image:

In[8]:= |

Out[8]= |

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[9]:= |

In[10]:= |

Out[10]= |

Inspect which classes are detected:

In[11]:= |

Out[11]= |

In[12]:= |

Out[12]= |

Visualize the detection:

In[13]:= |

Out[13]= |

Define an image:

In[14]:= |

The network computes 98 bounding boxes and the probability that the objects in each box are of any given class:

In[15]:= |

In[16]:= |

Out[16]= |
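The count of 98 boxes follows from the YOLO v1 design, which divides the input image into a 7×7 grid and predicts 2 boxes per grid cell:

```python
# YOLO v1 box count: a 7x7 spatial grid, 2 predicted boxes per cell.
grid, boxes_per_cell = 7, 2
total_boxes = grid * grid * boxes_per_cell  # 98
```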

Visualize all the boxes predicted by the net scaled by their "Confidence" measures:

In[17]:= |

In[18]:= |

Out[18]= |

Inspect the number of parameters of all arrays in the net:

In[19]:= |

Out[19]= |

In[20]:= |

Out[20]= |

Obtain the total number of parameters:

In[21]:= |

Out[21]= |

In[22]:= |

Out[22]= |

Obtain the layer type counts:

In[23]:= |

Out[23]= |

In[24]:= |

Out[24]= |

Display the summary graphic:

In[25]:= |

Out[25]= |

In[26]:= |

Out[26]= |

- N. Smolyanskiy, A. Kamenev, J. Smith, S. Birchfield, "Toward Low-Flying Autonomous MAV Trail Navigation Using Deep Neural Networks for Environmental Awareness," arXiv:1705.02550 (2017)
- (available from https://github.com/NVIDIA-AI-IOT/redtail)
- Rights: Model License