Micro Air Vehicles (MAVs) have become increasingly attractive in recent years for missions such as surveillance and reconnaissance. To perform these missions successfully, MAVs must be capable of maintaining their attitudes through either inherent passive stability or active feedback. Stability and Controllability Augmentation Systems (SCASs) are commonly employed to enhance the flight performance of conventional aircraft and Unmanned Aerial Vehicles (UAVs). However, obtaining an accurate numerical model of the flight dynamics of a MAV is no simple task. An alternative approach for SCASs is to incorporate reinforcement learning, which sidesteps this modeling complexity. Such implementations have already succeeded on other vehicles, such as unmanned ground vehicles (UGVs), owing to their greater inherent stability compared with aerial vehicles. To train a MAV to learn how to fly, however, it must first be airborne. Analogous to teaching an infant how to walk, this paper presents a new method that provides a safe, effective environment in which a MAV can learn to fly. A test setup was constructed to magnetically levitate a MAV embedded with a permanent magnet. The apparatus allows for flexible experimentation: the position and altitude of the MAV, the constraint forces, and the resulting moments can all be adjusted or held fixed. This 'Pseudo Flight Environment' was demonstrated with a fixed-wing MAV model. To enable the model to maintain a constant altitude, a height-hold control system was devised and implemented.
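The abstract does not detail the height-hold control law. As a minimal illustrative sketch, the loop below assumes a PD controller on altitude error with gravity feedforward driving a point-mass vertical model; the mass, gains, and actuator limit are hypothetical values chosen for the example, not figures from the paper.

```python
# Hypothetical height-hold sketch for a levitated fixed-wing MAV model.
# All numerical values (50 g mass, PD gains, thrust limit) are assumed
# for illustration only.

def height_hold(alt, vel, target, kp=2.0, kd=3.0):
    """PD law on altitude error; returns a commanded vertical acceleration."""
    return kp * (target - alt) - kd * vel

def simulate(target=1.0, dt=0.01, seconds=10.0, mass=0.05, g=9.81):
    """Point-mass vertical dynamics integrated with Euler steps."""
    alt, vel = 0.0, 0.0
    for _ in range(int(seconds / dt)):
        # Feedforward cancels gravity; the PD term corrects altitude error.
        thrust = mass * (g + height_hold(alt, vel, target))
        thrust = min(max(thrust, 0.0), 2.0)  # crude actuator limits (assumed)
        accel = thrust / mass - g
        vel += accel * dt
        alt += vel * dt
    return alt
```

With these overdamped example gains the simulated model settles close to the commanded altitude without overshoot within a few seconds; in the actual apparatus, the constraint forces and moments of the levitation rig would additionally shape the closed-loop response.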
