Smart wheelchairs with semi- or fully autonomous functions can greatly improve the mobility of physically impaired persons. However, most are controlled using inputs that require physical manipulation (e.g., joystick controllers), and for persons with severe physical impairments this method of control can be too demanding. A controller based on noninvasive brain-computer interface (BCI) technology could bridge the gap between smart wheelchairs and users with severe physical impairments. Current BCI-controlled wheelchairs rely on detecting steady-state visually evoked potential (SSVEP) responses, as these typically offer the highest data transfer rates. However, this method requires the user to focus on a screen for extended periods, which strains the user and draws their attention away from their surroundings; this could be dangerous in a scenario that requires navigation around multiple moving objects. The focus of this project is to design a hybrid BCI controller that uses an electroencephalogram (EEG) headset to detect hand motor imagery (MI) and jaw electromyography (EMG) signals, controlling a smart wheelchair in conjunction with its semi-autonomous capabilities. Controllers of this kind are known to have low data transfer rates, and therefore lower accuracy and longer response times than other controllers. However, a properly structured control hierarchy between the BCI controller and the semi-autonomous system is developed to compensate for the controller's limited accuracy.