This patent-pending automatic guitar tuner was the culmination of my engineering design analytics class at Stanford University. In this class, I applied 12 design tools, including FMEA, CWA, and QFD, to formulate my design for a new product. I designed and prototyped my automatic guitar tuner, the Tunr, and wrote a 30-page technical report documenting my process.
The Tunr uses a standard single-coil guitar pickup to convert the vibrations of a moving guitar string into an oscillating electrical signal. This signal is passed through an amplifier circuit and into a microcontroller, where a Fast Fourier Transform (FFT) extracts the fundamental frequency of the strummed note. After comparing this frequency with the known reference frequency for the corresponding string, the microcontroller commands a motor driver to power a DC servo motor, which tunes the string either up or down.
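To give a sense of the control flow, here is a minimal Arduino-style sketch of the compare-and-drive step. The pin numbers, tolerance band, and the measureFundamentalHz() placeholder are illustrative assumptions, not the actual Tunr firmware.

```cpp
// Hypothetical sketch of the tuning decision step (not the actual Tunr firmware).
// Pin numbers, the tolerance band, and measureFundamentalHz() are illustrative
// assumptions; the real firmware samples the pickup and runs the FFT at that step.

const int MOTOR_PWM_PIN = 9;   // assumed PWM pin to the motor driver
const int MOTOR_DIR_PIN = 8;   // assumed direction pin on the motor driver

// Standard-tuning reference frequencies (Hz) for strings E2 A2 D3 G3 B3 E4.
const float REFERENCE_HZ[6] = {82.41, 110.00, 146.83, 196.00, 246.94, 329.63};
const float TOLERANCE_HZ = 0.5;  // assumed "close enough" band

float measureFundamentalHz() {
  // Placeholder: the real version samples the amplified pickup signal and
  // uses an FFT to find the dominant frequency.
  return 82.41;
}

void setup() {
  pinMode(MOTOR_PWM_PIN, OUTPUT);
  pinMode(MOTOR_DIR_PIN, OUTPUT);
}

void tuneString(int stringIndex) {
  float error = REFERENCE_HZ[stringIndex] - measureFundamentalHz();
  if (fabs(error) < TOLERANCE_HZ) {
    analogWrite(MOTOR_PWM_PIN, 0);                       // in tune: stop the motor
  } else {
    digitalWrite(MOTOR_DIR_PIN, error > 0 ? HIGH : LOW); // flat -> tighten, sharp -> loosen
    analogWrite(MOTOR_PWM_PIN, 128);                     // drive the tuning peg
  }
}

void loop() {
  tuneString(0);  // e.g., keep correcting the low E string
}
```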
My final prototype for the class (video shown here) could successfully tune a single guitar string. Now that I have filed a provisional patent for my design, I will start the next phase of prototyping and will construct an electric guitar with the Tunr mechanism built in! See here for the technical report I wrote on my findings.
Throughout this project, I honed my skills in using design tools, designing a product, prototyping an electromechanical mechanism, and coding in C++.
In ME 210, Stanford's Mechatronics lab class, we spent the quarter learning the fundamentals of mechatronic systems. This included developing a working knowledge of signal acquisition and processing, complex circuit design, motor control, and microcontroller programming. Through this class, I gained experience designing and building systems that could read and process information sent via IR emitters, perform state-based logic operations, drive DC motors using motor drivers, and receive information via ultrasonic sensors.
The class culminated in a final project where our team of four was tasked with building a robot which could navigate around a closed course and perform specified tasks. The robot used ultrasonic sensors to orient itself within the course, IR line sensors to detect tape borders on the floor of the course, and DC motors to actuate a door to release foam balls into a container. Our robot completed all of the necessary tasks and our team received an A in the class.
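As an illustration of the kind of state-based logic the robot relied on, here is a simplified sketch. The state names, thresholds, and helper functions are assumptions for illustration, not our actual ME 210 code.

```cpp
// Illustrative state machine for the course-navigation logic. The state names,
// thresholds, and helper functions are assumptions, not the actual project code.

enum RobotState { ORIENTING, DRIVING_TO_GOAL, DUMPING_BALLS, STOPPED };
RobotState state = ORIENTING;

bool onTapeLine()      { return false; }  // placeholder: would read the IR line sensors
float wallDistanceCm() { return 100.0; }  // placeholder: would read an ultrasonic sensor
void driveForward()    { /* placeholder: command the drive motors */ }
void stopMotors()      { /* placeholder: stop the drive motors */ }
void openDoor()        { /* placeholder: DC motor opens the ball-release door */ }

void updateState() {
  switch (state) {
    case ORIENTING:
      if (wallDistanceCm() < 30.0) state = DRIVING_TO_GOAL;  // oriented off a known wall
      break;
    case DRIVING_TO_GOAL:
      driveForward();
      if (onTapeLine()) state = DUMPING_BALLS;  // tape border marks the drop zone
      break;
    case DUMPING_BALLS:
      stopMotors();
      openDoor();
      state = STOPPED;
      break;
    case STOPPED:
      break;
  }
}
```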
Throughout this project, I honed my skills in designing and prototyping mechatronic systems, using state-based decision making, acquiring and processing signals, 3D modeling in SolidWorks, and coding in C++.
The Spring Sat was my final project for the Classical Dynamics course (AA 242A) in my second year at Stanford. I was tasked with designing a dynamics problem that involved three-dimensional rigid-body motion and was relevant to aerospace engineering. I chose to create a simplified satellite design and explore the resulting motion if it were hit by a piece of space debris.
I solved the problem using three methods: 1) conservation of linear and angular momentum; 2) Lagrange's equations; and 3) Euler's equations of motion for rigid bodies. Each method yielded a set of nonlinear differential equations, which I solved numerically in Matlab. I then created plots showing the satellite's resulting motion. A link to my final report can be found here.
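For reference, the third method reduces to Euler's equations written in a body-fixed principal-axis frame. This is the standard textbook form, not an excerpt from my report:

```latex
% Euler's equations in a body-fixed principal-axis frame, where I_1, I_2, I_3 are
% the principal moments of inertia, \omega_i the body angular rates, and M_i the
% applied moments (zero during the torque-free motion after the debris impact).
\begin{aligned}
I_1\,\dot{\omega}_1 &= (I_2 - I_3)\,\omega_2\,\omega_3 + M_1 \\
I_2\,\dot{\omega}_2 &= (I_3 - I_1)\,\omega_3\,\omega_1 + M_2 \\
I_3\,\dot{\omega}_3 &= (I_1 - I_2)\,\omega_1\,\omega_2 + M_3
\end{aligned}
```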
RoboYoga is a yoga instruction application that uses a simulated robot to teach yoga poses to beginners. RoboYoga features a robotic yoga instructor and a robotic avatar that mimics the pose of a human user. The user can select a pose they want to learn from a curated list, and the instructor will demonstrate how to move into the correct position. The user is then prompted to match the pose, and they can see how close they are to the desired position by watching their robotic avatar on the display screen. The motivation for this project was to make yoga instruction fun, interactive, and accessible to all!
A joint-space controller sends torques to the instructor robot to achieve the desired pose. A Microsoft Kinect tracks the joint positions and orientations of the user, and this information is mapped into positions and orientations in the simulated yoga studio environment. These joint positions and orientations then serve as inputs to an operational-space controller that sends torques to the avatar robot. The final implementation also features a joint-space controller for the avatar that applies torques in the nullspace of the operational-space controller. This ensures that, even as the operational-point positions and orientations are tracked, the avatar's joint angles stay close to realistic human joint angles.
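The torque composition can be sketched roughly as follows, assuming Eigen and a robot model that supplies the task Jacobian, mass matrix, and operational-space force. The variable names follow the standard operational-space formulation and are not taken from the actual RoboYoga code.

```cpp
// Sketch of composing an operational-space task torque with a nullspace
// joint-space posture torque (Eigen-based illustration, not the RoboYoga code).
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

VectorXd avatarTorques(const MatrixXd& J,        // task Jacobian (6 x n)
                       const MatrixXd& M,        // joint-space mass matrix (n x n)
                       const VectorXd& F_task,   // operational-space force (6)
                       const VectorXd& q,        // current joint angles (n)
                       const VectorXd& q_human,  // joint angles tracked from the Kinect (n)
                       const VectorXd& dq,       // current joint velocities (n)
                       double kp, double kv) {   // posture gains (assumed values)
  // Operational-space torque for the pose-tracking task.
  VectorXd tau_task = J.transpose() * F_task;

  // Dynamically consistent generalized inverse and nullspace projector.
  MatrixXd Minv   = M.inverse();
  MatrixXd Lambda = (J * Minv * J.transpose()).inverse();  // task-space inertia
  MatrixXd Jbar   = Minv * J.transpose() * Lambda;
  MatrixXd N      = MatrixXd::Identity(q.size(), q.size())
                    - J.transpose() * Jbar.transpose();

  // Joint-space posture torque pulling toward human-like joint angles,
  // projected into the nullspace so it cannot disturb the primary task.
  VectorXd tau_posture = -kp * (q - q_human) - kv * dq;
  return tau_task + N * tau_posture;
}
```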
EyeClue was the culmination of a semester of work for myself and five other Georgia Tech seniors. In our entrepreneurship-focused section of the senior design class, our task was to discover a problem and create a startup company focused on solving it. Drawing on the team's experiences, we found our problem: DUI prosecutions rarely succeed due to the lack of objective evidence gathered during a DUI traffic stop. The current handheld breathalyzers and the long-practiced standardized field sobriety test (SFST) were not producing evidence that would hold up in court.
Our solution was to create a device that would objectively gather data for the most reliable indicator of intoxication in the SFST, the horizontal gaze nystagmus (HGN) test. When the officer holds up their finger and instructs the driver to follow it with their eyes as the finger moves back and forth, the officer is looking for nystagmus, or jumps in the eye’s movement as it scans horizontally. The eyeClue eliminates the officer’s subjectivity in determining the number and magnitude of jumps in the eye’s gaze.
Based on well-documented effects of alcohol on the eye’s gaze, the device employs an acceleration-calculating algorithm to detect jumps and presents the number to the officer. The device also uses an open-source eye-tracking algorithm developed by Google.
My primary responsibility for this project was to develop the jump-detection algorithm. I gathered dozens of videos of sober and drunk subjects tracking a moving stimulus with their eyes, and converted the pupil locations returned by the eye-tracking software into eye-angle measurements. This calculation used the subject's distance from the camera, as well as their interpupillary distance, to convert pupil positions in pixels into gaze directions in degrees. From this angle signal, the eye's angular velocity was computed numerically, and spikes in velocity indicated jumps. By setting a velocity threshold for defining the jumps, the device could count how many times the eye exceeded the target velocity and report the number of jumps to the user.
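A simplified version of that pipeline might look like the following. The geometry conversion and threshold handling are illustrative assumptions, not the exact eyeClue algorithm.

```cpp
// Illustrative version of the gaze-jump counter: pupil pixel offset -> gaze angle
// -> finite-difference angular velocity -> threshold crossings. The geometry
// conversion and constants are simplified assumptions, not the exact eyeClue code.
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Approximate horizontal gaze angle (degrees) from the pupil's pixel offset,
// using a pixels-per-millimeter scale (derived from interpupillary distance)
// and the subject's distance from the camera.
double gazeAngleDeg(double pupilOffsetPx, double pxPerMm, double eyeToCameraMm) {
  double offsetMm = pupilOffsetPx / pxPerMm;
  return std::atan2(offsetMm, eyeToCameraMm) * 180.0 / PI;
}

// Count excursions where the eye's angular velocity exceeds a threshold (deg/s).
int countJumps(const std::vector<double>& angleDeg, double fps, double thresholdDegPerSec) {
  int jumps = 0;
  bool above = false;
  for (size_t i = 1; i < angleDeg.size(); ++i) {
    double velocity = std::fabs(angleDeg[i] - angleDeg[i - 1]) * fps;  // finite difference
    if (velocity > thresholdDegPerSec && !above) {
      ++jumps;          // count each excursion above the threshold once
      above = true;
    } else if (velocity <= thresholdDegPerSec) {
      above = false;
    }
  }
  return jumps;
}
```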
From this project, I learned the importance of experimental data collection and technical communication. I was the designated Chief Communications Officer for the team and was responsible for writing and editing the majority of our technical reports. I was also responsible for preparing and delivering our final presentation, which won us 1st place in our class of over 20 teams.
I have always been interested in the ways that sounds can be produced electronically. For three months during the summer of 2022, I explored the lowest level of sound production and replicated as much as I could on my Arduino Uno. I started by turning a mini speaker on and off hundreds of times a second to replicate certain musical notes. However, since pure musical tones are actually AC sine waves, not the DC square waves I was producing, I built my own DC-AC inverter on a breadboard using an H-bridge configuration so that I could get more accurate tones. I eventually wired my tone-producer circuit to a 4x4 keypad so that I could play all the musical notes at the press of a button. This was my first homemade instrument!
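The core of that first experiment can be sketched as a simple Arduino loop. The pin number and the A4 = 440 Hz example are assumptions, and the real circuit drove the speaker through the H-bridge rather than a bare pin.

```cpp
// Minimal sketch of the square-wave tone idea: toggle a speaker pin at a note's
// frequency. Pin number and note choice are assumptions for illustration.
const int SPEAKER_PIN = 8;

void setup() {
  pinMode(SPEAKER_PIN, OUTPUT);
}

// Play a square wave at the given frequency for the given duration.
void playTone(float frequencyHz, unsigned long durationMs) {
  unsigned long halfPeriodUs = (unsigned long)(500000.0 / frequencyHz);
  unsigned long cycles = (unsigned long)(frequencyHz * durationMs / 1000.0);
  for (unsigned long i = 0; i < cycles; ++i) {
    digitalWrite(SPEAKER_PIN, HIGH);
    delayMicroseconds(halfPeriodUs);
    digitalWrite(SPEAKER_PIN, LOW);
    delayMicroseconds(halfPeriodUs);
  }
}

void loop() {
  playTone(440.0, 500);  // A4 for half a second
  delay(500);
}
```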
I then transferred my source code to Matlab, since I was more proficient in it than I was in C++ (Arduino’s coding language). Here, I wrote a script that would interpret a user-input chord name and play it by calculating the frequencies for the relevant notes in that chord. For example, a user-input chord name of “C# min Maj7” would return a chord consisting of the root note (C#: 277.18 Hz), the minor third (E: 329.63 Hz), the fifth (G#: 415.30 Hz), and the major 7th (B#: 523.25 Hz). The program didn’t have a preset list of these frequencies. Rather, it would take the base frequency and use a formula to produce the relevant intervals (minor third, fifth, etc.) given any root note.
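The underlying interval math is the equal-temperament relation: each semitone multiplies the frequency by 2^(1/12), so an interval of n semitones above a root is root * 2^(n/12). A small C++ sketch of the idea (my actual script was written in Matlab) might look like:

```cpp
// Equal-temperament interval math: n semitones above a root is root * 2^(n/12).
// (Illustration in C++; the original chord script was written in Matlab.)
#include <cmath>
#include <vector>

double intervalHz(double rootHz, int semitones) {
  return rootHz * std::pow(2.0, semitones / 12.0);
}

// A "min Maj7" chord is the root, minor third (+3), perfect fifth (+7),
// and major seventh (+11 semitones).
std::vector<double> minMaj7(double rootHz) {
  return { intervalHz(rootHz, 0), intervalHz(rootHz, 3),
           intervalHz(rootHz, 7), intervalHz(rootHz, 11) };
}
// minMaj7(277.18) gives roughly {277.18, 329.63, 415.30, 523.25} Hz,
// matching the C# min Maj7 example above.
```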
I also wrote a song-player script which would take as input all the chords in a song and the length each should be played (quarter note, half note, etc.) and would play the song with the correct timing. My future plan for this project is to replicate the sounds of different instruments electronically, rather than just playing pure tones. I will do this by playing the base frequency and its harmonics at specific amplitudes that characterize the instrument's timbre.
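As a rough sketch of that idea, a note's waveform would be built by summing the fundamental and its integer harmonics at instrument-specific amplitudes. The amplitude values below are made up for illustration; real timbres would come from analyzing recordings.

```cpp
// Sketch of additive synthesis: sum the fundamental and its harmonics at
// instrument-specific relative amplitudes (values here are illustrative only).
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Sample one waveform value at time t (seconds) for a note with fundamental f0 (Hz).
double sampleNote(double f0, double t, const std::vector<double>& harmonicAmps) {
  double value = 0.0;
  for (size_t k = 0; k < harmonicAmps.size(); ++k) {
    value += harmonicAmps[k] * std::sin(2.0 * PI * f0 * (k + 1) * t);
  }
  return value;
}
// e.g. sampleNote(440.0, t, {1.0, 0.5, 0.25}) mixes the fundamental with
// quieter 2nd and 3rd harmonics.
```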
The purpose of this research project was to improve the design of a mechanized red onion seedling planter to increase its reliability and feasibility for production on Bangladeshi farms. The original design had issues that prevented it from reliably feeding seedlings into the dispenser, orienting them correctly with respect to the ground, and placing dirt around the planted seedlings to keep them from falling over. Along with a graduate researcher, I redesigned the seedling dispenser and burying mechanism. I also designed experiments, conducted over a three-month period, to gather data on the new machine's performance and reliability.
One of my specific contributions was redesigning the burying wheels to include camber so they would press dirt around the planted seedlings more effectively. To test the design, I created a rig from a 3D-printed sleeve of my own design and aluminum inserts to hold the wheel axles, which I designed, machined, and threaded myself. The rig also allowed different wheels to be tested at different spacings. I then ran tests across various wheel and spacing combinations and compared the results to determine the optimal design moving forward.
From this project, I learned the importance of experimental design and of iteration in the mechanical design process. I also improved my report-writing capabilities, as my findings were collected in a single 30-page final report that had to be approved by my research professor.
For the final project of the Creative Decisions and Design course (ME 2110) at Georgia Tech, my team and I had to design and construct a 12”x24”x18” robot that would compete against other teams in a series of competitions. The robot had to complete certain tasks such as moving to a specific position, physically grabbing and moving small objects, reacting to moving mechanical stimuli, and throwing objects accurately at a short distance. This project combined the mechanical design and mechatronics aspects of mechanical engineering. In the final competition for our class section, my team came in 1st place among eight teams.
My primary responsibilities for the project were mechanical design and construction. I designed a swinging trap door that was powered by a mousetrap spring and actuated by a solenoid. This mechanism worked as intended and successfully flicked a wooden block behind the robot, earning our team maximum points for that task. I also designed a segmented extending arm that started within the robot's 18” height and reached 60” above the ground. The arm used drawer slides rigged with a continuous string-and-pulley loop driven by a DC motor.
In addition to designing some of the robot's primary subsystems, I also helped construct the majority of the final machine. Though I did not have much prior tooling experience, this project taught me to use machines such as a laser cutter, electric drills, and powered saws. This project helped me develop my mechanical design abilities, as well as my construction and report-writing skills.