Other Projects

This page details non-research projects that I've undertaken personally or for coursework. Click on project titles to see more detail. Some minor projects have been omitted for brevity.

2020

Course Project: Deep Multi-Task and Meta Learning (CS 330)
Stanford University
Collaborators: Philipp Wu

[Report]

Meta-learning, or learning-to-learn, is a relatively new sub-field of deep learning that involves training machines to be adaptable to a variety of tasks rather than specializing in any particular one. This paradigm is useful when data are sparse or when training a model from scratch is simply impractical. Meta-learning is in fact the default mode of learning for humans and other biological organisms: during childhood, we develop very strong priors for how the world roughly operates, and we tune our expectations as we observe task-specific information. For example, we have strong priors for how projectile motion should look (e.g. tossing a ball in the air), but we can adjust our spatiotemporal predictions for a ball-shaped balloon after seeing it tossed just once.

It's this adaptability that we target in this project. We investigate the one-shot setting, where the agent observes one trajectory of some task to adjust its task-specific physics-based prior. Then, we ask the agent to predict the rest of any query trajectories we give it. For example, it gets to see some arbitrary object being pushed on a table; then, given one second of new pushing data, we may ask it to predict the following nine seconds.

The key tool we are using is the extended Kalman filter (the implementation here piggybacks on the Replay Overshooting code), which we have used with great success in a learning-based single-task prediction setting. For this project, we saw reasonable results predicting various object types in the MIT Push dataset.

Course Project: Optimal and Learning-Based Control (AA 203)
Stanford University
Collaborators: Daniel Sotsaikich, Brent Yi

[Video]

This project sought to implement the multi-rate safe control scheme from [this paper] in simulation. The paper considers the simple case of a Segway; we instead consider the 3D motion of a quadcopter.

The key idea is to maintain two controllers: a slow one that generates high-level goals using some computationally intensive planning scheme like model predictive control (MPC), and a fast one that maintains immediate safety using control barrier functions (CBFs), preventing the system from colliding with obstacles in the environment. We show that there are control frequencies at which the slow controller alone fails, but the combination of the slow and fast controllers succeeds in safely stabilizing the quadcopter to a desired trajectory.
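
To make the fast layer concrete, here is a minimal sketch of a CBF-style safety filter for a single integrator avoiding one circular obstacle. This is a toy stand-in, not our quadcopter controller: the dynamics, gains, and geometry are invented for illustration. With a single affine constraint, the safety QP even has a closed-form projection:

```python
import numpy as np

def cbf_filter(x, u_nom, c, r, alpha=1.0):
    """Minimally modify u_nom so the CBF condition h_dot + alpha*h >= 0 holds.

    For single-integrator dynamics x_dot = u and h(x) = |x - c|^2 - r^2,
    the condition is linear in u, so the one-constraint QP has a
    closed-form projection onto the safe half-space.
    """
    h = np.dot(x - c, x - c) - r**2
    a = 2.0 * (x - c)              # gradient of h
    b = -alpha * h                 # require a @ u >= b
    if a @ u_nom >= b:
        return u_nom               # nominal input is already safe
    return u_nom + (b - a @ u_nom) / (a @ a) * a  # project onto the constraint

# toy rollout: drive toward a goal whose straight-line path crosses an obstacle
x = np.array([-2.0, 0.01])
goal = np.array([2.0, 0.0])
c, r, dt = np.array([0.0, 0.0]), 0.5, 0.01
min_dist = np.inf
for _ in range(3000):
    u = cbf_filter(x, goal - x, c, r)  # simple proportional "slow" controller
    x = x + dt * u
    min_dist = min(min_dist, np.linalg.norm(x - c))
```

In the rollout, the nominal controller would plow straight through the obstacle; the filter deflects the input just enough to skirt it while still reaching the goal.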

To see the controller in action, check out the linked video!

Course Project: State Estimation and Filtering for Aerospace Systems (AA 273)
Stanford University
Collaborators: Brent Yi

[Report]

While most filter-based dynamics learning algorithms (e.g. Deep Markov Models, Deep Variational Bayes Filters, Kalman Variational Autoencoders, etc.) operate in discrete-time, we wanted to study the use of continuous-time filters for the same effect. Inspired by the advent of neural ODEs, we explored the setting where we used the Kalman-Bucy filter to consume data and a differentiable ODE solver to try learning the underlying nonlinear dynamics governing the system evolution.

The key idea behind the filter is posterior inference: computing a belief over some latent state from a sequence of observations. The classical Kalman filter provides a very fast, iterative way to conduct posterior inference, which makes it perfect for an optimization-based approach to learning dynamics and observation models.
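
As a sketch of that iterative inference, here is the textbook discrete-time predict-update cycle (the continuous-time Kalman-Bucy filter we actually used replaces these difference equations with ODEs; the matrices and measurements below are toy values for illustration):

```python
import numpy as np

def kalman_step(mu, S, y, A, C, Q, R):
    """One predict-update cycle of the classical discrete-time Kalman filter.

    mu, S : current posterior mean and covariance of the latent state
    y     : new observation;  A, C : dynamics / observation matrices
    Q, R  : process / measurement noise covariances
    """
    # predict: push the belief through the dynamics
    mu_p = A @ mu
    S_p = A @ S @ A.T + Q
    # update: posterior inference given the new observation y
    K = S_p @ C.T @ np.linalg.inv(C @ S_p @ C.T + R)
    mu_new = mu_p + K @ (y - C @ mu_p)
    S_new = (np.eye(len(mu)) - K @ C) @ S_p
    return mu_new, S_new

# toy example: a 1D random walk observed directly; belief sharpens around 1.0
A = C = np.eye(1)
Q, R = 1e-4 * np.eye(1), 0.1 * np.eye(1)
mu, S = np.zeros(1), 10.0 * np.eye(1)
for y in [1.02, 0.97, 1.01, 0.99, 1.00]:
    mu, S = kalman_step(mu, S, np.array([y]), A, C, Q, R)
```

Each step is a couple of matrix multiplies, which is why this recursion slots so cleanly into a gradient-based learning loop.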

In these preliminary results, we found that there were some benefits to the continuous-time formulation, though our results were a little raw (preceding our Replay Overshooting paper by several months). Ultimately, the project was fairly novel and it was fun finding rarely-explored perspectives connecting state estimation theory and deep learning.


Course Project: Multi-Robot Control, Communication, and Sensing (AA 277)
Stanford University
Collaborators: Bibit Bianchini, Lauren Luo

[Report] [Slides]

Consider a situation where you are a leader in a group of heterogeneous robotic assistants and you want to carry some heavy object from one location to another through a busy environment. In this scenario, each agent has a distinct field of view, but ultimately would like to enforce some notion of safely carrying the object while still following the leader. Since humans don't have a WiFi or Bluetooth module in their brains, we additionally enforce that the cooperation is done without explicit communication except through sensed forces on the load.

This project studied a heuristically-motivated decentralized dynamical prediction strategy where each agent estimates the aggregate behavior of every other agent in the system and tries to apply an input it believes will keep the load safe using control barrier functions. We also developed a notion of dynamic trust, a way for robots to modulate the amount of faith they have in the group to maintain safety. Our preliminary results in simulation show that this strategy is effective for following a nominal trajectory even when obstacles directly block the motion of the system.


2019

Course Project: Intro to Robot Autonomy (AA 274A)
Stanford University
Collaborators: Daniel Sotsaikich, Brent Yi

[Slides]

In this project, we were tasked with creating a "delivery" robot that could navigate a mock environment to do food pickup and delivery. The system was a simple differential-drive robot (TurtleBot) with a Velodyne lidar mounted on top. The robot operated in two phases: a manual exploration phase, in which a SLAM algorithm mapped out the environment (including key "pickup" and "delivery" locations), and an autonomous delivery phase, in which we could specify any sequence of pickup and delivery locations and the robot would plan and follow trajectories between them, accounting for obstacles and replanning as necessary.

All software was built on ROS. The delivery logic was encoded in a finite state machine, and planning was done using A*. We also implemented a web-based command center that allowed any web-connected device (like a phone or laptop) to manually control the robot, displayed the vendors on the map, and enabled quick switching between exploration and delivery modes. Unfortunately, there is no video of our presentation run, but it worked!
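
For a flavor of the planner, here is a compact grid-based A* sketch. Our actual stack ran inside ROS on a lidar-derived costmap; the 4-connected grid and Manhattan heuristic below are simplifications for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (1 = obstacle), Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                           # already expanded via a better path
        came_from[cur] = parent
        if cur == goal:                        # reconstruct path back to start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 2))  # routes around the wall of 1s
```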

Course Project: Mechatronics (ME 102B)
UC Berkeley
Collaborators: Miranda Maravilla-Louie, Matt Morrison, Sepehr Rostamzadeh, Daniel Sotsaikich, Kriya Wong

[Video] [Poster]

This was one of my favorite projects and the brainchild of group member Matt (who humorously narrates the linked video). We sought to design and fabricate from scratch a 2-speed vinyl record player embedded in a real redwood stump, acquired from a tree felled by a storm on the property of one of Matt's friends. The end product featured a beautiful wooden exterior with a simple, custom-designed interface. I was primarily involved in electronic integration, motor control, and hardware specification, and secondarily involved in the mechanical design of the moving parts like the turntable and tone arm.

First, the stump's ends were flattened for use as datum surfaces for a wood router. Even so, the raw stump had many natural cracks and holes that made it unsuitable for immediate processing; a few rounds of overnight epoxy filling sealed those areas and strengthened the interior material. Over the course of a few months, the interior was milled out so the electronics and other mechanical components could be mounted.

The turntable was machined from aluminum on a CNC mill and designed to be stiff and structurally sound while remaining light. Record players have two common methods for actuating the turntable: direct drive and belt drive. For this project, we opted for a belt drive so we could exploit a high drive ratio to run the motor at a higher RPM, allowing the use of a lower-resolution encoder.

With the belt drive system, we also had to design a method to consistently tension the belt. For this, the entire motor assembly was placed into a carriage-style sliding assembly that allowed the user to move its position until the belt was sufficiently tensioned.

The tone arm of a record player is a delicate piece: it holds the needle and must apply a very precise force on the record. Too little, and the audio signal will be weak; too much, and the record risks being damaged by the needle. Further, the base of the tone arm must exhibit very low friction so that, as the record grooves push on the needle, the base rotates smoothly. The placement of the tone arm is also important: the closer the tone arm's direction stays to tangent with the grooves, the better the sound quality. We used the Lofgren B method to place the tone arm in an optimal position. We also implemented an auto-stop system that detects when the record has ended and stops the turntable.

The whole system featured a fairly interconnected set of electronic parts all powered at different voltage levels. Our goal was to fully integrate the power, speaker, motor control, and audio processing circuits together on a physically small module that was easy to design around and that minimized the weight of the player. Most of the electronic components were bought off-the-shelf, including a buck converter, the Teensy 3.6 as the microcontroller, the audio adapter board for signal processing, the motor driver board and motor, a logic level shifter, and an amp to go along with the speakers. I also wrote some low-level code to read the motor encoders and implemented a simple PID scheme to control the motor speed.
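
The speed loop is conceptually simple; here is a hedged sketch of the sort of PID update involved, driving a toy first-order motor model to 45 RPM (one of the player's two speeds). The gains and motor constants are invented for the example, not the values we tuned on the Teensy:

```python
class PID:
    """Minimal discrete PID controller (similar in spirit to the speed loop)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt           # accumulate for the I term
        deriv = (err - self.prev_err) / self.dt  # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order motor model: d(speed)/dt = (K*u - speed) / tau
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=0.001)
dt, speed = 0.001, 0.0
target = 45.0   # spindle setpoint in RPM; 45 is one of the two record speeds
for _ in range(5000):
    u = pid.update(target, speed)
    speed += dt * (1.0 * u - speed) / 0.05       # K = 1, tau = 50 ms
```

The integral term is what removes the steady-state offset that a pure P controller would leave under belt friction and load torque.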

Stumpy is now retired and resides with Matt's parents in San Diego.


Course Project: Nonlinear Systems (ME C237)
UC Berkeley
Collaborators: Daniel Sotsaikich, Philipp Wu

[Slides]

This project sought to control a 6-DOF robot arm along a pre-computed trajectory using input-output linearization. In particular, our goal was to judge the capacity of this nonlinear controller to perform complex tasks such as ball-catching by analyzing its performance in tracking a relatively quick-moving trajectory. In doing so, we abstracted the ability of the controller away from other aspects of task completion, such as perception, trajectory optimization, etc. We found the controller's performance to be passable for a relatively naive application of I/O linearization.

The arm, its model, and all control code are part of the Blue project (now at Berkeley Open Arms), and are the product of research done from UC Berkeley's Robot Learning Lab. I have no affiliation with the RLL.

Since we had direct access to the arm's physical parameters, we could derive an analytical model for the robot dynamics from standard open-chain manipulator models. We let some symbolic computations run for a few days to compute all the necessary matrix functions, then compiled the results into fast C++ functions. We also generated smooth trajectories using target points and cubic spline interpolation. Our desired trajectory was just chosen as three random points that the robot cycled between. After some iteration, we found that some PD-style modifications to the I/O linearization controller were enough to achieve fairly good tracking. For more details, see the slides.
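
For intuition, here is what I/O linearization looks like on a 1-DOF pendulum stand-in; our 6-DOF version used the compiled symbolic arm model, and every constant below is illustrative. Inverting the model reduces output tracking to linear PD error dynamics:

```python
import numpy as np

# 1-DOF pendulum stand-in for the arm: inertia M(q) = m*l^2, gravity torque m*g*l*sin(q)
m, l, g0 = 1.0, 1.0, 9.81
M = lambda q: m * l**2
grav = lambda q: m * g0 * l * np.sin(q)

def io_lin_control(q, qd, q_des, qd_des, qdd_des, kp=100.0, kd=20.0):
    """Computed-torque / I/O-linearizing law: cancel the nonlinear dynamics,
    then impose linear PD error dynamics on the output q."""
    v = qdd_des + kd * (qd_des - qd) + kp * (q_des - q)  # virtual input
    return M(q) * v + grav(q)                            # invert the model

# track a sinusoidal reference, starting away from it
dt, q, qd = 0.001, 0.5, 0.0
for k in range(5000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    u = io_lin_control(q, qd, q_des, qd_des, qdd_des)
    qdd = (u - grav(q)) / M(q)       # pendulum dynamics (model is exact here)
    q, qd = q + dt * qd, qd + dt * qdd
```

Because the model inversion is exact in this toy, the tracking error obeys e_ddot + 20 e_dot + 100 e = 0 and decays to zero; on hardware, model mismatch is what made the PD-style modifications necessary.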


2018

Personal Project: Attitude and Heading Reference System (AHRS)
Collaborators: Philipp Wu

[Code]

This was a fun personal project that introduced me to more complex ideas in sensor fusion, communication protocols like I2C, and hardware calibration routines. The idea of an attitude and heading reference system is to provide attitude information (roll, pitch, and yaw, or some equivalent rotational coordinates) while also estimating the heading relative to the global magnetic field. Typically, you use a 9-axis IMU consisting of a magnetometer, accelerometer, and gyroscope, and then apply some attitude estimation algorithm to filter the signals.

The traditional method here is the Kalman filter, though there are newer, more computationally efficient methods (with looser accuracy guarantees) like Mahony's and Madgwick's filters, which operate in quaternion space. We opted to implement Madgwick's algorithm for this project. One of the most fun parts of the project was learning to mess around with bit registers on these sensors to set things like sensitivity/precision, communication modes, and data rates. We ultimately implemented a fair amount in both C++ and Python, but the system was never really used for anything, and by 2018 we were too busy to make much more progress on it.
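
The accelerometer-plus-gyro core of Madgwick's filter fits in a few lines. This is a sketch following the published algorithm, with the magnetometer correction omitted for brevity and the gain/rates chosen arbitrarily; it is not our actual firmware:

```python
import numpy as np

def madgwick_update(q, gyro, accel, beta=0.1, dt=0.01):
    """One step of Madgwick's gradient-descent orientation filter (gyro + accel).

    q     : unit quaternion [w, x, y, z] for the sensor orientation
    gyro  : angular rate (rad/s);  accel : accelerometer reading (any scale)
    beta  : gradient step size trading gyro drift against accel noise
    """
    q0, q1, q2, q3 = q
    # objective: rotate the earth's gravity direction into the sensor frame
    # and compare with the normalized accelerometer measurement
    a = accel / np.linalg.norm(accel)
    f = np.array([
        2 * (q1 * q3 - q0 * q2) - a[0],
        2 * (q0 * q1 + q2 * q3) - a[1],
        2 * (0.5 - q1**2 - q2**2) - a[2],
    ])
    J = np.array([
        [-2 * q2, 2 * q3, -2 * q0, 2 * q1],
        [ 2 * q1, 2 * q0,  2 * q3, 2 * q2],
        [    0.0, -4 * q1, -4 * q2,   0.0],
    ])
    grad = J.T @ f
    if np.linalg.norm(grad) > 0:
        grad /= np.linalg.norm(grad)       # normalized gradient step
    # quaternion rate from the gyro: 0.5 * q (x) [0, w]
    gx, gy, gz = gyro
    q_dot = 0.5 * np.array([
        -q1 * gx - q2 * gy - q3 * gz,
         q0 * gx + q2 * gz - q3 * gy,
         q0 * gy - q1 * gz + q3 * gx,
         q0 * gz + q1 * gy - q2 * gx,
    ]) - beta * grad
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

# stationary, level sensor: the estimate should stay at the identity rotation
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = madgwick_update(q, gyro=np.zeros(3), accel=np.array([0.0, 0.0, 1.0]))
```

The appeal on a microcontroller is that this is all additions and multiplies: no matrix inversions, unlike the Kalman update.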

2017

Course Project: Model Predictive Control and Loop Shaping (ME C231A)
UC Berkeley
Collaborators: Rachel Thomasson, Philipp Wu, Allan Zhao

This project sought to implement a model predictive controller in simulation for a 6-DOF arm following pre-computed trajectories. The arm not only successfully followed these trajectories, but also demonstrated rejection of randomly generated Gaussian perturbations applied to the end-effector. Simulations were conducted in MATLAB using the nonlinear solver fmincon.

The arm, its model, and all control code are part of the Blue project (now at Berkeley Open Arms), and are the product of research done from UC Berkeley's Robot Learning Lab. I have no affiliation with the RLL.

I chose two trajectory geometries for study: a square and a helical path. Curves describing these motions were plotted in Cartesian space and discretized into sets of target points used for the controller.

Model predictive control operates on the principle of repeatedly solving constrained finite-time optimal control (CFTOC) problems until an end condition is satisfied (there may not be one). The CFTOC problem is defined by a cost function and constraints on the state and input spaces. We defined our states as the joint positions and velocities and our inputs as the actuator torques. An equality constraint is also applied: the evolution of the system's dynamics.

To reduce computation time, at each time step we linearized the dynamics around the current state of the system. Our controller divided the trajectory-tracking problem into several smaller ones, each considered solved once the end-effector came close to the next point in the discretized trajectory. The end-effector would attempt to travel in a straight line in Cartesian space between these points. We also showed that the controller is quite robust to random Gaussian force disturbances applied to the end-effector.

The approach taken here was quite naive, and it was a while before I learned more advanced optimal control techniques like iLQR or techniques in sequential convex programming.


Course Project: Intro to Robotics (EE C106A)
UC Berkeley
Collaborators: Kireet Agrawal, David Gealy, Rachel Thomasson, Philipp Wu

[Video] [Slides]

This project sought to implement a real-time method for catching a ball with a 6-DOF robotic arm developed by the Robot Learning Lab. At the time, dynamic methods were deemed too slow to run online, so we used kinematic methods instead. A Kinect identified the ball as it was tossed, and a Kalman filter estimated the ball's position once it was in range of the arm. When high enough confidence was established, the arm would move to the position it predicted would intercept the ball's trajectory. We successfully caught almost all underhand throws.

The arm, its model, and all control code are part of the Blue project (now at Berkeley Open Arms), and are the product of research done from UC Berkeley's Robot Learning Lab. I have no affiliation with the RLL.

The arm was mounted on a static frame clamped to a table. Ball-tracking was done using a Kinect, which was mounted snugly on an auxiliary frame that I attached to the main base. Rather than a traditional end-effector, a velcro ball and a corresponding pad were used to perform the catching. Note the Vive virtual reality trackers attached to the arm's links: these were not used for position feedback, but rather for initial calibration of the arm's position, so the arm could be visualized in rviz on lab computers.

The Kinect computed the ball position using a pinhole camera model, and the ball's trajectory was estimated with a Kalman filter under projectile-motion dynamics. We fixed a desired catching distance, which defines a sphere around the arm, and continuously recomputed the intersection between the predicted ball trajectory and this sphere; that intersection became the desired pad position.
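
The intersection computation reduces to finding the smallest positive root of a quartic in time. A sketch, with invented numbers for the sanity check (gravity is switched off there so the answer is easy to verify by hand):

```python
import numpy as np

def catch_point(p0, v0, radius, gravity=np.array([0.0, 0.0, -9.81])):
    """Find where a ballistic trajectory p(t) = p0 + v0*t + 0.5*g*t^2 first
    crosses the sphere |p| = radius centered at the arm's base.

    |p(t)|^2 - radius^2 is a quartic in t; take its smallest positive real root.
    """
    c2 = 0.5 * gravity
    coeffs = [
        c2 @ c2,                        # t^4
        2.0 * (v0 @ c2),                # t^3
        v0 @ v0 + 2.0 * (p0 @ c2),      # t^2
        2.0 * (p0 @ v0),                # t^1
        p0 @ p0 - radius**2,            # t^0
    ]
    roots = np.roots(coeffs)            # leading zeros are handled by np.roots
    hits = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    if not hits:
        return None                     # trajectory never enters the sphere
    t = min(hits)
    return p0 + v0 * t + c2 * t**2

# sanity check, gravity off: straight flight from (3,0,0) toward the origin
# hits the unit sphere at (1,0,0)
p = catch_point(np.array([3.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                radius=1.0, gravity=np.zeros(3))
```

Re-running this every frame as the Kalman estimate of (p0, v0) improves is what keeps the target pad position up to date.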


Course Project: Microprocessor-Based Mechanical Systems (ME 135)
UC Berkeley
Collaborators: Denny Min, Vedang Patankar, Patrick Scholl

This project sought to hack a toy RC car and implement two main functions: following the pulse of a handheld ultrasonic beacon or navigating to a target GPS coordinate in real time. The hardware provided to us was the NI MyRIO, and we supplemented that with ultrasonic and infrared sensors, a magnetometer, an accelerometer, and a GPS module. We were successful in implementing both functions.

The system was an RC car with several layers. The inner layer housed the motors and H-bridges, the middle layer housed the MyRIO, and the upper layer held the battery and the sensor array. At the front of the car were three ultrasonic receivers to interface with the beacon.

The beacon was composed of a single ultrasonic transducer and an array of IR LEDs. The IR signal was used to synchronize the clocks between the ultrasonic transducer and the receiver array on the car so that accurate time of flight could be recorded both for distance and direction control. I designed the circuit for the beacon and implemented a state machine architecture to distinguish between waiting on IR signals, waiting on ultrasonic signals, and interpreting ultrasonic signals to actuate the motors.
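
A sketch of the time-of-flight geometry, simplified to two receivers (the actual car used three; the baseline here is invented and the far-field approximation is an assumption of the sketch):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def beacon_estimate(t_left, t_right, baseline):
    """Estimate range and bearing to the beacon from two ultrasonic times of flight.

    The IR flash zeroes both clocks, so each arrival time is a true time of
    flight. In the far field, the path-length difference between the receivers
    gives the bearing: sin(theta) ~ c * (t_left - t_right) / baseline.
    """
    d_left = SPEED_OF_SOUND * t_left
    d_right = SPEED_OF_SOUND * t_right
    rng = 0.5 * (d_left + d_right)                       # average range
    s = max(-1.0, min(1.0, (d_left - d_right) / baseline))
    return rng, math.asin(s)

# beacon dead ahead, 1 m away: equal times of flight, zero bearing
rng, bearing = beacon_estimate(1.0 / 343.0, 1.0 / 343.0, baseline=0.2)
```

With a third receiver, the sign ambiguities shrink and the direction estimate becomes robust enough to steer the motors directly.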

The other mode of the vehicle was GPS coordinate-tracking. In this mode, the user simply input the global coordinates the vehicle should drive to, and it would automatically move to that location. Unfortunately, the car was stripped for parts after the project, and there wasn't much documentation along the way.


Course Project: Advanced Programming with MATLAB (E 177)
UC Berkeley

[Code]

This project sought to implement a solver for statically determinate 2D beams under transverse loads, intended as an educational tool for new engineering undergraduates. In particular, my goal was for the solver to analytically calculate shear and moment diagrams for an arbitrarily large number of loads, including distributed loads represented by arbitrary real functions. Online beam calculators exist, but they typically limit how many loads can be applied and only consider uniform distributed loads. This project was part of a larger submission with three distinct parts; there were no collaborators for this portion.

The calculator analyzed the system constraints defined by the user to verify that the constraints were valid. Then, the boundary conditions were applied and symbolic integration was performed to retrieve the shear and moment functions describing the beam's reaction to external loadings.
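
In Python, the same symbolic pipeline might look like this SymPy sketch for the textbook case of a uniform load on a simply supported beam (the actual project was written in MATLAB and handled arbitrary load functions and constraint checking):

```python
import sympy as sp

# simply supported beam of length L with a uniform transverse load w:
# integrate the load to get shear, integrate shear to get moment
x, xi, L, w = sp.symbols("x xi L w", positive=True)

R_A = w * L / 2                               # left reaction, by symmetry
V = R_A - sp.integrate(w, (xi, 0, x))         # shear: V(x) = R_A - integral of load
M = sp.integrate(V.subs(x, xi), (xi, 0, x))   # moment: M(x), with M(0) = 0

M_mid = sp.simplify(M.subs(x, L / 2))         # classic result: w*L**2/8
```

The boundary conditions enter as the constants of integration (here M(0) = 0 at the pin), which is exactly how the solver recovered unique shear and moment functions.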


3D Printing Hack-a-thon: 3DMC
UC Berkeley
Collaborators: Kireet Agrawal, Travis Brashears, Sepehr Rostamzadeh, Philipp Wu

This project sought to prototype a flexible workout band that could analyze the movements of an individual engaging in physical activity and give encouragement or advice in response. The allotted time for the hack-a-thon was 24 hours. We were successful in producing the band, collecting data, and returning basic feedback to the user. However, more complex analysis of the data was not possible given our time constraints. We placed 2nd at the competition.

The band itself was 3D-printed in a flexible filament called NinjaFlex. This material printed extremely slowly and was very prone to failure; we had about three failed prints over the course of the hack-a-thon, but were lucky to be able to print on multiple printers at once. The user wears the band on the upper arm and performs exercises. Onboard is a 9-axis IMU that measures angular data during a motion. There is also an audio unit and speaker that can give live feedback to the user, though this feature was not fully implemented within the allotted time.


2016

Robotics Competition: Dorm Ex Machina
UC Berkeley
Collaborators: Adam Castiel, Denny Min

[Video]

This project was a part of a larger one whose goal was to prototype a whiteboard marker printer, a device that could analyze an image and reproduce it on a whiteboard. My portion of the project was the image analysis algorithm that took an image as an input and produced two outputs: a visual of the path a marker would take to draw the image, and a set of instructions passed to servos commanding the device. I was successful in implementing the algorithm, but the resolution of the servos permitted only simple images to be drawn.

The path generated for the marker was meant to replicate human tendencies in drawing: outlines tend to be traversed first, with interior details filled in afterward. This feature-based approach was designed to make the device feel more artistic than mechanistic. The path generator was written in Java, and the generated path was converted into servo commands for the physical drawing. Unfortunately, little documentation remains of the mechanical system, which was destroyed after the competition to reuse its parts.

To visualize the algorithm, check out the linked video!