My current research focuses on the development of unified Optimal Control and Reinforcement Learning based approaches for robotic control and locomotion.
As an undergraduate, I studied Electronics Engineering at the University of Mumbai, where I worked on several projects involving the development of Embedded Systems for robotic applications.
October 2017 - November 2021
Autonomous Intelligent Machines and Systems
Learning System-Adaptive Legged Robotic Locomotion Policies
Dr. Ioannis Havoutis and Prof. Ingmar Posner
July 2012 - June 2016
Final Year Project:
Micro-controller based Low-Powered Semi-Autonomous Quadcopter
November 2021 - February 2023
Dynamic Robot Systems Group
Long-Horizon Motion Planning for Legged and Manipulator Robots Using Learned System Dynamics Models
June 2018 - August 2018
Learning Platform Adaptive Locomotion Policies
June 2018 - August 2018
Robotics Systems Lab
Heterogeneous Swarm Optimization using Deep Reinforcement Learning
Here are some of the projects I have worked on. Feel free to check them out.
I mostly used C for my Embedded Systems projects, in which I largely worked on ARM Cortex-M based microcontrollers. I used C++ for my Robotics projects and have been using it extensively in my current research.
I consider Python to be a brilliant prototyping tool. I use it extensively for machine learning, especially for training RL agents as part of my research. I then port most of my models to C++ for use with physical hardware.
I do not use MATLAB often, but it has been quite a convenient tool for performing basic control optimizations.
I'm familiar with the instruction sets of the Intel 8051 and 8086, which I used for some of my embedded systems projects.
Definitely a great library. It was very important in a project where I developed my own shared-memory based inter-process communication library.
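To illustrate the general idea (only a sketch using Python's standard library, not the original C++ implementation), two processes can exchange data through a named shared-memory segment like this:

```python
# Minimal shared-memory IPC sketch using Python's stdlib
# multiprocessing.shared_memory; illustrative only, the names and
# layout here are made up for the example.
from multiprocessing import Process, shared_memory

def writer(name: str) -> None:
    # Attach to the existing segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def main() -> bytes:
    # Create a small segment that child processes can attach to by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return bytes(shm.buf[:5])
    finally:
        shm.close()
        shm.unlink()  # release the segment once both sides are done

if __name__ == "__main__":
    print(main())
```

The key property, as in the C++ version, is that both processes map the same physical memory, so no data is copied through a socket or pipe.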
I have used Eigen in every Robotics C++ project I have worked on.
It has been my go-to Deep Learning framework.
This has made it easy for me to port models trained in Python to C++.
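As a rough sketch of that workflow, assuming TorchScript as the export path (the tiny network below is made up for illustration, not my actual policy architecture):

```python
# Hedged example: export a toy PyTorch policy to TorchScript so it can be
# loaded from C++ with libtorch. The architecture is illustrative only.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int = 4, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

scripted = torch.jit.script(Policy())  # compile the module to TorchScript
scripted.save("policy.pt")             # C++ side: torch::jit::load("policy.pt")
```

On the C++ side, `torch::jit::load("policy.pt")` returns a module whose `forward` can be called directly from the control loop, with no Python interpreter on the robot.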
I definitely use PyTorch more than TensorFlow, but I do much of my RL training with TensorFlow-based frameworks and hence use it often.
I use baselines for training RL agents with some of the widely used RL algorithms. This is mostly for prototyping, after which, in most cases, I switch to my own implementations of these algorithms.
I first started RL with MuJoCo, since it was widely used alongside the OpenAI Gym framework.
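A minimal random-rollout loop with the classic (pre-0.26) Gym API looks like this; CartPole-v1 stands in for a MuJoCo task here, since MuJoCo environments such as HalfCheetah expose the same reset/step interface once mujoco-py is installed:

```python
# Random-rollout loop using the classic OpenAI Gym API (reset returns obs,
# step returns a 4-tuple). CartPole-v1 is a stand-in for a MuJoCo env.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()          # random policy
    obs, reward, done, info = env.step(action)  # advance the simulation
    total_reward += reward
env.close()
print("episode return:", total_reward)
```

Every agent I train, whatever the algorithm, ultimately interacts with the environment through this same loop.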
I also tested PyBullet for training some RL agents. In fact, PyBullet was my first choice when I started training an RL policy for controlling the ANYmal quadruped.
Most RL algorithms are notoriously sample-inefficient, so training a feasible RL policy calls for very fast simulators. That is largely why I enjoy using RaiSim so much; it is now my go-to simulator for RL.
I experimented with V-REP for RL. I cannot say I use it a lot, though I do like the drag-and-drop features it supports.
I use Gazebo with all of my ROS projects. Everything I do on the real robot is first tested using Gazebo.