Professional Presentations
Task-Oriented Grasp Planning Based on Disturbance Distribution;
International Symposium on Robotics Research (ISRR); 2013
Grasp Planning Based on Grasp Strategy Extraction from Demonstration;
IEEE/RSJ International Conference on Intelligent Robots and Systems; IEEE; 2014
Functional Analysis of Grasping Motion;
IEEE/RSJ International Conference on Intelligent Robots and Systems; IEEE; 2013
Grasp Mapping Using Locality Preserving Projections and KNN Regression;
IEEE Intl. Conference on Robotics and Automation; IEEE; 2013
Human-Object-Object-Interaction Affordance;
Workshop in Robot Vision (WoRV)/Winter Vision Meeting; IEEE; 2013
Five Most Important Trends in Robotics Research;
Link: Hardware Partners @ Silicon Valley; Makerlink; 2015
Determining the Benefit of Human Input in Human-in-the-Loop Robotic Systems;
IEEE RO-MAN; IEEE; 2013
Virtually Transparent Epidermal Imagery;
NSF CPS PI Meeting; NSF; 2013
From Knowledge to Action: Toward Robotic Cooking;
ICRA Workshop on Sensor-Based Object Manipulation for Collaborative Assembly; International Conference on Robotics and Automation; 2017
Hand and Mind;
Singularity University; 2016
Ideomotor Learning for Robotic Manipulation;
Bay Area Robotics Symposium; Stanford University; 2016
Bringing AI into the Physical World through Robotic Hands;
Google; 2016
Renaissance of Robotic Grasping;
CCF Global Artificial Intelligence & Robotics Summit; China Computer Federation; 2016
Tasks in Robotic Grasping and Manipulation Competitions;
IROS Workshop on Development of Benchmarking Protocols for Robot Manipulation; IROS; 2017
Other Professional Activities
Co-Chair, Member Services Committee, Member Activities Board
Board Member, Electronic Products and Services Board
Associate Editor, IEEE Robotics & Automation Magazine
Associate Editor, IEEE International Conference on Robotics and Automation (ICRA) 2014
Editorial Advisory Board of Assembly Automation journal
Technologies
Competitive Advantages:
Adaptable learning; flexibility in realistic situations; real-time decision making.
Systems and methods for generating a trajectory of a dynamical system are described herein. An example method includes modeling a policy from demonstration data. The method also includes generating a first set of action particles by sampling from the policy, where each action particle in the first set includes a respective system action; predicting a respective outcome of the dynamical system in response to each action particle in the first set; and weighting each action particle in the first set according to its probability of achieving a desired outcome. The method further includes generating a second set of action particles by resampling from the weighted action particles, where each action particle in the second set includes a respective system action, and selecting the next system action in the trajectory of the dynamical system from the action particles in the second set.

USF inventors have created a trajectory-generation approach that learns a broad, stochastic policy from human demonstrations. The approach can generate a trajectory of arbitrary length and handles changes in constraints naturally. Based on the magnitude of each action and the desired future state, the system decides whether to keep, reject, or execute samples. It weights the action particles drawn from the policy by their respective likelihood of achieving the desired future outcome and obtains the optimal action from the weighted actions.
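The sample-predict-weight-resample step described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the Gaussian policy, the toy one-dimensional dynamics, the likelihood width (0.1), and the function name `select_next_action` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_next_action(policy_mean, policy_std, state, desired_state,
                       dynamics, n_particles=500):
    """One weighted-particle action-selection step (illustrative sketch).

    1. Sample a first set of action particles from the stochastic policy.
    2. Predict each particle's outcome under the dynamics model.
    3. Weight each particle by its likelihood of reaching the desired outcome.
    4. Resample a second set of particles in proportion to those weights.
    5. Select the next action from the resampled set (here, their mean).
    """
    # 1. First set of action particles drawn from the learned policy
    #    (a Gaussian policy is an assumption made for this sketch).
    actions = rng.normal(policy_mean, policy_std, size=n_particles)
    # 2. Predicted outcomes of the dynamical system for each action.
    outcomes = dynamics(state, actions)
    # 3. Gaussian likelihood of achieving the desired outcome
    #    (the 0.1 tolerance is an assumed parameter).
    weights = np.exp(-0.5 * ((outcomes - desired_state) / 0.1) ** 2)
    weights /= weights.sum()
    # 4. Second set: resample actions according to their weights.
    resampled = rng.choice(actions, size=n_particles, p=weights)
    # 5. Next system action selected from the second set of particles.
    return resampled.mean()

# Toy 1-D dynamics for illustration: next state = state + action.
dynamics = lambda s, a: s + a
action = select_next_action(policy_mean=0.0, policy_std=1.0,
                            state=0.0, desired_state=0.5,
                            dynamics=dynamics)
```

With these toy dynamics, the selected action concentrates near the value that carries the state to the desired outcome, because resampling discards particles whose predicted outcomes are unlikely to achieve it.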