Robot Grasping and Manipulation

A brief introduction to the field of robot grasping and manipulation research

Kenechi Franklin Dukor
Aug 27, 2021
Source: Economic Times, India Times

Britannica defines a robot as any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. By extension, robotics is the engineering discipline dealing with robots' design, construction, and operation.

There are various fields in robotics research, such as Human-Centered and Life-Like Robotics; Robot Structures; Manipulation and Interfaces; Field and Service Robotics; Mobile and Distributed Robotics; and Sensing and Perception (source: Springer Handbook of Robotics).

Grasping and Manipulation

Robots have increasingly been used in environments that require object grasping and manipulation. They are helpful in households, where there may be a need to pick and place objects such as books, balls, and toys, and on manufacturing production lines, where they pick and move products such as packaged goods and mechanical parts.

Research in robotic grasping and manipulation is believed to have started as far back as the 1970s, around the time the science-fiction classic Westworld put robots on screen. The film's robots were indistinguishable from their human guests, except that their hands had not yet been perfected.

In grasping, the robot aims to achieve a secure hold on an object with its hand. The key success indicator is identifying the object and grasping it firmly (the pick) despite uncertainty in the object's pose, geometry, and texture. Once the object is firmly held, controlling its motion becomes as easy as controlling the motion of the hand.
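The firm-grasp criterion can be made concrete with the classic antipodal (two-finger) grasp test: a parallel-jaw grasp resists slipping when the line joining the two contact points lies inside the friction cone at each contact. Below is a minimal, purely illustrative 2D sketch; the function name, friction coefficient, and example contacts are my own assumptions, not taken from any particular system.

```python
import math

def is_antipodal(p1, n1, p2, n2, mu=0.5):
    """Check the two-finger (antipodal) grasp condition in 2D.

    p1, p2: contact points; n1, n2: inward-pointing unit surface normals.
    The grasp is force-closed (with friction) when the line joining the
    contacts lies inside both friction cones of half-angle atan(mu).
    """
    # Unit vector along the line from contact 1 to contact 2.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    ux, uy = dx / d, dy / d
    half_angle = math.atan(mu)
    # Angle between the contact line and each inward normal (clamped for acos).
    a1 = math.acos(max(-1.0, min(1.0, ux * n1[0] + uy * n1[1])))
    a2 = math.acos(max(-1.0, min(1.0, -ux * n2[0] - uy * n2[1])))
    return a1 <= half_angle and a2 <= half_angle

# Opposing faces of a box: the normals point at each other, so it grasps.
print(is_antipodal((0, 0), (1, 0), (1, 0), (-1, 0)))  # True
# Perpendicular faces: the contact line leaves the friction cones.
print(is_antipodal((0, 0), (1, 0), (1, 1), (0, -1)))  # False
```

Real grasp planners score many such candidate contact pairs, sampled from a perceived object surface, and execute the best-scoring one.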

AUTOLAB’s Ambidextrous robot grasping (UC Berkeley)

On the other hand, manipulation involves applying force or motion to that same object to change its state and orientation in an environment.

OpenAI's robotic hand manipulating a Rubik's Cube (source: VentureBeat)

Robotic perception grounds the application of robots in the real world. Just as humans rely on sensory organs for vision, hearing, touch, taste, and smell, robots need comparable sensing capabilities to assist humans in their environment.

Source: [Nan Tian et al 2018]

Robotic perception has achieved success in tasks like vision, haptics, tactile perception, and hearing. But how does this relate to grasping and manipulation tasks?

Grasping and Manipulation in Reality

We know without a doubt that robots have speed and strength far superior to human hands, but they cannot reliably grasp unfamiliar objects. Variations in object shape, size, and texture still make it challenging to build super-intelligent machines for household, manufacturing, and security applications. The difficulty stems from the inherent uncertainty in the physics, perception, and control of the robot.

Almost all applications, from manufacturing to service to security, would benefit if robots could reliably grasp any object across a diverse range of shapes and sizes, from rigid to deformable, and under a variety of frictional conditions.

Despite over 40 years of research, this problem remains largely unsolved. Robots therefore rely on simplifications of their environment, such as ensuring that objects are arranged in a specific manner, or using strong backlights to allow better perception and localization of the object of interest.

[source: The Verge]

The two key challenges in extending robotic tasks like pick-and-place to a broader range of settings are building robust perception systems to locate objects and planning robust grasp locations on those objects.
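Those two challenges, locating the object and choosing where to grasp it, can be illustrated with a toy pipeline. This is a hedged sketch, not any system described here: `locate_object` and `plan_grasp` are hypothetical names, perception is reduced to a binary occupancy grid, and planning to a bounding-box heuristic.

```python
def locate_object(grid):
    """Toy 'perception': bounding box (r0, c0, r1, c1) of occupied cells."""
    cells = [(r, c) for r, row in enumerate(grid)
             for c, v in enumerate(row) if v]
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return min(rows), min(cols), max(rows), max(cols)

def plan_grasp(bbox):
    """Toy 'grasp planning': close the jaws across the narrower extent,
    centered on the box, so the object fits between the fingers."""
    r0, c0, r1, c1 = bbox
    center = ((r0 + r1) / 2, (c0 + c1) / 2)
    axis = "vertical" if (r1 - r0) < (c1 - c0) else "horizontal"
    return center, axis

grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plan_grasp(locate_object(grid)))  # ((1.5, 2.0), 'vertical')
```

Real systems replace the occupancy grid with learned object detectors on RGB-D images and the heuristic with learned or analytic grasp-quality models, but the perceive-then-plan structure is the same.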

The glaring importance of this field has spurred increased industry interest, leading the likes of e-commerce giant Amazon to challenge researchers in an annual competition for the past three years.

The Amazon Robotics Challenge asks researchers to design and build a robot that can sort the items for a customer's order from containers and place them together in boxes. The items are varied, ranging from bottles and bowls to soft toys and sponges, and are initially jumbled together, making object identification and mechanical grasping difficult.

Some companies that apply robot grasping and manipulation research that you should check out:

Ambi Robotics, German Bionic, Covariant, Nimble Robotics, MegaRobo, XYZ Robotics, Amazon Robotics, Fetch Robotics, Soft Robotics, Simbe Robotics, Locus Robotics

Some recent research works in robot grasping and manipulation you should look at:

Dex-Net (1, 2, 3), DOPE, PPR-Net, OP-Net, SingleGrasp, MultiGrasp, Two-Stage System, GG-CNN, Hand-Eye Coordination, Supersizing self-supervision, FC-GQ-CNN, QT-Opt, QT-Opt RCAN, TossingBot, S4G, GraspNet, Grasping in the Wild (ref: Kleeberger et al. 2020)

This is an introduction to a survey I have published. Since a large body of research has been conducted in this field, it is gradually growing cumbersome for newcomers. The survey aims to ease the entrance of researchers into the field and foster continuity of the ongoing research.

Link 1: Zenodo — https://zenodo.org/record/5559125#.YnEwzfPMK3I

Link 2: ResearchGate — https://www.researchgate.net/publication/355585161_A_Survey_Robot_Grasping

Important References

[1] Mavrakis, N., Stolkin, R. (2020). Estimation and exploitation of objects' inertial parameters in robotic grasping and manipulation: A survey. Robotics and Autonomous Systems, 124, 1–3. https://doi.org/10.1016/j.robot.2019.103374

[2] Yamazaki, K., Ueda, R., Nozawa, S., Kojima, M., Okada, K., Matsumoto, K., Ishikawa, M., Shimoyama, I., Inaba, M. (2012). Home-Assistant Robot for an Aging Society. Proceedings of the IEEE, 100(8), 2429–2441. https://doi.org/10.1109/jproc.2012.2200563

[3] Christensen, H., Amato, N., Yanco, H., Mataric, M., Choset, H., Drobnis, A., Goldberg, K., Grizzle, J., Hager, G., Hollerbach, J., Hutchinson, S., Krovi, V., Lee, D., Smart, B., Trinkle, J., Sukhatme, G. (2021). A Roadmap for US Robotics — From Internet to Robotics 2020 Edition. Foundations and Trends in Robotics, 8(4), 49–54. https://doi.org/10.1561/2300000066

[4] Osorio, V. R. (2019). Robotic Grasping using Demonstration and Deep Learning. UWSpace. http://hdl.handle.net/10012/15035

[5] Chen, N., Chew, C. M., Tee, K. P., Han, B. S. (2012). Human-aided robotic grasping. 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. https://doi.org/10.1109/roman.2012.6343734

[6] Leeper, A. E., Hsiao, K., Ciocarlie, M., Takayama, L., Gossow, D. (2012). Strategies for human-in-the-loop robotic grasping. Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI '12. https://doi.org/10.1145/2157689.2157691

[7] Du, G., Wang, K., Lian, S., Zhao, K. (2020). Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review. Artificial Intelligence Review, 54(3), 1677–1734. https://doi.org/10.1007/s10462-020-09888-5

[8] Paul, S. K., Chowdhury, M. T., Nicolescu, M., Nicolescu, M., Feil-Seifer, D. (2021). Object Detection and Pose Estimation from RGB and Depth Data for Real-Time, Adaptive Robotic Grasping. Advances in Computer Vision and Computational Biology, 121–142. https://doi.org/10.1007/978-3-030-71051-4_10

[9] Kleeberger, K., Bormann, R., Kraus, W., Huber, M. F. (2020). A Survey on Learning-Based Robotic Grasping. Current Robotics Reports, 1(4), 239–249. https://doi.org/10.1007/s43154-020-00021-6

[10] Sahbani, A., El-Khoury, S., Bidaud, P. (2012). An overview of 3D object grasp synthesis algorithms. Robotics and Autonomous Systems, 60(3), 326–336. https://doi.org/10.1016/j.robot.2011.07.016

[11] Saxena, A., Driemeyer, J., Ng, A. Y. (2008). Robotic Grasping of Novel Objects using Vision. The International Journal of Robotics Research, 27(2), 157–173. https://doi.org/10.1177/0278364907087172

[12] Deng, L. Y. (2006). The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning. Technometrics, 48(1), 147–148. https://doi.org/10.1198/tech.2006.s353

[13] Leitner, J., Tow, A. W., Sunderhauf, N., Dean, J. E., Durham, J. W., Cooper, M., Eich, M., Lehnert, C., Mangels, R., McCool, C., Kujala, P. T., Nicholson, L., Pham, T., Sergeant, J., Wu, L., Zhang, F., Upcroft, B., Corke, P. (2017). The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research. 2017 IEEE International Conference on Robotics and Automation (ICRA). https://doi.org/10.1109/icra.2017.7989545

[14] Hodson, R. (2018). How robots are grasping the art of gripping. Nature, 557(7704), S23–S25. https://doi.org/10.1038/d41586-018-05093-1
