In this module, the results from the other modules are integrated towards a common demonstrator.
To demonstrate the work being done by the SENSOPAC partners, a number of demos have been created, showing the added value of combining neuroscience and robotics research. One demonstrator concentrates on a task that is simple for humans but, to date, impossible for robots. This "simple" task is the following: take an unseen object in your hand and decide what it is. For most objects this is rather trivial for a person, who can "feel" the surface and estimate the shape and form from dynamic properties. But it remains impossible for even the most advanced robotic system around! To address this essential scenario, the final SENSOPAC demonstrator is constructed as follows:
- with a robotic hand-arm system, grasp an object;
- identify the object from the following properties:
  - its weight;
  - its dynamic properties, obtained by shaking it.
Despite the advances in robotics and neuroscience modelling, SENSOPAC cannot be expected to solve this problem as efficiently as humans do. Nonetheless, given a limited set of objects to discern between, possibly parameterised as, e.g., "full glass", "half-full glass", and "empty glass", the result is demonstrated in the Videos section of this website.
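To give a rough idea of how discrimination over such a small object set could work, the sketch below applies a simple nearest-neighbour rule to two assumed features, measured weight and the force variance observed while shaking. The object labels, feature choice, and reference values are purely illustrative assumptions and do not describe the actual SENSOPAC pipeline.

```python
# Illustrative sketch only: not the SENSOPAC implementation.
# Assumes two hypothetical features per object: weight (kg) and the variance
# of the measured force signal while the object is shaken.
import math

# Hypothetical reference measurements for the known object set.
KNOWN_OBJECTS = {
    "empty glass":     (0.20, 0.02),
    "half-full glass": (0.35, 0.15),
    "full glass":      (0.50, 0.05),
}

def identify(weight, shake_variance):
    """Return the known object whose reference features are closest
    to the measured (weight, shake_variance) pair."""
    def distance(reference):
        ref_weight, ref_variance = reference
        return math.hypot(weight - ref_weight, shake_variance - ref_variance)
    return min(KNOWN_OBJECTS, key=lambda name: distance(KNOWN_OBJECTS[name]))

if __name__ == "__main__":
    # A measurement taken after grasping and shaking an unseen object.
    print(identify(weight=0.33, shake_variance=0.12))  # -> "half-full glass"
```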