Date: 05/06/17
Robots taught human dexterity
Spend a minute thinking about all the things you have picked up today. Perhaps they were keys, a mug, a toothbrush, a fork or a spoon. Now consider how much mental effort it took to grasp those items properly. Most likely, it took you no special effort at all. For robots, however, this is a serious problem. Now, thanks to new work by researchers at the University of California, Berkeley, robots can acquire something like human dexterity.

People instinctively know the best way to grab an object so that it does not fall, but robots lack this ability. The researchers therefore used deep learning to help a pair of robotic manipulators successfully grasp objects of almost any shape with 99 percent accuracy. Deep learning is a type of machine learning in which a computer receives a large amount of data and learns to make decisions by processing new incoming information.
In this study, the scientists created a database of grasp contact points on 10,000 3D models, containing about 6.7 million points in total. This data was then used to train a neural network, a system in which the computer makes decisions in a way loosely modeled on how the brain processes information. The network was connected to two manipulators and a set of sensors; the researchers named the resulting system DexNet 2.0.
The sensor examines each object placed in front of it, and the neural network chooses the best point at which to grasp it. The system not only executes each grasp reliably, but also does so three times faster than the previous version.
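The select-the-best-grasp loop described above can be sketched in a few lines. This is a hypothetical toy illustration, not the actual DexNet 2.0 code: the real system runs a trained convolutional network over depth images, while here a simple variance-based scoring function stands in for the network's learned grasp-quality estimate.

```python
# Toy sketch of a grasp-selection loop (hypothetical, not DexNet 2.0 itself).
# A scoring function stands in for the trained neural network.

def grasp_quality(patch):
    """Stand-in for the network's grasp-quality estimate: rate a small
    depth patch around a candidate point, preferring flat regions
    (low depth variance), which are easier to grip."""
    mean = sum(patch) / len(patch)
    variance = sum((d - mean) ** 2 for d in patch) / len(patch)
    return 1.0 / (1.0 + variance)  # flatter patch -> higher score

def pick_best_grasp(candidates):
    """candidates: list of (point, depth_patch) pairs taken from the
    sensor image; return the point whose patch scores highest."""
    return max(candidates, key=lambda c: grasp_quality(c[1]))[0]

# Example: three candidate grasp points with depth readings around them.
candidates = [
    ((10, 20), [0.50, 0.52, 0.49, 0.51]),  # fairly flat region
    ((30, 40), [0.20, 0.80, 0.10, 0.90]),  # steep edge, poor grasp
    ((55, 15), [0.40, 0.40, 0.40, 0.40]),  # perfectly flat region
]
print(pick_best_grasp(candidates))  # -> (55, 15), the flattest region
```

In the real system, the scoring step is where the 6.7 million labeled contact points pay off: the network has learned from that data which local geometries lead to stable grasps, rather than relying on a hand-written heuristic like the one above.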
©ictnews.az. All rights reserved.