Identification is the first step towards the manipulation of parts for robotic disassembly and remanufacturing. PointNet is a recently developed deep neural network capable of identifying objects from 3D scenes (point clouds) irrespective of their position and orientation. PointNet was used to recognise 12 instances of components of turbochargers for automotive engines; these instances included different mechanical parts as well as different models of the same part. Point clouds of partial views of the parts were created from CAD models using a purpose-developed depth-camera simulator reproducing various levels of sensor imprecision. Experimental evidence indicated that PointNet can be consistently trained to recognise the objects accurately. In the presence of sensor imprecision, accuracy in the recall phase can be increased by adding stochastic error to the training examples. Training 12 independent classifiers, one for each part, did not yield significant improvements in accuracy compared to using a single classifier for all the parts. [Submitted 13 September 2019; Accepted 27 March 2020]
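The abstract does not describe how the stochastic error is injected into the training examples; the following is a minimal sketch, assuming the perturbation takes the form of clipped Gaussian jitter applied to each point of a training cloud (NumPy, with illustrative sigma and clip values, not the authors' actual parameters):

    import numpy as np

    def jitter_point_cloud(points, sigma=0.005, clip=0.02):
        # Add clipped Gaussian noise to every (x, y, z) coordinate,
        # mimicking depth-sensor imprecision during training.
        noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
        return points + noise

    # Example: perturb a simulated partial-view cloud of 1024 points.
    cloud = np.random.rand(1024, 3).astype(np.float32)
    noisy_cloud = jitter_point_cloud(cloud)

In such a setup, the augmented clouds would be fed to the classifier during training so that recall on imprecise sensor data improves, as reported in the abstract.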
International Journal of Manufacturing Research – Inderscience Publishers
Published: Jan 1, 2022