Fast 3D particle reconstruction using a convolutional neural network: application to dusty plasmas

Michael Himpel and André Melzer
Mach. Learn.: Sci. Technol. 2 (2021) 045019

This work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Abstract

We present an algorithm to reconstruct the three-dimensional positions of particles in a dense cloud of particles in a dusty plasma using a convolutional neural network. The approach is found to be very fast and yields a relatively high accuracy. In this paper, we describe and examine the approach regarding the particle number and the reconstruction accuracy using synthetic data and experimental data. To show the applicability of the approach, the 3D positions of particles in a dense dust cloud in a dusty plasma under weightlessness are reconstructed from stereoscopic camera images using the described neural network.

1. Introduction

Machine learning is currently a rapidly growing field in its application to physics questions. Many challenging problems can now be addressed with different approaches from the constantly evolving artificial intelligence repository. Machine learning has especially been applied to image analysis and image classification [1–4].

The three-dimensional reconstruction of particle positions from a multiple-view camera setup is another problem where machine learning can be of enormous help. Traditionally, the following approaches for particle position reconstruction are often employed: volumetric reconstruction, triangulation, and iterative reconstruction. An example of a volumetric reconstruction method is tomographic PIV [5, 6]. There, the volumetric source field that contains the particles is computed by algebraic means using the measurement images as projections of the source field. Its drawback is a very high computational effort and thus a slow processing speed. Triangulation-based approaches [7, 8] are faster compared to tomographic PIV. Triangulation relies on a camera calibration and reconstructs the three-dimensional particle positions from known particle correspondences in the measurement images. The main problem that needs to be solved here is to find these corresponding particle projections in the different camera views. Usually this problem is addressed by means of epipolar geometry. The clear drawback of this approach is the ambiguity of the correspondences when the particle density in the measurement is high. The iterative reconstruction approaches [9, 10] optimize the particle positions to the given measurement images and use triangulation as well as volumetric reconstruction at certain moments. The Shake-the-Box (STB) algorithm [10] is based on initial particle tracks that are obtained using tomographic reconstruction. Then, these initial tracks are used to predict further locations of the particles that are refined to match the measurement images. Approved and optimized particles are erased from the measurement image and then new particles are detected using triangulation. STB is currently considered to be one of the state-of-the-art algorithms for high particle densities.

We want to apply the proposed AIPR algorithm in our field of research called dusty or complex plasmas. There, micrometer-sized particles are injected into a plasma environment and attain a highly negative charge. This results in a variety of interesting collective effects like density waves or crystalline phases of the particle system. To our knowledge, machine learning has so far only been applied to the analysis of two-dimensional investigations [11, 12].

Figure 1. (a) Schematic of the camera setup in our experiment. Four cameras image micrometer-sized particles that are illuminated by a laser sheet. (b) Camera image of a 2 mm slice of a volumetric dust cloud in a dusty plasma (inverted and processed for clarity). Approximately 3000 particles are visible.
In this paper we will use a machine learning approach to reconstruct three-dimensional particle positions in a dusty plasma experiment. We adopt a deep learning algorithm, AIPR (Artificial Intelligence Particle Reconstruction) [13], that relies on a neural network to retrieve volumetric fields from preliminary (coarse) tomographic reconstructions. We advance AIPR by applying it to the experimental data from a parabolic flight campaign with about 3000 visible particles. Further, we extract the resulting three-dimensional particle positions from the computed volumetric field. Our approach will be tested and compared against the traditional STB approach, where the superior speed of AIPR at a comparable accuracy is demonstrated. It is special to this algorithm that the computing workload is separated into two parts: the energy- and time-consuming training, followed by the framewise non-intense and fast reconstruction. This makes the algorithm especially suitable for remote applications. For example, future experiments on board the ISS can highly benefit from such a data analysis workflow.

The MATLAB source code of the implementation is available [14].

2. Experiment

The experimental data used in the AIPR reconstruction are from dusty plasmas under weightlessness on parabolic flights. Under such conditions the dust particles with a typical diameter of 4–8 µm attain a high negative charge and form a large and dense volume-filling cloud [15–18]. In the experiment, micrometer-sized grains are trapped in a low-temperature argon radio-frequency discharge. The plasma chamber is very similar to the IMPF-K2 design from earlier experiments [19, 20]. The argon pressure was 30 Pa and the plasma power was 3 W. Under such conditions, by injecting the microparticles using a dispenser, a dust cloud of about 10 microspheres can be confined in the plasma environment. There the particles interact via their electrostatic repulsion and via plasma-mediated forces [21]. A laser illuminates a volume of the dense dust cloud with an expanded beam of a few millimetres thickness. The light scattered by the particles is recorded with high-speed video cameras as sketched in figure 1. To reconstruct the three-dimensional positions of the particles, our camera setup consists of four synchronized cameras (MV-BlueFox3-2-2051). The measurements have been done at 200 fps. The pixel size of the sensors was 3.54 µm, but we used a 2 × 2 binning mode which results in an effective pixel size of 7.08 µm. The observed volume in the dust cloud was about 14 × 9 × 2 mm. In that region, a few thousand particles were present.

For details regarding the setup and the calibration of this camera system, the reader is referred to [22]. In this paper, we will revisit a measurement that has already been analyzed with a state-of-the-art algorithm called STB [9, 10, 22]. The reconstructed three-dimensional motion of several thousands of particles has revealed that the particles arranged in two distinct layers within the observed volume [22]. This finding is confirmed by the AIPR approach, but the results are obtained much faster.

Figure 2. Design of the AIPR neural network. The 3D convolutional layers use either one 3×3×3 filter or 16 3×3×3 filters, which is denoted here by 1×3 and 16×3; the 16×3 block occurs a total of 8 times. The network design is taken from [13].
3. Outline of the AIPR approach

Here, first, the general idea for the machine learning reconstruction is described; the details are given in the following sections. The main processing chain of the proposed algorithm is as follows. The measurement volume is discretized into a number of voxels, a so-called volumetric field. The field contains N_x × N_y × N_z voxels, where N_x, N_y, N_z are of the order of 100–400, which in our case is limited by the GPU memory. Then, the measurement images of all four cameras are algebraically combined into a single initial volumetric field that contains a kind of ray-casting information of bright image pixels that are projected into the volumetric field. Then, a neural network is trained with synthetic data adapted to the measurement one wants to analyze. The task of the network will be to predict the final volumetric field from the initial volumetric field. This final field can then be used to extract the actual 3D positions of the particles.

When systems of indistinguishable particles (as in our case) are studied, there is a significant advantage compared to many complex neural network designs that are typically applied for the three-dimensional reconstruction of real-life scenes. First, the objects are of simple shape (spherical), which can be easily modeled artificially. There is no need for the recognition or classification of objects as an individual task. Hence, the network does not need to compute feature maps but can be seen as a kind of sharpening filter in a 3D field. Another advantage lies in the possibility of artificial training data. A neural network generally needs a large number of training data sets to be properly trained. In our case, these are image sets for the four cameras and the desired final volumetric field. In many real-life applications of neural networks this training data is hard or expensive to obtain. In our situation, we can easily calculate artificial images from randomly chosen 3D positions with tuneable particle appearances. Also, the corresponding volumetric field is easily constructed. As a result, the neural network can be fine-tuned to match the exact measurement conditions such as particle sizes, brightness and the image noise of the cameras. In the following section we will present the actual network design and give detailed information regarding the training process.

4. Network design

The network design is taken from [13] and will be briefly outlined here. The neural network is designed to transform the volumetric initial input field I_i of size N_x × N_y × N_z to the volumetric final output field I_f with the same size. Its design is depicted in figure 2. The dimensionality and voxel numbers of the volumetric field remain unchanged throughout the network. The network is built with a 3D image input layer matching the size of the initial field N_x × N_y × N_z. The first convolutional layer is chosen as a 3 × 3 × 3 sliding cuboidal convolution filter with a single filter, followed by batch normalization and a ReLU layer. The following convolutional layers are set to maintain the data size (this is typically called same padding) and are each followed by a batch normalization layer and a ReLU layer. The first block is continued with eight 3D convolutional layers with filters of size 3 × 3 × 3 and a number of 16 filters each. The last convolutional layer has a single filter with a kernel size of 3 × 3 × 3, followed by a batch normalization layer. To map the network onto an output field with values between zero and one, a sigmoid layer is used here rather than a ReLU layer.

(Figure 2 layer sequence: 3D image input layer; 1×3 3D convolutional layer, batch normalization layer, ReLU layer; 16×3 3D convolutional layer, batch normalization layer, ReLU layer (total of 8); 1×3 3D convolutional layer, batch normalization layer; sigmoid layer; regression layer.)
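This layer sequence can be written down almost literally with the MATLAB Deep Learning Toolbox. The following is only a minimal sketch, not the authors' implementation (which is available in the AIPR toolbox [14]); the grid size is the 40 µm example grid from section 5, and the input normalization setting is our assumption.

% Minimal sketch of the network of figure 2 (MATLAB Deep Learning Toolbox).
gridSize = [332 220 68];                          % N_x x N_y x N_z voxels (40 µm grid)

layers = [
    image3dInputLayer([gridSize 1], 'Normalization', 'none')
    convolution3dLayer(3, 1, 'Padding', 'same')   % one 3x3x3 filter
    batchNormalizationLayer
    reluLayer
];

for k = 1:8                                       % eight blocks with 16 filters each
    layers = [layers
        convolution3dLayer(3, 16, 'Padding', 'same')
        batchNormalizationLayer
        reluLayer
    ]; %#ok<AGROW>
end

layers = [layers
    convolution3dLayer(3, 1, 'Padding', 'same')   % back to a single channel
    batchNormalizationLayer
    sigmoidLayer                                  % voxel values mapped to [0, 1]
    regressionLayer                               % loss replaced by the dice loss of section 5
];

Such a layer array can then be passed to trainNetwork together with synthetic training data and the options discussed in the next section; for the dice loss, the final regression layer has to be replaced by a custom regression layer.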
The regression layer needs a suitable loss function to ensure that the network weights converge during training. The loss function is necessary to define a match or mismatch between the desired result and the actual result from the network. The originally proposed loss function used a fine-tuning parameter ε to ensure convergence of the network during training. Here, unlike the original network from [13], we employ a dice coefficient as a loss function L [23]. It is applicable without any input parameter or prior knowledge on the user side. It is defined by

L = 1 - \frac{2 \sum (T \cdot Y)}{\sum T^2 + \sum Y^2},    (1)

where T is the ground truth training data field (target) and Y is the output field predicted by the network. The summation is done over all voxels of the field and over all batches, and the multiplication is done element-wise (Hadamard product). This loss coefficient is normalized so that it returns 0 for a perfect match between T and Y and 1 for mismatching T and Y. For clarity, it can be thought of as an intersection-over-union loss function, which is widely addressed in the literature [24, 25].

5. Network training details

The training of the neural network is very time consuming, especially when the volumetric field consists of a fine voxel grid and many training images are used. Each training data set contains the ground truth 3D field and its corresponding projected camera images. The ground truth field is generated by choosing random particle positions in the reconstruction volume. To the voxels around the chosen particle position, Gaussian distributed voxel intensities are assigned. The spatial width of the Gaussian is chosen to significantly lift at least the particle-containing voxel and the closest neighboring voxels above the noise level. The artificial camera images are computed from the exact random particle positions using the camera projection matrices of each camera. The particles in the image are then again modeled as a Gaussian with a width of typically 5 px. This results in a particle diameter of 8–10 px in the image, whereas 1 px corresponds to about 12 µm in the investigated plasma volume. To account for difficult imaging situations in our experiment, each particle is given a random maximum intensity in the range of 0.7–1. This intensity is used in the images as well as in the volumetric field. To make the training data as realistic as possible, we also included background noise in the images. However, we found that noise is not necessary to prevent overfitting during the training process. Finally, the camera image data is stored as 8-bit data and the 3D field is stored in single precision to save GPU memory during training.

After training, the network should of course be capable of correctly processing unknown input data instead of just reproducing the training data. To obtain such a generalizing network, a minimum number of different training data sets is necessary. The minimum number of these training images can only be estimated: every voxel should be covered by the initial field at least once in the whole training data set. As a rule of thumb, the necessary number of images can be calculated as the number of voxels divided by the number of particles per training image. For our case, we found that a number of 1000 training images is sufficient for the training to converge towards a generalizing solution for the network coefficients. Thereby, the 1000 training images are synthesized with a number of 4000 particles randomly spread over the investigated measurement volume, as shown in figure 3. As the network's memory footprint is usually large, we propose to use a batch size of 1, which means that only one training data set is used at a time to optimize the network. We found that convergence is usually reached in 3 epochs with decreasing learning rates of 0.1, 0.01 and finally 0.001.

The training process is significantly faster using GPU acceleration. We used a NVIDIA RTX 2080 Ti graphics adapter for the training. It takes approximately 6 h compared to a CPU training of 16 h. The crucial parameter is the amount of GPU memory. The volumetric grid on which the initial field is defined is the main reason for the need of a large amount of memory. In our case we use a (332 × 220 × 68) grid with a spacing of 40 µm and a finer (456 × 302 × 91) grid with a spacing of 30 µm. This results in almost full occupancy of the 11 GB GPU memory during the training process.
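As a concrete illustration of the training schedule just described (batch size 1, three epochs with learning rates 0.1, 0.01 and 0.001) and of equation (1), a minimal MATLAB sketch could look as follows; the solver choice ('adam') is our assumption and is not taken from the AIPR toolbox.

% Training options mirroring the schedule above: batch size 1, three epochs,
% learning rate dropped by a factor of 10 after every epoch (0.1 -> 0.01 -> 0.001).
opts = trainingOptions('adam', ...          % solver choice is an assumption
    'MiniBatchSize', 1, ...
    'MaxEpochs', 3, ...
    'InitialLearnRate', 0.1, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.1, ...
    'LearnRateDropPeriod', 1, ...
    'ExecutionEnvironment', 'gpu');

% Dice loss of equation (1), written as a plain local function for illustration.
% In a trainNetwork-based setup it would live inside a custom regression layer.
function loss = diceLoss(Y, T)
    % Y: field predicted by the network, T: ground truth (target) field
    loss = 1 - 2 * sum(T(:) .* Y(:)) / (sum(T(:).^2) + sum(Y(:).^2));
end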
When a neural network is trained, it has to be ensured that the converged result generalizes well to unknown data instead of just learning the training data 'by heart'. To study the generalization behaviour, we ran different trainings with a varied amount of noise in the synthetic measurement images and never found a converged solution that did not generalize to unknown data. Hence, we conclude that the network generalizes quite well by design.

6. From images to particle positions

In this section, we will give a detailed description of the necessary steps to retrieve particle positions based on AIPR from measured images. An accurate camera calibration [26–29] is a prerequisite for all further steps. The network defined in the previous section is trained using the same camera calibrations as for generating the training data and will be used in the corresponding processing step.

Figure 3. (a) Synthetic camera image with 4000 particles. (b) Close-up of the synthetic image. The spatial particle intensity is Gaussian distributed with a random maximum intensity between 0.7 and 1.

Figure 4. (a) Close-up of a measured image (inverted). (b) Identical close-up after preprocessing.

6.1. Preprocessing of experimental images

In the presented design, the neural network processing needs the particles to have a common spherical shape with comparable brightness and size. This prerequisite is often hard to match in experimental measurements. Hence, we applied the following image preprocessing steps to achieve uniform particle projections. The raw measurement image is processed by a Sobel filter which emphasizes intensity gradients. Afterwards, a two-dimensional Gaussian bandpass filter is applied to filter out noise and to turn the particle images into a Gaussian-shaped spherical intensity profile. In figure 4 one can see the experimental raw image (a) and the processed image (b). After preprocessing the images, the particles have a nicely spherical shape but the intensity is still not uniform. To address this issue one can make the network learn to accept non-uniform brightness to a certain extent. During network training, one has to keep in mind to produce training images that also feature particle projections with non-uniform intensity. We found that enlarging the particles in the training images as well as in the measurement images will improve the network detection outcome. This is probably due to our relatively coarse 3D grid (see section 6.2), which has a corresponding resolution of about 4 px in the measurement image.
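A minimal sketch of this preprocessing chain in MATLAB (Image Processing Toolbox) is given below; the Gaussian bandpass is written as a difference of Gaussians, and the filter widths as well as the file name are our assumptions that have to be adapted to the actual particle image size.

% Minimal sketch of the preprocessing of section 6.1.
Iraw = im2double(imread('frame_cam1.png'));   % hypothetical file name

% Sobel filtering to emphasize intensity gradients
Igrad = imgradient(Iraw, 'sobel');

% Gaussian bandpass as a difference of Gaussians: suppress pixel noise
% (small sigma) and the slowly varying background (large sigma)
Ibp = imgaussfilt(Igrad, 1) - imgaussfilt(Igrad, 5);
Ibp = max(Ibp, 0);
Ibp = Ibp ./ max(Ibp(:));                     % normalize to [0, 1]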
6.2. Initial field generation

After the measurement images are preprocessed, a so-called initial field I_{i,N} is generated for every camera view (N = 1, ..., 4 in our case). The field is defined on the same volumetric grid as the ground truth field. For a known projection matrix or mapping function, typically obtained by camera calibration, the initial field is generated as follows. First, for each voxel of the volumetric field, one needs to find the pixel onto which the voxel center is projected. Then, the (scalar) entries of the initial field of each camera I_{i,N} are given by the intensity of the corresponding pixels connected to each voxel ('ray-casting'). We want to note that the algorithm is sped up for processing a large number of images when a lookup table is generated so that the projection is not computed again for every image. When this computation is done for all camera views, the N initial fields are combined by

I_i = \left( \prod_N I_{i,N} \right)^{1/N}.    (2)

For clarity, this combination process is sketched in figure 5. Images (a)–(d) show a small subsample of the volumetric fields I_{i,N} from the projections of the sensor pixels. As the cameras have slightly different lines of sight into the volume, this ray-casting process results in slightly different directions of the activated voxels. After combining the fields (a)–(d) using equation (2), the initial field I_i as shown in image (e) is ready to be processed by the neural network. The initial field combines the ray-casting information from each camera into a single volumetric field.

Figure 5. A small sub-sample of the volumetric field showing the processing steps. The particle image is 'ray-casted' into the volumetric field using the projections of each of the N cameras, (a)–(d). The four different camera views of the same particle result in slightly different voxels being activated. (e) The resulting initial field from equation (2) in this region. (f) Predicted volumetric field of (e) from the trained network. (g) The ground truth field used to train the network from the exact 3D position of the particle.

6.3. Network processing

The network processing itself is just the call of the trained network (in MATLAB this step is called prediction) with the initial field as the input. The predicted output field I_f produced by the neural network is again a volumetric field of the same size as the input field where the particle intensities range from zero to one. Particle locations are then encoded as contiguous bright voxels or, ideally, as 3D Gaussian distributed intensity regions (if the grid resolution is fine enough). The example in figure 5(f) shows the result of an example prediction of the network. The elongated initial field from figure 5(e) seems to 'collapse' into a 3D spot. For comparison we also show the ground truth field in figure 5(g), which is known as test data for the network, but has not been used as learning data. The ground truth field and the predicted field are in good agreement, demonstrating the capability of the network.

6.4. Particle position extraction

To extract the actual three-dimensional particle positions from this output volumetric field, we propose the following approach. In experiments it is not always possible to guarantee a uniform illumination of the observed particles. This results in a volumetric field that contains voxels representing reconstructed particles which have a non-uniform brightness. A simple threshold filter applied on the 3D field will thus not recover all particle positions in this case. To detect the positions of as many particles as possible, we propose a step-wise reduced threshold value followed by identifying contiguous regions in the volumetric data. In MATLAB, the regions above a certain intensity threshold can be found using the regionprops function. If a region with 7 or more voxels above the current threshold is found, their intensity-weighted mean is then associated with the particle position. Before proceeding with the next lower threshold to find particles with lower intensity, it is necessary to zero all previously identified connected regions in the volumetric data set. In our measurements we found that reducing the intensity from 0.7 to 0.05 with a step size of 0.05 works well. These values, of course, depend on the noise level and the general data quality.
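A minimal sketch of this step-wise extraction is given below. The text mentions MATLAB's regionprops; for a volumetric field its 3D counterpart regionprops3 (Image Processing Toolbox) is used here, and the mapping of the weighted centroids to physical coordinates is only indicated. The function name and variable names are ours.

% Minimal sketch of the step-wise threshold extraction of section 6.4.
% Ifinal: predicted volumetric field (values in [0, 1]); dx: voxel spacing.
function pos = extractPositions(Ifinal, dx)
    pos = zeros(0, 3);
    for thr = 0.7:-0.05:0.05
        bw = Ifinal >= thr;
        s  = regionprops3(bw, Ifinal, 'Volume', 'WeightedCentroid', 'VoxelIdxList');
        keep = s.Volume >= 7;                    % at least 7 contiguous voxels
        c = s.WeightedCentroid(keep, :);         % intensity-weighted centers, voxel units
        pos = [pos; c * dx];                     %#ok<AGROW> scale to physical units
        % zero the accepted regions before the next, lower threshold
        Ifinal(vertcat(s.VoxelIdxList{keep})) = 0;
    end
end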
6.5. Position refinement based on STB

As the AI reconstruction algorithm is based on a volumetric grid with a limited spatial resolution, it is clear that the accuracy of the reconstructed particle positions is also limited. Due to hardware restrictions or to reduce computing time, the volumetric field might be chosen coarser than the required spatial resolution. Nevertheless, in this case the coarser positions serve as a perfect starting point for a refinement step using the STB method [10]. In the following, we will carry out benchmark tests with and without the additional STB refinement to show its influence.

7. Results—synthetic data

In this step we generate test data in the same way as the learning data. However, these test data have not been used in the learning data set; hence, the network does not 'know' the test data. To estimate the performance of the trained network, we will show different measures for varied parameters. One measure will be called the mean error. This is defined as the distance (in µm) from a reconstructed position to its nearest neighbor in the ground truth of the test data. Another measure is the detection rate R = N_NN/N_gt, which is defined by the fraction of the correctly reconstructed number of particles N_NN that are closer than 50 µm to one of the N_gt ground truth particles. It tells us how many of the synthetic particles are successfully reconstructed. The last important measure will be the ratio of ghost particles. Particles that are reconstructed, but do not match a ground truth particle within a distance of 50 µm, will be considered as ghost particles. The ratio G = N_gh/N_NN is then defined as the ratio between the number of ghost particles N_gh and the total number of reconstructed particles N_NN. The following benchmarks on synthetic image data have been done using the neural network based on a 30 µm (456 × 302 × 91 voxels) and a 40 µm (332 × 220 × 68 voxels) grid. The image input data and the reconstruction volume were identical in both cases. The slight reduction of voxel size from 40 to 30 µm increases the number of voxels by a factor of 2.5. A further reduction of the voxel size was not possible with our hardware.

The benchmark results using the coarser grid are shown in figure 6. The number of particles that has been used for each benchmark run is shown as a total number and as a seeding rate (particles per pixel or ppp), which is more useful for comparison with other experiments and simulations. Figure 6(a) shows the results that are obtained by just using AIPR, figure 6(b) shows the same results followed by STB refinement. It can be seen that for particle numbers of about 3000, which we typically find in our experiments, 90% of the particles are reconstructed without refinement and 80% with STB refinement. For clarity, the particle number of 3000 is also indicated by the vertical dashed line in the plot. Note that the higher reconstruction fraction without using the refinement comes at the cost of a higher ghost particle rate, which is 7% without refinement and 3% with refinement. This means STB effectively reduces ghost particles, but also true particles. STB is therefore more restrictive. The positioning error (dotted line) of the reconstruction based on the 40 µm grid is about 15 µm without refinement and just 10 µm with refinement at a particle number of 3000.

The neural network based on the finer grid performs slightly better, as shown in figure 7. The reconstruction rate is on a higher level for the non-refined results in figure 7(a). The ghost particle rate is slightly lower compared to the coarse grid from figure 6. This might be due to the fact that the volumetric field is now less sparse and contains more 'hot voxels' that contribute to a particle, which eases the particle detection. The mean error is almost the same for the raw network processing and the STB-refinement post-processing.

The network that was defined on the finer grid performs reasonably well. The prediction step of the neural network followed by the position extraction from the volumetric field is done in about one second or less on our GPU. The STB refinement, which takes several minutes per frame, does not improve the reconstruction too much. The accuracy is comparable to the STB-refined approach while the ghost particle fraction is still low considering about 3000 visible particles. The reader should note that a further reduction of ghost particles is possible by tracking particles over many frames. Ghost particles would only 'exist' for a few tracked frames. For the application to our experimental data, we will use the neural network with the finer (30 µm) resolution.
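For reference, the measures defined above can be computed from the reconstructed and ground-truth position lists with a few lines of MATLAB; pdist2 is from the Statistics and Machine Learning Toolbox, and the variable names (positions assumed in µm) are ours.

% Minimal sketch of the benchmark measures of section 7. posRec and posGt
% are (number of particles) x 3 arrays of positions in µm.
D    = pdist2(posRec, posGt);          % pairwise distances, nRec x nGt
dmin = min(D, [], 2);                  % nearest ground-truth distance per reconstructed particle
matched = dmin < 50;                   % 50 µm matching criterion

meanError = mean(dmin(matched));       % mean error of the matched particles
R = nnz(matched) / size(posGt, 1);     % detection rate R = N_NN/N_gt
G = nnz(~matched) / size(posRec, 1);   % ghost particle ratio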
Figure 6. Network performance tested on synthetic data with variation of the particle number. The grid resolution was 40 µm. (a) Results obtained by neural network processing. (b) Results obtained by neural network processing followed by STB refinement.

Figure 7. Network performance tested on synthetic data with variation of the particle number. The grid resolution was 30 µm. (a) Results obtained by neural network processing. (b) Results obtained by neural network processing followed by STB refinement.

For both cases it can be seen that the performance gets worse when the particle number or seeding density gets too high. We think that this is based on the fact that we need a certain minimum particle distance in the volumetric field for our particle position extraction to work accurately. Unfortunately, this means that with increasing seeding density one would need an increasing grid resolution, which is not possible in most cases due to hardware limitations.

8. Results—experimental data

Here, we now use experimental data from the parabolic flight as described in section 2. The neural network has been trained with a set of projection matrices that are obtained from an actual calibration of our experimental setup. The volume of the volumetric field was adjusted to match the investigated volume in the experiment. Thus, the trained network can be directly applied to reconstruct particles from a set of measurement images that have been taken with the camera system. One image from the experiment is shown in figure 1.

Figure 8. (Top) The reconstructed particle trajectories from ten consecutive frames. Blue trajectories are obtained using STB, orange trajectories are obtained from neural network processing. (a) Trajectories in the full reconstruction volume. (b) Close-up for clarity.

As this is a measurement, there is no ground truth data available which can be used to verify the reconstruction results. However, these data have been previously analyzed using STB [22] and we can now compare the results from the neural network with the STB results. It should be noted that in this analysis of the experiment the STB approach was used completely independently of the neural network and not as a refinement step of neural network predictions. In figure 8 we show sample particle trajectories that have been tracked for ten frames. As one can see, the different methods seem to be sensitive to different aspects, as the detected particles appear partially in different regions of the dust cloud. While AIPR has more detections in the positive y-direction which cannot be found in the STB data, there are fewer AIPR detections in the negative y-region compared to the STB results. To quantify this, we identified all matching particle positions that are characterized by a proximity between the reconstruction techniques of less than 40 µm.

One result of this position matching is that the mean distance of the particle positions from both approaches is 16 µm. In other words, when both approaches reconstruct a particle, the positions agree quite well. On the one hand, 58% of the AIPR positions match the STB positions and 42% of the AIPR positions are exclusively detected in this approach. On the other hand, only 32% of the STB positions match the results from the AIPR processing and 68% of the positions are exclusively found by STB.

The difference in the results between both algorithms may be due to their basic principles. Whereas the AIPR algorithm makes a single snapshot-like detection in every single frame, the plain STB algorithm tries to follow the particle path consecutively by using Kalman [30] or Wiener [31] filtering. Both approaches have pros and cons. The AIPR algorithm is thus insensitive to a sudden change of the particle system, as can be induced by vibrations in the measurement setup, which is the case in our measurements on parabolic flights.
The STB algorithm has the advantage that particles are projected, reconstructed and then tracked for a relatively long time even if the brightness or imaging quality of a particle varies in consecutive frames.

Another possibility to compare both algorithms is to look at the physical properties obtained with either approach. The number density profile along the z-direction of the observed dust cloud will now be compared. In earlier work on this data set, we found that the z-profile of the number density n revealed a layered structure. In figure 9, this profile obtained by STB is shown by circles. The corresponding solid line represents a fit with three Gaussian distributions to the density data. The same data set, but analyzed with AIPR, gives a quite similar impression of the structure.

Figure 9. Density profiles created using STB (blue) and AIPR (red) algorithms. The data points are fitted with three Gaussian functions.

The AIPR data in figure 9 shows two strong peaks with a much clearer separation of the layers. The third peak near z ≈ 0.7 mm suggested by the STB analysis is only faintly present in the AIPR analysis. It can be suspected that the AIPR algorithm is less susceptible to ghost particles in between the peaks compared to STB; more ghost particles would result in a smeared-out distribution. On the other hand, the faint third peak in the AIPR analysis may be caused by the poor imaging quality of particles that are not well focused or illuminated. As already said, it is difficult for the AIPR approach to handle particle projections with a poor signal-to-noise ratio. In contrast, some parameters in STB can be fine-tuned to also handle at least some of the weakly illuminated particles.
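The three-Gaussian fit of figure 9 can be reproduced, for instance, with the MATLAB Curve Fitting Toolbox; z and n below are assumed to be column vectors of bin centres and number densities, which is our naming, not the paper's.

% Minimal sketch of the three-Gaussian fit used for figure 9.
% z: bin centres of the profile, n: number density per bin (column vectors).
f = fit(z, n, 'gauss3');               % n(z) = sum of a_i*exp(-((z-b_i)/c_i)^2), i = 1..3
plot(f, z, n)
layerCentres = [f.b1 f.b2 f.b3];       % centres of the (up to) three layers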
9. Conclusion

We have presented the application of a neural network to reconstruct three-dimensional particle positions from a multi-view imaging diagnostic. With an exemplary 4-camera setup the necessary steps for training and applying a neural network are described. It was shown that the neural network performs nearly as well as the Shake-the-Box algorithm, whilst being extremely fast (once the trained network is available). The prediction step can be done on any modern office PC, but for the training of the network a GPU with a large memory is recommended. For remote applications of such reconstruction tasks, as e.g. on the International Space Station, this possibility to share the computation load is very welcome. The demanding computations, namely the network training, can be done before the measurement and on high-performance computers. After this energy- and time-consuming task, the analysis of the measurement images can be done with regular hardware in a short time.

The reconstruction approach was benchmarked on synthetic data and applied to experimental data. The AIPR approach can be suggested for stereoscopic measurements of particles at a decently high seeding rate. With AIPR the successfully reconstructed particle fraction is in the range between 80% and 90% even at a high particle seeding rate. The number of ghost particles is still at an acceptable level and the position error is smaller than the voxel size.

AIPR has problems when the imaging conditions are not perfect. The influence of camera models and errors in camera positioning needs to be addressed in future investigations. Nevertheless, we were able to reliably reconstruct 3D positions from experimental data of a dusty plasma. The results were very compatible with the earlier analysis using STB. With AIPR we could verify the layering of the investigated dust cloud.

There is still work to be done to optimize the behaviour of AIPR when imaging conditions are not perfect. Additionally, it is not yet clear how camera models and camera positioning may affect the performance of the neural network reconstruction. But as we presented in this paper, the speed of the reconstruction process, which is nearly independent of the particle number, is a sufficient reason to continue research in this field. Furthermore, the reconstruction of particles at higher seeding rates than we see in our experiment is still challenging. In future work we hope to get better results for higher particle densities using more sophisticated position extraction from the final volumetric fields obtained by the neural network.

Data availability statement

The data generated and/or analysed during the current study are not publicly available for legal/ethical reasons but are available from the corresponding author on reasonable request. The data that support the findings of this study are available upon request from the authors.

Acknowledgment

Financial support from the Deutsches Zentrum für Luft- und Raumfahrt (DLR) under Project No. 50WM1962 is gratefully acknowledged.

ORCID iDs

Michael Himpel https://orcid.org/0000-0001-6710-0071
André Melzer https://orcid.org/0000-0001-9301-9357

References

[1] Wei B, Wei W, Shaoyi B, Ben J and Bo L 2020 IOP Conf. Ser.: Mater. Sci. Eng. 787 012002
[2] Wang C, Sun X and Li H 2019 J. Phys.: Conf. Ser. 1176 032028
[3] Shen Z 2021 J. Phys.: Conf. Ser. 1881 022005
[4] Swalaganata G, Sulistyaningrum D R and Setiyono B 2017 J. Phys.: Conf. Ser. 893 012062
[5] Elsinga G E, Scarano F, Wieneke B and van Oudheusden B W 2006 Exp. Fluids 41 933–47
[6] Williams J D 2011 Phys. Plasmas 18 050702
[7] Akhmetbekov Y, Lozhkin V, Markovich D and Tokarev M 2011 Multiset triangulation 3D PTV and its performance compared to tomographic PIV 9th Int. Symp. on Particle Image Velocimetry-PIV vol 11 pp 21–3
[8] Mulsow M, Himpel M and Melzer A 2017 Phys. Plasmas 24 123704
[9] Wieneke B 2013 Meas. Sci. Technol. 24 024008
[10] Schanz D, Gesemann S and Schröder A 2016 Exp. Fluids 57 70
[11] Huang H, Schwabe M and Du C R 2019 J. Imaging 5 36
[12] Dietz C, Budak J, Kamprich T, Kretschmer M and Thoma M H 2021 Contrib. Plasma Phys. e202100079
[13] Gao Q, Li Q, Pan S, Wang H, Wei R and Wang J 2019 Particle reconstruction of volumetric particle image velocimetry with strategy of machine learning (arXiv:1909.07815)
[14] Himpel M 2021 AIPR Toolbox for MATLAB (available at: https://physik.uni-greifswald.de/ag-melzer/aipr-toolbox/)
[15] Nefedov A P et al 2003 New J. Phys. 5 33
[16] Schwabe M, Zhdanov S, Räth C, Graves D B, Thomas H M and Morfill G E 2014 Phys. Rev. Lett. 112 115002
[17] Thoma M H, Höfner H, Kretschmer M, Ratynskaia S, Morfill G E, Usachev A, Zobnin A, Petrov O and Fortov V 2006 Microgravity Sci. Technol. 8 47–50
[18] Thomas H M et al 2008 New J. Phys. 10 033036
[19] Klindworth M, Arp O and Piel A 2006 J. Phys. D: Appl. Phys. 39 1095–104
[20] Klindworth M, Arp O and Piel A 2007 Rev. Sci. Instrum. 78 033502
[21] Ishihara O 2007 J. Phys. D: Appl. Phys. 40 R121–47
[22] Himpel M, Schütt S, Miloch W J and Melzer A 2018 Phys. Plasmas 25 083707
[23] Dice L R 1945 Ecology 26 297–302
[24] Jaccard P 1912 New Phytol. 11 37–50
[25] Rezatofighi S H, Tsoi N, Gwak J, Sadeghian A, Reid I D and Savarese S 2019 Generalized intersection over union: a metric and a loss for bounding box regression CoRR (arXiv:1902.09630)
[26] Wieneke B 2008 Exp. Fluids 45 549–56
[27] Wieneke B 2018 Meas. Sci. Technol. 29 084002
[28] Zhang Z 2000 IEEE Trans. Pattern Anal. Mach. Intell. 22 1330–4
[29] Himpel M, Buttenschön B and Melzer A 2011 Rev. Sci. Instrum. 82 053706
[30] Kalman R E 1960 Trans. ASME, J. Basic Eng. 82 35–45
[31] Wiener N 1964 Extrapolation, Interpolation and Smoothing of Stationary Time Series (Cambridge, MA: MIT Press)

Publisher: IOP Publishing
Copyright: © 2021 The Author(s). Published by IOP Publishing Ltd
eISSN: 2632-2153
DOI: 10.1088/2632-2153/ac1fc8
Publisher site
See Article on Publisher Site

Abstract

thisworkmaybeused underthetermsofthe Wepresentanalgorithmtoreconstructthethree-dimensionalpositionsofparticlesinadense CreativeCommons Attribution4.0licence. cloudofparticlesinadustyplasmausingaconvolutionalneuralnetwork.Theapproachisfound Anyfurtherdistribution tobeveryfastandyieldsarelativelyhighaccuracy.Inthispaper,wedescribeandexaminethe ofthisworkmust maintainattributionto approachregardingtheparticlenumberandthereconstructionaccuracyusingsyntheticdataand theauthor(s)andthetitle experimentaldata.Toshowtheapplicabilityoftheapproachthe3Dpositionsofparticlesina ofthework,journal citationandDOI. densedustcloudinadustyplasmaunderweightlessnessarereconstructedfromstereoscopic cameraimagesusingtheprescribedneuralnetwork. 1.Introduction Machinelearningcurrentlyisarapidlygrowingfieldinitsapplicationtophysicsquestions.Manychallenging problemscannowbeaddressedwithdifferentapproachesfromtheconstantlyevolvingartificialintelligence repository.Machinelearninghasbeenespeciallyappliedtoimageanalysisorimageclassification[1–4]. Thethree-dimensionalreconstructionofparticlepositionsfrommultiple-viewcamerasetupisanother problemwheremachinelearningcanbeofenormoushelp.Traditionally,thefollowingdifferentapproaches forparticlepositionreconstructionareoftenemployed:volumetricreconstruction,triangulation,and iterativereconstruction.AnexampleofavolumetricreconstructionmethodistomographicPIV[5,6]. There,thevolumetricsourcefieldthatcontainstheparticlesiscomputedbyalgebraicmeansusingthe measurementimagesasprojectionsofthesourcefield.Itsdrawbackisaveryhighcomputationaleffortand thustheslowprocessingspeed.Triangulation-basedapproaches[7,8]arefastercomparedtotomographic PIV.Triangulationreliesonacameracalibrationandreconstructsthethree-dimensionalparticlepositions fromknownparticlecorrespondencesinthemeasurementimages.Themainproblemthatneedstobe solvedhereistofindthesecorrespondingparticleprojectionsinthedifferentcameraviews.Usuallythis problemisaddressedbymeansofepipolargeometry.Thecleardrawbackofthisapproachistheambiguityof thecorrespondenceswhentheparticledensityinthemeasurementishigh.Theiterativereconstruction approaches[9,10]optimizetheparticlepositionstothegivenmeasurementimagesandusetriangulationas wellasvolumetricreconstructionatcertainmoments.TheShake-the-Box(STB)-algorithm[10]isbasedon initialparticletracks,thatareobtainedusingtomographicreconstruction.Then,theseinitialtracksareused topredictfurtherlocationsoftheparticlesthatarerefinedtomatchthemeasurementimages.Approvedand optimizedparticlesareerasedfromthemeasurementimageandthennewparticlesaredetectedusing triangulation.STBiscurrentlyconsideredtobeoneofthestate-of-the-artalgorithmsforhighparticle densities. WewanttoapplytheproposedAIPRalgorithminourfieldofresearchcalledDusty-orComplexPlasmas. There,micrometersizedparticlesareinjectedintoaplasmaenvironmentandattainahighlynegativecharge. Thisresultsinavarietyofinterestingcollectiveeffectslikedensitywavesorcrystallinephasesoftheparticle system.Toourknowledge,machinelearninghasbeenonlyappliedtoanalysisoftwo-dimensional investigationssofar[11,12]. ©2021TheAuthor(s). PublishedbyIOPPublishingLtd Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer Figure1.(a)Schematicofthecamerasetupinourexperiment.Fourcamerasimagemicrometer-sizedparticlesthatare illuminatedbyalaser-sheet.(b)Cameraimageofa2mm-sliceofavolumetricdustcloudinadustyplasma(invertedand processedforclarity).Approximately3000particlesarevisible. 
Inthispaperwewilluseamachinelearningapproachtoreconstructthree-dimensionalparticlepositions inadustyplasmaexperiment.Weadoptadeeplearningalgorithm,AIPR(ArtificialIntelligenceParticle Reconstruction)[13]thatreliesonaneuralnetworktoretrievevolumetricfieldsfrompreliminary(coarse) tomographicreconstructions.WeadvanceAIPRbyapplyingittotheexperimentdatafromaparabolicflight campaignwithabout3000visibleparticles.Further,weextracttheresultingthree-dimensionalparticle positionsfromthecomputedvolumetricfield.Ourapproachwillbetestedandcomparedagainstthe traditionalSTBapproach,whereasuperiorspeed,maintainingacomparableaccuracy,ofAIPRis demonstrated.Itisspecialtothisalgorithmthatthecomputingworkloadisseparatedintotwoparts:the energy-andtimeconsumingtraining,followedbytheframewisenon-intenseandfastreconstruction.This makesthisalgorithmespeciallysuitableforremoteapplications.Forexample,futureexperimentsonboard theISScanhighlybenefitfromsuchadataanalysisworkflow. TheMATLABsourcecodeoftheimplementationisavailable[14]. 2.Experiment TheexperimentaldatausedintheAIPRreconstructionarefromdustyplasmasunderweightlessnesson parabolicflights.Undersuchconditionsthedustparticleswithatypicaldiameterof4−8 µmattaina highnegativechargeandformalargeanddensevolume-fillingcloud[15–18].Intheexperiment, micrometer-sizedgrainsaretrappedinalow-temperatureargonradio-frequencydischarge.Theplasma chamberisverysimilartotheIMPF-K2designfromearlierexperiments[19,20].Theargonpressurewas 30Paandtheplasmapowerwas3W.Undersuchconditions,byinjectingthemicroparticlesusinga dispenser,adustcloudofabout10 microspherescanbeconfinedintheplasmaenvironment.Therethe particlesinteractviatheirelectrostaticrepulsionandviaplasma-mediatedforces[21].Alaserilluminatesa volumeofthedensedustcloudwithanexpandedbeamofafewmillimetresthickness.Thelightscatteredby theparticlesisrecordedwithhigh-speedvideocamerasassketchedinfigure1.Toreconstructthe three-dimensionalpositionsoftheparticles,ourcamerasetupconsistsoffoursynchronizedcameras (MV-BlueFox3-2-2051).Themeasurementshavebeendoneat200fps.Thepixelsizeofthesensorswas 3.54 µm,butweuseda2 ×2binningmodewhichresultsinaneffectivepixelsizeof7.08 µm.Theobserved volumeinthedustcloudwasabout14 ×9 ×2mm.Inthatregion,afewthousandparticleswerepresent. Fordetailsregardingthesetupandthecalibrationofthiscamerasystem,thereaderisreferredto[22].In thispaper,wewillrevisitameasurementthathasalreadybeenanalyzedwithastate-of-the-artalgorithm calledSTB[9,10,22].Thereconstructedthree-dimensionalmotionofseveralthousandsofparticleshas revealedthattheparticlesarrangedintwodistinctlayerswithintheobservedvolume[22].Thisfindingis confirmedbytheAIPRapproach,buttheresultsareobtainedmuchfaster. 2 Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer ( total of 8 ) Figure2.DesignoftheAIPRneuralnetwork.The3Dconvolutionallayersuseeitherone3×3×3filteror163×3×3filters whichisdenotedhereby1×3and16×3.Thenetworkdesignistakenfrom[13]. 
3.OutlineoftheAIPRapproach Here,first,thegeneralideaforthemachinelearningreconstructionisdescribed,thedetailsaregiveninthe followingsections.Themainprocessingchainoftheproposedalgorithmisasfollows.Themeasurement volumeisdiscretizedintoanumberofvoxels,asocalledvolumetricfield.ThefieldcontainsN ×N ×N x y z voxels,whereN ,N ,N isoftheorderof100–400,whichinourcaseislimitedbytheGPUmemory.Then, x y z themeasurementimagesofallfourcamerasarealgebraicallycombinedtoasingleinitialvolumetricfield thatcontainsakindofray-castinginformationofbrightimagepixelsthatareprojectedintothevolumetric field.Then,aneuralnetworkistrainedwithsyntheticdataadoptedtothemeasurementonewantsto analyze.Thetaskofthenetworkwillbetopredictthefinalvolumetricfieldfromtheinitialvolumetricfield. Thisfinalfieldcanthenbeusedtoextracttheactual3Dpositionsoftheparticles. Whensystemsofindistinguishableparticles(asinourcase)arestudied,thereisasignificantadvantage comparedtomanycomplexneuralnetworkdesignsthataretypicallyappliedforthree-dimensional reconstructionofreal-lifescenes.First,theobjectsareofsimpleshape(spherical)whichcanbeeasily modeledartificially.Thereisnoneedfortherecognitionorclassificationofobjetsasanindividualtask. Hence,thenetworkdoesnotneedtocomputefeaturemapsbutcanbeseenasakindofsharpeningfilterina 3Dfield.Anotheradvantageliesinthepossibilityofartificialtrainingdata.Aneuralnetworkgenerallyneeds alargenumberoftrainingdatasetstobeproperlytrained.Inourcase,theseareimagesetsforthefour camerasandthedesiredfinalvolumetricfield.Inmanyreal-lifeapplicationsofneuralnetworksthistraining dataishardorexpensivetoobtain.Inoursituation,wecaneasilycalculateartificialimagesfromrandomly chosen3Dpositionswithtuneableparticleappearances.Also,thecorrespondingvolumetricfieldiseasily constructed.Asaresult,theneuralnetworkcanbefine-tunedtomatchtheexactmeasurementconditions suchasparticlesizes,brightnessandtheimagenoiseofthecameras.Inthefollowingsectionwewillpresent theactualnetworkdesignandgivedetailedinformationregardingthetrainingprocess. 4.Networkdesign Thenetworkdeignistakenfrom[13]andwillbebrieflyoutlinedhere.Theneuralnetworkisdesignedto transformthevolumetricinitialinputfieldI ofsizeN ×N ×N tothevolumetricfinaloutputfieldI with i x y z f thesamesize.Itsdesignisdepictedinfigure2.Thedimensionalityandvoxelnumbersofthevolumetricfield remainunchangedthroughoutthenetwork.Thenetworkisbuiltwitha3Dimageinputlayermatchingthe sizeoftheinitialfieldN ×N ×N .Thefirstconvolutionallayerischosenas3 ×3 ×3slidingcuboidal x y z convolutionfilterwithasinglefilterfollowedbybatchnormalizationandaReLulayer.Thefollowing convolutionallayersaresettomaintainthedatasize—thisistypicallycalledsamepadding—andarefollowed eachbyabatchnormalizationlayerandaReLulayer.Thefirstblockiscontinuedwitheight 3D-convolutionallayerswithfiltersofsize3 ×3 ×3andanumberof16filterseach.Thelastconvolutional layerhasafiltersizeof1andakernelsizeof3 ×3 ×3followedbyabatchnormalizationlayer.Tomapthe networkontoanoutputfieldwithvaluesbetweenzeroandone,asigmoidlayerisusedhereratherthana ReLulayer. Theregressionlayerneedsasuitablelossfunctiontoensurethatthenetworkweightsconvergeduring training.Thelossfunctionisnecessarytodefineamatchormismatchbetweenthedesiredresultandthe actualresultfromthenetwork.Theoriginallyproposedlossfunctionusedafinetuningparameter εto 3d image input layer 1x3 3d convolutional layer batch normalization layer relu layer 16x3 3d convolutional layer batch normalization layer relu layer 1x3 3d convolutional layer batch normalization layer sigmoid layer regression layer Mach. 
Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer ensureconvergenceofthenetworkduringtraining.Here,unliketheoriginalnetworkfrom[13],weemploy adicecoefficientasalossfunction L[23].Thisoneisapplicablewithoutanyinputparameterorprior knowledgeoftheuserside.Itisdefinedby 2 (T ·Y) L =1 − ∑ ∑ , (1) 2 2 T + Y whereT isthegroundtruthtrainingdatafield(Target)andY istheinitialfield.Thesummationisdoneover allvoxelsofthefieldandoverallbatchesandthemultiplicationisdoneelement-wise(Hadamardproduct). Thislosscoefficientisnormalizedsothatitreturns0foraperfectmatchbetweenT andY and1for mismatchingT andY.Forclarityitcanbethoughtofasanintersection-over-unionlossfunctionthatis widelyaddressedintheliterature[24,25]. 5.Networktrainingdetails Thetrainingoftheneuralnetworkisverytimeconsuming,especiallywhenthevolumetricfieldconsistsofa finevoxelgridandmanytrainingimagesareused.Eachtrainingdatasetcontainsthegroundtruth3Dfield anditscorrespondingprojectedcameraimages.Thegroundtruthfieldisgeneratedbychoosingrandom particlepositionsinthereconstructionvolume.Tothevoxelsaroundthechosenparticleposition,Gaussian distributedvoxelintensitiesareassigned.ThespatialwidthoftheGaussianischosentosignificantlyliftat leasttheparticlecontainingvoxelandtheclosestneighboringvoxelsabovenoiselevel.Theartificialcamera imagesarecomputedfromtheexactrandomparticlepositionsusingthecameraprojectionmatricesofeach camera.TheparticlesintheimagearethenagainmodeledasaGaussianwithawidthoftypically5px.This resultsinaparticlediameterof8–10pxintheimage,whereas1pxcorrespondstoabout12 µminthe investigatedplasmavolume.Toaccountfordifficultimagingsituationsinourexperiment,eachparticleis givenarandommaximumintensityintherangeof0.7–1.Thisintensityisusedintheimagesaswellasinthe volumetricfield.Toaccountforasrealisticaspossibletrainingdata,wealsoincludedbackgroundnoisein theimages.However,wefoundthatnoiseisnotnecessarytopreventoverfittingduringthetrainingprocess. Finally,thecameraimagedataisstoredas8-bitdataandthe3DfieldisstoredinsingleprecisiontosaveGPU memoryduringtraining. Aftertraining,thenetworkshouldofcoursebecapabletocorrectlyprocessingunknowninputdata insteadofjustreproducingthetrainingdata.Toobtainsuchageneralizing network,thereisaminimum numberofdifferenttrainingdatasetsnecessary.Theminimumnumberofthesetrainingimagescanonlybe estimated:everyvoxelshouldbecoveredbytheinitialfieldatleastonceinthewholetrainingdataset.Asa ruleofthumb,thenecessarynumberofimagescanbecalculatedbythenumberofvoxelsdividedbythe numberofparticlespertrainingimages.Forourcase,wefoundthatanumberof1000trainingimagesis sufficientforthetrainingtoconvergetowardsageneralizingsolutionforthenetworkcoefficients.Thereby, the1000trainingimagesaresynthesizedwithanumberof4000particlesrandomlyspreadoverthe investigatedmeasurementvolumeasshowninfigure3.Asthenetwork’smemoryfootprintisusuallylarge, weproposetouseabatchsizeof1,whichmeansthatonlyonetrainingdatasetisusedatatimetooptimize thenetwork.Wefoundthatconvergenceisusuallyreachedin3epochswithdecreasinglearningratesof0.1, 0.01andfinally0.001. ThetrainingprocessissignificantlyfasterusingGPU-acceleration.WeusedaNVIDIARTX-2080Ti graphicsadapterforthetraining.Ittakesapproximately6hcomparedtoaCPUtrainingof16h.Thecrucial parameteristheamountofGPUmemory.Thevolumetricgridonwhichtheinitialfieldisdefinedisthe mainreasonfortheneedofalargeamountofmemory.Inourcaseweusea (332 ×220 ×68)-gridwitha spacingof40 µmandafiner (456 ×302 ×91)-gridwithaspacingof30 µm.Thisresultsinalmostfull occupancyofthe11GBGPUmemoryduringthetrainingprocess. 
Whenaneuralnetworkistrained,ithastobeensuredthattheconvergedresultgeneralizeswellto unknowndatainsteadofjustlearningthetrainingdata‘byheart’.Tostudythegeneralizationbehaviour,we randifferenttrainingswithavariedamountofnoiseinthesyntheticmeasurementimagesandneverfounda convergedsolutionthatdidnotgeneralizetounknowndata.Hence,weconcludethatthenetworkis generalizingquitewellbydesign. 6.Fromimagestoparticlepositions Inthissection,wewillgiveadetaileddescriptionofthenecessarystepstoretrieveparticlepositionsbasedon AIPRfrommeasuredimages.Anaccuratecameracalibration[26–29]isaprequisiteforallfurthersteps.The networkdefinedintheprevioussectionistrainedusingthesamecameracalibrationsasforgeneratingthe trainingdataandwillbeusedinthecorrespondingprocessingstep. 4 Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer Figure3.(a)Syntheticcameraimagewith4000particles.(b)Close-upofthesyntheticimage.Thespatialparticleintensityis Gaussiandistributedwitharandommaximumintensitybetween0.7and1. Figure4.(a)Closeupofmeasuredimage(inverted).(b)Identicalcloseupafterpreprocessing. 6.1.Preprocessingofexperimentalimages Inthepresenteddesign,theneuralnetworkprocessingneedstheparticlestohaveacommonsphericalshape withcomparablebrightnessandsize.Thisprequesiteisoftenhardtomatchinsomeexperimental measurements.Hence,weappliedthefollowingimagepreprocessingstepstoachieveuniformparticle projections.TherawmeasurementimageisprocessedbyaSobelfilterwhichemphasizesintensitygradients. Afterwards,atwo-dimensionalGaussianbandpassfilterisappliedtofilteroutnoiseandtoturntheparticle imagesintoaGaussianshapedsphericalintensityprofile.Infigure4onecanseetheexperimentalrawimage (a)andtheprocessedimage(b).Afterpreprocessingtheimages,theparticleshaveanicelysphericalshape buttheintensityisstillnotuniform.Toaddressthisissueonecanmakethenetworklearntoaccept non-uniformbrightnessinacertainmanner.Duringnetworktraining,onehastokeepinmindtoproduce trainingimagesthatalsofeatureparticleprojectionswithnon-uniformintensity.Wefoundthatenlarging theparticlesinthetrainingimagesaswellasthemeasurementimageswillimprovethenetworkdetection outcome.Thisisprobablyduetoourrelativelycoarse3Dgrid(seenextparagraph)whichhasa correspondingresolutionofabout4pxinthemeasurementimage. 6.2.Initialfieldgeneration Afterthemeasurementimagesarepreprocessed,asocalledinitialfieldI isgeneratedforeverycamera i,N view(N =1, . . . ,4inourcase).Thefieldisdefinedonthesamevolumetricgridasthegroundtruthfield. Foraknownprojectionmatrixormappingfunction,typicallyobtainedbycameracalibration,theinitial fieldisgeneratedasfollows.First,foreachvoxelofthevolumetricfield,oneneedstofindthatpixelonto 5 Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer Figure5.Asmallsub-sampleofthevolumetricfieldshowingtheprocessingsteps.Theparticleimageis‘ray-casted’intothe volumetricfieldusingtheprojectionsofeachoftheN cameras,(a)–(d).Thefourdifferentcameraviewsofthesameparticle resultinslightlydifferentvoxelsbeingactivated.(e)Theresultinginitialfieldfromequation(2)inthisregion.(f)Predicted volumetricfieldof(e)fromthetrainednetwork.(g)Thegroundtruthfieldusedtotrainthenetworkfromtheexact3Dposition oftheparticle. 
whichthevoxelcenterisprojected.Then,the(scalar)entriesoftheinitialfieldofeachcameraI aregiven i,N bytheintensityofthecorrespondingpixelsconnectedtoeachvoxel(‘ray-casting’).Wewanttonotethatthe algorithmisspedupforprocessingalargenumberofimageswhenalookuptableisgeneratedsothatthe projectionisnotcomputedagainforeveryimage.Whenthiscomputationisdoneforallcameraviews,then theN initialfieldsarecombinedby ( ) 1/N I = I . (2) i i,N Forclarity,thiscombination-processissketchedinfigure5.Images(a)–(d)showasmallsubsampleofthe volumetricfieldsI fromtheprojectionsofthesensorpixels.Asthecamerashaveslightlydifferentlinesof i,N sightintothevolume,thisray-castingprocessresultsinslightlydifferentdirectionsoftheactivatedvoxels. Aftercombiningthefields(a)–(d)byusingequation(2),theinitialfieldI asshowninimage(e)isreadyto beprocessedbytheneuralnetwork.Theinitialfieldcombinestheray-castinginformationfromeachcamera intoasinglevolumetricfield. 6.3.Networkprocessing Thenetworkprocessingitselfisjustthecallofthetrainednetwork(inMATLABthisstepiscalledprediction) withtheinitialfieldasaninput.ThepredictedoutputfieldI producedbytheneuralnetworkisagaina volumetricfieldofthesamesizeastheinputfieldwheretheparticleintensitiesrangefromzerotoone. Particlelocationsarethenencodedbycontiguousbrightvoxelsorideallyas3DGaussiandistributed intensityregions(ifthegridresolutionisfineenough).Thegivenexampleinfigure5(f)showstheresultof anexamplepredictionofthenetwork.Theelongatedinitialfieldfromfigure5(e)seemsto‘collapse’intoa 3Dspot.Forcomparisonwealsoshowthegroundtruthfieldinfigure5(g),whichisknownastestdatafrom thenetwork,buthasnotbeenusedaslearningdata.Thegroundtruthfieldandthepredictedfieldarein goodagreementdemonstratingthecapabilityofthenetwork. 6.4.Particlepositionextraction Toextracttheactualthree-dimensionalparticlepositionsfromthisoutputvolumetricfield,weproposethe followingapproach.Inexperimentsitisnotalwayspossibletoguaranteeauniformilluminationofthe 6 Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer observedparticles.Thisresultsinavolumetricfieldthatcontainsvoxelsrepresentingreconstructedparticles, whichhaveanon-uniformbrightness.Asimplethresholdfilterappliedonthe3Dfieldwillthusnotrecover allparticlepositionsinthiscase.Todetectthepositionsofasmanyaspossibleparticles,weproposea step-wisereducedthresholdvaluefollowedbyidentifyingcontiguousregionsinthevolumetricdata.In MATLAB,theregionsaboveacertainintensitythresholdcanbefoundusingtheregionpropsfunction.Ifa regionwith7ormorevoxelsabovethecurrentthresholdisfound,theirintensity-weightedmeanisthen associatedwiththeparticleposition.Beforeproceedingwiththenextlowerthresholdtofindparticleswith lowerintensity,itisneccessarytozeroallpreviouslyidentifiedconnectedregionsfromthevolumetric dataset.Inourmeasurementswefoundthatreducingtheintensityfrom0.7to0.05withastepsizeof0.05 workswell.Thesevalues,ofcourse,dependonthenoiselevelandthegeneraldataquality. 6.5.PositionrefinementbasedonSTB AstheAIreconstructionalgorithmisbasedonavolumetricgridwithalimitedspatialresolution,itisclear thattheaccuracyofthereconstructedparticlepositionsisalsolimited.Duetohardwarerestrictionsorto reducecomputingtimethevolumetricfieldmightbechosencoarserthantherequiredspatialresolution. Nevertheless,inthiscasethecoarserpositionsserveasaperfectstartingpointforarefinementstepusingthe STBmethod[10].Inthefollowing,wewillcarryoutbenchmarktestswithandwithouttheadditionalSTB refinementtoshowitsinfluence. 
7.Results—syntheticdata Inthisstepwegeneratetestdatainthesamewayasthelearningdata.However,thesetestdatahavenotbeen usedinthelearningdataset,hence,thenetworkdoesnot‘know’thetestdata.Toestimatetheperformance ofthetrainednetwork,wewillshowdifferentmeasuresforvariedparameters.Onemeasurewillbecalledthe meanerror.Thisisdefinedasthedistance(in µm)fromareconstructedpositiontoitsnearestneighborin thegroundtruthofthetestdata.AnothermeasureisthedetectionrateR =N /N ,whichisdefinedbythe NN gt fractionofthecorrectlyreconstructednumberofparticlesN thatarecloserthan50 µmtooneoftheN NN gt groundtruthparticles.Ittellsus,howmanyofthesyntheticparticlesaresuccessfullyreconstructed.Thelast importantmeasurewillbetheratioofghostparticles.Particlesthatarereconstructed,butdonotmatcha groundtruthparticlewithinadistanceof50 µm,willbeconsideredasaghostparticle.Theratio G =N /N isthendefinedastheratiobetweenthenumberofghostparticlesN andthetotalnumberof gh NN gh reconstructedparticlesN .Thefollowingbenchmarksonsyntheticimagedatahavebeendoneusingthe NN neuralnetworkbasedona30 µm(456 ×302 ×91voxels)anda40 µm(332 ×220 ×68voxels)grid.The imageinputdataandthereconstructionvolumewasidenticalinbothcases.Theslightreductionofvoxel sizefrom40to30 µmincreasesthenumberofvoxelsbyafactorof2.5.Afurtherreductionofvoxelsizewas notpossiblewithourhardware. Thebenchmarkresultsusingthecoarsergridareshowninfigure6.Thenumberofparticlesthathasbeen usedforeachbenchmarkrunisshownasatotalnumberandasaseedingrate(particlesperpixelorppp), whichismoreusefultobecomparablewithotherexperimentsandsimulations.Figure6(a)showstheresults thatareobtainedbyjustusingAIPR,figure6(b)showsthesameresultsfollowedbySTBrefinement.Itcanbe seenthatforparticlenumbersofabout3000whichwefindtypicallyinourexperiments,90%oftheparticles withoutrefinementand80%withSTB-refinementarereconstructed.Forclarity,theparticlenumberof3000 isalsoindicatedbytheverticaldashedlineintheplot.Notethatthehigherreconstructionfractionwithout usingtherefinementcomesatthecostofahigherghostparticleratewhichis7%withoutrefinementand3% withrefinement.ThismeansSTBeffectivelyreducesghostparticles,butalsotrueparticles.STBistherefore morerestrictive.Thepositioningerror(dottedline)ofthereconstructionbasedonthe40 µmgridisabout 15 µmwithoutrefinementandjust10 µmwithrefinementataparticlenumberof3000. Theneuralnetworkbasedonthefinergridperformsslightlybetterasshowninfigure7.The reconstructionrateisonahigherlevelforthenon-refinedresultsinfigure7(a).Theghostparticlerateis slightlylowercomparedtothecoarsegridfromfigure6.Thismightbeduetothefactthatthevolumetric fieldisnowlesssparseandcontainsmore‘hotvoxels’thatcontributetoaparticlewhicheasestheparticle detection.ThemeanerrorisalmostthesamefortherawnetworkprocessingandtheSTB-refinement pos-processing. Thenetworkthatwasdefinedonafinergridperformsreasonablywell.Thepredictionstepoftheneural networkfollowedbythepositionextractionfromthevolumetricfieldisdoneinaboutonesecondorlesson ourGPU.TheSTBrefinement,whichtakesseveralminutesperframedoesnotimprovethereconstruction toomuch.TheaccuracyiscomparabletotheSTB-refinedapproachwhiletheghostparticlefractionisstill lowconsideringabout3000visibleparticles.Thereadershouldnotethatafurtherreductionofghost particlesispossiblebytrackingparticlesovermanyframes.Ghostparticleswouldonly‘exist’forafew 7 Mach. Learn.: Sci. Technol.2(2021)045019 MHimpelandAMelzer Figure6.Networkperformancetestedonsyntheticdatawithvariationoftheparticlenumber.Thegridresolutionwas40 µm. 
For both cases it can be seen that the performance gets worse when the particle number or seeding density gets too high. We think that this is based on the fact that we need a certain minimum particle distance in the volumetric field for our particle position extraction to work accurately. Unfortunately, this means that with increasing seeding density one would need an increasing grid resolution, which is not possible in most cases due to hardware limitations.

8. Results — experimental data

Here, we now use experimental data from the parabolic flight as described in section 2. The neural network has been trained with a set of projection matrices that are obtained from an actual calibration of our experimental setup. The volume of the volumetric field was adjusted to match the investigated volume in the experiment. Thus, the trained network can be directly applied to reconstruct particles from a set of measurement images that have been taken with the camera system. One image from the experiment is shown in figure 1.

As this is a measurement, there is no ground truth data available that could be used to verify the reconstruction results. However, these data have been previously analyzed using STB [22] and we can now compare the results from the neural network with the STB results. It should be noted that in this analysis of the experiment the STB approach was used completely independently of the neural network and not as a refinement step of neural network predictions. In figure 8 we show sample particle trajectories that have been tracked for ten frames. As one can see, the different methods seem to be sensitive to different aspects, as the detected particles appear partially in different regions of the dust cloud. While AIPR has more detections in the positive y-direction, which cannot be found in the STB data, there are fewer AIPR detections in the negative y-region compared to the STB results. To quantify this, we identified all matching particle positions that are characterized by a proximity between the reconstruction techniques of less than 40 µm.

Figure 8. (Top) The reconstructed particle trajectories from ten consecutive frames. Blue trajectories are obtained using STB, orange trajectories are obtained from neural network processing. (a) Trajectories in the full reconstruction volume. (b) Close-up for clarity.

One result of this position matching is that the mean distance of the particle positions from both approaches is 16 µm. In other words, when both approaches reconstruct a particle, the positions agree quite well. On the one hand, 58% of the AIPR positions match the STB positions and 42% of the AIPR positions are exclusively detected by this approach. On the other hand, only 32% of the STB positions match the results from the AIPR processing and 68% of the positions are exclusively found by STB. (A minimal sketch of this cross-matching is given below.)
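The cross-comparison between the two reconstructions can be sketched in the same spirit as the synthetic benchmark measures. The 40 µm proximity is the one used above; the function and variable names are illustrative placeholders and knnsearch again requires the Statistics and Machine Learning Toolbox.

```matlab
% Minimal sketch of the AIPR/STB position matching within 40 µm (section 8).
% posAIPR, posSTB: particle positions from one frame as N x 3 arrays in
% micrometres. Names are illustrative only.
function [fracAIPR, fracSTB, meanDist] = crossMatch(posAIPR, posSTB)
    [~, dA] = knnsearch(posSTB,  posAIPR);   % nearest STB particle per AIPR particle
    [~, dS] = knnsearch(posAIPR, posSTB);    % nearest AIPR particle per STB particle
    fracAIPR = mean(dA < 40);                % AIPR positions with an STB counterpart
    fracSTB  = mean(dS < 40);                % STB positions with an AIPR counterpart
    meanDist = mean(dA(dA < 40));            % mean distance of the matched pairs
end
```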
The difference in the results between both algorithms may be due to their basic principles. Whereas the AIPR algorithm makes a single snapshot-like detection in every single frame, the plain STB algorithm tries to follow the particle path consecutively by using Kalman [30] or Wiener [31] filtering. Both approaches have pros and cons. The AIPR algorithm is thus insensitive to a sudden change of the particle system, which can be induced by vibrations in the measurement setup, as is the case in our measurements on parabolic flights. The STB algorithm has the advantage that particles are projected, reconstructed and then tracked for a relatively long time, even if the brightness or imaging quality of a particle varies in consecutive frames.

Another possibility to compare both algorithms is to look at the physical properties obtained with either approach. The number density profile along the z-direction of the observed dust cloud will now be compared. In earlier work on this data set, we found that the z-profile of the number density n revealed a layered structure. In figure 9, this profile obtained by STB is shown by circles. The corresponding solid line represents a fit with three Gaussian distributions to the density data (a minimal sketch of such a fit is given below). The same data set, but analyzed with AIPR, gives a quite similar impression of the structure. The AIPR data in figure 9 shows two strong peaks with a much clearer separation of the layers. The third peak near z ≈ 0.7 mm suggested by the STB analysis is only faintly present in the AIPR analysis. It can be suspected that the AIPR algorithm is less susceptible to ghost particles in between the peaks compared to STB; more ghost particles would result in a smeared-out distribution. On the other hand, the faint third peak in the AIPR analysis may be caused by the poor imaging quality of particles that are not well focused or illuminated. As already said, it is difficult for the AIPR approach to handle particle projections with a poor signal-to-noise ratio. In contrast, some parameters in STB can be fine-tuned to also handle at least some of the weakly illuminated particles.

Figure 9. Density profiles created using STB (blue) and AIPR (red) algorithms. The data points are fitted with three Gaussian functions.
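For reference, a fit with three Gaussian distributions as used for figure 9 can be set up as sketched below. The data vectors, start values and fitted amplitudes in this snippet are purely synthetic placeholders, not the measured profile, and lsqcurvefit requires the Optimization Toolbox.

```matlab
% Minimal sketch of a three-Gaussian fit to a number density profile n(z),
% as used for figure 9. All data and start values are synthetic placeholders.
gauss3 = @(p, z) p(1)*exp(-((z - p(2))./p(3)).^2) + ...
                 p(4)*exp(-((z - p(5))./p(6)).^2) + ...
                 p(7)*exp(-((z - p(8))./p(9)).^2);
zq = linspace(-1, 1, 60).';                              % z positions in mm (placeholder)
nq = gauss3([1 -0.5 0.15, 1 0.1 0.15, 0.3 0.7 0.15], zq) ...
     + 0.02*randn(size(zq));                             % synthetic layered profile
p0   = [1, -0.5, 0.2,  1, 0.1, 0.2,  0.5, 0.7, 0.2];     % start values [amplitude, centre, width] x 3
pFit = lsqcurvefit(gauss3, p0, zq, nq);                  % least-squares fit of the three Gaussians
plot(zq, nq, 'o', zq, gauss3(pFit, zq), '-')             % data points and fitted profile
xlabel('z (mm)'); ylabel('number density n')
```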
9. Conclusion

We have presented the application of a neural network to reconstruct three-dimensional particle positions from a multi-view imaging diagnostic. With an exemplary 4-camera setup, the necessary steps for training and applying a neural network are described. It was shown that the neural network performs nearly as well as the Shake-the-Box algorithm, whilst being extremely fast (once the trained network is available). The prediction step can be done on any modern office PC, but for training of the network a GPU with a large memory is recommended. For remote application of such reconstruction tasks, as e.g. on the International Space Station, this possibility to share the computation load is very welcome. The demanding computations, namely the network training, can be done before the measurement and on high-performance computers. After this energy- and time-consuming task, the analysis of the measurement images can be done with regular hardware in a short time.

The reconstruction approach was benchmarked on synthetic data and applied to experimental data. The AIPR approach can be suggested for stereoscopic measurements of particles at a decently high seeding rate. With AIPR the successfully reconstructed particle fraction is in the range between 80% and 90% even at a high particle seeding rate. The number of ghost particles is still at an acceptable level and the position error is smaller than the voxel size.

AIPR has problems when the imaging conditions are not perfect. The influence of camera models and errors in camera positioning need to be addressed in future investigations. Nevertheless, we were able to reliably reconstruct 3D positions from experimental data of a dusty plasma. The results were very compatible with the earlier analysis using STB. With AIPR we could verify the layering of the investigated dust cloud.

There is still work to be done to optimize the behaviour of the AIPR when imaging conditions are not perfect. Additionally, it is not yet clear how camera models and camera positioning may affect the performance of the neural network reconstruction. But as we presented in this paper, the speed of the reconstruction process, which is nearly independent of the particle number, is a sufficient reason to continue research in this field. Furthermore, the reconstruction of particles at higher seeding rates than we see in our experiment is still challenging. In future work we hope to get better results at higher particle densities using more sophisticated position extraction from the final volumetric fields obtained by the neural network.

Data availability statement

The data generated and/or analysed during the current study are not publicly available for legal/ethical reasons but are available from the corresponding author on reasonable request. The data that support the findings of this study are available upon request from the authors.

Acknowledgment

Financial support from the Deutsches Zentrum für Luft- und Raumfahrt (DLR) under Project No. 50WM1962 is gratefully acknowledged.

ORCID iDs

Michael Himpel https://orcid.org/0000-0001-6710-0071
André Melzer https://orcid.org/0000-0001-9301-9357

References

[1] Wei B, Wei W, Shaoyi B, Ben J and Bo L 2020 IOP Conf. Ser.: Mater. Sci. Eng. 787 012002
[2] Wang C, Sun X and Li H 2019 J. Phys.: Conf. Ser. 1176 032028
[3] Shen Z 2021 J. Phys.: Conf. Ser. 1881 022005
[4] Swalaganata G, Sulistyaningrum D R and Setiyono B 2017 J. Phys.: Conf. Ser. 893 012062
[5] Elsinga G E, Scarano F, Wieneke B and van Oudheusden B W 2006 Exp. Fluids 41 933–47
[6] Williams J D 2011 Phys. Plasmas 18 050702
[7] Akhmetbekov Y, Lozhkin V, Markovich D and Tokarev M 2011 Multiset triangulation 3D PTV and its performance compared to tomographic PIV 9th Int. Symp. on Particle Image Velocimetry—PIV vol 11 pp 21–3
[8] Mulsow M, Himpel M and Melzer A 2017 Phys. Plasmas 24 123704
[9] Wieneke B 2013 Meas. Sci. Technol. 24 024008
[10] Schanz D, Gesemann S and Schröder A 2016 Exp. Fluids 57 70
[11] Huang H, Schwabe M and Du C R 2019 J. Imaging 5 36
[12] Dietz C, Budak J, Kamprich T, Kretschmer M and Thoma M H 2021 Contrib. Plasma Phys. e202100079
[13] Gao Q, Li Q, Pan S, Wang H, Wei R and Wang J 2019 Particle reconstruction of volumetric particle image velocimetry with strategy of machine learning (arXiv:1909.07815)
[14] Himpel M 2021 AIPR Toolbox for MATLAB (available at: https://physik.uni-greifswald.de/ag-melzer/aipr-toolbox/)
[15] Nefedov A P et al 2003 New J. Phys. 5 33
[16] Schwabe M, Zhdanov S, Räth C, Graves D B, Thomas H M and Morfill G E 2014 Phys. Rev. Lett. 112 115002
[17] Thoma M H, Höfner H, Kretschmer M, Ratynskaia S, Morfill G E, Usachev A, Zobnin A, Petrov O and Fortov V 2006 Microgravity Sci. Technol. 8 47–50
[18] Thomas H M et al 2008 New J. Phys. 10 033036
[19] Klindworth M, Arp O and Piel A 2006 J. Phys. D: Appl. Phys. 39 1095–104
[20] Klindworth M, Arp O and Piel A 2007 Rev. Sci. Instrum. 78 033502
[21] Ishihara O 2007 J. Phys. D: Appl. Phys. 40 R121–47
[22] Himpel M, Schütt S, Miloch W J and Melzer A 2018 Phys. Plasmas 25 083707
[23] Dice L R 1945 Ecology 26 297–302
[24] Jaccard P 1912 New Phytol. 11 37–50
[25] Rezatofighi S H, Tsoi N, Gwak J, Sadeghian A, Reid I D and Savarese S 2019 Generalized intersection over union: a metric and a loss for bounding box regression CoRR (arXiv:1902.09630)
[26] Wieneke B 2008 Exp. Fluids 45 549–56
[27] Wieneke B 2018 Meas. Sci. Technol. 29 084002
[28] Zhang Z 2000 IEEE Trans. Pattern Anal. Mach. Intell. 22 1330–4
[29] Himpel M, Buttenschön B and Melzer A 2011 Rev. Sci. Instrum. 82 053706
[30] Kalman R E 1960 Trans. ASME, J. Basic Eng. 82 35–45
[31] Wiener N 1964 Extrapolation, Interpolation and Smoothing of Stationary Time Series (Cambridge, MA: MIT Press)
