Abstract
In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload for contexts that involve interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit a dynamic change in human trust and workload with respect to the system's reliability and user interface transparency as well as the presence or absence of danger. We use human subject data to estimate transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when the existing trust is low but also is more likely to decrease trust when it is already high. Furthermore, we show that by using high transparency, the workload of the human is always likely to increase. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to affect human trust-workload behavior towards improving human-machine collaboration.
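The abstract describes estimating a POMDP's transition and observation probabilities from experiment data. As a rough illustration of that modeling step (not from the paper; the function name and data layout are hypothetical), the sketch below computes the simplest count-based maximum-likelihood estimates from sequences in which both the state and the observation are recorded. Note that with genuinely hidden trust-workload states, as in the paper, an EM-style estimator would be needed instead of direct counting.

```python
def estimate_pomdp_matrices(episodes, n_states, n_obs):
    """Count-based MLE of transition and observation probabilities.

    episodes: list of sequences of (state, observation) index pairs,
    assuming states are directly labeled (a simplification; hidden
    states would require an EM-style estimator such as Baum-Welch).
    """
    trans = [[0.0] * n_states for _ in range(n_states)]  # state -> next state counts
    obs = [[0.0] * n_obs for _ in range(n_states)]       # state -> observation counts
    for ep in episodes:
        for s, o in ep:
            obs[s][o] += 1
        for (s, _), (s_next, _) in zip(ep, ep[1:]):
            trans[s][s_next] += 1

    def normalize(rows, width):
        # Convert counts to row-stochastic probabilities;
        # fall back to a uniform row for states never visited.
        out = []
        for row in rows:
            total = sum(row)
            out.append([c / total for c in row] if total else [1.0 / width] * width)
        return out

    return normalize(trans, n_states), normalize(obs, n_obs)
```

For example, two short episodes `[(0, 0), (0, 1), (1, 1)]` and `[(1, 0), (0, 0)]` yield the transition row `[0.5, 0.5]` for state 0 (one self-transition, one transition to state 1).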
Original language | English (US) |
---|---|
Pages (from-to) | 315-321 |
Number of pages | 7 |
Journal | IFAC-PapersOnLine |
Volume | 51 |
Issue number | 34 |
DOIs | 10.1016/j.ifacol.2019.01.028 |
State | Published - Jan 1 2019 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
Cite this
In: IFAC-PapersOnLine, Vol. 51, No. 34, 01.01.2019, p. 315-321.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - Improving Human-Machine Collaboration Through Transparency-based Feedback – Part I
T2 - Human Trust and Workload Model
AU - Akash, Kumar
AU - Polson, Katelyn
AU - Reid, Tahira
AU - Jain, Neera
N1 - Funding Information: This material is based upon work supported by the National Science Foundation under Award No. 1548616. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Publisher Copyright: © 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
PY - 2019/1/1
Y1 - 2019/1/1
N2 - In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload for contexts that involve interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit a dynamic change in human trust and workload with respect to the system's reliability and user interface transparency as well as the presence or absence of danger. We use human subject data to estimate transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when the existing trust is low but also is more likely to decrease trust when it is already high. Furthermore, we show that by using high transparency, the workload of the human is always likely to increase. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to affect human trust-workload behavior towards improving human-machine collaboration.
AB - In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload for contexts that involve interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit a dynamic change in human trust and workload with respect to the system's reliability and user interface transparency as well as the presence or absence of danger. We use human subject data to estimate transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when the existing trust is low but also is more likely to decrease trust when it is already high. Furthermore, we show that by using high transparency, the workload of the human is always likely to increase. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to affect human trust-workload behavior towards improving human-machine collaboration.
UR - http://www.scopus.com/inward/record.url?scp=85061203943&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061203943&partnerID=8YFLogxK
U2 - 10.1016/j.ifacol.2019.01.028
DO - 10.1016/j.ifacol.2019.01.028
M3 - Article
AN - SCOPUS:85061203943
SN - 2405-8963
VL - 51
SP - 315
EP - 321
JO - IFAC-PapersOnLine
JF - IFAC-PapersOnLine
IS - 34
ER -