[1] THRUN S, MONTEMERLO M, DAHLKAMP H, et al. Stanley: The Robot That Won the DARPA Grand Challenge // BUEHLER M, IAGNEMMA K, SINGH S, eds. The 2005 DARPA Grand Challenge. Berlin, Germany: Springer, 2007: 1-43.
[2] 秦志嫒,白文岭,贾宁.智能网联汽车企业技术路径解析.汽车与配件, 2019(9): 31-33.
(QIN Z Y, BAI W L, JIA N. Analysis of Technical Path of Intelligent Networked Vehicle Enterprises. Automobile and Parts, 2019(9): 31-33.)
[3] PALMEIRO A R, VAN DER KINT S, VISSERS L, et al. Interaction between Pedestrians and Automated Vehicles: A Wizard of Oz Experiment. Transportation Research Part F: Traffic Psychology and Behaviour, 2018, 58: 1005-1020.
[4] JUN M, CHAUDHRY A I, D'ANDREA R. The Navigation of Autonomous Vehicles in Uncertain Dynamic Environments: A Case Study // Proc of the 41st IEEE Conference on Decision and Control. Washington, USA: IEEE, 2002: 3770-3775.
[5] KIBALOV V, SHIPITKO O. Safe Speed Control and Collision Probability Estimation Under Ego-Pose Uncertainty for Autonomous Vehicle // Proc of the 23rd IEEE International Conference on Intelligent Transportation Systems. Washington, USA: IEEE, 2020. DOI: 10.1109/ITSC45102.2020.9294531.
[6] BERNSTEIN D S, GIVAN R, IMMERMAN N, et al. The Complexity of Decentralized Control of Markov Decision Processes. Mathematics of Operations Research, 2002, 27(4): 819-840.
[7] 房俊恒. 基于点的值迭代算法在POMDP问题中的研究.硕士学位论文.苏州:苏州大学, 2015.
(FANG J H. Research on Point-Based Value Iteration Algorithm in POMDP Domains. Master Dissertation. Suzhou, China: Soochow University, 2015.)
[8] PINEAU J, GORDON G, THRUN S. Point-Based Value Iteration: An Anytime Algorithm for POMDPs // Proc of the 18th International Joint Conference on Artificial Intelligence. San Francisco, USA: Morgan Kaufmann, 2003: 1025-1030.
[9] BAI H Y, CAI S J, YE N, et al. Intention-Aware Online POMDP Planning for Autonomous Driving in a Crowd // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 2015: 454-460.
[10] LUO Y F, CAI P P, BERA A, et al. PORCA: Modeling and Planning for Autonomous Driving Among Many Pedestrians. IEEE Robotics and Automation Letters, 2018, 3(4): 3418-3425.
[11] CAI P P, LUO Y F, SAXENA A, et al. LeTS-Drive: Driving in a Crowd by Learning from Tree Search[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1905.12197.pdf.
[12] MONAHAN G E. State of the Art - A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms. Management Science, 1982, 28(1): 1-16.
[13] JORRITSMA P H M. Incremental Region Enhanced Neural Q-Learning for Solving Model-Based POMDPs. Bachelor Dissertation. Groningen, The Netherlands: University of Groningen, 2011.
[14] HORÁK K, BOŠANSKÝ B, CHATTERJEE K. Goal-HSVI: Heuristic Search Value Iteration for Goal-POMDPs // Proc of the 27th International Joint Conference on Artificial Intelligence. San Francisco, USA: IJCAI, 2018: 4764-4770.
[15] 杜波. 启发式概率值迭代算法:一种求解POMDP问题的近似框架.硕士学位论文.南京:南京大学, 2014.
(DU B. Heuristic Probabilistic Value Iteration: An Approximation Framework for POMDPs. Master Dissertation. Nanjing, China: Nanjing University, 2014.)
[16] 孙湧,仵博,冯延蓬.基于策略迭代和值迭代的POMDP算法.计算机研究与发展, 2008, 45(10): 1763-1768.
(SUN Y, WU B, FENG Y P. A Policy- and Value-Iteration Algorithm for POMDP. Journal of Computer Research and Development, 2008, 45(10): 1763-1768.)
[17] CHEN Y C, KOCHENDERFER M J, SPAAN M T J. Improving Offline Value-Function Approximations for POMDPs by Reducing Discount Factors // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2018: 3531-3536.
[18] THRUN S. Monte Carlo POMDPs // Proc of the 12th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 1999: 1064-1070.
[19] SILVER D, VENESS J. Monte-Carlo Planning in Large POMDPs // Proc of the 23rd International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2010, II: 2164-2172.
[20] NYLUND K L, ASPAROUHOV T, MUTHÉN B O. Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study. Structural Equation Modeling: A Multidisciplinary Journal, 2007, 14(4): 535-569.
[21] 郑红燕,仵博,冯延蓬,等.基于信念点裁剪策略树的POMDP求解算法.信息与控制, 2013, 42(1): 53-57.
(ZHENG H Y, WU B, FENG Y P, et al. Belief Point-Based POMDP Solution for Policy Tree Pruning. Information and Control, 2013, 42(1): 53-57.)
[22] WOLF T B, KOCHENDERFER M J. Aircraft Collision Avoidance Using Monte Carlo Real-Time Belief Space Search. Journal of Intelligent and Robotic Systems, 2011, 64: 277-298.
[23] 章宗长,陈小平.杂合启发式在线POMDP规划.软件学报, 2013, 24(7): 1589-1600.
(ZHANG Z C, CHEN X P. Hybrid Heuristic Online Planning for POMDPs. Journal of Software, 2013, 24(7): 1589-1600.)
[24] ZHANG Z Z, HSU D, LEE W S. Covering Number for Efficient Heuristic-Based POMDP Planning // Proc of the 31st International Conference on Machine Learning. San Diego, USA: JMLR, 2014: 28-36.
[25] 石轲. 基于马尔可夫决策过程理论的Agent决策问题研究.硕士学位论文.合肥:中国科学技术大学, 2010.
(SHI K. Research on Agent Decision Problem Based on Markov Decision Process Theory. Master Dissertation. Hefei, China: University of Science and Technology of China, 2010.)
[26] KARKUS P, HSU D, LEE W S. QMDP-Net: Deep Learning for Planning under Partial Observability // Proc of the 31st International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2017: 4697-4707.
[27] YUN C, CHOI S. Visual Localization and POMDP for Autonomous Indoor Navigation[J/OL]. [2022-11-16]. https://documents.pub/document/visual-localization-and-pomdp-for-autonomous-indoor-.html.
[28] VLASSIS N, LITTMAN M L, BARBER D. On the Computational Complexity of Stochastic Controller Optimization in POMDPs. ACM Transactions on Computation Theory, 2012, 4(4). DOI: 10.1145/2382559.2382563.
[29] TUTTLE E, GHAHRAMANI Z. Propagating Uncertainty in POMDP Value Iteration with Gaussian Processes[C/OL]. [2022-11-16]. https://citeseerx.ist.psu.edu/doc/10.1.1.400.6737.
[30] WASHINGTON R. BI-POMDP: Bounded, Incremental, Partially-Observable Markov-Model Planning // Proc of the European Conference on Planning. Berlin, Germany: Springer, 1997: 440-451.
[31] ROSS S, PINEAU J, CHAIB-DRAA B. Online Policy Improvement in Large POMDPs via an Error Minimization Search[C/OL]. [2022-11-16]. https://www.cs.cmu.edu/~sross1/publications/Ross-NESCAI07-AEMS.pdf.
[32] YE N, SOMANI A, HSU D, et al. DESPOT: Online POMDP Planning with Regularization. Journal of Artificial Intelligence Research, 2017, 58(1): 231-266.
[33] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with Deep Reinforcement Learning[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1312.5602.pdf.
[34] WANG Z Y, SCHAUL T, HESSEL M, et al. Dueling Network Architectures for Deep Reinforcement Learning // Proc of the 33rd International Conference on Machine Learning. San Diego, USA: JMLR, 2016: 1995-2003.
[35] VAN HASSELT H, GUEZ A, SILVER D. Deep Reinforcement Learning with Double Q-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 2016, 30(1): 2094-2100.
[36] HESSEL M, MODAYIL J, VAN HASSELT H, et al. Rainbow: Combining Improvements in Deep Reinforcement Learning // Proc of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence. Palo Alto, USA: AAAI, 2018: 3215-3222.
[37] HAARNOJA T, ZHOU A, HARTIKAINEN K, et al. Soft Actor-Critic Algorithms and Applications[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1812.05905.pdf.
[38] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous Control with Deep Reinforcement Learning[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1509.02971.pdf.
[39] EGOROV M. Deep Reinforcement Learning with POMDPs[C/OL]. [2022-11-16]. http://cs229.stanford.edu/proj2015/363_report.pdf.
[40] HAUSKNECHT M, STONE P. Deep Recurrent Q-Learning for Partially Observable MDPs[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1507.06527.pdf.
[41] HOCHREITER S, SCHMIDHUBER J. Long Short-Term Memory. Neural Computation, 1997, 9(8): 1735-1780.
[42] FOERSTER J N, ASSAEL Y M, DE FREITAS N, et al. Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1602.02672.pdf.
[43] ZHU P F, LI X, POUPART P, et al. On Improving Deep Reinforcement Learning for POMDPs[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1704.07978.pdf.
[44] IGL M, ZINTGRAF L, LE T A, et al. Deep Variational Reinforcement Learning for POMDPs. Proceedings of Machine Learning Research, 2018, 80: 2117-2126.
[45] LE T A, IGL M, RAINFORTH T, et al. Auto-Encoding Sequential Monte Carlo[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1705.10306.pdf.
[46] WANG Y B, LIU B, WU J J, et al. DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs // Proc of the 29th International Joint Conference on Artificial Intelligence. San Francisco, USA: IJCAI, 2020: 4190-4198.
[47] SINGH G, PERI S, KIM J, et al. Structured World Belief for Reinforcement Learning in POMDP. Proceedings of Machine Learning Research, 2021, 139: 9744-9755.
[48] CHEN X Y, MU Y M, LUO P, et al. Flow-Based Recurrent Belief State Learning for POMDPs. Proceedings of Machine Learning Research, 2022, 162: 3444-3468.
[49] DINH L, SOHL-DICKSTEIN J, BENGIO S. Density Estimation Using Real NVP[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1605.08803.pdf.
[50] TASSA Y, DORON Y, MULDAL A, et al. DeepMind Control Suite[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1801.00690.pdf.
[51] FERRER G, GARRELL A, SANFELIU A. Robot Companion: A Social-Force Based Approach with Human Awareness-Navigation in Crowded Environments // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2014: 1688-1694.
[52] LÖHNER R. On the Modeling of Pedestrian Motion. Applied Mathematical Modelling, 2010, 34(2): 366-382.
[53] HELBING D, MOLNÁR P. Social Force Model for Pedestrian Dynamics. Physical Review E, 1995, 51(5): 4282-4286.
[54] SEDIGHI S, VAN NGUYEN D, KUHNERT K D. Guided Hybrid A-Star Path Planning Algorithm for Valet Parking Applications // Proc of the 5th International Conference on Control, Automation and Robotics. Washington, USA: IEEE, 2019: 570-575.
[55] COULTER R C. Implementation of the Pure Pursuit Path Tracking Algorithm. Technical Report, CMU-RI-TR-92-01. Pittsburgh, USA: Carnegie Mellon University, 1992.
[56] CAI P P, LUO Y F, HSU D, et al. HyP-DESPOT: A Hybrid Parallel Algorithm for Online Planning Under Uncertainty[C/OL]. [2022-11-16]. https://arxiv.org/pdf/1802.06215v1.pdf.
[57] BOUTON M, COSGUN A, KOCHENDERFER M J. Belief State Planning for Autonomously Navigating Urban Intersections // Proc of the IEEE Intelligent Vehicles Symposium. Washington, USA: IEEE, 2017: 825-830.
[58] LIN X, ZHANG J C, SHANG J, et al. Decision Making Through Occluded Intersections for Autonomous Driving // Proc of the IEEE Intelligent Transportation Systems Conference. Washington, USA: IEEE, 2019: 2449-2455.
[59] HUBMANN C, QUETSCHLICH N, SCHULZ J, et al. A POMDP Maneuver Planner for Occlusions in Urban Scenarios // Proc of the IEEE Intelligent Vehicles Symposium. Washington, USA: IEEE, 2019: 2172-2179.
[60] PRUEKPRASERT S, ZHANG X Y, DUBUT J, et al. Decision Making for Autonomous Vehicles at Unsignalized Intersection in Presence of Malicious Vehicles // Proc of the IEEE Intelligent Transportation Systems Conference. Washington, USA: IEEE, 2019: 2299-2304.
[61] MEGHJANI M, LUO Y F, HO Q H, et al. Context and Intention Aware Planning for Urban Driving // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2019: 2891-2898.
[62] ULBRICH S, MAURER M. Probabilistic Online POMDP Decision Making for Lane Changes in Fully Automated Driving // Proc of the 16th International IEEE Conference on Intelligent Transportation Systems. Washington, USA: IEEE, 2013: 2063-2070.
[63] MENTASTI S, MATTEUCCI M. Multi-layer Occupancy Grid Mapping for Autonomous Vehicles Navigation // Proc of the AEIT International Conference of Electrical and Electronic Technologies for Automotive. Washington, USA: IEEE, 2019. DOI: 10.23919/EETA.2019.8804556.
[64] KHONJI M, JASOUR A, WILLIAMS B C. Approximability of Constant-Horizon Constrained POMDP // Proc of the 28th International Joint Conference on Artificial Intelligence. San Francisco, USA: IJCAI, 2019: 5583-5590.
[65] DING W C, ZHANG L, CHEN J, et al. Safe Trajectory Generation for Complex Urban Environments Using Spatio-Temporal Semantic Corridor. IEEE Robotics and Automation Letters, 2019, 4(3): 2997-3004.
[66] CHEN X Y L, LI S J, MERSCH B, et al. Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data. IEEE Robotics and Automation Letters, 2021, 6(4): 6529-6536.
[67] LAGHMARA H, BOUDALI M T, LAURAIN T, et al. Obstacle Avoidance, Path Planning and Control for Autonomous Vehicles // Proc of the IEEE Intelligent Vehicles Symposium. Washington, USA: IEEE, 2019: 529-534.
[68] YANG Z C, GAO Z H, GAO F, et al. Lane Changing Assistance Strategy Based on an Improved Probabilistic Model of Dynamic Occupancy Grids. Frontiers of Information Technology and Electronic Engineering, 2021, 22(11): 1492-1504.
[69] ELFES A. Using Occupancy Grids for Mobile Robot Perception and Navigation. Computer, 1989, 22(6): 46-57.
[70] MERHY B A, PAYEUR P, PETRIU E M. Application of Segmented 2-D Probabilistic Occupancy Maps for Robot Sensing and Navigation. IEEE Transactions on Instrumentation and Measurement, 2008, 57(12): 2827-2837.
[71] TSARDOULIAS E G, ILIAKOPOULOU A, KARGAKOS A, et al. A Review of Global Path Planning Methods for Occupancy Grid Maps Regardless of Obstacle Density. Journal of Intelligent and Robotic Systems, 2016, 84(1/2/3/4): 829-858.
[72] YANG K, GAN S K, HUH J, et al. Optimal Spline-Based RRT Path Planning Using Probabilistic Map // Proc of the 14th International Conference on Control, Automation and Systems. Washington, USA: IEEE, 2014: 643-646.
[73] PENDLETON S D, LIU W, ANDERSEN H, et al. Numerical Approach to Reachability-Guided Sampling-Based Motion Planning Under Differential Constraints. IEEE Robotics and Automation Letters, 2017, 2(3): 1232-1239.
[74] KARAMAN S, WALTER M R, PEREZ A, et al. Anytime Motion Planning Using the RRT* // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 2011: 1478-1483.
[75] FULGENZI C, SPALANZANI A, LAUGIER C, et al. Risk Based Motion Planning and Navigation in Uncertain Dynamic Environment[C/OL]. [2022-11-20]. https://hal.inria.fr/inria-00526601/file/PPRRT.pdf.
[76] MA H, MENG F, YE C W, et al. Bi-Risk-RRT Based Efficient Motion Planning for Autonomous Ground Vehicles. IEEE Transactions on Intelligent Vehicles, 2022, 7(3): 722-733.
[77] YANG H X, XU X M, HONG J C. Automatic Parking Path Planning of Tracked Vehicle Based on Improved A* and DWA Algorithms. IEEE Transactions on Transportation Electrification, 2022. DOI: 10.1109/TTE.2022.3199255.
[78] PAN Z C, YUAN M X, WANG R H, et al. D2WA: “Dynamic” DWA for Motion Planning of Mobile Robots in Dynamic Environments. International Journal of Dynamics and Control, 2022. DOI: 10.1007/s40435-022-01092-3.
[79] KOENIG S, LIKHACHEV M, FURCY D. Lifelong Planning A*. Artificial Intelligence, 2004, 155(1/2): 93-146.
[80] STENTZ A. Optimal and Efficient Path Planning for Partially Known Environments // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 1994: 3310-3317.
[81] YANG L, QI J T, SONG D L, et al. Survey of Robot 3D Path Planning Algorithms. Journal of Control Science and Engineering, 2016. DOI: 10.1155/2016/7426913.
[82] KIM D, KWON Y, YOON S E. Dancing PRM*: Simultaneous Planning of Sampling and Optimization with Configuration Free Space Approximation // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 2018: 7071-7078.
[83] WANG Y J, LIU Z X, ZUO Z Q, et al. Trajectory Planning and Safety Assessment of Autonomous Vehicles Based on Motion Prediction and Model Predictive Control. IEEE Transactions on Vehicular Technology, 2019, 68(9): 8546-8556.
[84] SAROYA M, BEST G, HOLLINGER G A. Roadmap Learning for Probabilistic Occupancy Maps with Topology-Informed Growing Neural Gas. IEEE Robotics and Automation Letters, 2021, 6(3): 4805-4812.
[85] OK K, ANSARI S, GALLAGHER B, et al. Path Planning with Uncertainty: Voronoi Uncertainty Fields // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 2013: 4596-4601.
[86] MCLEOD S, XIAO J. Navigating Dynamically Unknown Environments Leveraging Past Experience // Proc of the International Conference on Robotics and Automation. Washington, USA: IEEE, 2019: 29-35.
[87] JIMÉNEZ V, GODOY J, ARTUÑEDO A, et al. Object-Wise Comparison of LiDAR Occupancy Grid Scan Rendering Methods. Robotics and Autonomous Systems, 2023, 161. DOI: 10.1016/j.robot.2023.104363.
[88] SUN N, FAN Z Q, QIU Q, et al. Map Construction Fusing Environmental Information and Motion Constraints // Proc of the Chinese Intelligent Automation Conference. Berlin, Germany: Springer, 2021: 637-644.
[89] ELHAFSI A, IVANOVIC B, JANSON L, et al. Map-Predictive Motion Planning in Unknown Environments // Proc of the IEEE International Conference on Robotics and Automation. Washington, USA: IEEE, 2020: 8552-8558.
[90] BUI H D, LU Y J, PLAKU E. Improving the Efficiency of Sampling-Based Motion Planners via Runtime Predictions for Motion-Planning Problems with Dynamics // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2022: 4486-4491.
[91] JUNG Y, SEO S W, KIM S W. Fast Point Clouds Upsampling with Uncertainty Quantification for Autonomous Vehicles // Proc of the International Conference on Robotics and Automation. Washington, USA: IEEE, 2022: 7776-7782.
[92] WANG L Z, YE H K, WANG Q H, et al. Learning-Based 3D Occupancy Prediction for Autonomous Navigation in Occluded Environments // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2021: 4509-4516.
[93] DU TOIT N E, BURDICK J W. Probabilistic Collision Checking With Chance Constraints. IEEE Transactions on Robotics, 2011, 27(4): 809-815.
[94] ZHU H, ALONSO-MORA J. Chance-Constrained Collision Avoidance for MAVs in Dynamic Environments. IEEE Robotics and Automation Letters, 2019, 4(2): 776-783.
[95] DADKHAH N, METTLER B. Survey of Motion Planning Literature in the Presence of Uncertainty: Considerations for UAV Guidance. Journal of Intelligent and Robotic Systems, 2012, 65: 233-246.
[96] AOUDE G S, LUDERS B D, JOSEPH J M, et al. Probabilistically Safe Motion Planning to Avoid Dynamic Obstacles with Uncertain Motion Patterns. Autonomous Robots, 2013, 35: 51-76.
[97] SODHI P, HO B J, KAESS M. Online and Consistent Occupancy Grid Mapping for Planning in Unknown Environments // Proc of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington, USA: IEEE, 2019: 7879-7886.
[98] ARTUÑEDO A, VILLAGRA J, GODOY J, et al. Motion Planning Approach Considering Localization Uncertainty. IEEE Transactions on Vehicular Technology, 2020, 69(6): 5983-5994.
[99] LAU B, SPRUNK C, BURGARD W. Efficient Grid-Based Spatial Representations for Robot Navigation in Dynamic Environments. Robotics and Autonomous Systems, 2013, 61(10): 1116-1130.
[100] LÜZOW L, MENG Y, ARMIJOS A C, et al. Density Planner: Minimizing Collision Risk in Motion Planning with Dynamic Obstacles Using Density-Based Reachability[C/OL]. [2022-11-16]. https://arxiv.org/pdf/2210.02131.pdf.
[101] BANFI J, WOO L, CAMPBELL M. Is It Worth to Reason About Uncertainty in Occupancy Grid Maps During Path Planning? // Proc of the International Conference on Robotics and Automation. Washington, USA: IEEE, 2022: 11102-11108.
[102] CHOHAN N, NAZARI M A, WYMEERSCH H, et al. Robust Trajectory Planning of Autonomous Vehicles at Intersections with Communication Impairments // Proc of the 57th Annual Allerton Conference on Communication, Control, and Computing. Washington, USA: IEEE, 2019: 832-839.
[103] JABR B A, AGGARWAL R, KUMAR M. Optimal Allocation of Autonomous Vehicles Using Chance-Constraints for Mapping a Semi-Structured Environment // Proc of the AIAA SCITECH Forum. Reston, USA: AIAA, 2022. DOI: 10.2514/6.2022-1413.
[104] SOUZA A, MAIA R S, AROCA R V, et al. Probabilistic Robotic Grid Mapping Based on Occupancy and Elevation Information // Proc of the 16th International Conference on Advanced Robotics. Washington, USA: IEEE, 2013. DOI: 10.1109/ICAR.2013.6766467.
[105] BRAND M, MASUDA M, WEHNER N, et al. Ant Colony Optimization Algorithm for Robot Path Planning // Proc of the International Conference on Computer Design and Applications. Washington, USA: IEEE, 2010: V3-436-V3-440.
[106] PUSSE F, KLUSCH M. Hybrid Online POMDP Planning and Deep Reinforcement Learning for Safer Self-Driving Cars // Proc of the IEEE Intelligent Vehicles Symposium. Washington, USA: IEEE, 2019: 1013-1020.
[107] 章宗长. 部分可观察马氏决策过程的复杂性理论及规划算法研究.博士学位论文.合肥: 中国科学技术大学, 2012.
(ZHANG Z C. Complexity Theory and Planning Algorithms in Partially Observable Markov Decision Processes. Ph.D. Dissertation. Hefei, China: University of Science and Technology of China, 2012.)
[108] 郑建阳. 无先验知识的部分可观测环境规划问题研究.硕士学位论文.厦门:厦门大学, 2019.
(ZHENG J Y. Research on Planning in Partially Observable Domains without Prior Knowledge. Master Dissertation. Xiamen, China: Xiamen University, 2019.)
[109] BARBOSA F S, LACERDA B, DUCKWORTH P, et al. Risk-Aware Motion Planning in Partially Known Environments // Proc of the 60th IEEE Conference on Decision and Control. Washington, USA: IEEE, 2021: 5220-5226.
[110] ZHITNIKOV A, INDELMAN V. Risk Aware Belief-Dependent Constrained POMDP Planning[C/OL]. [2022-11-16]. https://arxiv.org/pdf/2209.02679v1.pdf.
[111] LEE K, KUM D.Collision Avoidance/Mitigation System: Motion Planning of Autonomous Vehicle via Predictive Occupancy Map. IEEE Access, 2019, 7: 52846-52857.
[112] FISHER A, CANNIZZARO R, COCHRANE M, et al. ColMap: A Memory-Efficient Occupancy Grid Mapping Framework. Robotics and Autonomous Systems, 2021, 142. DOI: 10.1016/j.robot.2021.103755.
[113] MARBLE J D, BEKRIS K E. Asymptotically Near-Optimal Planning with Probabilistic Roadmap Spanners. IEEE Transactions on Robotics, 2013, 29(2): 432-444.
[114] 郭靖. 基于马氏决策理论的智能体决策问题研究.硕士学位论文.广州:广东工业大学, 2012.
(GUO J. Research on Agent Decision Problems Based on Markov Decision Theory. Master Dissertation. Guangzhou, China: Guangdong University of Technology, 2012.)
[115] SCHEFTELOWITSCH D. The Complexity of Uncertainty in Markov Decision Processes // Proc of the SIAM Conference on Control and Its Applications. Philadelphia, USA: SIAM, 2015: 303-310.
[116] DOYEN L, MASSART T, SHIRMOHAMMADI M. The Complexity of Synchronizing Markov Decision Processes. Journal of Computer and System Sciences, 2019, 100: 96-129.
[117] LEE H R, LEE T. Multi-agent Reinforcement Learning Algorithm to Solve a Partially-Observable Multi-agent Problem in Disaster Response. European Journal of Operational Research, 2021, 291(1): 296-308.
[118] THOMAS J, HERNÁNDEZ M P, PARLIKAD A K, et al. Network Maintenance Planning via Multi-agent Reinforcement Learning // Proc of the IEEE International Conference on Systems, Man, and Cybernetics. Washington, USA: IEEE, 2021: 2289-2295.
[119] CUI J X, QIU H, CHEN D, et al. COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles // Proc of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2022: 17231-17241.
[120] AITTAHAR S, FRANÇOIS-LAVET V, LODEWEYCKX S, et al. Imitative Learning for Online Planning in Microgrids // Proc of the International Workshop on Data Analytics for Renewable Energy Integration. Berlin, Germany: Springer, 2015: 1-15.
[121] TIGAS P, FILOS A, MCALLISTER R, et al. Robust Imitative Planning: Planning from Demonstrations Under Uncertainty[C/OL]. [2022-11-16]. http://neurips.wad.vision/files/papers/Robust%20Imitative%20Planning:%20Planning%20from%20Demonstrations%20Under%20Uncertainty.pdf.