Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

A Personalized Travel Route Recommendation Model Using Deep Learning in Scenic Spots Intelligent Service Robots

A Personalized Travel Route Recommendation Model Using Deep Learning in Scenic Spots Intelligent... Hindawi Journal of Robotics Volume 2022, Article ID 3851506, 8 pages https://doi.org/10.1155/2022/3851506 Research Article A Personalized Travel Route Recommendation Model Using Deep Learning in Scenic Spots Intelligent Service Robots Qili Tang School of Economics and Management, Aba Teachers University, Aba, Sichuan 623002, China Correspondence should be addressed to Qili Tang; 20109637@abtu.edu.cn Received 8 February 2022; Accepted 19 March 2022; Published 21 April 2022 Academic Editor: Shan Zhong Copyright © 2022 Qili Tang.  is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.  is paper proposes a personalized tourist interest demand recommendation model based on deep neural network. Firstly, the basic information data and comment text data of tourism service items are obtained by crawling the relevant website data. Furthermore, word segmentation and word vector transformation are carried out through Jieba word segmentation tool and Skip- gram model, the semantic information between di‹erent data is deeply characterized, and the problem of very high vector sparsity is solved.  en, the corresponding features are obtained by using the feature extraction ability of DNN’s in-depth learning. On this basis, the user’s score on tourism service items is predicted through the model until a personalized recommendation list is generated. Finally, through simulation experiments, the recommendation accuracy and average reciprocal ranking of the proposed algorithm model and the other two algorithms in three di‹erent databases are compared and analyzed.  e results show that the overall performance of the proposed algorithm is better than the other two comparison algorithms. human intention. However, this method does not divide the 1. Introduction user’s access sequence into di‹erent interest segment se- Service robot refers to an autonomous or semiautonomous quences according to time, so the recommendation accuracy robot [1] that completes useful service activities instead of is low. For the dynamic traveling repairman problem and human beings, but does not engage in production work. Its dynamic vehicle routing problem, Reference [12] combined role is to replace service personnel and provide services with time-varying requirements, sent the available robots to required by human beings. Service robot contains many the nearest service request location, sent multiple robots for scienti—c knowledge including mechanical engineering, each service request arriving in the system, and proposed a automation, computer science, and control engineering new model-free operation strategy independent of load [2–4]. With the continuous development of arti—cial intel- factor. However, this method does not have model algorithm ligence, service robots are gradually moving towards intel- and cannot be applied to more complex situations. In order ligence [5, 6]. Due to the increasingly obvious trend of to realize long-term autonomous operation, Reference [13] proposed a modular general software framework for intel- personalized tourism service selection by users, the research on intelligent tourism service robot has become a hot spot in ligent mobile robot, which can use complex human voice intelligent service robot [7, 8], which is innovative and commands to interact with humans. However, this method forward-looking in the —eld of robot application in tourism only considers the home service robot. Reference [14] in- industry. troduced local attention and nonlinear attention to capture At present, the traditional method of providing services local and global project information at the same time. On by service personnel cannot meet people’s personalized this basis, a nonlinear attention similarity model (NASM) needs [9, 10]. Reference [11] developed a robot partner for was proposed for project-based collaborative —ltering information support and proposed a new method that can through local attention embedding. However, the algorithm Ÿexibly recommend all kinds of information according to cannot accurately capture human high-order sequence 2 Journal of Robotics behavior, and it is difficult to realize complex recommen- applied in various occasions and achieve good results, as well dation. Reference [15] proposed a personalized robot service as tourism service recommendation [18]. Although collab- system centered on robot thinking, which can migrate with orative filtering technology has many advantages, such as the user’s geospatial movement at any time, so that it can good processing of unstructured data and high degree of continue to grow with the user. However, this method does personalization and automation of recommendation, col- not consider the environmental factors of users, and the laborative filtering technology also has the problem of data personalized growth cycle is long. Reference [16] constructs sparsity [19]. Generally, in the application of recommen- the intelligent robot control system based on the principle of dation system, the data scored by users are insignificant for human-computer interaction and designs the corresponding the overall data, which will lead to some problems. In this model-based control algorithm to identify the dynamic case, the use of collaborative filtering technology often model of the robot. However, this method is difficult to cannot achieve good results. In other words, for collabo- obtain the prior distribution, and it is difficult to characterize rative filtering technology, if an item has less scores, it is the high-dimensional semantics of users. In Reference [17], difficult to recommend it to other users. In addition, if a user aiming at the path-planning problem of hospital service scores very little, it will be difficult to get some recom- robot in drug delivery, medical insurance order and other mendations. It is difficult to achieve excellent results by services, based on the automatic control robot with visual using traditional collaborative filtering technology. recognition ability, combined with the three-dimensional +erefore, this paper proposes a tourism service rec- reconstructed image and the route area shunting method ommendation model based on deep learning to solve the using edge calculation, an image edge detection algorithm related problems. +e proposed model is mainly divided into based on three-dimensional features is proposed. However, four modules: data preprocessing, construction of depth this method is only suitable for special people in specific prediction model, network training, and final recommen- areas and does not have the characteristics of in-depth dation list generation. +e principle and function of each personalized service. module are shown in Figure 1: Based on the above analysis, aiming at the problem of (1) Data acquisition and preprocessing—this module personalized travel route recommendation of intelligent mainly obtains the basic information data and service robot in the scenic spot, a personalized tourist in- comment text data of tourism service items by terest demand recommendation model based on deep crawling the relevant website data and then pre- learning using word embedding technology is proposed. +e processes these data. basic idea is to (1) reduce the sparsity of data vector and (2) Construction of prediction model—this module uses improve the recommendation accuracy of the algorithm by deep learning technology to predict users’ scores on preprocessing the original data and (2) build a depth pre- tourism service items. diction model to deeply mine the relationship between users and scenic spots. Compared with the traditional service (3) Training network—the module uses sample data to robot travel route recommendation method, the contribu- train the model network, mine the potential rela- tions of the proposed method are as follows: tionship between users and tourism service items, and learn the interaction between users and tourism (1) +e Skip-gram model in the word2vec word em- service items, so as to obtain a predictable model. bedding method is used to transform the word vector (4) Generate personalized recommendation list—the of data, and the effective extraction of topic feature main function of this module is to test the experi- vector, geographic factor feature vector, and user mental data. +e module inputs the experimental access feature vector is realized. data into the trained model. +e model predicts (2) +e proposed model uses the deep neural network to users’ scores on tourism service items, sorts them transform the recommendation of tourists’ interests according to the score, and finally generates a per- and needs into the task of binary classification, which sonalized recommendation list for each user to improves the ability of extracting the features of the complete the recommendation. original data and effectively enhances the perfor- mance of predicting users’ ratings. 2.2. Data Acquisition and Preprocessing. +e goal of data +e rest of this paper is organized as follows: the second acquisition mainly includes three aspects: self-built database, part introduces the personalized tourist interest demand data from Foursquare, and data from Tokyo [20, 21]. +e recommendation model based on deep neural network; the self-built database adopts the data set established by col- third part compares with the existing recommendation lecting a large amount of travel information from the model to realize the feasibility and optimization of the MaFengWo, with about 390000 user travel access records. method proposed in this paper; the fourth part is the Foursquare’s data includes the long-term (about 10 months) conclusion of this paper. check-in data collected in New York from May 2013 to March 2014, filtering out users with less than 8 access 2. Proposed Model records and locations with less than 8 visits. +ere are 1158 users and 5092 locations for the experiment, with a total of 2.1.OverallFramework. Collaborative filtering technology is the most used in the recommendation system, which can be 257221 check-in records in the Foursquare data set. +e data Journal of Robotics 3 Table 1: Recommended data set of tourism series. START Database Self-built Foursquare Tokyo Number of users 9328 1158 2095 Data collection Number of locations 1643 5092 8246 Number of check-in records 392031 257221 605893 Data Data cleaning preprocessing vectors on a word list. +e data can be expressed as Generate word embedding vector feature vectors using the frequency and TF-IDF value of words. A typical representation is One-Hot Build a predictive model network Encoding. However, this method essentially uses a vector containing only one 1 and the others are 0 to Construction of uniquely represent words, and its dimension is the Factorization machine model function prediction models number of words in the whole vocabulary list. However, the vocabulary list used in this represen- Train the constructed network model tation method is very large, so the dimension of feature vectors represented by this method is very Generate recommendation list large, which eventually leads to the problem of very high vector sparsity. Secondly, such methods cannot express the relationship between words and cannot Personalized recommendation results well reflect the deep semantic information between words and text. Here, the word embedding method is END used for word vector conversion [24]. Different from the traditional lexical feature expression methods, the Figure 1: Implementation flow of the proposed recommendation word embedding method can represent words by model. dense real number vectors in low dimensional space. +is cannot only represent words in vector form but also calculate the effective distance between two words from Tokyo are similar to the data from Foursquare, and the and describe the semantic information between same filtering operation is adopted for the data. Finally, there words. It is a very effective method to process text were 2095 users and 8246 locations for the experiment, with information. At present, word2vec method is the most a total of 605893 check-in records. widely used word embedding technology. Word2vec +e specific statistics of the three data sets are shown in model is divided into Skip-gram model and CBOW Table 1. model [25]. Skip-gram model predicts the generation probability of context vocabulary through target vo- 2.3. Vector Representation of Text. Preprocessing the data in cabulary, while CBOW model uses context vocabulary the above three databases mainly includes the following to predict the generation probability of target vo- steps: cabulary. +is paper mainly uses Skip-gram model for word vector representation, and its basic structure is (1) Chinese word segmentation technology is used for shown in Figure 2. word segmentation. In this paper, Jieba word seg- mentation tool is used to segment data [22]. Jieba As can be seen from Figure 2, Skip-Gram model is a word segmentation tool adopts the Chinese word neural network model, including input layer, hidden layer, segmentation algorithm of NShort, which is the and output layer. Firstly, the vocabulary is transformed into python implementation of the algorithm. It has the One-Hot encoding form and input into the input layer and characteristics of simple principle, easy to under- then calculated in the hidden layer. +e output layer outputs stand, low model resource occupation, and easy the probability of the target context vocabulary. When the training. In addition, NShort Chinese word seg- model training is completed, the weight from the input layer mentation algorithm has excellent efficiency in large- to the hidden layer can be used to represent the target word scale word segmentation application scenarios and is vocabulary. +is is because in the weights at this time, only widely used in various commercial fields. Moreover, the weights at the position of “1” in the one-hot encoder are the model supports incremental expansion. It is the activated, and the number of these weights is the same as the mainstream algorithm of Chinese word segmenta- number of hidden layer nodes so that the vector composed tion and is used by most search engine companies. of these weights can represent the target vocabulary. +e (2) Convert the divided data into word vectors. As a position of 1 of one-hot encoder of different words is dif- ferent, so the target word is uniquely mapped into a low- method of text feature representation, vocabulary is expressed as a feature vector, which is called word dimensional dense vector. +e kip-Gram model predicts the probability of target vector [23]. It is a common expression of word vectors to represent text information with feature vocabulary context vocabulary using the following formula (1): 4 Journal of Robotics calculation efficiency is often very low. +erefore, the existing models often use hierarchical softmax for efficient Q (t) Output calculation. Combined with hierarchical softmax, formula (1) is expanded into formula (2): k v v 1− z z ( ) i i T v T v P[S(χ)|χ] � 􏽙 􏽙 δ v x · 1 − δ v x , 􏼔 􏼐 􏼑􏼕 􏼔 􏼐 􏼑􏼕 (2) χ i−1 χ i−1 a∈S(χ) i�2 Intput where, k represents the path length of the context vocab- ulary in the output hierarchy tree. v represents the context- sensitive vocabulary of the target vocabulary χ. v represents the output vector of the target vocabulary. x represents the i−1 output word vector at the corresponding level under a projection context word. z represents the logistic output indicator variable, when v v v T v z � 0, it is expressed as P(z |v , x ) � δ(v x ), and Q (t+2) Q (t+1) Q (t-1) Q (t-2) i i i−1 χ i−1 v v v T v when z � 1, it is expressed as P(z |v , x ) � 1 − δ(v x ). i i χ i−1 χ i−1 Figure 2: Basic architecture of Skip-gram model. In the process of word embedding model training, some negative samples are often added to improve the training speed and improve the quality of the obtained word vector. P[S(χ) | χ] � 􏽙 P(v | χ), (1) At this time, the objective function of Skip-gram model a∈S(χ) training can be expressed as follows: where, S(χ) represents the context vocabulary of the target vocabulary χ. When using softmax to calculate P(v | χ), the χ T χ T G � 􏽘 􏽘 􏽘 􏼚y · log􏼔δ􏼒v x 􏼓􏼕 + 1 − y 􏼁 · log􏼔1 − δ􏼒v 􏼓􏼕􏼛. v v v 􏽥χ 􏽥χ (3) χ∈C 􏽥χ∈S(x) v∈ χ ∪M { } x , v , and y represent parameter vector, context word the order information of words can be well-preserved in the v 􏽥χ embedding vector, and logistic indicator variable, respec- generated word vector matrix K , which is of great help 1: m tively, in which the sampled vocabulary set of vocabulary χ is and a good advantage for further processing. represented by M . In deep learning tasks based on neural +is method is also used to process the comment in- 􏽥χ networks, the word embedding vector generated by Skip- formation of tourism service items. All comment texts gram model can be used as good input data. +is method is obtained from tourism service item v are also integrated into used to map word vectors. a single document z and then the document is trans- 1: m For the comment information that has been divided formed into word vector matrix K by the above method. 1: m into words, the ultimate goal is to express the comment Information other than the comments of users and information as a word vector matrix and input it into the tourism service items is called other information, which neural network. In order to achieve this goal, all com- includes the basic information of users and tourism service ments written by user v, that is, user comments, are items. +e age in the user’s basic information is normalized integrated into a single document and recorded as z . by x � x/120, the gender is directly normalized to the real 1: m Document z consists of m words in total, as shown in value of [0, 1], and the occupation and city are directly 1: m equation (4). converted to the word embedding vector. In addition to the comment text, the user’s historical evaluation information v v v v v K � Ψ z 􏼁 ⊕Ψ z 􏼁 ⊕Ψ z 􏼁 ⊕ · · ·⊕Ψ z 􏼁 , (4) and the evaluated item name also need to be considered. 1: m 1 2 3 m Because in terms of tourism service projects, the name of tourism service projects can also reflect the characteristics of where, z represents the first word in comment document, v v z , z represents the second word in comment document this project. For example, for scenic spots, if a scenic spot is 1: m 2 v v z , and so on. Function Ψ(z ) is used to map the first word called “XX mountain,” we can know that the scenic spot may 1: m 1 into a low-dimensional dense real number word vector be natural scenery. If a scenic spot is called “XX Museum,” through word embedding technology. Symbol ⊕ is an as- we know that the scenic spot may belong to buildings or sociation operation, which combines the word vectors of museums. +erefore, the user’s historical evaluation item each word by line to form a word vector matrix. Compared name is transformed into a word embedding vector. For with the traditional word bag technology, using this method tourism service items, the names and locations in its basic to process the formed word vector matrix maintains the information can be directly transformed into word em- order of words in the sentence to the greatest extent, so that bedding vectors. For tags, tags are mainly used to describe a Journal of Robotics 5 few sentences summarizing tourism service items, so its Input ID: Input: Point-of-Interest Name & Geographical Location processing is also similar to comment information. First, User & Point-of-Interest word segmentation is carried out and then word embedding vector transformation is carried out for the divided words. Determine the category of Visited Matrix: X × Y Point-of-Interest +rough the above operations, other information is also transformed into vector form. Theme feature: T X ,Y m n Geographical factor feature: G 2.4.NetworkConstruction. +is chapter proposes to use the deep learning model of DNN for personalized POI rec- ommendation. +e network model is shown in Figure 3 X Y Fully connected layer: T G m n and mainly includes two modules: feature extraction module and network learning module. Among them, the The first hidden layer feature extraction module uses word embedding tech- nology to extract and construct the features of location- based social networks. +e network learning module in- The Second hidden layer cludes network connection layer and network layer. +e function of the connection layer in the network learning The Softmax layer module is to fuse the extracted feature vectors. +e function of the network layer is to train the proposed Output: Prediction result model and predict the score of users’ preference for in- terest demand recommendation. Figure 3: Basic architecture of the proposed network model. In the process of building effective features recom- mended by tourists’ interest needs, the feature extraction (2) Prediction function. In the prediction stage, input module extracts topic feature vector, geographic factor feature vector, and user access feature vector through word the interest and demand point information of users, the network outputs a probability vector P, and embedding technology [26]. +e network connection layer recommend to users according to the probability in the network learning module fully connects the vectors ranking A of A. extracted by the feature extraction module and sends them top−i into the deep neural network. +e connection layer realizes the expansibility of the model. For example, if the relevant 2.5. Network Training. Taking the text features constructed context information needs to be added in the application, it in the previous stage as the input of the model, the final can be fully connected automatically through the full- prediction results of tourists’ interest and demand points can connection layer to send the characteristics of the input be obtained after training. layer into the network training. +e network layer in the +e connection layer of DLM learning module fully network learning module includes the following two connects the existing features. +e vector representation of functions: the full-connection of any user-tourist interest demand (1) Training function. In the training stage, the network point ⟨X , Y ⟩ is shown in the following equation (5). m n layer uses the deep neural network to learn and extract the implicit features, obtain the high-order Q � Me􏼂⟨X , Y ⟩, T, G􏼃, (5) 0 m n interaction between the features, and then take the where, T and G represent the user subject features and high-order feature results extracted from the implicit geographical factor features represented by the potential layer as the input of the softmax layer to learn the vector ⟨X , Y ⟩ of user tourist interest demand points, m n classification task. +e proposed personalized tourist respectively. Me represents the process of connecting all interest demand recommendation model based on feature vectors into a one-dimensional vector and sending it deep learning using word embedding technology into the model, which can be calculated in the hidden layer transforms the tourist interest demand recommen- of the model according to the following equation (6). dation into a binary classification task, in which the user’s check-in record is defined as a positive sample, Q � Dr􏼂Af W · Q + θ􏼁 􏼃, (6) 1 σ 0 and the tourist interest demand points not visited by the user are regarded as a negative sample. +e where, σ represents the number of hidden layers in the output result of the softmax layer in the model is a model. W represents the weight value of the hidden layer. θ two-dimensional probability vector P � [A, B], represents the offset value of the hidden layer. Af indicates where A represents the user’s preference probability the activation function. for the tourist interest demand point and B repre- In addition, the dropout layer is added in the training sents the user’s nonpreference probability for the process of each hidden layer to prevent over fitting problem. tourist interest demand point. +en, the cross en- In the output layer of the model, the predicted proba- tropy loss function is selected, and the gradient bility of the user’s interest demand for tourists can be ob- descent method is used to optimize the function. tained, as shown in formula (7) below. 6 Journal of Robotics Table 2: Performance comparison of three algorithms on self-built I � soft max W · Q + θ 􏼁, (7) O 1 O data sets. where, W represents the weight value of the output layer. Algorithm HR@15 MRR@15 HR@30 MRR@30 θ represents the offset value of the output layer. soft max +is paper 0.489 0.206 0.625 0.317 represents the normalized exponential function, and its Ref. [11] 0.365 0.158 0.486 0.205 output is two probability values, representing the probability Ref. [14] 0.327 0.139 0.437 0.194 that the user visits the tourist’s interest demand and the probability that the user does not visit. +e proposed model uses a combined regularization, and Table 3: Performance comparison of three algorithms on Four- the adjusted loss function is shown in formula (8): square data set. � � � � � �2 � �2 Algorithm HR@15 MRR@15 HR@30 MRR@30 � � � � L � smooth􏼐u − x􏼑 + α􏼒�c � + �c � 􏼓, (8) p 1 2 2 2 +is paper 0.403 0.317 0.576 0.395 Ref. [11] 0.315 0.231 0.374 0.284 where, u represents the final desired prediction result. α Ref. [14] 0.258 0.207 0.332 0.276 represents the regularization parameter. c represents the learning parameters of the aggregation layer. c represents the learning parameters of the layer of interest, and x Table 4: Performance comparison of three algorithms on Tokyo represents the input data. smooth(u − x) can be calculated data set. by formula (9). Algorithm HR@15 MRR@15 HR@30 MRR@30 􏼌 􏼌 􏼌 􏼌 1 2 ⎧ ⎪ 􏼌 􏼌 ⎪ 􏼌 􏼌 u − x , u − x < 1, ⎪ 􏼐 􏼑 􏼌 􏼌 +is paper 0.383 0.301 0.513 0.358 p p ⎪ 2 Ref. [11] 0.295 0.212 0.328 0.252 smooth u − x � (9) 􏼐 􏼑 Ref. [14] 0.226 0.194 0.295 0.231 ⎪ 􏼌 􏼌 􏼌 􏼌 ⎪ 􏼌 􏼌 1 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 u − x − , u − x ≥ 1. 􏼌 p 􏼌 􏼌 p 􏼌 where, Number Of Hits@N represents the correct quantity +e model is optimized through formula (8), and the in the prediction results. GT represents the total number of sorted A is output as the recommendation result, where i top−i prediction results. N represents the total number of nodes in represents the number of recommendation results of tourist the network model. rank represents the predicted result interest and demand. sequence. Finally, according to the prediction results obtained by the model, the recommendation results are determined according to the user access probability. +e higher the 3.2.ExperimentalResultsandAnalysis. +e algorithm in Ref. probability, the more likely the user is to access. When [11, 14] are selected for comparative analysis. Firstly, the self- determining the value i, select the corresponding first i built data set is used as the training set, and the number of prediction results according to the ranking results of nodes in the network model is set to 15 and 30, respectively. probability to recommend to users. +e results are shown in Table 2. +en, the Foursquare data set is used as the training set, 3. Experiment and Analysis and the number of nodes in the network model is set to 15 and 30, respectively, to verify the three different algorithms. +e software environment of the experiment is 64 bit +e results are shown in Table 3. Ubuntu 16 Python 3 and tensor flow deep learning platform Finally, based on the data set of Tokyo, the number of installed under 04 operating system. +e computer hardware nodes in the network model is set to 15 and 30, respectively, configuration used in the experiment is Intel (R) core (TM) to verify three different algorithms. +e results are shown in i7 CPU and single NVIDIA GTX Titan GPU. Table 4. As can be seen from Tables 2–4, the accuracy HR@N and average reciprocal ranking MRR@N of the proposed al- 3.1. Evaluating Indicator. In order to verify the effectiveness gorithm on three different data sets are the best. +is is of the proposed method, two evaluation indexes, accuracy because the algorithm proposed in this paper uses the Skip- HR@N and average reciprocal ranking MRR@N, are used. gram model to improve the extraction performance of user +e calculation of accuracy HR@N and average reciprocal access feature vector, uses deep neural network to process ranking MRR@N are shown in equation (10) and (11), tourists’ interest demand recommendation, which can better respectively. capture the sequence relationship, and better track users’ footprints and users’ points of interest, which greatly im- Number Of Hits@N (10) HR@N � , proves the recommendation performance. In the process of GTGT intelligent recommendation, the method in Ref. [11] does N not consider the sequential process of tourists in the tourism 1 1 MRR@N � 􏽘 , (11) process and does not divide the user access sequence into N rank i�1 different interest segments according to time. Although the Journal of Robotics 7 changes,” Frontiers of Information Technology & Electronic method in Ref. [14] can model local sequence behavior, it has Engineering, vol. 20, no. 3, pp. 307–317, 2019. some limitations in capturing high-order sequence behavior. [10] J. X. Wang and Z. S. Shi, “A speech interaction system based For tourists’ complex interest point selection, feature ex- on cloud service under ROS,” in Proceedings of the 38TH traction cannot be carried out. Chinese Control Conference (CCC), pp. 4721–4725, IEEE, Guangzhou, China, July 2019. 4. Conclusion [11] S. Yamamoto and N. Kubota, “Development of smart device interlocked robot partners for information support and +is paper proposes a personalized tourist interest demand smart recommendation,” in Proceedings of the 2020 IEEE recommendation model based on deep learning and com- International Conference on Fuzzy Systems (FUZZ-IEEE), pares the proposed algorithm model with the other two pp. 338–344, Electr Network, Glasgow, UK, July 2020. algorithms through simulation experiments. +e proposed [12] C. Ruch, J. Gachter, J. Hakenberg, and E. Frazzoli, “+e +1 model uses the deep neural network to transform the rec- method: model-free adaptive repositioning policies for ommendation of tourists’ interests and needs into the task of robotic multi-agent systems,” IEEE Transactions on Net- binary classification, which improves the ability of extracting work Science And Engineering, vol. 7, no. 4, pp. 3171–3184, the features of the original data and effectively enhances the performance of predicting users’ ratings. [13] J. B. Yi, T. Kang, D. Song, and S.-J. Yi, “Unified software +e future work will deeply study the method of real- platform for intelligent home service robots,” Applied Sci- ences-Basel, vol. 10, no. 17, pp. 354–361, 2020. time personalized travel route recommendation by im- [14] Z. P. Shan, Y. Q. Lei, D. F. Zhang, and J. Zhou, “NASM: proving the algorithm speed and the personalized route nonlinearly attentive similarity model for recommendation recommendation method for group travel. system via locally attentive embedding,” IEEE Access, vol. 7, no. 5, pp. 70689–70700, 2019. Data Availability [15] L. Hu, Y. G. Jiang, F. X. Wang, K. Hwang, M. S. Hossain, and G. Muhammad, “Follow me robot-mind: cloud brain based +e data used to support the findings of this study are in- personalized robot service with migration,” Future Generation cluded within the article. Computer Systems-Ae International Journal of Escience, vol. 107, no. 6, pp. 324–332, 2020. Conflicts of Interest [16] S. R. Pan, “Design of intelligent robot control system based on human-computer interaction,” International Journal of Sys- +e author declares that there are no conflicts of interest tem Assurance Engineering And Management, vol. 12, no. 3, regarding the publication of this paper. pp. 11–18, 2021. [17] L. K. Fan, X. C. Li, C. S. Guo, and B. Jia, “Path control of References panoramic visual recognition for intelligent robots based- edge computing,” Computer Communications, vol. 178, no. 8, [1] A. C. Ma, Z. W. Meng, and X. R. Ding, “Performance review pp. 64–73, 2021. of intelligent guidance robot at the outpatient clinic setting,” [18] Y. Ishida, T. Morie, and H. Tamukoh, “A hardware intelligent Cureus, vol. 13, no. 8, pp. 45–52, 2021. processing accelerator for domestic service robots,” Advanced [2] Z. Z. Yin, D. P. Wang, and J. H. Liu, “A method of con- Robotics, vol. 34, no. 14, pp. 947–957, 2020. structing robotics service platform for assisting handicapped [19] F. Lu, M. Huang, X. L. Li, G. Tian, H. Wu, and W. Si, or elderly people,” Journal of Robotics, vol. 2020, Article ID “Learning and development of home service robots’ service 4259175, 6 pages, 2020. cognition based on a learning mechanism,” Applied Sciences- [3] H. L. Qiu, M. L. Li, B. Y. Shu, and B. Bai, “Enhancing hos- Basel, vol. 10, no. 2, pp. 259–266, 2020. pitality experience with service robots: the mediating role of [20] D. Yang, D. Zhang, V. W. Zheng, and Z. Yu, “Modeling user rapport building,” Journal of Hospitality Marketing & Man- activity preference by leveraging user spatial temporal char- agement, vol. 29, no. 3, pp. 247–268, 2019. acteristics in LBSNs,” IEEE Transactions on Systems, Man, and [4] S. B. Kelley, B. W. Lane, and J. M. DeCicco, “Pumping the Cybernetics: Systems, vol. 45, no. 1, pp. 129–142, 2014. brakes on robot cars: current urban traveler willingness to [21] Y. G. Wang, X. M. Cai, C. L. Xu, and L. Jun, “Rise of the consider driverless vehicle,” Sustainability, vol. 11, no. 18, machines: examining the influence of professional service pp. 167–176, 2019. robots attributes on consumers’ experience,” Journal of [5] H. X. Lin, O. H. X. Chi, and D. Gursoy, “Antecedents of Hospitality And Tourism Technology, vol. 12, no. 4, customers’ acceptance of artificially intelligent robotic device pp. 609–623, 2021. use in hospitality services,” Journal of Hospitality Marketing & [22] A. Onan, “Two-stage topic extraction model for bibliometric Management, vol. 29, no. 5, pp. 530–549, 2020. data analysis based on word embeddings and clustering,” [6] H. Wu, X. J. Wu, Q. Ma, and G. Tian, “Cloud robot: semantic IEEE Access, vol. 7, pp. 145614–145633, 2019. map building for intelligent service task,” Applied Intelligence, [23] A. Onan and S. Korukog˘lu, “A feature selection model vol. 49, no. 2, pp. 319–334, 2019. based on genetic rank aggregation for text sentiment [7] I. P. Tussyadiah, F. J. Zach, and J. X. Wang, “Do travelers trust classification,” Journal of Information Science, vol. 43, no. 1, intelligent service robots?” Annals of Tourism Research, pp. 25–38, 2017. vol. 81, no. 14, pp. 125–134, 2020. [24] A. Onan, “Biomedical text categorization based on ensemble [8] S. Park, “Multifaceted trust in tourism service robots,” Annals of Tourism Research, vol. 81, no. 7, pp. 168–175, 2020. pruning and optimized topic modelling,” Computational and Mathematical Methods in Medicine, vol. 2018, Article ID [9] W. Shuai and X. P. Chen, “KeJia: towards an autonomous service robot with tolerance of unexpected environmental 2497471, 22 pages, 2018. 8 Journal of Robotics [25] D. Lian, C. Zhao, X. Xie, G. Sun, E. Chen, and Y. Rui, “GeoMF: joint geographical modeling and matrix factorization for point-of-interest recommendation,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Dis- covery and Data Mining, pp. 831–840, New York, NY, USA, August 2014. [26] R. Akabane and Y. Kato, “Pedestrian trajectory prediction based on transfer learning for human-following mobile robots,” IEEE Access, vol. 9, no. 12, pp. 126172–126185, http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Robotics Hindawi Publishing Corporation

A Personalized Travel Route Recommendation Model Using Deep Learning in Scenic Spots Intelligent Service Robots

Journal of Robotics , Volume 2022 – Apr 21, 2022

Loading next page...
 
/lp/hindawi-publishing-corporation/a-personalized-travel-route-recommendation-model-using-deep-learning-dLOadyIWHW

References (29)

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2022 Qili Tang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN
1687-9600
eISSN
1687-9619
DOI
10.1155/2022/3851506
Publisher site
See Article on Publisher Site

Abstract

Hindawi Journal of Robotics Volume 2022, Article ID 3851506, 8 pages https://doi.org/10.1155/2022/3851506 Research Article A Personalized Travel Route Recommendation Model Using Deep Learning in Scenic Spots Intelligent Service Robots Qili Tang School of Economics and Management, Aba Teachers University, Aba, Sichuan 623002, China Correspondence should be addressed to Qili Tang; 20109637@abtu.edu.cn Received 8 February 2022; Accepted 19 March 2022; Published 21 April 2022 Academic Editor: Shan Zhong Copyright © 2022 Qili Tang.  is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.  is paper proposes a personalized tourist interest demand recommendation model based on deep neural network. Firstly, the basic information data and comment text data of tourism service items are obtained by crawling the relevant website data. Furthermore, word segmentation and word vector transformation are carried out through Jieba word segmentation tool and Skip- gram model, the semantic information between di‹erent data is deeply characterized, and the problem of very high vector sparsity is solved.  en, the corresponding features are obtained by using the feature extraction ability of DNN’s in-depth learning. On this basis, the user’s score on tourism service items is predicted through the model until a personalized recommendation list is generated. Finally, through simulation experiments, the recommendation accuracy and average reciprocal ranking of the proposed algorithm model and the other two algorithms in three di‹erent databases are compared and analyzed.  e results show that the overall performance of the proposed algorithm is better than the other two comparison algorithms. human intention. However, this method does not divide the 1. Introduction user’s access sequence into di‹erent interest segment se- Service robot refers to an autonomous or semiautonomous quences according to time, so the recommendation accuracy robot [1] that completes useful service activities instead of is low. For the dynamic traveling repairman problem and human beings, but does not engage in production work. Its dynamic vehicle routing problem, Reference [12] combined role is to replace service personnel and provide services with time-varying requirements, sent the available robots to required by human beings. Service robot contains many the nearest service request location, sent multiple robots for scienti—c knowledge including mechanical engineering, each service request arriving in the system, and proposed a automation, computer science, and control engineering new model-free operation strategy independent of load [2–4]. With the continuous development of arti—cial intel- factor. However, this method does not have model algorithm ligence, service robots are gradually moving towards intel- and cannot be applied to more complex situations. In order ligence [5, 6]. Due to the increasingly obvious trend of to realize long-term autonomous operation, Reference [13] proposed a modular general software framework for intel- personalized tourism service selection by users, the research on intelligent tourism service robot has become a hot spot in ligent mobile robot, which can use complex human voice intelligent service robot [7, 8], which is innovative and commands to interact with humans. However, this method forward-looking in the —eld of robot application in tourism only considers the home service robot. Reference [14] in- industry. troduced local attention and nonlinear attention to capture At present, the traditional method of providing services local and global project information at the same time. On by service personnel cannot meet people’s personalized this basis, a nonlinear attention similarity model (NASM) needs [9, 10]. Reference [11] developed a robot partner for was proposed for project-based collaborative —ltering information support and proposed a new method that can through local attention embedding. However, the algorithm Ÿexibly recommend all kinds of information according to cannot accurately capture human high-order sequence 2 Journal of Robotics behavior, and it is difficult to realize complex recommen- applied in various occasions and achieve good results, as well dation. Reference [15] proposed a personalized robot service as tourism service recommendation [18]. Although collab- system centered on robot thinking, which can migrate with orative filtering technology has many advantages, such as the user’s geospatial movement at any time, so that it can good processing of unstructured data and high degree of continue to grow with the user. However, this method does personalization and automation of recommendation, col- not consider the environmental factors of users, and the laborative filtering technology also has the problem of data personalized growth cycle is long. Reference [16] constructs sparsity [19]. Generally, in the application of recommen- the intelligent robot control system based on the principle of dation system, the data scored by users are insignificant for human-computer interaction and designs the corresponding the overall data, which will lead to some problems. In this model-based control algorithm to identify the dynamic case, the use of collaborative filtering technology often model of the robot. However, this method is difficult to cannot achieve good results. In other words, for collabo- obtain the prior distribution, and it is difficult to characterize rative filtering technology, if an item has less scores, it is the high-dimensional semantics of users. In Reference [17], difficult to recommend it to other users. In addition, if a user aiming at the path-planning problem of hospital service scores very little, it will be difficult to get some recom- robot in drug delivery, medical insurance order and other mendations. It is difficult to achieve excellent results by services, based on the automatic control robot with visual using traditional collaborative filtering technology. recognition ability, combined with the three-dimensional +erefore, this paper proposes a tourism service rec- reconstructed image and the route area shunting method ommendation model based on deep learning to solve the using edge calculation, an image edge detection algorithm related problems. +e proposed model is mainly divided into based on three-dimensional features is proposed. However, four modules: data preprocessing, construction of depth this method is only suitable for special people in specific prediction model, network training, and final recommen- areas and does not have the characteristics of in-depth dation list generation. +e principle and function of each personalized service. module are shown in Figure 1: Based on the above analysis, aiming at the problem of (1) Data acquisition and preprocessing—this module personalized travel route recommendation of intelligent mainly obtains the basic information data and service robot in the scenic spot, a personalized tourist in- comment text data of tourism service items by terest demand recommendation model based on deep crawling the relevant website data and then pre- learning using word embedding technology is proposed. +e processes these data. basic idea is to (1) reduce the sparsity of data vector and (2) Construction of prediction model—this module uses improve the recommendation accuracy of the algorithm by deep learning technology to predict users’ scores on preprocessing the original data and (2) build a depth pre- tourism service items. diction model to deeply mine the relationship between users and scenic spots. Compared with the traditional service (3) Training network—the module uses sample data to robot travel route recommendation method, the contribu- train the model network, mine the potential rela- tions of the proposed method are as follows: tionship between users and tourism service items, and learn the interaction between users and tourism (1) +e Skip-gram model in the word2vec word em- service items, so as to obtain a predictable model. bedding method is used to transform the word vector (4) Generate personalized recommendation list—the of data, and the effective extraction of topic feature main function of this module is to test the experi- vector, geographic factor feature vector, and user mental data. +e module inputs the experimental access feature vector is realized. data into the trained model. +e model predicts (2) +e proposed model uses the deep neural network to users’ scores on tourism service items, sorts them transform the recommendation of tourists’ interests according to the score, and finally generates a per- and needs into the task of binary classification, which sonalized recommendation list for each user to improves the ability of extracting the features of the complete the recommendation. original data and effectively enhances the perfor- mance of predicting users’ ratings. 2.2. Data Acquisition and Preprocessing. +e goal of data +e rest of this paper is organized as follows: the second acquisition mainly includes three aspects: self-built database, part introduces the personalized tourist interest demand data from Foursquare, and data from Tokyo [20, 21]. +e recommendation model based on deep neural network; the self-built database adopts the data set established by col- third part compares with the existing recommendation lecting a large amount of travel information from the model to realize the feasibility and optimization of the MaFengWo, with about 390000 user travel access records. method proposed in this paper; the fourth part is the Foursquare’s data includes the long-term (about 10 months) conclusion of this paper. check-in data collected in New York from May 2013 to March 2014, filtering out users with less than 8 access 2. Proposed Model records and locations with less than 8 visits. +ere are 1158 users and 5092 locations for the experiment, with a total of 2.1.OverallFramework. Collaborative filtering technology is the most used in the recommendation system, which can be 257221 check-in records in the Foursquare data set. +e data Journal of Robotics 3 Table 1: Recommended data set of tourism series. START Database Self-built Foursquare Tokyo Number of users 9328 1158 2095 Data collection Number of locations 1643 5092 8246 Number of check-in records 392031 257221 605893 Data Data cleaning preprocessing vectors on a word list. +e data can be expressed as Generate word embedding vector feature vectors using the frequency and TF-IDF value of words. A typical representation is One-Hot Build a predictive model network Encoding. However, this method essentially uses a vector containing only one 1 and the others are 0 to Construction of uniquely represent words, and its dimension is the Factorization machine model function prediction models number of words in the whole vocabulary list. However, the vocabulary list used in this represen- Train the constructed network model tation method is very large, so the dimension of feature vectors represented by this method is very Generate recommendation list large, which eventually leads to the problem of very high vector sparsity. Secondly, such methods cannot express the relationship between words and cannot Personalized recommendation results well reflect the deep semantic information between words and text. Here, the word embedding method is END used for word vector conversion [24]. Different from the traditional lexical feature expression methods, the Figure 1: Implementation flow of the proposed recommendation word embedding method can represent words by model. dense real number vectors in low dimensional space. +is cannot only represent words in vector form but also calculate the effective distance between two words from Tokyo are similar to the data from Foursquare, and the and describe the semantic information between same filtering operation is adopted for the data. Finally, there words. It is a very effective method to process text were 2095 users and 8246 locations for the experiment, with information. At present, word2vec method is the most a total of 605893 check-in records. widely used word embedding technology. Word2vec +e specific statistics of the three data sets are shown in model is divided into Skip-gram model and CBOW Table 1. model [25]. Skip-gram model predicts the generation probability of context vocabulary through target vo- 2.3. Vector Representation of Text. Preprocessing the data in cabulary, while CBOW model uses context vocabulary the above three databases mainly includes the following to predict the generation probability of target vo- steps: cabulary. +is paper mainly uses Skip-gram model for word vector representation, and its basic structure is (1) Chinese word segmentation technology is used for shown in Figure 2. word segmentation. In this paper, Jieba word seg- mentation tool is used to segment data [22]. Jieba As can be seen from Figure 2, Skip-Gram model is a word segmentation tool adopts the Chinese word neural network model, including input layer, hidden layer, segmentation algorithm of NShort, which is the and output layer. Firstly, the vocabulary is transformed into python implementation of the algorithm. It has the One-Hot encoding form and input into the input layer and characteristics of simple principle, easy to under- then calculated in the hidden layer. +e output layer outputs stand, low model resource occupation, and easy the probability of the target context vocabulary. When the training. In addition, NShort Chinese word seg- model training is completed, the weight from the input layer mentation algorithm has excellent efficiency in large- to the hidden layer can be used to represent the target word scale word segmentation application scenarios and is vocabulary. +is is because in the weights at this time, only widely used in various commercial fields. Moreover, the weights at the position of “1” in the one-hot encoder are the model supports incremental expansion. It is the activated, and the number of these weights is the same as the mainstream algorithm of Chinese word segmenta- number of hidden layer nodes so that the vector composed tion and is used by most search engine companies. of these weights can represent the target vocabulary. +e (2) Convert the divided data into word vectors. As a position of 1 of one-hot encoder of different words is dif- ferent, so the target word is uniquely mapped into a low- method of text feature representation, vocabulary is expressed as a feature vector, which is called word dimensional dense vector. +e kip-Gram model predicts the probability of target vector [23]. It is a common expression of word vectors to represent text information with feature vocabulary context vocabulary using the following formula (1): 4 Journal of Robotics calculation efficiency is often very low. +erefore, the existing models often use hierarchical softmax for efficient Q (t) Output calculation. Combined with hierarchical softmax, formula (1) is expanded into formula (2): k v v 1− z z ( ) i i T v T v P[S(χ)|χ] � 􏽙 􏽙 δ v x · 1 − δ v x , 􏼔 􏼐 􏼑􏼕 􏼔 􏼐 􏼑􏼕 (2) χ i−1 χ i−1 a∈S(χ) i�2 Intput where, k represents the path length of the context vocab- ulary in the output hierarchy tree. v represents the context- sensitive vocabulary of the target vocabulary χ. v represents the output vector of the target vocabulary. x represents the i−1 output word vector at the corresponding level under a projection context word. z represents the logistic output indicator variable, when v v v T v z � 0, it is expressed as P(z |v , x ) � δ(v x ), and Q (t+2) Q (t+1) Q (t-1) Q (t-2) i i i−1 χ i−1 v v v T v when z � 1, it is expressed as P(z |v , x ) � 1 − δ(v x ). i i χ i−1 χ i−1 Figure 2: Basic architecture of Skip-gram model. In the process of word embedding model training, some negative samples are often added to improve the training speed and improve the quality of the obtained word vector. P[S(χ) | χ] � 􏽙 P(v | χ), (1) At this time, the objective function of Skip-gram model a∈S(χ) training can be expressed as follows: where, S(χ) represents the context vocabulary of the target vocabulary χ. When using softmax to calculate P(v | χ), the χ T χ T G � 􏽘 􏽘 􏽘 􏼚y · log􏼔δ􏼒v x 􏼓􏼕 + 1 − y 􏼁 · log􏼔1 − δ􏼒v 􏼓􏼕􏼛. v v v 􏽥χ 􏽥χ (3) χ∈C 􏽥χ∈S(x) v∈ χ ∪M { } x , v , and y represent parameter vector, context word the order information of words can be well-preserved in the v 􏽥χ embedding vector, and logistic indicator variable, respec- generated word vector matrix K , which is of great help 1: m tively, in which the sampled vocabulary set of vocabulary χ is and a good advantage for further processing. represented by M . In deep learning tasks based on neural +is method is also used to process the comment in- 􏽥χ networks, the word embedding vector generated by Skip- formation of tourism service items. All comment texts gram model can be used as good input data. +is method is obtained from tourism service item v are also integrated into used to map word vectors. a single document z and then the document is trans- 1: m For the comment information that has been divided formed into word vector matrix K by the above method. 1: m into words, the ultimate goal is to express the comment Information other than the comments of users and information as a word vector matrix and input it into the tourism service items is called other information, which neural network. In order to achieve this goal, all com- includes the basic information of users and tourism service ments written by user v, that is, user comments, are items. +e age in the user’s basic information is normalized integrated into a single document and recorded as z . by x � x/120, the gender is directly normalized to the real 1: m Document z consists of m words in total, as shown in value of [0, 1], and the occupation and city are directly 1: m equation (4). converted to the word embedding vector. In addition to the comment text, the user’s historical evaluation information v v v v v K � Ψ z 􏼁 ⊕Ψ z 􏼁 ⊕Ψ z 􏼁 ⊕ · · ·⊕Ψ z 􏼁 , (4) and the evaluated item name also need to be considered. 1: m 1 2 3 m Because in terms of tourism service projects, the name of tourism service projects can also reflect the characteristics of where, z represents the first word in comment document, v v z , z represents the second word in comment document this project. For example, for scenic spots, if a scenic spot is 1: m 2 v v z , and so on. Function Ψ(z ) is used to map the first word called “XX mountain,” we can know that the scenic spot may 1: m 1 into a low-dimensional dense real number word vector be natural scenery. If a scenic spot is called “XX Museum,” through word embedding technology. Symbol ⊕ is an as- we know that the scenic spot may belong to buildings or sociation operation, which combines the word vectors of museums. +erefore, the user’s historical evaluation item each word by line to form a word vector matrix. Compared name is transformed into a word embedding vector. For with the traditional word bag technology, using this method tourism service items, the names and locations in its basic to process the formed word vector matrix maintains the information can be directly transformed into word em- order of words in the sentence to the greatest extent, so that bedding vectors. For tags, tags are mainly used to describe a Journal of Robotics 5 few sentences summarizing tourism service items, so its Input ID: Input: Point-of-Interest Name & Geographical Location processing is also similar to comment information. First, User & Point-of-Interest word segmentation is carried out and then word embedding vector transformation is carried out for the divided words. Determine the category of Visited Matrix: X × Y Point-of-Interest +rough the above operations, other information is also transformed into vector form. Theme feature: T X ,Y m n Geographical factor feature: G 2.4.NetworkConstruction. +is chapter proposes to use the deep learning model of DNN for personalized POI rec- ommendation. +e network model is shown in Figure 3 X Y Fully connected layer: T G m n and mainly includes two modules: feature extraction module and network learning module. Among them, the The first hidden layer feature extraction module uses word embedding tech- nology to extract and construct the features of location- based social networks. +e network learning module in- The Second hidden layer cludes network connection layer and network layer. +e function of the connection layer in the network learning The Softmax layer module is to fuse the extracted feature vectors. +e function of the network layer is to train the proposed Output: Prediction result model and predict the score of users’ preference for in- terest demand recommendation. Figure 3: Basic architecture of the proposed network model. In the process of building effective features recom- mended by tourists’ interest needs, the feature extraction (2) Prediction function. In the prediction stage, input module extracts topic feature vector, geographic factor feature vector, and user access feature vector through word the interest and demand point information of users, the network outputs a probability vector P, and embedding technology [26]. +e network connection layer recommend to users according to the probability in the network learning module fully connects the vectors ranking A of A. extracted by the feature extraction module and sends them top−i into the deep neural network. +e connection layer realizes the expansibility of the model. For example, if the relevant 2.5. Network Training. Taking the text features constructed context information needs to be added in the application, it in the previous stage as the input of the model, the final can be fully connected automatically through the full- prediction results of tourists’ interest and demand points can connection layer to send the characteristics of the input be obtained after training. layer into the network training. +e network layer in the +e connection layer of DLM learning module fully network learning module includes the following two connects the existing features. +e vector representation of functions: the full-connection of any user-tourist interest demand (1) Training function. In the training stage, the network point ⟨X , Y ⟩ is shown in the following equation (5). m n layer uses the deep neural network to learn and extract the implicit features, obtain the high-order Q � Me􏼂⟨X , Y ⟩, T, G􏼃, (5) 0 m n interaction between the features, and then take the where, T and G represent the user subject features and high-order feature results extracted from the implicit geographical factor features represented by the potential layer as the input of the softmax layer to learn the vector ⟨X , Y ⟩ of user tourist interest demand points, m n classification task. +e proposed personalized tourist respectively. Me represents the process of connecting all interest demand recommendation model based on feature vectors into a one-dimensional vector and sending it deep learning using word embedding technology into the model, which can be calculated in the hidden layer transforms the tourist interest demand recommen- of the model according to the following equation (6). dation into a binary classification task, in which the user’s check-in record is defined as a positive sample, Q � Dr􏼂Af W · Q + θ􏼁 􏼃, (6) 1 σ 0 and the tourist interest demand points not visited by the user are regarded as a negative sample. +e where, σ represents the number of hidden layers in the output result of the softmax layer in the model is a model. W represents the weight value of the hidden layer. θ two-dimensional probability vector P � [A, B], represents the offset value of the hidden layer. Af indicates where A represents the user’s preference probability the activation function. for the tourist interest demand point and B repre- In addition, the dropout layer is added in the training sents the user’s nonpreference probability for the process of each hidden layer to prevent over fitting problem. tourist interest demand point. +en, the cross en- In the output layer of the model, the predicted proba- tropy loss function is selected, and the gradient bility of the user’s interest demand for tourists can be ob- descent method is used to optimize the function. tained, as shown in formula (7) below. 6 Journal of Robotics Table 2: Performance comparison of three algorithms on self-built I � soft max W · Q + θ 􏼁, (7) O 1 O data sets. where, W represents the weight value of the output layer. Algorithm HR@15 MRR@15 HR@30 MRR@30 θ represents the offset value of the output layer. soft max +is paper 0.489 0.206 0.625 0.317 represents the normalized exponential function, and its Ref. [11] 0.365 0.158 0.486 0.205 output is two probability values, representing the probability Ref. [14] 0.327 0.139 0.437 0.194 that the user visits the tourist’s interest demand and the probability that the user does not visit. +e proposed model uses a combined regularization, and Table 3: Performance comparison of three algorithms on Four- the adjusted loss function is shown in formula (8): square data set. � � � � � �2 � �2 Algorithm HR@15 MRR@15 HR@30 MRR@30 � � � � L � smooth􏼐u − x􏼑 + α􏼒�c � + �c � 􏼓, (8) p 1 2 2 2 +is paper 0.403 0.317 0.576 0.395 Ref. [11] 0.315 0.231 0.374 0.284 where, u represents the final desired prediction result. α Ref. [14] 0.258 0.207 0.332 0.276 represents the regularization parameter. c represents the learning parameters of the aggregation layer. c represents the learning parameters of the layer of interest, and x Table 4: Performance comparison of three algorithms on Tokyo represents the input data. smooth(u − x) can be calculated data set. by formula (9). Algorithm HR@15 MRR@15 HR@30 MRR@30 􏼌 􏼌 􏼌 􏼌 1 2 ⎧ ⎪ 􏼌 􏼌 ⎪ 􏼌 􏼌 u − x , u − x < 1, ⎪ 􏼐 􏼑 􏼌 􏼌 +is paper 0.383 0.301 0.513 0.358 p p ⎪ 2 Ref. [11] 0.295 0.212 0.328 0.252 smooth u − x � (9) 􏼐 􏼑 Ref. [14] 0.226 0.194 0.295 0.231 ⎪ 􏼌 􏼌 􏼌 􏼌 ⎪ 􏼌 􏼌 1 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 􏼌 u − x − , u − x ≥ 1. 􏼌 p 􏼌 􏼌 p 􏼌 where, Number Of Hits@N represents the correct quantity +e model is optimized through formula (8), and the in the prediction results. GT represents the total number of sorted A is output as the recommendation result, where i top−i prediction results. N represents the total number of nodes in represents the number of recommendation results of tourist the network model. rank represents the predicted result interest and demand. sequence. Finally, according to the prediction results obtained by the model, the recommendation results are determined according to the user access probability. +e higher the 3.2.ExperimentalResultsandAnalysis. +e algorithm in Ref. probability, the more likely the user is to access. When [11, 14] are selected for comparative analysis. Firstly, the self- determining the value i, select the corresponding first i built data set is used as the training set, and the number of prediction results according to the ranking results of nodes in the network model is set to 15 and 30, respectively. probability to recommend to users. +e results are shown in Table 2. +en, the Foursquare data set is used as the training set, 3. Experiment and Analysis and the number of nodes in the network model is set to 15 and 30, respectively, to verify the three different algorithms. +e software environment of the experiment is 64 bit +e results are shown in Table 3. Ubuntu 16 Python 3 and tensor flow deep learning platform Finally, based on the data set of Tokyo, the number of installed under 04 operating system. +e computer hardware nodes in the network model is set to 15 and 30, respectively, configuration used in the experiment is Intel (R) core (TM) to verify three different algorithms. +e results are shown in i7 CPU and single NVIDIA GTX Titan GPU. Table 4. As can be seen from Tables 2–4, the accuracy HR@N and average reciprocal ranking MRR@N of the proposed al- 3.1. Evaluating Indicator. In order to verify the effectiveness gorithm on three different data sets are the best. +is is of the proposed method, two evaluation indexes, accuracy because the algorithm proposed in this paper uses the Skip- HR@N and average reciprocal ranking MRR@N, are used. gram model to improve the extraction performance of user +e calculation of accuracy HR@N and average reciprocal access feature vector, uses deep neural network to process ranking MRR@N are shown in equation (10) and (11), tourists’ interest demand recommendation, which can better respectively. capture the sequence relationship, and better track users’ footprints and users’ points of interest, which greatly im- Number Of Hits@N (10) HR@N � , proves the recommendation performance. In the process of GTGT intelligent recommendation, the method in Ref. [11] does N not consider the sequential process of tourists in the tourism 1 1 MRR@N � 􏽘 , (11) process and does not divide the user access sequence into N rank i�1 different interest segments according to time. Although the Journal of Robotics 7 changes,” Frontiers of Information Technology & Electronic method in Ref. [14] can model local sequence behavior, it has Engineering, vol. 20, no. 3, pp. 307–317, 2019. some limitations in capturing high-order sequence behavior. [10] J. X. Wang and Z. S. Shi, “A speech interaction system based For tourists’ complex interest point selection, feature ex- on cloud service under ROS,” in Proceedings of the 38TH traction cannot be carried out. Chinese Control Conference (CCC), pp. 4721–4725, IEEE, Guangzhou, China, July 2019. 4. Conclusion [11] S. Yamamoto and N. Kubota, “Development of smart device interlocked robot partners for information support and +is paper proposes a personalized tourist interest demand smart recommendation,” in Proceedings of the 2020 IEEE recommendation model based on deep learning and com- International Conference on Fuzzy Systems (FUZZ-IEEE), pares the proposed algorithm model with the other two pp. 338–344, Electr Network, Glasgow, UK, July 2020. algorithms through simulation experiments. +e proposed [12] C. Ruch, J. Gachter, J. Hakenberg, and E. Frazzoli, “+e +1 model uses the deep neural network to transform the rec- method: model-free adaptive repositioning policies for ommendation of tourists’ interests and needs into the task of robotic multi-agent systems,” IEEE Transactions on Net- binary classification, which improves the ability of extracting work Science And Engineering, vol. 7, no. 4, pp. 3171–3184, the features of the original data and effectively enhances the performance of predicting users’ ratings. [13] J. B. Yi, T. Kang, D. Song, and S.-J. Yi, “Unified software +e future work will deeply study the method of real- platform for intelligent home service robots,” Applied Sci- ences-Basel, vol. 10, no. 17, pp. 354–361, 2020. time personalized travel route recommendation by im- [14] Z. P. Shan, Y. Q. Lei, D. F. Zhang, and J. Zhou, “NASM: proving the algorithm speed and the personalized route nonlinearly attentive similarity model for recommendation recommendation method for group travel. system via locally attentive embedding,” IEEE Access, vol. 7, no. 5, pp. 70689–70700, 2019. Data Availability [15] L. Hu, Y. G. Jiang, F. X. Wang, K. Hwang, M. S. Hossain, and G. Muhammad, “Follow me robot-mind: cloud brain based +e data used to support the findings of this study are in- personalized robot service with migration,” Future Generation cluded within the article. Computer Systems-Ae International Journal of Escience, vol. 107, no. 6, pp. 324–332, 2020. Conflicts of Interest [16] S. R. Pan, “Design of intelligent robot control system based on human-computer interaction,” International Journal of Sys- +e author declares that there are no conflicts of interest tem Assurance Engineering And Management, vol. 12, no. 3, regarding the publication of this paper. pp. 11–18, 2021. [17] L. K. Fan, X. C. Li, C. S. Guo, and B. Jia, “Path control of References panoramic visual recognition for intelligent robots based- edge computing,” Computer Communications, vol. 178, no. 8, [1] A. C. Ma, Z. W. Meng, and X. R. Ding, “Performance review pp. 64–73, 2021. of intelligent guidance robot at the outpatient clinic setting,” [18] Y. Ishida, T. Morie, and H. Tamukoh, “A hardware intelligent Cureus, vol. 13, no. 8, pp. 45–52, 2021. processing accelerator for domestic service robots,” Advanced [2] Z. Z. Yin, D. P. Wang, and J. H. Liu, “A method of con- Robotics, vol. 34, no. 14, pp. 947–957, 2020. structing robotics service platform for assisting handicapped [19] F. Lu, M. Huang, X. L. Li, G. Tian, H. Wu, and W. Si, or elderly people,” Journal of Robotics, vol. 2020, Article ID “Learning and development of home service robots’ service 4259175, 6 pages, 2020. cognition based on a learning mechanism,” Applied Sciences- [3] H. L. Qiu, M. L. Li, B. Y. Shu, and B. Bai, “Enhancing hos- Basel, vol. 10, no. 2, pp. 259–266, 2020. pitality experience with service robots: the mediating role of [20] D. Yang, D. Zhang, V. W. Zheng, and Z. Yu, “Modeling user rapport building,” Journal of Hospitality Marketing & Man- activity preference by leveraging user spatial temporal char- agement, vol. 29, no. 3, pp. 247–268, 2019. acteristics in LBSNs,” IEEE Transactions on Systems, Man, and [4] S. B. Kelley, B. W. Lane, and J. M. DeCicco, “Pumping the Cybernetics: Systems, vol. 45, no. 1, pp. 129–142, 2014. brakes on robot cars: current urban traveler willingness to [21] Y. G. Wang, X. M. Cai, C. L. Xu, and L. Jun, “Rise of the consider driverless vehicle,” Sustainability, vol. 11, no. 18, machines: examining the influence of professional service pp. 167–176, 2019. robots attributes on consumers’ experience,” Journal of [5] H. X. Lin, O. H. X. Chi, and D. Gursoy, “Antecedents of Hospitality And Tourism Technology, vol. 12, no. 4, customers’ acceptance of artificially intelligent robotic device pp. 609–623, 2021. use in hospitality services,” Journal of Hospitality Marketing & [22] A. Onan, “Two-stage topic extraction model for bibliometric Management, vol. 29, no. 5, pp. 530–549, 2020. data analysis based on word embeddings and clustering,” [6] H. Wu, X. J. Wu, Q. Ma, and G. Tian, “Cloud robot: semantic IEEE Access, vol. 7, pp. 145614–145633, 2019. map building for intelligent service task,” Applied Intelligence, [23] A. Onan and S. Korukog˘lu, “A feature selection model vol. 49, no. 2, pp. 319–334, 2019. based on genetic rank aggregation for text sentiment [7] I. P. Tussyadiah, F. J. Zach, and J. X. Wang, “Do travelers trust classification,” Journal of Information Science, vol. 43, no. 1, intelligent service robots?” Annals of Tourism Research, pp. 25–38, 2017. vol. 81, no. 14, pp. 125–134, 2020. [24] A. Onan, “Biomedical text categorization based on ensemble [8] S. Park, “Multifaceted trust in tourism service robots,” Annals of Tourism Research, vol. 81, no. 7, pp. 168–175, 2020. pruning and optimized topic modelling,” Computational and Mathematical Methods in Medicine, vol. 2018, Article ID [9] W. Shuai and X. P. Chen, “KeJia: towards an autonomous service robot with tolerance of unexpected environmental 2497471, 22 pages, 2018. 8 Journal of Robotics [25] D. Lian, C. Zhao, X. Xie, G. Sun, E. Chen, and Y. Rui, “GeoMF: joint geographical modeling and matrix factorization for point-of-interest recommendation,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Dis- covery and Data Mining, pp. 831–840, New York, NY, USA, August 2014. [26] R. Akabane and Y. Kato, “Pedestrian trajectory prediction based on transfer learning for human-following mobile robots,” IEEE Access, vol. 9, no. 12, pp. 126172–126185,

Journal

Journal of RoboticsHindawi Publishing Corporation

Published: Apr 21, 2022

There are no references for this article.