Evaluation of the Students’ Learning Status in the Foreign Language Classroom Based on Machine Vision
Yao, Lin;Qin, Qiongfang
2022-09-27 00:00:00
Hindawi Journal of Robotics, Volume 2022, Article ID 6432133, 12 pages. https://doi.org/10.1155/2022/6432133

Research Article

Evaluation of the Students' Learning Status in the Foreign Language Classroom Based on Machine Vision

Lin Yao and Qiongfang Qin, School of Foreign Languages, Guilin Tourism University, Guilin 541006, China. Correspondence should be addressed to Lin Yao; yaolin@gltu.edu.cn. Received 20 July 2022; Revised 29 August 2022; Accepted 6 September 2022; Published 27 September 2022. Academic Editor: Shahid Hussain. Copyright © 2022 Lin Yao and Qiongfang Qin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In order to improve the effectiveness of the evaluation of students' learning status in foreign language classrooms, this paper applies machine vision to classroom teaching. Through an in-depth analysis of the relative motion relationship between the end marker points of classroom feature recognition and the center point of the machine vision system window, this paper first proposes an autonomous tracking motion algorithm of the machine vision system window based on preset field-of-view parameters. Moreover, this paper realizes the motion function of the window to track the marker points autonomously, completes the simulation analysis through two sets of planned trajectories and two sets of actual trajectories collected with a master hand, and verifies the correctness and feasibility of the algorithm. The research shows that the machine-vision-based algorithm proposed in this paper can effectively judge the real-time state of students in the foreign language classroom.

1. Introduction
Classroom learning is one of the main forms for university students to acquire relevant professional knowledge, and a great deal of knowledge is still acquired by undergraduate students through classroom learning. At the same time, classroom learning is also one of the important ways to cultivate students' thinking styles. It is the basis for other teaching links and for students' self-study, and it enables students to better understand and master how to learn effectively. Moreover, with the foundation of classroom learning, students can expand and extend the space for self-learning, make better use of extracurricular time for learning activities, and promote their comprehensive learning and development. The detection of students' classroom learning reflects students' absorption and implementation of classroom learning and their attitude towards learning, and indirectly reflects the learning status of today's college students [1].

The study of undergraduate students' classroom learning status and students' self-evaluation of their learning status has a certain significance for understanding current students' learning psychology, the factors affecting learning, and how to stimulate students' learning motivation and improve their interest in learning. In addition, it has a certain inspiring effect on current curriculum teaching reform and the cultivation of innovative talents [2].

State refers to appearance characteristics and action modality. In literature [3], the author believes that the classroom learning state is the sum of the physical characteristics, action behavior, and psychological activities of students in the course of classroom learning, resulting from the combined effect of subjective and objective factors. Professor Liu Guiqiu divides the classroom learning state of college students into the pre-class study preparation state, the in-class listening state, and the after-class learning effect state. In literature [4], the author believes that the state of classroom learning refers to the process of listening, thinking, and teacher-student interaction in the classroom. Moreover, he believes that all aspects of preparation or preview before class, as well as the learning effect and review after class, are only closely related to students' classroom learning state; the two should be distinguished. The learning state is a broader state, which includes the classroom learning state, pre-class learning, and after-class learning, whereas the classroom learning state is a state with limited conditions, referring to the learning state in the course of listening to the class [5]. At the same time, the classroom learning status alone cannot represent the overall learning status of students. This research investigates and analyzes students' classroom learning status, their after-school learning status as an extension of classroom learning, the detection mechanism of classroom learning effect evaluation, and students' self-evaluation, so as to provide a reference for improving students' learning status, improving teachers' teaching quality, and promoting the improvement of the school's teaching level [6].

With the support of a series of emerging technologies such as smart classrooms, online learning platforms, and "artificial intelligence + education", learning methods have shown diverse characteristics. Traditional face-to-face teaching methods are constantly being challenged, and online teaching methods that span spaces have become a new wave. Online learning realizes the push of educational and teaching resources through Internet technology, breaking the limitations of time and space on learners' learning [7]. The significance of online learning is not only to create a learning method across time and space but also to enable more high-quality educational resources to be shared by the majority of learners through the Internet and to provide learners with personalized teaching services. Today, online learning has become one of the most important ways of learning; the online learning status is one of the important factors affecting learners' online learning performance and a problem that education researchers cannot ignore [8]. Through the use of analysis and evaluation technology, objectively evaluating the learning status of online learners is of great significance for improving teaching quality and learning efficiency. Therefore, more and more educational researchers are paying attention to the online learning status and its related evaluation research. The learning state is the sum of the attention state, emotional state, motivational state, and so on shown by the learner in the learning process and learning results [9]. The complexity of the learner's learning state means that researchers cannot evaluate it on a single index; the evaluation must be based on the whole, using a multi-index comprehensive evaluation method to make an overall evaluation and comparison, in order to conduct a more comprehensive evaluation [10]. The radar chart has been successfully applied in many fields, such as financial performance evaluation, power quality evaluation, enterprise competitive advantage evaluation, teaching informatization evaluation, and teacher classroom teaching quality evaluation, owing to its simplicity and intuitiveness and its ability to compare multiple index variables at the same time [11].

Learning is the active construction of knowledge by learners. The learning status refers to the physical and psychological functional status of students in the learning situation, mainly including the status of brain wakefulness and concentration, the emotional status, and the physical function status [9]. The strength of the online learning state is directly related to the quality of the learner's learning effect. Online learning is different from traditional learning methods: it breaks the constraints of time and space, and teachers and students are separated from each other. Therefore, the evaluation method for online learning must differ from the evaluation method for traditional teaching. The online learning state includes not only the learning preparation state before students engage in learning activities but also the learning psychological state and learning environment state of students engaged in learning activities, as well as the learning achievement state after students engage in learning activities [12]. The online learning state evaluation index system of literature [13] mainly selects the learning state of students engaged in learning activities, which can be evaluated from five indicators: the attention state, human-computer interaction state, emotional state, social network state, and cognitive state. Attention is the direction and concentration of mental activities on a certain object and is a common psychological feature accompanying psychological processes such as perception, memory, thinking, and imagination; the attention state is the measure of this direction and concentration. The human-computer interaction state is a measure of the degree of interaction between online learners and online learning platforms, such as login frequency, online time, and click rate [14]. The emotional state refers to the emotional experience of learners in the learning process, such as happiness, pain, curiosity, interest, and boredom. The social network state refers specifically to learners' behaviors such as communication, discussion, interaction, and collaboration with teachers and other learners in the online learning community; for example, teacher-student interaction in the teaching process can be analyzed through data such as the questioning rate, feedback rate, student response rate, and active questioning rate [15]. The cognitive state refers specifically to the learner's understanding and mastery of knowledge and skills.

The key to optimizing the evaluation effect lies in the evaluation criteria for students' participation status, and all the value judgments made by teachers need to be carried out according to these criteria. The process of formulating evaluation standards also reflects the teaching philosophy of teachers. Therefore, in the process of reconstructing dynamic evaluation standards, it is first necessary to comprehensively examine the growth of students in terms of intelligence and non-intelligence factors and the classroom status, so as to determine the specific content and design a classroom participation evaluation scheme that is consistent with multidimensional development based on core literacy, thereby optimizing the evaluation effect [16].

In the past, the focus of teaching evaluation has always been teacher-centered: attention was concentrated on how to improve the quality of teaching from the perspective of teachers, while not enough attention was paid to students' participation in teaching activities and its effects. According to the modern teaching concept, classroom teaching should be student-centered, and everything should be for students. We should pay attention not only to whether the teacher's lectures are in place but also to the learning status of students throughout the learning process. The study of the learning status is of great help for students to establish a correct learning concept and a correct learning attitude, improve their methods, improve learning efficiency, and avoid academic failure. Teachers can also provide timely help and guidance according to the students' learning status. In addition, the advancement of this work will have a profound impact on the work of college students, teaching work, curriculum reform, and even teaching management [17].
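The multi-index comprehensive evaluation described above can be made concrete as a weighted aggregate of the five indicator states from literature [13]. The following is a minimal sketch; the indicator names, the 0–1 score scale, and the equal default weights are illustrative assumptions, not values taken from the paper.

```python
def composite_learning_state(indicators, weights=None):
    """Multi-index comprehensive evaluation: aggregate the five online
    learning state indicators (attention, human-computer interaction,
    emotional, social network, cognitive) into one overall score.
    Scores are assumed to lie in [0, 1]; default weights are equal."""
    names = ["attention", "interaction", "emotional", "social", "cognitive"]
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}
    return sum(weights[n] * indicators[n] for n in names)
```

With equal weights this reduces to the mean of the five indicator scores; unequal weights would let an evaluator emphasize, for example, the attention state over the click-rate-style interaction state.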
College students have different majors and different needs for English. The liberal arts alone can be divided into office English, business English, legal English, financial English, and many other categories. Professional English cannot exist as an independent language; these are specialized subjects under the English language. The different kinds of professional English must share commonalities in the English language. This requires us to stick to the basic skills of the language, master the basic grammar and vocabulary, and have a certain ability for language expression. The foundation of basic English directly affects students' learning of professional English [18].

In order to improve the effectiveness of the evaluation of students' learning status in foreign language classrooms, this paper applies machine vision to classroom teaching, evaluates students' classroom status through intelligent feature recognition, and improves the evaluation effect of students' learning status.

2. Feature Tracking of the Students' Learning Status

2.1. Automatic Window Tracking Motion Strategy for the Machine Vision System: Automatic Window Tracking Motion Algorithm Based on Preset Visual Field Parameters. The marker point at the end of classroom feature recognition must be as close to the end of the microdevice as possible, so as to ensure that the motion information of the marker point can be captured centrally within the window of the machine vision system; its position has a variety of options. As shown in the kinematic coordinate system of the end link of the classroom feature recognition in the figure, in the forward kinematic model of the end of the classroom feature recognition, since the wrist of the microdevice has 3 degrees of freedom, the motion trajectory curve of the origin O_in of the end tool coordinate system is too complicated (it involves 6 active degrees of freedom), which is not conducive to the extraction of key information. However, the motion trajectory curve of the origin O_in of the pitch joint coordinate system of the microdevice is greatly simplified (in essence, it contains 3 useful degrees of freedom), so it is the best choice for identifying the end marker points as classroom features.

Obtaining the trajectory of the marker point O_in is divided into the following steps. First, on the basis of the known pose of the coordinate system of the end tool for the recognition of the two classroom features, the motion amount of each active joint is calculated through the inverse kinematics solution of the end of the classroom feature recognition, and the forward and inverse kinematics modeling of the end of the classroom feature recognition is carried out. Second, the motion amounts θ4, θ5, θ6, d8 of the first four active joints are substituted into the forward kinematics solution of the marker point; as shown in formula (1), the pose matrix of the marker points A and B at ends 1 and 2 of the classroom feature recognition in the global coordinate system can be obtained. Finally, the position vector is extracted from the forward solution of the marker point, giving the movement trajectory of the marker point, as shown in formula (2):

$$ {}^{\lambda}T_{N0}^{8} = {}^{\lambda}T_{N0}^{4}\,T_{4}^{5}\,T_{5}^{6}\,T_{6}^{7}\,T_{7}^{8} = \begin{bmatrix} {}^{\lambda}n_x & {}^{\lambda}o_x & {}^{\lambda}a_x & {}^{\lambda}p_x \\ {}^{\lambda}n_y & {}^{\lambda}o_y & {}^{\lambda}a_y & {}^{\lambda}p_y \\ {}^{\lambda}n_z & {}^{\lambda}o_z & {}^{\lambda}a_z & {}^{\lambda}p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (1) $$

$$ {}^{\lambda}P_{N0}^{8} = \begin{bmatrix} {}^{\lambda}p_x & {}^{\lambda}p_y & {}^{\lambda}p_z \end{bmatrix}^{T}. \quad (2) $$

When λ = 1, the position vector represents the marker point A at end 1 of classroom feature recognition; when λ = 2, it represents the marker point B at end 2. After the position vectors of the marker points and the center point of the end of the machine vision system are known, the field-of-view parameters can be calculated.

The window replacement of the window tracking algorithm of the machine vision system can be divided into the following processes, as shown in Figure 1. (1) Initial alignment refers to controlling the origin L of the end tool of the machine vision system to move from the initial position to the initial debugging point L_S after the preoperative positioning of the system is completed, so as to ensure that the center of the window coincides with the midpoint E of the marker points A and B. (2) Window adjustment refers to the process in which the desired teaching window is obtained by controlling the freedom of movement of the machine vision system arm and the zoom ratio of the window after the initial alignment is completed. (3) The current window means that the machine vision system window obtained after the window adjustment is completed is defined as the initial window. (4) The target window refers to the situation in which the movement of the marker points A and B at the end of classroom feature recognition during the teaching operation causes the basic visual field parameter ξ to change; by establishing a tracking motion algorithm, the machine vision system arm is automatically guided to adjust the endpoint position, thereby keeping ξ invariant and obtaining the target window. (5) The dynamic replacement process between the current window and the target window realizes the autonomous tracking, by the machine vision system window, of the movement of the end marker points of classroom feature recognition and ensures the stability of the basic visual field parameter ξ; thus, the operator's desired field of view of the window is always maintained.
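The chained pose computation of formulas (1) and (2) amounts to multiplying per-joint homogeneous transforms and reading off the translation column. The sketch below illustrates this; the DH-style per-joint transform is a generic assumption, since the paper does not give the actual link parameters of the device.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint, standard DH convention
    (illustrative; the paper's actual link parameters are not given)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def marker_pose(joint_transforms):
    """Chain the per-joint transforms, as in formula (1):
    T_N0^8 = T_N0^4 . T_4^5 . T_5^6 . T_6^7 . T_7^8."""
    T = np.eye(4)
    for Ti in joint_transforms:
        T = T @ Ti
    return T

def marker_position(T):
    """Extract the position vector p = [p_x, p_y, p_z]^T, formula (2)."""
    return T[:3, 3]
```

Sampling the joint amounts θ4, θ5, θ6, d8 over time and repeating this computation yields the marker trajectory used by the tracking algorithm.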
The autonomous tracking movement of the window is essentially a fine-tuning process of the center of the window.

Figure 1: Schematic diagram of the window replacement process of the window tracking algorithm of the machine vision system (A: initial centring; B: window adjustment; C: target window; D: current window).

As shown in Figure 1, γ is the superwide angle of view of the machine vision system, and points C0 and D0 are the projection points of the marker points A0 and B0 on the plane α, respectively. The coordinate system x_{E0} y_{E0} z_{E0} at point E0 moves up along the centerline of the machine vision system, where it can coincide with the visual coordinate systems x_{Ls} y_{Ls} z_{Ls}, x_{L0} y_{L0} z_{L0}, x_{La} y_{La} z_{La}, x_{Lb} y_{Lb} z_{Lb} of the window, and the plane α coincides with the coordinate system plane x_{E0} y_{E0}. The position vectors of the marker points A0 and B0 in the global coordinate system X0 Y0 Z0 can be calculated according to the movement trajectory of the end tool system based on the classroom features and are set as

$$ P_{A_0} = [x_{A_0},\, y_{A_0},\, z_{A_0}]^T, \quad P_{B_0} = [x_{B_0},\, y_{B_0},\, z_{B_0}]^T. \quad (3) $$

From the initial positioning information and the forward kinematics of the machine vision system, the position vectors of the telecentric fixed point M_L and the endpoint L_R of the machine vision system are

$$ P_{M_L} = [x_{M_L},\, y_{M_L},\, z_{M_L}]^T, \quad P_{L_R} = [x_{L_R},\, y_{L_R},\, z_{L_R}]^T. \quad (5) $$

The expectation in the initial alignment session is ‖M_L L_S‖ = ‖M_L L_R‖.
The position vector of the midpoint E0 of the marker points A0 and B0 is

$$ P_{E_0} = \frac{1}{2}\left(P_{A_0} + P_{B_0}\right). \quad (4) $$

According to the principle of three-point collinearity, the L_S position vector can be solved by (6) and used as the desired position of the machine vision system arm, so as to complete the initial alignment adjustment:

$$ \overrightarrow{M_L L_S} = \frac{\overrightarrow{M_L E_0}}{\|\overrightarrow{M_L E_0}\|} \cdot \|\overrightarrow{M_L L_R}\|. \quad (6) $$

When the position vector of the point E0 is substituted into the inverse kinematics solution of the machine vision system arm, and the resulting joint angles are input into the forward kinematics solution, the pose matrix of the point E0 in the global base system X0 Y0 Z0 at the end of the machine vision system can be calculated. It is set to T_{E0}, and its position vector and attitude matrix are P_{E0} and R_{E0}, respectively:

$$ T_{E_0} = \begin{bmatrix} R_{E_0} & P_{E_0} \\ 0_{1\times 3} & 1 \end{bmatrix}, \quad (7) $$

where

$$ P_{E_0} = [x_{E_0},\, y_{E_0},\, z_{E_0}]^T, \quad R_{E_0} = \begin{bmatrix} {}^{E_0}n_x & {}^{E_0}o_x & {}^{E_0}a_x \\ {}^{E_0}n_y & {}^{E_0}o_y & {}^{E_0}a_y \\ {}^{E_0}n_z & {}^{E_0}o_z & {}^{E_0}a_z \end{bmatrix}. \quad (8) $$
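The initial alignment of formulas (4) and (6), together with the field-of-view parameter of the form used throughout the derivation, ξ = 2 tan⁻¹(½‖CD‖ / ‖LE‖), can be sketched numerically as follows; the point coordinates in the usage below are hypothetical.

```python
import numpy as np

def field_of_view_parameter(C, D, L, E):
    """Basic visual field parameter xi = 2 * atan(0.5*||CD|| / ||LE||),
    cf. formula (13)."""
    half_cd = 0.5 * np.linalg.norm(np.asarray(D, float) - np.asarray(C, float))
    le = np.linalg.norm(np.asarray(E, float) - np.asarray(L, float))
    return 2.0 * np.arctan(half_cd / le)

def initial_alignment_point(P_A, P_B, P_ML, P_LR):
    """Desired endpoint L_S on the line from the telecentric point M_L
    toward the marker midpoint E0, with ||M_L L_S|| = ||M_L L_R||
    (three-point collinearity, formulas (4)-(6))."""
    P_ML = np.asarray(P_ML, float)
    E0 = 0.5 * (np.asarray(P_A, float) + np.asarray(P_B, float))  # formula (4)
    direction = (E0 - P_ML) / np.linalg.norm(E0 - P_ML)
    radius = np.linalg.norm(np.asarray(P_LR, float) - P_ML)
    return P_ML + radius * direction                               # formula (6)
```

For instance, with markers at (±1, 0, 0) and the telecentric point at (0, 0, 2), the desired L_S lies on the axis toward the marker midpoint at the preserved distance from M_L.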
Then the position vectors of the marker points A0 and B0 in the window center coordinate system x_{E0} y_{E0} z_{E0} can be calculated as

$$ {}^{E_0}P_{A_0} = [{}^{E_0}x_{A_0},\, {}^{E_0}y_{A_0},\, {}^{E_0}z_{A_0}]^T, \quad {}^{E_0}P_{B_0} = [{}^{E_0}x_{B_0},\, {}^{E_0}y_{B_0},\, {}^{E_0}z_{B_0}]^T. \quad (9) $$

Points C0 and D0 are the projection points of the marker points A0 and B0 in the window center coordinate system x_{E0} y_{E0} z_{E0}, and the plane β coincides with the plane x_{E0} y_{E0} of that coordinate system:

$$ {}^{E_0}P_{C_0} = [{}^{E_0}x_{C_0},\, {}^{E_0}y_{C_0},\, 0]^T, \quad {}^{E_0}P_{D_0} = [{}^{E_0}x_{D_0},\, {}^{E_0}y_{D_0},\, 0]^T. \quad (10) $$

The position vector of the endpoint L0 of the machine vision system during the window adjustment in Figure 1 is

$$ P_{L_0} = [x_{L_0},\, y_{L_0},\, z_{L_0}]^T. \quad (11) $$

The position vectors of the target marker points A2 and B2 in the global base system X0 Y0 Z0 are

$$ P_{A_2} = [x_{A_2},\, y_{A_2},\, z_{A_2}]^T, \quad P_{B_2} = [x_{B_2},\, y_{B_2},\, z_{B_2}]^T, \quad (16) $$

and the midpoint E2 of the marker points A2 and B2 is

$$ P_{E_2} = \frac{1}{2}\left(P_{A_2} + P_{B_2}\right) = [x_{E_2},\, y_{E_2},\, z_{E_2}]^T. \quad (17) $$

In the same way, the projection of the vector C2D2 in the coordinate system x_{E_2} y_{E_2} z_{E_2} can be obtained, and

$$ \|\overrightarrow{L_2 E_2}\| = \frac{1}{2\tan(\xi/2)}\,\|\overrightarrow{C_2 D_2}\|. \quad (18) $$

Combining formulas (5) and (17), the vector M_L E_2 is

$$ \overrightarrow{M_L E_2} = [x_{E_2} - x_{M_L},\; y_{E_2} - y_{M_L},\; z_{E_2} - z_{M_L}]^T. \quad (19) $$
From the simultaneous equations (10)–(12), in the window center coordinate system x_{E0} y_{E0} z_{E0} the vectors C0D0 and L0E0 are

$$ \overrightarrow{C_0 D_0} = P_{D_0} - P_{C_0}, \quad \overrightarrow{L_0 E_0} = P_{E_0} - P_{L_0}. \quad (12) $$

Then the basic visual field parameter ξ = ∠C0L0D0 of the window is

$$ \xi = 2\tan^{-1}\!\left(\frac{1}{2}\|\overrightarrow{C_0 D_0}\| \Big/ \|\overrightarrow{L_0 E_0}\|\right). \quad (13) $$

In the same form, for the target window the parameter ξ = ∠C2L2D2 is

$$ \xi = 2\tan^{-1}\!\left(\frac{1}{2}\|\overrightarrow{C_2 D_2}\| \Big/ \|\overrightarrow{L_2 E_2}\|\right). \quad (14) $$

The vector L2E2 can then be obtained as

$$ \overrightarrow{L_2 E_2} = \|\overrightarrow{L_2 E_2}\| \cdot \frac{\overrightarrow{M_L E_2}}{\|\overrightarrow{M_L E_2}\|}, \quad (20) $$

written componentwise as

$$ \overrightarrow{L_2 E_2} = [x_{L_2 E_2},\, y_{L_2 E_2},\, z_{L_2 E_2}]^T. \quad (21) $$

Then the target position vector of the endpoint L2 of the machine vision system is

$$ P_{L_2} = [x_{E_2} - x_{L_2 E_2},\; y_{E_2} - y_{L_2 E_2},\; z_{E_2} - z_{L_2 E_2}]^T. \quad (22) $$

In Figure 1, point F is the endpoint of the trocar tube through which the machine vision system passes. The endpoint L of the machine vision system cannot be retracted into the trocar tube in the algorithm, and point F is the retraction limit point of point L. In the process of initial window adjustment, the size of the basic field-of-view parameter angle can be changed by adjusting the intervention length of the machine vision system along its line of sight.
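The target-window update of formulas (17)–(22) reduces to a short computation: form the new marker midpoint, recover the required distance ‖L₂E₂‖ from the preserved ξ, and place L₂ on the ray from the telecentric point through E₂. A minimal sketch, with hypothetical coordinates used in the usage values:

```python
import numpy as np

def target_endpoint(P_A2, P_B2, P_ML, xi, CD_len):
    """Target position of endpoint L2 that keeps xi invariant, following
    formulas (17)-(22): E2 is the new marker midpoint, ||L2 E2|| follows
    from xi and the projected marker spacing CD_len, and L2 lies on the
    ray from the telecentric point M_L through E2."""
    E2 = 0.5 * (np.asarray(P_A2, float) + np.asarray(P_B2, float))  # (17)
    le = CD_len / (2.0 * np.tan(xi / 2.0))                          # (18)
    mle = E2 - np.asarray(P_ML, float)                              # (19)
    l2e2 = le * mle / np.linalg.norm(mle)                           # (20)-(21)
    return E2 - l2e2                                                # (22)
```

Feeding each new marker pair through this update is what keeps the operator's desired field of view constant as the markers move.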
Similarly, the sizes of ξa and ξb during the adjustment process are

$$ \xi_a = \angle C_a L_0 D_a = 2\tan^{-1}\!\left(\frac{1}{2}\|\overrightarrow{C_a D_a}\| \Big/ \|\overrightarrow{L_a E_0}\|\right), \quad \xi_b = \angle C_b L_0 D_b = 2\tan^{-1}\!\left(\frac{1}{2}\|\overrightarrow{C_b D_b}\| \Big/ \|\overrightarrow{L_b E_0}\|\right). \quad (15) $$

The distance between the endpoint F of the trocar tube and the telecentric fixed point M_L is l_F; then the vector M_L F can be obtained by the following formula:

$$ \overrightarrow{M_L F} = l_F \cdot \frac{\overrightarrow{M_L E_2}}{\|\overrightarrow{M_L E_2}\|} = [x_{M_L F},\; y_{M_L F},\; z_{M_L F}]^T. \quad (23) $$

Figure 2: Student status evaluation system (dataset construction with labels (c, x, y, w, h); network feature extraction with region candidate boxes; confidence prediction with focal/softmax loss and location prediction with smooth-L1 loss; IOU matching producing positive and negative samples; backpropagation training of the student behavioral data model; student classroom behavior identification).

Figure 3: Marker movement track and endpoint tracking track.

Figure 4: Marker motion trajectories obtained using the Phantom master hand and endpoint tracking trajectories based on the machine vision system.
Then the position vector of the point F in the global base system is

$$ P_F = [x_{M_L} - x_{M_L F},\; y_{M_L} - y_{M_L F},\; z_{M_L} - z_{M_L F}]^T. \quad (24) $$

The derivation above shows that, after the basic visual field parameter ξ has been adjusted and obtained, the origin L of the visual coordinate system at the end of the machine vision system tracks the movement of the marker points A and B at the end of the classroom feature recognition and keeps the value of ξ stable at all times.

Figure 5: Movement speed of marker trajectories 1–4.

3. Student Status Evaluation System

This paper combines the intelligent student status recognition algorithm based on machine vision proposed in the second part to construct a student status evaluation system based on machine vision; the system is shown in Figure 2. As can be seen from the figure, the recognition of students' classroom behavior includes three steps: dataset construction, algorithm model training, and student classroom behavior recognition. The first step is dataset construction: five types of student behavior states, namely raising hands, sleeping, answering, writing, and listening to lectures, were labeled. The student behavior dataset is then trained with a substantially improved algorithm.
During the training process, the input student behavior state pictures are fed forward through the SSD network for feature extraction. The candidate boxes of the different prediction layers are matched with the ground-truth boxes, and the error of each candidate box's category confidence prediction and position offset prediction is output. At the same time, the corresponding weights are adjusted by backpropagation of the calculated loss until the loss function drops to a small, stable value, and the model training is completed. Finally, the identification of students' classroom behavior status is carried out: when a video frame to be detected is input into the smart classroom recording and broadcasting system, a series of detection boxes are generated on the image frame through the trained parameter model. Through non-maximum suppression, redundant boxes are eliminated, the best position box for detecting student behavior is obtained, and the five types of student behavior states of raising hands, sleeping, answering, writing, and listening are recognized.
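The candidate-box matching and redundant-box elimination steps described above rest on two standard operations: intersection-over-union and greedy non-maximum suppression. The following is a generic sketch of those operations, not the paper's exact SSD implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it above the threshold, and repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

During training, the same IoU measure decides which candidate boxes count as positive matches to a ground-truth box; at inference time, `nms` prunes the overlapping detections down to one best box per student behavior.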
In order to verify the correctness of the window tracking motion algorithm of the machine vision system, marker point motion trajectories based on sine and cosine functions are planned within the range of the motion space of the marker point at the end of the classroom feature recognition: trajectory 1 is given by formula (25) and shown in Figure 3(a), and trajectory 2 is given by formula (26) and shown in Figure 3(b).

$$ \begin{cases} x_{As} = 21\sin(t) + 310, \\ y_{As} = 26\cos(t) + 290, \\ z_{As} = 20\sin(2t) + 860, \end{cases} \qquad \begin{cases} x_{Bs} = -25\cos(t) + 360, \\ y_{Bs} = 25\sin(t) + 300, \\ z_{Bs} = 20\cos(2t) + 870. \end{cases} \quad (25) $$

In the formula, the unit of the position trajectory is mm and t ∈ [0, π/2] s represents the movement time of the planned trajectory; the basic visual field parameter obtained by one adjustment is set to ξ = 37.6583 deg.

$$ \begin{cases} x_{Ac} = 15\sin(2t) + \cos(t) + 5\sin(5t) + 330, \\ y_{Ac} = 18\cos(t) + 3\sin(4t) + 290, \\ z_{Ac} = 10\sin(3t) + 13\cos(t) + 900, \end{cases} \qquad \begin{cases} x_{Bc} = 18\sin(2t) + 3\cos(5t) + 300, \\ y_{Bc} = 17\cos(t) + 6\sin(5t) + 320, \\ z_{Bc} = 20\cos(3t) + 3\sin(t) + 900. \end{cases} \quad (26) $$

In the formula, the unit of the position trajectory is again mm, t ∈ [0, 2π] s represents the movement time of the planned trajectory, and the adjusted basic field-of-view parameter is set to ξ = 37.6531 deg.

Using the Phantom Omni master hand, marker trajectories closer to the actual teaching operation can be obtained, with the sampling period set to 10 ms. On the basis of compensating for the absolute base position, the 1:1 incremental master-slave mapping method is used to collect the motion trajectories 3 and 4 of the marker points at the end of classroom feature recognition, as shown in Figures 3 and 4. Since a filtering algorithm is not used to eliminate the jitter of the master hand, a high-frequency noise signal from the operator's jitter remains in the trajectory; this is equivalent to adding interference noise in the simulation, which helps check the basic performance of the motion algorithm.

On the basis of the initial adjustment and setting of the basic field-of-view parameter ξ, the trajectory curve of the origin (endpoint L) of the visual coordinate system of the machine vision system can be calculated.

Figure 6: The result of the kinematic inverse solution of the end tracking trajectory based on the machine vision system.
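The planned trajectory of formula (25), together with a finite-difference speed check of the kind plotted in Figure 5, can be sketched as follows (positions in mm; the 10 ms step mirrors the master-hand sampling period):

```python
import numpy as np

def trajectory_1(t):
    """Planned marker trajectories of points A and B, formula (25);
    positions in mm, t in [0, pi/2] s."""
    A = np.array([21 * np.sin(t) + 310,
                  26 * np.cos(t) + 290,
                  20 * np.sin(2 * t) + 860])
    B = np.array([-25 * np.cos(t) + 360,
                  25 * np.sin(t) + 300,
                  20 * np.cos(2 * t) + 870])
    return A, B

def speed_profile(path, dt):
    """Approximate speed (mm/s) along a sampled path by forward differences."""
    return np.linalg.norm(np.diff(path, axis=0), axis=1) / dt

dt = 0.01                                    # 10 ms sampling period
ts = np.arange(0.0, np.pi / 2, dt)
A_path = np.stack([trajectory_1(t)[0] for t in ts])
speeds = speed_profile(A_path, dt)           # speed curve of marker A
```

The same differencing applied to the master-hand trajectories 3 and 4 would exhibit the high-frequency jitter noise discussed above, since no filtering is applied.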
The velocity curves of the movement trajectories 1–4 of the marker points A and B are shown in Figure 5. The maximum speeds of tracks 1–4 are 55 mm/s, 80 mm/s, 40 mm/s, and 110 mm/s, respectively, and the minimum speeds are 25 mm/s, 5 mm/s, 0 mm/s, and 0 mm/s, respectively. It can be seen that the marker point trajectories used in the simulation are very demanding operation curves for actual teaching. At the same time, trajectories 3 and 4 are not filtered, and their velocity curves contain the influence of high-frequency noise, which is of great practical significance for verifying the correctness and feasibility of the window tracking algorithm.

In order to verify the correctness and feasibility of the window tracking algorithm, the simulation trajectories shown in Figures 3 and 4 need to meet two conditions. (1) The end-tracking trajectory of the machine vision system is used as the expected motion curve and input to the kinematic inverse solution of the arm of the machine vision system; the obtained motion amounts θ5, θ6, d8 of the active joints must be within the joint motion range, that is, the "kinematic inverse solution judgment condition" must be satisfied. (2) The geometric relationship between the marker points A2, B2 and the tracking point L2 in the target window as shown
in Figure 1 must meet the following window angle determination conditions: with the ultrawide viewing angle of the 3D machine vision system γ = 110 deg, ∠A2L2B2 < γ, ∠A2L2E2 < 0.5γ, and ∠E2L2B2 < 0.5γ; furthermore, ∠A2L2B2 < 90 deg, ∠A2L2E2 < 45 deg, and ∠E2L2B2 < 45 deg, while the basic field of view parameter ξ = ∠C2L2D2 remains the same as its initial adjustment setting value. The movement speeds of the marker point trajectories 1-4 are shown in Figure 5.

Table 1: The evaluation effect of the system on the students' learning status in foreign language classrooms.

Number  Recognition effect    Number  Recognition effect    Number  Recognition effect
1       87.77                 18      84.51                 35      86.01
2       88.03                 19      89.44                 36      87.85
3       88.14                 20      84.69                 37      84.01
4       84.71                 21      91.09                 38      85.20
5       85.24                 22      88.52                 39      84.27
6       90.67                 23      91.47                 40      91.22
7       87.94                 24      87.65                 41      87.26
8       91.82                 25      84.56                 42      90.69
9       88.79                 26      86.42                 43      91.79
10      89.17                 27      91.13                 44      84.87
11      86.69                 28      91.80                 45      88.24
12      84.04                 29      88.06                 46      88.39
13      87.33                 30      86.77                 47      86.87
14      85.77                 31      90.95                 48      88.96
15      89.96                 32      85.99                 49      86.63
16      91.16                 33      85.92                 50      90.77
17      87.24                 34      91.62                 51      86.38
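The two judgment conditions can be sketched numerically. In the sketch below, the joint limits are placeholders (the paper does not list the actual ranges), E is taken to be the auxiliary window point from Figure 1, and all points are 3D coordinates in mm:

```python
import math

# Placeholder joint motion ranges for the active joints (theta5, theta6 in deg,
# d8 in mm); the paper does not state the actual limits.
JOINT_LIMITS = {"theta5": (-90.0, 90.0), "theta6": (-90.0, 90.0), "d8": (0.0, 300.0)}

def within_joint_range(theta5, theta6, d8, limits=JOINT_LIMITS):
    """Condition (1): each inverse-solution joint value lies inside its range."""
    values = {"theta5": theta5, "theta6": theta6, "d8": d8}
    return all(limits[k][0] <= v <= limits[k][1] for k, v in values.items())

def angle_deg(p, vertex, q):
    """Angle p-vertex-q in degrees between the rays vertex->p and vertex->q."""
    u = [a - b for a, b in zip(p, vertex)]
    v = [a - b for a, b in zip(q, vertex)]
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def window_condition_met(A, B, E, L, gamma=110.0):
    """Condition (2): the window angle determination conditions with gamma = 110 deg."""
    return (
        angle_deg(A, L, B) < gamma and angle_deg(A, L, B) < 90.0
        and angle_deg(A, L, E) < 0.5 * gamma and angle_deg(A, L, E) < 45.0
        and angle_deg(E, L, B) < 0.5 * gamma and angle_deg(E, L, B) < 45.0
    )
```

A trajectory sample passes the check only when both functions return true; the 45-degree bounds are the binding ones here, since they are stricter than 0.5γ = 55 deg.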
Figures 6 and 7 show the verification results of applying the kinematic inverse solution judgment conditions and the window angle judgment conditions to simulation trajectories 1-4. It can be seen that the inverse solutions of the active joints of the arm of the machine vision system all lie within the motion range, and that the window angle values and the basic field of view parameter both meet the judgment conditions, thus verifying the correctness and feasibility of the window tracking motion algorithm.

The research above verifies that the algorithm based on machine vision proposed in this paper has a sound application foundation for evaluating students' status in foreign language classrooms. On this basis, this paper explores the accuracy of the machine-vision-based student state evaluation system through multiple sets of simulation experiments; the resulting student learning status evaluations are shown in Table 1. From these results, it can be seen that the proposed algorithm can effectively judge the real-time status of students in the classroom and plays an important auxiliary role in helping teachers adjust teaching plans in a timely manner.

4. Conclusion

In the foreign language classroom teaching environment, recognizing students' facial expressions helps teachers learn the students' learning status in time. As research on students' facial expression recognition has deepened, more and more researchers have realized that a high-quality facial expression database plays an important role in training effective recognition models and in accurately understanding students' learning behaviors and states. So far, scholars at home and abroad have established many databases related to student expressions, but their construction standards and methods are not uniform. In addition, expression classification, as the core problem of expression recognition and the primary task of building an expression library, has not been well solved. In order to improve the effectiveness of the evaluation of students' learning status in foreign language classrooms, this paper applies machine vision to classroom teaching and evaluates students' classroom status through intelligent feature recognition. The research results show that the algorithm based on machine vision proposed in this paper can effectively judge the real-time status of students in the classroom.

Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Guilin Tourism University.