The Korean Society for Fisheries and Marine Sciences Education
[ Article ]
THE JOURNAL OF FISHERIES AND MARINE SCIENCES EDUCATION - Vol. 37, No. 6, pp.1528-1537
ISSN: 1229-8999 (Print) 2288-2049 (Online)
Print publication date 31 Dec 2025
Received 25 Nov 2025 Revised 10 Dec 2025 Accepted 16 Dec 2025
DOI: https://doi.org/10.13000/JFMSE.2025.12.37.6.1528

Exploring University Students’ Acceptance of AI Chatbots in Language Learning: An Extended TAM Model Based on the Interaction Hypothesis

Yuan FENG ; Xin ZHAO ; Gyun HEO
Pukyong National University (student)
Pukyong National University (professor)

Correspondence to: 051-629-5970, gyunheo@pknu.ac.kr

Abstract

With the growing use of generative AI in language education, AI chatbots have become key tools in university English learning. This study draws on the Technology Acceptance Model (TAM) and the Interaction Hypothesis in second language acquisition to propose an integrated framework. The framework examines how comprehensible input, negotiation of meaning, pushed output, and feedback influence learners’ perceived ease of use, perceived usefulness, and behavioral intention to use AI chatbots. A total of 527 valid responses were collected through a questionnaire survey, and structural equation modeling was employed for data analysis. The results show that all core paths of the TAM are supported: perceived ease of use significantly enhances both perceived usefulness and behavioral intention. Comprehensible input, pushed output, and feedback significantly affect perceived usefulness, and feedback also significantly increases perceived ease of use. Although negotiation of meaning shows no significant direct effect, it demonstrates a meaningful indirect effect through perceived usefulness. Mediation analyses further reveal that all four interaction-related factors influence behavioral intention, with feedback exhibiting a notable chain mediation effect. This study provides theoretical insights into the acceptance mechanisms of AI-based language learning tools and offers practical implications for improving learner experience through interface design.

Keywords:

AI chatbots, Technology Acceptance Model (TAM), Interaction Hypothesis, Structural equation modeling (SEM), Second language acquisition (SLA)

I. Introduction

In recent years, AI technologies driven by large language models have rapidly entered higher education contexts. AI chatbots such as ChatGPT have become important learning tools for university students in various learning-related tasks (Zhai, 2022). Compared with traditional language learning resources, AI chatbots offer multiple advantages including instant feedback and interactive support (Huang et al., 2022), giving them strong potential in EFL (English as a Foreign Language) contexts (Kohnke et al., 2023). As generative AI evolves, students’ use of such tools has steadily increased (Tala et al., 2024). Similar trends have been reported in Korean higher education, where studies note growing use of AI-based learning tools (Park, 2024). However, most previous studies have focused on system features or learner-related factors rather than the interaction processes emphasized in SLA. To address this gap, the present study integrates interaction-based constructs into a technology acceptance framework.

AI chatbots have been applied to tasks such as vocabulary learning and writing support (Huang et al., 2023), applications that many students believe enhance the flexibility and efficiency of language practice. Consequently, numerous studies have investigated learners’ adoption of ChatGPT and other AI tools from a technology acceptance perspective, particularly examining perceived usefulness (PU), perceived ease of use (PEOU), and behavioral intention (BI) (Zou & Huang, 2023).

However, from a language-learning perspective, the value of AI chatbots goes beyond their functional features. Language learning is inherently interaction-dependent, and second language acquisition theories emphasize the importance of comprehensible input (Krashen, 1982, 1985), pushed output (Swain, 1985, 1995), and interactional processes such as negotiation of meaning and feedback (Long, 1996). Accordingly, in AI-mediated learning environments, mechanisms such as comprehensible input (CI), negotiation of meaning (NM), pushed output (PO), and feedback (FB) continue to play essential roles, directly shaping learners’ judgments of the learning value of such technologies.

Although both technology acceptance and SLA research offer substantial findings, studies that systematically integrate SLA interaction mechanisms into the TAM remain limited, and existing ChatGPT acceptance research still focuses largely on tool features or learner traits rather than interaction experiences. Prior studies nonetheless underscore the centrality of interaction: Godwin-Jones (2022) notes that understanding AI-mediated interaction is essential for system design, and Kohnke et al. (2023) emphasize that acceptance depends heavily on the quality of interaction during authentic tasks. These observations point to the need for a framework that incorporates both technological attributes and linguistic interaction features.

Building on this foundation, this study incorporates four constructs from the Interaction Hypothesis (CI, NM, PO, FB) into the TAM as external variables. The model posits that PEOU affects PU and both predict BI, while interaction-related variables influence PU (and PEOU in the case of FB), thereby shaping BI. Using SEM, the study examines how linguistic interaction mechanisms and technology acceptance processes jointly determine learners’ adoption of AI chatbots in language learning.

Guided by the theoretical framework described above, this study seeks to address three core questions: (1) What is the structural relationship among PEOU, PU, and BI when university students use AI chatbots for language learning? (2) How do the four interaction mechanisms from the Interaction Hypothesis influence learners’ PU and PEOU? (3) What mediating roles do PU and PEOU play between Interaction Hypothesis factors and BI? By addressing these questions, the study aims to offer a clearer foundation for the design and application of AI chatbots in language learning.


Ⅱ. Research Methods

1. Research Model

This study constructs a university-student acceptance model for AI chatbots in language learning by integrating the TAM with key interaction mechanisms from the Interaction Hypothesis. The proposed research model not only covers the classical relationships among PEOU, PU, and BI, but also incorporates four interaction-related elements: CI, NM, PO, and FB. The specific structure of the model is presented in [Fig. 1].

[Fig. 1]

Research Model.

2. Research Instruments

This study employed a questionnaire as the primary research instrument. The questionnaire consisted of seven latent variables: CI, NM, PO, FB, PEOU, PU, and BI. The CI, NM, PO, and FB items were drawn from instruments commonly used in the SLA field (Isbell & Lee, 2022; Nakatani, 2006; López-Páez, 2020; Wiboolyasarin et al., 2020), while the TAM variables were adapted from the classic scales developed by Davis (1989) and Venkatesh and Davis (2000). To ensure clarity and content validity in the context of AI chatbot-mediated language learning, minor wording modifications were made to several items.

All questionnaire items were measured on a five-point Likert scale (1=strongly disagree, 5=strongly agree), and reliability analyses were subsequently conducted to examine the internal consistency of the seven latent variables. All Cronbach’s α coefficients exceeded .70, meeting the commonly accepted threshold and indicating good internal consistency, as shown in <Table 1>. These results demonstrate that the scales employed in this study exhibit satisfactory reliability within the sample.
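The internal-consistency check can be reproduced with a short script. The sketch below computes Cronbach's α from item and total-score variances; the five responses to three items are hypothetical illustrative data, not the study's data.

```python
# Cronbach's alpha for a set of Likert items: a minimal sketch.
# The responses below are hypothetical, not the study's data.

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (columns of the survey)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-respondent sums
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses from five participants to three items (1-5 scale)
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 3, 4, 1]]
print(round(cronbach_alpha(items), 3))  # → 0.926
```

In practice the same computation would be run once per construct, over the items listed in <Table 1>.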

Reliability Test Results

3. Data Collection

Data were collected online via the Wenjuanxing platform using a convenience sampling approach, targeting full-time undergraduate students at a Chinese university. Participants provided informed consent before completing the questionnaire. A total of 579 responses were obtained, and 527 valid cases were retained after data cleaning (91.0% valid rate). The sample included students from various majors and grade levels, offering reasonable representativeness for examining university students’ use of AI chatbots in language learning. The data collection process adhered to academic ethical standards, ensuring participant privacy and data security. Detailed demographic information is presented in <Table 2>.

Basic Information of Subjects

4. Data Analysis

Data analysis proceeded in two stages. First, SPSS was used to assess reliability, and CFA in AMOS was conducted to evaluate convergent and discriminant validity, using commonly applied model fit indices. Based on the validated measurement model, SEM was then employed to test the hypothesized relationships in the extended TAM framework, including the effects of CI, NM, PO, and FB on PU and PEOU, as well as the predictive roles of PU and PEOU on BI. Path coefficients and their significance levels were used to determine support for the proposed mechanisms of learners’ acceptance of AI chatbots.
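The hypothesized measurement and structural model can be written compactly in lavaan-style syntax. The sketch below is illustrative only: the study ran its analyses in SPSS and AMOS, but the same specification could be passed to an open-source package such as semopy (item names are assumptions following the item counts in <Table 1>).

```python
# Lavaan-style specification of the extended TAM model described above.
# The study itself used AMOS; this string shows how the same hypothesized
# structure could be declared for a package such as semopy or lavaan.

MODEL_DESC = """
# measurement model (item counts follow Table 1)
CI   =~ CI1 + CI2 + CI3
NM   =~ NM1 + NM2 + NM3 + NM4
PO   =~ PO1 + PO2 + PO3 + PO4
FB   =~ FB1 + FB2 + FB3 + FB4 + FB5
PEOU =~ PEOU1 + PEOU2 + PEOU3
PU   =~ PU1 + PU2 + PU3
BI   =~ BI1 + BI2 + BI3

# structural model (hypothesized paths)
PEOU ~ FB
PU ~ CI + NM + PO + FB + PEOU
BI ~ PU + PEOU
"""

# With survey data in a pandas DataFrame `df`, fitting would look like:
#   from semopy import Model
#   model = Model(MODEL_DESC)
#   model.fit(df)
#   print(model.inspect())
```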


Ⅲ. Research Results

1. Measurement Model

To examine the reliability and validity of the latent constructs, a confirmatory factor analysis (CFA) was first conducted. The results, shown in <Table 3>, indicate that χ²/df = 1.731, which is below the threshold of 3. Additionally, the values of CFI = .963, TLI = .956, RMSEA = .037, and SRMR = .035 demonstrate a good overall model fit, with all indices meeting recommended standards.

Model Fit of Measurement Model

Building on this, the study further assessed convergent validity, as presented in <Table 4>. The standardized factor loadings for all measurement items ranged from .669 to .828 and were all statistically significant (p < .001), indicating that each item effectively reflected its corresponding latent construct. The CR values ranged from .788 to .842, all exceeding the recommended threshold of .70 (Hair et al., 2009). Regarding convergent validity, the AVE values for six of the seven latent variables fell between .530 and .603, surpassing the recommended cutoff of .50. Although the AVE value for FB was slightly below this threshold, this may be due to the multidimensional nature of interaction behaviors and the limited number of items used to measure this construct. Importantly, its CR value reached .831 and all factor loadings exceeded .660, suggesting that the overall measurement quality remains within an acceptable range. Therefore, the measurement model in this study demonstrates satisfactory reliability and convergent validity.
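These CR and AVE figures follow mechanically from the standardized loadings. A minimal sketch, using the CI loadings reported in <Table 4>, reproduces the tabled values:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, checked against the CI construct
# in <Table 4> (loadings .828, .735, .684).

def cr_ave(loadings):
    sum_l = sum(loadings)                       # Σλ
    sum_l2 = sum(l * l for l in loadings)       # Σλ²
    sum_err = sum(1 - l * l for l in loadings)  # Σ(1 − λ²), error variances
    cr = sum_l ** 2 / (sum_l ** 2 + sum_err)
    ave = sum_l2 / len(loadings)
    return round(cr, 3), round(ave, 3)

print(cr_ave([.828, .735, .684]))  # → (0.794, 0.565), matching <Table 4>
```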

Convergent Validity Results

Discriminant validity was evaluated using the Fornell & Larcker (1981) criterion. As shown in <Table 5>, the square root of the AVE for each variable was greater than its correlations with other constructs, indicating that the model demonstrates satisfactory discriminant validity.
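The criterion itself is easy to automate. The sketch below reproduces the lower-triangular matrix of <Table 5> and checks that each diagonal √AVE exceeds every correlation in its row and column:

```python
# Fornell-Larcker check: the diagonal (√AVE) must exceed every correlation
# involving that construct. Values reproduced from <Table 5> (lower triangle).

LABELS = ["CI", "NM", "PO", "FB", "PEOU", "PU", "BI"]
M = [  # lower-triangular matrix; diagonal entries are √AVE
    [.752],
    [.109, .728],
    [.170, .144, .756],
    [.166, .152, .220, .705],
    [.352, .283, .231, .259, .755],
    [.440, .267, .361, .360, .638, .777],
    [.113, .138, .115, .129, .425, .444, .744],
]

def fornell_larcker_ok(m):
    n = len(m)
    for i in range(n):
        # gather all correlations involving construct i (row + column)
        corrs = [m[i][j] for j in range(i)] + [m[j][i] for j in range(i + 1, n)]
        if any(c >= m[i][i] for c in corrs):
            return False
    return True

print(fornell_larcker_ok(M))  # → True: discriminant validity holds
```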

Discriminant Validity Analysis

2. Structural Model Fit

To evaluate the adequacy of the extended structural model proposed in this study, the structural equation model was assessed using AMOS. When the primary fit indices meet or closely approach recommended thresholds, the model is generally considered to demonstrate acceptable fit to the data (Hu & Bentler, 1999). As shown in <Table 6>, the overall model fit is satisfactory, with all indices reaching recommended standards or falling within acceptable ranges. In terms of absolute fit indices, the model yielded χ²/df= 1.938, which is below the threshold of 3, indicating a good level of fit between the model and the data. The SRMR value of .059 suggests small residuals and good overall fit. Regarding incremental fit indices, both TLI= .944 and CFI= .951 exceed the commonly accepted standard of .90, indicating strong relative fit compared with the independence model. Additionally, the RMSEA value of .042, which is below .05, further demonstrates a good approximate fit of the structural model.

Model Fit Indices

3. Path Analysis Results

[Fig. 2] presents the standardized path coefficients of the model, including the effects of the four interaction-hypothesis variables on PEOU and PU, as well as the predictive relationships of PEOU and PU on BI within the core TAM pathways.

[Fig. 2]

Results of the Structural Equation Modeling

The results show that all hypothesized paths in the model were successfully estimated, with the corresponding coefficients and significance levels reported in <Table 7>. In predicting PU, the path coefficient for CI was β= .243 (p < .001), for PO was β= .191 (p < .001), and for FB was β= .149 (p= .003), all of which reached statistical significance. The path coefficient for NM was β= .085 (p= .058), which did not reach the significance threshold. In predicting PEOU, the path coefficient for FB was β= .276 (p < .001), indicating a significant effect. Within the primary TAM pathways, PEOU significantly predicted PU (β= .490, p < .001); PU significantly predicted BI (β= .272, p < .001); and PEOU also significantly predicted BI (β= .252, p < .001).

Results of Path Analysis

4. Mediation Analysis

Following Hayes (2017), this study used a bootstrap procedure with 5,000 resamples to test indirect effects, evaluating significance through bias-corrected 95% confidence intervals. Indirect effects were considered significant when the interval excluded zero. As shown in <Table 8>, all indirect paths were significant.
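The bootstrap logic can be illustrated with a self-contained sketch. The data here are synthetic (an X→M→Y chain with known path weights), the interval is a plain percentile interval rather than the bias-corrected interval used in the study, and 1,000 resamples stand in for the study's 5,000:

```python
# Percentile-bootstrap test of a simple indirect effect (a*b), on
# synthetic data; illustrative only, not the study's data or exact method.
import random
import statistics as st

def slope(xs, ys):
    # OLS slope of ys on xs
    mx, my = st.fmean(xs), st.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def residuals(ys, xs):
    # residuals of ys after regressing on xs
    b = slope(xs, ys)
    a = st.fmean(ys) - b * st.fmean(xs)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def indirect(x, m, y):
    # a*b: X→M slope times M→Y slope controlling for X (Frisch-Waugh)
    a = slope(x, m)
    b = slope(residuals(m, x), residuals(y, x))
    return a * b

random.seed(42)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 0.5) for xi in x]                     # X → M
y = [0.5 * mi + 0.2 * xi + random.gauss(0, 0.5) for xi, mi in zip(x, m)]

boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([x[i] for i in idx],
                         [m[i] for i in idx],
                         [y[i] for i in idx]))
boot.sort()
lo, hi = boot[24], boot[974]  # 95% percentile interval
print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the interval excludes zero, the indirect effect would be judged significant under the same decision rule used in the study.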

Mediation Analysis Results

The four external variables exerted significant indirect effects on BI through PU: CI (β= .073, p < .001), NM (β= .025, p < .05), PO (β= .051, p < .001), and FB (β= .048, p < .01). These results indicate that the external variables influence learners’ behavioral intention by shaping their perceptions of the usefulness of AI technology, thereby confirming the theoretical mechanism proposed by the TAM.

In addition, the chain mediation pathway from FB to BI through PEOU and PU yielded a significant indirect effect (β= .044, p < .001). This indicates that FB not only directly enhances learners’ perceptions of the usefulness of the technology, but also improves their PEOU, which in turn further strengthens PU and ultimately promotes BI.


Ⅳ. Conclusion

Drawing on the TAM and the Interaction Hypothesis from the field of SLA, this study developed an integrated framework to examine students’ acceptance of AI chatbots. Using SEM based on 527 valid responses, the study identified the pathways through which interaction-related factors influence cognitive perceptions and BI.

First, the findings provide strong support for the core relationships proposed by the TAM model. Both PU and PEOU significantly predicted learners’ BI, and PEOU exerted a significant positive influence on PU. This result is consistent with a substantial body of empirical TAM research (Venkatesh & Davis, 2000) and aligns with the extended model proposed by Venkatesh and Bala (2008), which emphasizes that ease of use is an important antecedent of PU. Within the context of AI technologies, the present findings suggest that when learners perceive AI systems as easier to interact with, they are more likely to recognize their practical value and develop stronger intentions to adopt them.

Moreover, CI, PO, and FB all significantly enhanced learners’ PU of AI chatbots, underscoring the critical role of interaction quality in technology adoption. The effect of CI shows that when a system presents information in clearer and more processable ways, learners more readily recognize its functional value, consistent with Long’s (1996) notion of comprehensible input in interaction. The significant effect of PO indicates that chatbots promoting active language production are perceived as providing stronger learning support, aligning with findings that participatory generation increases user satisfaction (Lee et al., 2022). In addition, FB not only enhances learners’ judgments of usefulness but also strengthens PEOU, demonstrating its dual role in reducing interaction uncertainty and improving operational fluency, consistent with research showing that explanatory feedback fosters trust in AI systems (Carvalho, 2017). Taken together, these findings highlight that interaction quality is a core foundation shaping learners’ perceptions of usefulness and ease of use in AI-mediated language learning.

However, the direct effect of NM on PU did not reach statistical significance. Although NM is traditionally viewed as essential for meaning construction (Pica, 1994; Varonis & Gass, 1985), its role appears more limited in AI-mediated contexts. A likely reason is that current AI dialogue systems still lack genuine bidirectional negotiation, with interactions remaining more directive than collaborative (Lin et al., 2024). As a result, AI-generated “negotiation” may not provide the contingent and co-constructed support assumed in SLA theories, and frequent clarification moves may even be perceived by learners as signs of system incompetence. Despite the non-significant direct effect, NM showed a meaningful indirect effect on BI through PU, reflecting a suppression effect (Zhao et al., 2010) in which its influence emerges only when other variables are considered.

The mediation analysis showed that all five indirect paths were statistically significant, indicating that interaction-related factors influence BI primarily through PU. CI, NM, PO, and FB each exerted significant indirect effects on BI via PU, demonstrating a full mediation pattern. This result is consistent with TAM, which posits that external variables shape adoption mainly by influencing usefulness judgments (Venkatesh & Davis, 2000). It also aligns with King and He’s (2006) meta-analysis identifying PU as the most robust predictor of BI across technological contexts. Overall, these findings suggest that the mechanisms proposed by the Interaction Hypothesis affect learners’ adoption intentions insofar as they activate cognitive evaluations of the technology’s value.

At the same time, the chained mediation from feedback to BI through PEOU and PU was confirmed, indicating that feedback influences adoption through a multi-stage cognitive process. This pattern extends TAM by showing that external variables may exert cumulative rather than single-dimensional effects, a view consistent with UTAUT’s claim that facilitating conditions can affect usage through multiple pathways (Venkatesh et al., 2003). Practically, this suggests that effective feedback not only enhances immediate interaction experiences but also strengthens learners’ longer-term perceptions of ease of use and usefulness, thereby increasing their willingness to adopt AI systems. As such, high-quality feedback becomes a cost-effective design element in AI-based language learning environments.

The core contribution of this study lies in repositioning AI technology adoption from an interaction-oriented perspective. By integrating interaction principles from second language acquisition into technology acceptance research, the study extends existing models and underscores the independent role of interaction processes in shaping learners’ adoption judgments. The findings offer a framework for understanding how AI technologies are interpreted and evaluated in authentic learning contexts and provide the theoretical basis for developing adoption models centered on interaction quality.

At the practical level, this study suggests that designers should enhance system comprehensibility, coherence, and interaction support so that learners can make clearer value judgments through authentic engagement with AI chatbots. In classroom settings, structured interaction tasks and guided prompts may further help students make effective use of chatbot feedback. Nevertheless, several limitations should be acknowledged. Because the sample consisted solely of Chinese university students and relied on self-reported data, the findings may reflect culturally specific perceptions. In addition, the NM items used in this study primarily represent communication strategies rather than fully capturing the essence of meaning negotiation in AI-mediated interaction. Future research should include more diverse populations and adopt longitudinal or multi-context designs, ideally supplemented with objective indicators such as behavioral logs. Learners could also be classified based on their AI literacy or prior chatbot experience to examine whether acceptance pathways vary across competence groups. Moreover, exploring how task types, cultural contexts, and individual characteristics shape interaction experiences would provide deeper theoretical insights into interaction-oriented technology adoption.

Acknowledgments

This work was supported by the 2025 National University Development Project fund of Pukyong National University (PhiNX Next-Generation Fostering of Protected Disciplines).

References

  • Carvalho J(2017). The Design of an Educationally Beneficial Immediate Feedback System. Doctoral dissertation, University of Guelph.
  • Dahri NA, Yahaya N, Al-Rahmi WM, Aldraiweesh A, Alturki U, Almutairy S, Shutaleva A and Soomro RB(2024). Extended TAM Based Acceptance of AI-Powered ChatGPT for Supporting Metacognitive Self-Regulated Learning in Education: A Mixed-Methods Study. Heliyon, 10(8), e29317. [https://doi.org/10.1016/j.heliyon.2024.e29317]
  • Davis FD(1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319~340. [https://doi.org/10.2307/249008]
  • Fornell C and Larcker DF(1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18(1), 39~50. [https://doi.org/10.2307/3151312]
  • Godwin-Jones R(2022). Partnering with AI: Intelligent Writing Assistance and Instructed Language Learning. Language Learning & Technology, 26(2), 5~24. [https://doi.org/10.64152/10125/73474]
  • Hair JF, Black WC, Babin BJ and Anderson RE(2009). Multivariate Data Analysis (7th ed.). Pearson.
  • Hayes AF(2017). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach (2nd ed.). Guilford Press.
  • Hu L and Bentler PM(1999). Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1~55. [https://doi.org/10.1080/10705519909540118]
  • Huang W, Hew KF and Fryer LK(2022). Chatbots for Language Learning-Are They Really Useful? A Systematic Review of Chatbot-Supported Language Learning. Journal of Computer Assisted Learning, 38(1), 237~257. [https://doi.org/10.1111/jcal.12610]
  • Huang X, Zou D, Cheng G, Chen X and Xie H(2023). Trends, Research Issues and Applications of Artificial Intelligence in Language Education. Educational Technology & Society, 26(1), 112~131.
  • Isbell D and Lee J(2022). Self-Assessment of Comprehensibility and Accentedness in Second Language Korean. Language Learning, 72(3), 806~852. [https://doi.org/10.1111/lang.12497]
  • King WR and He J(2006). A Meta-Analysis of the Technology Acceptance Model. Information & Management, 43(6), 740~755. [https://doi.org/10.1016/j.im.2006.05.003]
  • Kohnke L, Moorhouse BL and Zou D(2023). ChatGPT for Language Teaching and Learning. RELC Journal, 54(2), 537~550. [https://doi.org/10.1177/00336882231162868]
  • Krashen SD(1982). Principles and Practice in Second Language Acquisition. Pergamon Press.
  • Krashen SD(1985). The Input Hypothesis: Issues and Implications. Longman.
  • Lee M, Liang P and Yang Q(2022). CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1~19. [https://doi.org/10.1145/3491102.3502030]
  • Lin J, Tomlin N, Andreas J and Eisner J(2024). Decision-Oriented Dialogue for Human-AI Collaboration. Transactions of the Association for Computational Linguistics, 12, 892~911. [https://doi.org/10.1162/tacl_a_00679]
  • Long MH(1996). The Role of the Linguistic Environment in Second Language Acquisition. In W. C. Ritchie and T. K. Bhatia (Eds.), Handbook of second language acquisition (pp. 413~468). Academic Press. [https://doi.org/10.1016/B978-012589042-7/50015-3]
  • López Páez K(2020). The Impact of Oral Pushed Output on Intermediate Students' L2 Oral Production. GIST Education and Learning Research Journal, 20, 85~108. [https://doi.org/10.26817/16925777.773]
  • Nakatani Y(2006). Developing an Oral Communication Strategy Inventory. The Modern Language Journal, 90(2), 151~168. [https://doi.org/10.1111/j.1540-4781.2006.00390.x]
  • Park S(2024). Grounded Theoretical Analysis of the Use of Generative Artificial Intelligence. Journal of Fisheries and Marine Sciences Education, 36(5), 992~1003. [https://doi.org/10.13000/jfmse.2024.10.36.5.992]
  • Pica T(1994). Research on Negotiation: What Does It Reveal about Second-Language Learning Conditions, Processes, and Outcomes? Language Learning, 44(3), 493~527. [https://doi.org/10.1111/j.1467-1770.1994.tb01115.x]
  • Shahzad MF, Xu S and Asif M(2024). Factors Affecting Generative Artificial Intelligence, such as ChatGPT, use in Higher Education: An Application of Technology Acceptance Model. British Educational Research Journal, 51(2), 489~513. [https://doi.org/10.1002/berj.4084]
  • Swain M(1985). Communicative Competence: Some Roles of Comprehensible Input and Comprehensible Output in Its Development. In S. Gass & C. Madden (Eds.), Input in second language acquisition (pp.235~253). Newbury House.
  • Swain M(1995). Three Functions of Output in Second Language Learning. In G. Cook & B. Seidlhofer (Eds.), Principle and practice in applied linguistics (pp. 125~144). Oxford University Press.
  • Tala ML, Müller CN, Albastroiu I, State O and Gheorghe G(2024). Exploring University Students’ Perceptions of Generative Artificial Intelligence in Education, Amfiteatru Economic, 26(65), 71~88. [https://doi.org/10.24818/EA/2024/65/71]
  • Varonis EM and Gass SM(1985). Non-Native/Non-Native Conversations: A model for Negotiation of Meaning. Applied Linguistics, 6(1), 71~90. [https://doi.org/10.1093/applin/6.1.71]
  • Venkatesh V and Bala H(2008). Technology Acceptance Model 3 and A Research Agenda on Interventions. Decision Sciences, 39(2), 273~315. [https://doi.org/10.1111/j.1540-5915.2008.00192.x]
  • Venkatesh V and Davis FD(2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186~204. [https://doi.org/10.1287/mnsc.46.2.186.11926]
  • Venkatesh V, Morris MG, Davis GB and Davis FD(2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425~478. [https://doi.org/10.2307/30036540]
  • Wiboolyasarin W, Wiboolyasarin K and Jinowat N(2020). Learners’ Oral Corrective Feedback Perceptions and Preferences in Thai as A Foreign Language Tertiary Setting. Journal of Language and Linguistic Studies, 16(2), 912~929. [https://doi.org/10.17263/jlls.759344]
  • Zhai X(2022). ChatGPT User Experience: Implications for Education. SSRN Electronic Journal. [https://doi.org/10.2139/ssrn.4312418]
  • Zhao X, Lynch Jr JG and Chen Q(2010). Reconsidering Baron and Kenny: Myths and Truths about Mediation Analysis. Journal of Consumer Research, 37(2), 197~206. [https://doi.org/10.1086/651257]
  • Zou M and Huang L(2023). To Use or Not to Use? Understanding Doctoral Students’ Acceptance of ChatGPT in Writing Through Technology Acceptance model. Frontiers in Psychology, 14, 1259531. [https://doi.org/10.3389/fpsyg.2023.1259531]


<Table 1>

Reliability Test Results

Latent Variables Items Cronbach’s α
CI 3 0.790
NM 4 0.818
PO 4 0.842
FB 5 0.829
PEOU 3 0.798
PU 3 0.817
BI 3 0.788

<Table 2>

Basic Information of Subjects

Category Frequency Ratio
Gender Male 299 56.7%
Female 228 43.3%
Grade 1st year 102 19.4%
2nd year 116 22.0%
3rd year 136 25.8%
4th year 173 32.8%
Major STEM Fields 323 61.3%
Humanities & Social Sciences 204 38.7%

<Table 3>

Model Fit of Measurement Model

Index χ²/df SRMR TLI CFI RMSEA
Value 1.731 .035 .956 .963 .037
Criterion < 3 < .05 ≥ .90 ≥ .90 < .05

<Table 4>

Convergent Validity Results

Latent Variable Item Loading AVE CR
CI CI1 .828 .565 .794
CI2 .735
CI3 .684
NM NM1 .776 .530 .818
NM2 .759
NM3 .682
NM4 .691
PO PO1 .762 .571 .842
PO2 .749
PO3 .766
PO4 .745
FB FB1 .669 .496 .831
FB2 .676
FB3 .676
FB4 .784
FB5 .711
PEOU PEOU1 .791 .570 .799
PEOU2 .743
PEOU3 .730
PU PU1 .727 .603 .819
PU2 .832
PU3 .776
BI BI1 .742 .554 .788
BI2 .745
BI3 .746

<Table 5>

Discriminant Validity Analysis

Variable CI NM PO FB PEOU PU BI
Note: The diagonal shows the square root of AVE; off-diagonal values are inter-construct correlations.
CI .752            
NM .109 .728          
PO .170 .144 .756        
FB .166 .152 .220 .705      
PEOU .352 .283 .231 .259 .755    
PU .440 .267 .361 .360 .638 .777  
BI .113 .138 .115 .129 .425 .444 .744

<Table 6>

Model Fit Indices

Index χ²/df SRMR TLI CFI RMSEA
Value 1.938 .059 .944 .951 .042
Criterion < 3 < .08 ≥ .90 ≥ .90 < .05

<Table 7>

Results of Path Analysis

Path β S.E. C.R. p Result
Note:*p <.05, **p <.01,***p< .001
CI→PU .243*** .042 5.057 < .001 Supported
NM→PU .085 .038 1.897 .058 Not Supported
PO→PU .191*** .037 4.095 < .001 Supported
FB→PU .149** .047 3.022 .003 Supported
FB→PEOU .276*** .062 5.044 < .001 Supported
PEOU→PU .490*** .047 8.733 < .001 Supported
PU→BI .272*** .084 4.045 < .001 Supported
PEOU→BI .252*** .072 3.716 < .001 Supported

<Table 8>

Mediation Analysis Results

Path Indirect Effect Bootstrap SE 95% CI Lower 95% CI Upper p
CI→PU→ BI .073 .024 .033 .128 .001
NM→PU→ BI .025 .015 .003 .063 .025
PO→PU→ BI .051 .018 .021 .095 .001
FB→PU→ BI .048 .023 .013 .107 .005
FB→PEOU→PU→ BI .044 .015 .020 .084 .001