Measuring Service Quality of a Higher Educational Institute towards Student Satisfaction

Selim Ahmed 1, Muhammad Mehedi Masud 2

1 Department of Business Administration, International Islamic University Malaysia, Jalan Gombak, Kuala Lumpur, Malaysia

2 Faculty of Economics and Administration, University of Malaya, Kuala Lumpur, Malaysia

Abstract

Service quality is increasingly recognized as an important aspect of academic programmes, because it has become a major strategy for improving the competitiveness of educational institutions. This research conducted a quantitative survey of students’ perceptions of service quality performance at a higher educational institute in Malaysia, namely the Graduate School of Management (GSM) of the International Islamic University Malaysia (IIUM). The paper focuses on critical factors of the service quality of the academic programmes offered by the GSM. Using SPSS 18 and AMOS 16, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) identified seven dimensions of service quality: administrative service, tangibles, academic programmes, academic feedback, responsiveness of academic staff, assurance, and empathy. Structural equation modeling (SEM) was then used to test the relationships between these seven dimensions and overall student satisfaction with the GSM.

Cite this article: Ahmed, S., & Masud, M. M. (2014). Measuring Service Quality of a Higher Educational Institute towards Student Satisfaction. American Journal of Educational Research, 2(7), 447-455.

1. Introduction

Education, as an instrument of national development, has often been regarded as the responsibility of the government towards its citizens. Thus, in many countries, the provision of schools, colleges, and universities is mainly public-sector driven. In Malaysia, however, the liberalization of education over the past decades saw many private universities and colleges established with the aim of generating revenue from education tourism as well as stemming the flow of Malaysians going abroad for higher education. While the twin objectives of profit and educational excellence are clear in the case of private higher education institutions (HEIs), the same cannot be generalized to public higher education. Although public higher education in the country is well regulated and quality assurance is closely monitored by the Ministry of Higher Education, the case for profit remains almost alien to the highly subsidised education so familiar to the country’s public graduate schools.

The Graduate School of Management (formerly known as the Management Centre) is one of the most successful graduate schools in Malaysia. It was established in 1993 under the Kulliyyah of Economics and Management Sciences (KENMS). The main purpose of the Graduate School of Management (GSM) is to create Muslim business leaders with high Islamic moral values by providing quality education services. Its mission and vision are to be an excellent educational institution in business management for government organizations, non-profit organizations, and the corporate sector in South-East Asia and other countries. Positioning itself as an establishment that offers graduate management education aimed at the executive market, the GSM has strongly emphasised service quality in the delivery of its programmes. The goal of the GSM is to prepare business leaders imbued with Islamic ethical values by providing education that is both balanced and holistic in approach, while continuing to be a centre of excellence in management for the local and international corporate sector, government organisations, and non-profit organisations. The GSM offers three executive programmes, namely the Master of Management (MMgt), the Master of Business Administration (MBA), and the Doctorate in Business Administration (DBA). As the GSM operates in a highly competitive market, with a large number of public and private universities offering the same programmes in the country, it is imperative to identify the critical factors of service quality that its students perceive as important, so as to further improve its services, meet their expectations, and remain viable in a very competitive market. The main objective of this study is to identify and evaluate the service quality of the GSM academic programmes by using structural equation modeling (SEM). This study specifically aims to:

i. Identify students’ satisfaction level regarding the service quality of the GSM academic programmes

ii. Measure the service quality of the academic programmes offered by the GSM

iii. Identify critical factors of service quality at the GSM

2. Literature Review

Service quality refers to the customer’s judgement about a service’s overall performance [1], or the customer’s overall impression of the relative inferiority or superiority of the organisation and its services [2]. Martinez and Martinez [3] identify two types of service quality: technical service quality, which relates to the outcome of the performance, or what the customer receives in the service encounter; and functional service quality, which refers to the subjective perception of how the service is delivered. According to Dedeke [4], service quality represents the capability to meet and exceed the results that the provider and the customer mutually defined and embraced at the beginning of a service encounter, which entails conceptualising the term as a cognitive, post-purchase phenomenon. The primary characteristic of service quality is its commitment to measuring how well the delivered service level matches customers’ expectations, which, in turn, is interrelated with customer satisfaction [5].

In education, Abdullah [6, 7] maintains that the main attributes of quality service for which academics are responsible include a positive attitude, good communication skills, the provision of sufficient consultation, and the ability to provide regular feedback to students. In a competitive education market, an organisation must think of ways to continuously improve the services it delivers to students [8] in order to meet customers’ demands on an ongoing basis. This is even more true in a situation where, according to Emiliani [8], rising official recognition of academic programmes in a competitive market is causing part-time students to demand greater value in graduate business education.

University students obtain benefits from the institution through a range of services made available for the purpose of developing their career skills, personal growth, and personal potential [9]. Cooperation between the academic institution and the students is beneficial to both parties. This cooperation should extend to collaboration between the teaching staff and students through lectures, counselling, support, and guidance, which requires positive characteristics among the former, such as being knowledgeable, enthusiastic, approachable, and friendly with their students, having sufficient communication and teaching skills, and being able to adopt the most suitable teaching methods [10].

Five dimensions of service quality applicable to higher education are proposed in [11, 12]: first, tangibility, which refers to physical facilities, equipment, and the appearance of personnel; second, reliability, that is, the ability to perform the promised service dependably and accurately; third, responsiveness, the willingness to help and provide prompt service; fourth, assurance, which refers to the knowledge and courtesy of employees and their ability to convey trust and confidence; and fifth, empathy, the caring, individualised attention that the firm provides its customers.

3. Methodology

3.1. Research Framework

In this study, the authors developed a research framework (see Figure 1 below) and hypotheses based on the Exploratory Factor Analysis (EFA) results shown in Table A-1 (Appendix A).

Figure 1. Research model of academic programmes service quality, the GSM, IIUM

3.1.1. Hypotheses

Based on the research model (see Figure 1), the following twelve alternative hypotheses relating the independent (exogenous) and dependent (endogenous) variables were developed (a model-syntax sketch of these paths is given after the list):

H1: Tangibles directly influence academic feedback

H2: Administrative service has a direct impact on academic feedback

H3: Empathy directly and positively influences academic feedback

H4: Academic feedback has a direct impact on academic programmes

H5: Responsiveness of academic staff directly influences academic programmes

H6: Tangibles directly and positively influence academic programmes

H7: Administrative service has a direct and positive relationship with academic programmes

H8: Academic programmes directly influence student satisfaction

H9: Responsiveness of academic staff directly influences overall student satisfaction

H10: Tangibles directly influence overall student satisfaction

H11: Empathy has a positive impact on overall student satisfaction

H12: Assurance directly influences overall student satisfaction
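The twelve hypothesised paths can be summarised compactly in lavaan/semopy-style model syntax. The sketch below is illustrative only: the construct names are shorthand for the factors listed above and are not taken from the authors’ AMOS model file.

```python
# Hypothesised structural paths (H1-H12) written in lavaan/semopy-style syntax.
# Illustrative sketch only; the variable names are shorthand, not the authors' AMOS labels.
HYPOTHESISED_PATHS = """
# H1-H3: antecedents of academic feedback
AcademicFeedback ~ Tangibles + AdministrativeService + Empathy
# H4-H7: antecedents of academic programmes
AcademicProgrammes ~ AcademicFeedback + Responsiveness + Tangibles + AdministrativeService
# H8-H12: antecedents of overall student satisfaction
Satisfaction ~ AcademicProgrammes + Responsiveness + Tangibles + Empathy + Assurance
"""
```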


3.1.2. Research Instrument

This study was based on the HEdPERF research instrument developed by Abdullah [6, 7], which exhibited high reliability and validity, with Cronbach’s alpha values for the service quality dimensions ranging between 0.81 and 0.92. The instrument comprised two sections. Section A consisted of 42 items on students’ satisfaction with the service quality of the GSM’s academic programmes, measured on a 5-point Likert scale. Section B collected students’ demographic information such as gender, age group, programme, nationality, status, educational background, and sponsorship. A number of statistical techniques were employed using the SPSS software package version 17.0 and AMOS 16. Specifically, the variables were analysed using factor analysis and structural equation modeling (SEM).


3.1.3. Sampling and Data Collection

The data were collected by means of a self-administered questionnaire distributed to GSM students (N = 300), which yielded 257 responses. To optimise data precision, incomplete responses were eliminated, leaving 221 (74 %) complete questionnaires for analysis.

3.2. Data Analysis
3.2.1. Descriptive Statistics

The results show a greater male (52.5 %) than female (47.5 %) representation among the 221 usable questionnaires received. Most respondents (86, or 38.9 %) were aged 25 years or below, followed by those aged 26-30 years (75, or 33.9 %) and 31-35 years (38, or 17.2 %). The oldest age groups, 36-40 years and 40 years and above, had the fewest participants, with 13 (5.9 %) and 9 (4.1 %), respectively. More than half (52.9 %) of the respondents were Malaysian, while the remaining 47.1 % were non-Malaysian students. Of the total, 60.2 % of the respondents were enrolled in the MBA programme, while the rest (39.8 %) belonged to the MMgt programme. Nearly two-thirds (67.9 %) of the respondents were full-time students and the remaining 32.1 % were part-time students.

In terms of educational background, 33.9 % of the respondents had a background in business administration, 12.7 % in engineering, 7.7 % in science and mathematics, 6.8 % in economics, 12.2 % in social sciences, 2.3 % in revealed knowledge, 3.6 % in law, and 20.8 % in other fields. As regards employment, the biggest category was the private sector with 82 (37.1 %) respondents, followed by others with 69 (31.2 %). The remaining 31.7 % worked in the public sector (17.6 %), were self-employed (10.9 %), or worked for an NGO (3.2 %). There were more male than female students, and more MBA than MMgt students, studying at the GSM. Most of these students were 25 years old or younger and studying full-time. Table 1 shows the profile of the respondents.

Table 1. Demographic Profile of the Respondents


3.2.2. Exploratory Factor Analysis

Exploratory factor analysis (EFA) was run using varimax rotation. The Kaiser-Meyer-Olkin (KMO) statistic (0.922) indicated a superb measure of sampling adequacy [13]. This was supported by Bartlett’s test of sphericity, χ2 (561) = 4078.156, p < 0.001, which indicated that correlations between items (variables) were sufficiently large and, therefore, that factor analysis was appropriate [13] (see Table A-2, Appendix A). The assessment of eigenvalues revealed seven factors with eigenvalues greater than one, which in combination explained 64.28 % of the total variance (see Table A-3, Appendix A). The items’ initial EFA loadings on their factors are shown in Table A-1 (Appendix A). Although the minimum acceptable value is 0.30, only factor loadings greater than 0.40 represent substantive values and are regarded as important, while those greater than 0.50 are considered significant [14]. The retained items showed significant values of greater than 0.5, with the exception of two variables in the assurance factor. Nevertheless, with values of 0.469 and 0.468, they are still considered important [13]. This was further supported by the communalities, all of which had values greater than 0.50, suggesting that all 34 variables met acceptable levels of explanation.
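As an illustration of this step, the sketch below shows how a comparable EFA (KMO statistic, Bartlett’s test, varimax rotation, eigenvalues, and communalities) could be reproduced with the Python factor_analyzer package on a response matrix of the Likert items. The authors used SPSS, so this is an assumed equivalent workflow, and the file name and column layout are hypothetical.

```python
# Illustrative EFA workflow (assumed Python equivalent of the SPSS analysis).
# Requires: pip install pandas factor_analyzer
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical file holding the 221 usable responses to the 42 Likert items (1-5).
items = pd.read_csv("gsm_service_quality_items.csv")

# Sampling adequacy and sphericity checks reported in the paper.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi-square = {chi_square:.3f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.3f}")          # the paper reports 0.922

# EFA with varimax rotation; retain factors with eigenvalues greater than one.
efa = FactorAnalyzer(n_factors=7, rotation="varimax")
efa.fit(items)
eigenvalues, _ = efa.get_eigenvalues()
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
communalities = pd.Series(efa.get_communalities(), index=items.columns)

print("Factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
print(loadings.round(3))        # inspect loadings against the 0.40 / 0.50 cut-offs
print(communalities.round(3))   # the paper reports all retained items above 0.50
```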


3.2.3. Tests for Measurement Model

Prior to testing the structural equation model, CFA was performed on the entire set of measurement items simultaneously. Evaluating the measurement model resulted in the deletion of eight items whose factor loadings were less than 0.40 [13, 15]. The resulting measurement model had an adequate model-to-data fit: normed chi-square (1437.872/506) = 2.842, less than 3; p = 0.000; CFI = 0.836, below but close to 0.90; and RMSEA = 0.078, less than 0.08.
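For readers replicating these cut-offs, the small helper below restates the thresholds applied here (normed χ2 < 3, CFI at or above 0.90, RMSEA < 0.08) and evaluates them against the values reported for the measurement model; the thresholds follow [15, 17, 18].

```python
# Quick check of the fit thresholds used in this paper against the reported values.
def check_fit(chi_square: float, df: int, cfi: float, rmsea: float) -> dict:
    normed = chi_square / df
    return {
        "normed_chi_square": round(normed, 3),
        "normed_chi_square_ok": normed < 3,   # below 3 is acceptable
        "cfi_ok": cfi >= 0.90,                # 0.90 or above is acceptable
        "rmsea_ok": rmsea < 0.08,             # below 0.08 is acceptable
    }

# Measurement model reported above: chi-square = 1437.872, df = 506,
# CFI = 0.836 (below, but close to, 0.90), RMSEA = 0.078.
print(check_fit(1437.872, 506, 0.836, 0.078))
# {'normed_chi_square': 2.842, 'normed_chi_square_ok': True, 'cfi_ok': False, 'rmsea_ok': True}
```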

Based on the measurement model, reliability and construct validity were evaluated. Cronbach’s alpha is the reliability coefficient that indicates the consistency of the entire scale [15], or the overall reliability of the questionnaire [13]. The results of this study showed an overall α of 0.957, which indicated that the questionnaire was reliable and consistent [13, 15]. All seven dimensions of the academic programmes’ service quality had Cronbach’s α > 0.7, and their inter-item correlations exceeded the minimum value of 0.30, suggesting high correlations among items [15] (see Table 2 below).
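Cronbach’s α can be computed directly from the item scores. The sketch below is a standard formula-based implementation (not the authors’ SPSS output) that could be applied to each dimension and to the full set of retained items; the column names in the usage lines are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: one dimension, then the whole 34-item retained scale.
# print(cronbach_alpha(items[["T1", "T2", "T3", "T4"]]))   # expect > 0.7 per dimension
# print(cronbach_alpha(items[retained_columns]))           # the paper reports 0.957 overall
```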

Table 2. Construct Validity of Academic Programmes of Service Quality

Construct validity evaluates “the extent to which a set of measured items actually reflects the theoretical latent construct those items are designed to measure” [15]. This study considered all four components of construct validity, namely convergent validity, discriminant validity, nomological validity, and face validity. For convergent validity, this study assessed factor loadings, average variance extracted (AVE), and construct reliability (CR). A standardised factor loading of 0.50 or higher, and ideally 0.70 or higher, provides strong evidence of convergent validity [13, 14, 15]. In the measurement model, all the items had significant factor loadings, most of them greater than 0.60, suggesting adequate convergent validity. From the CFA, the AVE, which is the mean variance extracted for the items loading on a construct, was calculated. The measurement model showed AVE values greater than 0.5 for each construct, suggesting adequate convergence [15]. The construct reliability values were ≥ 0.7, indicating adequate convergence and good reliability [14, 15].
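AVE and construct reliability can be computed from the standardised CFA loadings. The sketch below implements the usual formulas [15]; the loading values shown are hypothetical placeholders, not estimates from this study’s AMOS output.

```python
import numpy as np

def ave(std_loadings) -> float:
    """Average variance extracted: the mean of the squared standardised loadings."""
    lam = np.asarray(std_loadings, dtype=float)
    return float(np.mean(lam ** 2))

def construct_reliability(std_loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(std_loadings, dtype=float)
    sum_lam = lam.sum()
    error_var = (1 - lam ** 2).sum()
    return float(sum_lam ** 2 / (sum_lam ** 2 + error_var))

# Hypothetical standardised loadings for one construct:
loadings = [0.74, 0.70, 0.78, 0.69]
print(round(ave(loadings), 3))                    # convergent validity if > 0.5
print(round(construct_reliability(loadings), 3))  # adequate if >= 0.7
```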


3.2.4. Discriminant Validity

Fornell and Larcker [16] suggest assessing the discriminant validity of two or more factors by comparing the AVE of each construct with the shared variance between constructs. In this study, the shared variance between each pair of constructs was less than the constructs’ AVE values, which indicates that discriminant validity is supported.
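The Fornell-Larcker comparison can be expressed as follows: a construct pair passes when each construct’s AVE exceeds the shared variance (squared correlation) between them. The AVE and correlation values below are hypothetical placeholders, not this study’s estimates.

```python
import pandas as pd

def fornell_larcker(ave_values: pd.Series, construct_corr: pd.DataFrame) -> pd.DataFrame:
    """True where both constructs' AVEs exceed their shared variance (squared correlation)."""
    shared = construct_corr ** 2
    ok = pd.DataFrame(True, index=shared.index, columns=shared.columns)
    for a in shared.index:
        for b in shared.columns:
            if a != b:
                ok.loc[a, b] = bool(ave_values[a] > shared.loc[a, b]
                                    and ave_values[b] > shared.loc[a, b])
    return ok

# Hypothetical AVEs and inter-construct correlations for three constructs.
ave_values = pd.Series({"Tangibles": 0.55, "Empathy": 0.58, "Assurance": 0.52})
corr = pd.DataFrame([[1.00, 0.46, 0.51],
                     [0.46, 1.00, 0.48],
                     [0.51, 0.48, 1.00]],
                    index=ave_values.index, columns=ave_values.index)
print(fornell_larcker(ave_values, corr))   # all True -> discriminant validity supported
```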


3.2.5. Nomological Validity

Nomological validity is tested to evaluate whether “the correlations among the constructs in a measurement theory make sense” [15]. Based on the covariance table (see Table B-1, Appendix B), the covariances among all the constructs were positive and significant, indicating that the constructs were logically correlated.


3.2.6. Face Validity

Face validity needs to “be established prior to theoretical testing when using CFA” [15]. The items that cluster on the same components suggest that component 1 represents Administrative Service, component 2 Tangibles, component 3 Academic Programmes, component 4 Academic Feedback, component 5 Responsiveness of Academic Staff, component 6 Assurance, and component 7 Empathy.

Administrative service refers to the services provided by the administrative staff at the GSM, such as communicating with students, having good knowledge of the academic systems, keeping accurate student records, and maintaining confidentiality.

Tangibles represent the physical facilities, equipment, and appearance of the GSM personnel.

Academic programmes relate to services at the GSM based on standard courses, flexible timetables, and the provision of academic guidance by expert academic staff.

Academic feedback corresponds to the academic staff’s positive attitude, good communication skills, allowing sufficient consultation, and ability to provide regular feedback to students.

Responsiveness of academic staff depicts the academic staff’s willingness or readiness to provide service.

Assurance describes professionalism, such as staff accomplishment of assigned tasks, teaching capacity, professional experience, treatment by teachers, accessibility, and friendliness of administrative staff.

Empathy refers to the provision of individualised and personalised attention to students, with a clear understanding of their specific and growing needs while keeping their best interests at heart.


3.2.7. Test for Structural Model and Hypotheses

A structural model was tested to examine the relationships among academic programmes service quality, academic feedback, and overall student satisfaction (see Figure 2 below). The model had an adequate fit to the data: chi-square per degree of freedom (16.64/6) = 2.773, less than 3; CFI = 0.992, greater than 0.90; p = 0.011, greater than 0.005; and RMSEA = 0.077, less than 0.08 [15, 17, 18]. As shown in Figure 2, the R-square values for the three dependent (endogenous) variables were satisfaction = 0.55, academic programmes = 0.53, and academic feedback = 0.38, which indicated that a large percentage of the variance in the dependent factors was explained by the independent (exogenous) factors. All hypotheses were supported in the SEM at the significance level p < 0.005 (see Table 3 below).
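The structural model was estimated in AMOS. As an assumed open-source equivalent, the sketch below fits the same path structure with the Python semopy package, using the path specification sketched in Section 3.1.1; the data file and variable names are hypothetical, and composite factor scores would first need to be computed from the retained items.

```python
# Illustrative re-estimation of the structural model with semopy (the authors used AMOS).
# Requires: pip install pandas semopy
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical file of composite scores for the seven dimensions and satisfaction.
scores = pd.read_csv("gsm_factor_scores.csv")

model_desc = """
AcademicFeedback ~ Tangibles + AdministrativeService + Empathy
AcademicProgrammes ~ AcademicFeedback + Responsiveness + Tangibles + AdministrativeService
Satisfaction ~ AcademicProgrammes + Responsiveness + Tangibles + Empathy + Assurance
"""

model = Model(model_desc)
model.fit(scores)

print(model.inspect())     # path estimates, standard errors, p-values
print(calc_stats(model))   # fit indices such as chi-square, CFI, and RMSEA
```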

Figure 2. Structural equation model of service quality of academic programmes (Note: AcaFedbk = Academic Feedback, ResAcaSff = Responsiveness of Academic Staff, AdmServ = Administrative Service, AcadProg = Academic Programmes)

The SEM shows that five factors, namely assurance (β = 0.257), empathy (β = 0.156), tangibles (β = 0.262), responsiveness of academic staff (β = 0.197), and academic programmes (β = 0.127), have a positive and significant influence on satisfaction, whereas administrative service and academic feedback do not have a direct influence on satisfaction. Responsiveness, tangibles, administrative service, and academic feedback positively and directly affect academic programmes (β = 0.244, β = 0.182, β = 0.295, and β = 0.224, respectively). Three factors positively affect academic feedback: tangibles (β = 0.421), administrative service (β = 0.182), and empathy (β = 0.203) (see Figure 2 and Table 3). In preliminary tests, academic feedback and administrative service showed negative direct effects on satisfaction; therefore, direct paths from these two factors to satisfaction were not included in the structural model.

Based on the results shown in Table 4 below, there is little difference between the measurement model and the SEM, which indicates good overall model fit. In addition, the goodness-of-fit indices reveal that the SEM has a very good fit.

Table 4. Comparison Between Measurement Model and SEM

4. Conclusion and Implications

The graduate school has been interested in predicting more accurately the satisfaction level of its students in order to establish a better foundation for its marketing efforts. Factor analysis allowed seven dimensions of service quality to be identified from the 42 items used in the original study by Abdullah [6, 7]. The seven dimensions of service quality of the graduate school’s academic programmes were administrative service, tangibles, academic programmes, academic feedback, responsiveness of academic staff, assurance, and empathy. Using SEM, five service quality factors, namely responsiveness of academic staff, tangibles, empathy, assurance, and academic programmes, were found to have a direct and significant effect on satisfaction. Two factors, academic feedback and administrative service, do not have a direct influence on satisfaction; however, these two factors positively and indirectly influence satisfaction. Empathy was found to have a stronger indirect influence on satisfaction through academic feedback and academic programmes (β = (0.203 + 0.224 + 0.127)/3 = 0.184) than direct influence (β = 0.156). On the other hand, the tangibles factor affects satisfaction more strongly directly (β = 0.262) than indirectly through academic feedback and academic programmes (β = (0.421 + 0.224 + 0.127)/3 = 0.256) or through academic programmes alone (β = (0.182 + 0.127)/2 = 0.154). Similarly, responsiveness of academic staff has a stronger direct effect on satisfaction (β = 0.197) than indirect effect through academic programmes (β = (0.244 + 0.127)/2 = 0.185). Therefore, if the graduate school wants to achieve greater student satisfaction, it has to focus on showing interest in solving students’ individual problems, and academic staff should show more care for the students. Additionally, the graduate school’s management has to provide up-to-date equipment and physical facilities to improve academic feedback and academic programmes. Finally, the graduate school should focus on the responsiveness of academic staff to improve the academic programmes by ensuring that academic staff show a more positive attitude towards, and communicate more with, students.
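For transparency, the indirect-effect figures quoted above follow the authors’ convention of averaging the path coefficients along a route; the short sketch below reproduces that arithmetic. Small differences from the reported values may arise from rounding of the path coefficients.

```python
# Reproduces the averaged indirect effects quoted in the conclusion
# (the authors average the path coefficients along each route).
def averaged_effect(*betas: float) -> float:
    return sum(betas) / len(betas)

print(averaged_effect(0.203, 0.224, 0.127))  # empathy via feedback and programmes; reported 0.184
print(averaged_effect(0.421, 0.224, 0.127))  # tangibles via feedback and programmes; reported 0.256
print(averaged_effect(0.182, 0.127))         # tangibles via programmes; reported 0.154
print(averaged_effect(0.244, 0.127))         # responsiveness via programmes; reported 0.185
```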

4.1. Limitation and Future Research

The present study has some limitations with respect to generalisation to other settings. Nevertheless, it helps researchers understand how to measure the service quality of a higher educational institute, which makes this research a unique contribution to the services marketing literature. Given that the present study is limited to one higher educational institute, this claim would need to be validated by further research. Future studies may apply the measurement instrument in other graduate schools, in other countries, and in different types of higher educational institutions in order to test whether the results obtained are general and consistent across different samples.

References

[1] Zeithaml, V.A., “Consumer perceptions of price, quality, and value: a means-end model and synthesis of evidence”, Journal of Marketing, 52(3), 2-22. 1988.
[2] Bitner, M.J. and Hubbert, A.R., “Encounter satisfaction versus overall satisfaction versus quality”, London: Sage Publications, Inc. 1994.
[3] Martinez, J.A. and Martinez, L., “Some insights on conceptualizing and measuring service quality”, Journal of Retailing and Consumer Services, 17(1), 29-42. 2010.
[4] Dedeke, A., “Service quality: A fulfillment-oriented and interactions-centred approach”, Managing Service Quality, 13(4), 276-289. 2003.
[5] Kang, G.D., James, J. and Alexandris, K., “Measurement of internal service quality: Application of the SERVQUAL battery to internal service quality”, Managing Service Quality, 12(5), 278-291. 2002.
[6] Abdullah, F., “HEdPERF versus SERVPERF: The quest for ideal measuring instrument of service quality in higher education sector”, Quality Assurance in Education, 13(4), 305-328. 2005.
[7] Abdullah, F., “Measuring service quality in higher education: three instruments compared”, International Journal of Research & Method in Education, 29(1), 71-89. 2006.
[8] Emiliani, M.L., “Using kaizen to improve graduate business school degree programs”, Quality Assurance in Education, 13(1), 37-52. 2005.
[9] Gutman, J. and Miaoulis, G., “Communicating a quality position in service delivery: An application in higher education”, Managing Service Quality, 13(2), 105-111. 2003.
[10] Voss, R., Gruber, T. and Szmigin, I., “Service quality in higher education: The role of student expectations”, Journal of Business Research, 60(9), 949-959. 2007.
[11] Parasuraman, A., Berry, L.L. and Zeithaml, V.A., “Refinement and reassessment of the SERVQUAL scale”, Journal of Retailing, 67(4), 420-450. 1991.
[12] Gallifa, J. and Batalle, P., “Student perceptions of service quality in a multi-campus higher education system in Spain”, Quality Assurance in Education, 18(2), 156-170. 2010.
[13] Field, A., “Discovering statistics using SPSS”, 3rd Edition, London: SAGE Publications Ltd. 2009.
[14] Sharma, S., “Applied Multivariate Techniques”, New York: John Wiley & Sons. 1996.
[15] Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E., “Multivariate data analysis: A global perspective”, New Jersey: Pearson Prentice Hall. 2010.
[16] Farrell, A.M., “Insufficient discriminant validity: A comment on Bove, Pervan, Beatty, and Shiu (2009)”, Journal of Business Research, 63(3), 324-327. 2010.
[17] Byrne, B.M., “Structural equation modeling with AMOS: Basic concepts, applications, and programming”, 2nd Edition, New York: Routledge. 2010.
[18] Kline, R.B., “Principles and practice of structural equation modeling”, 3rd Edition, New York: Guilford Press. 2011.

Appendix A

Table A-1. Measurement Scales and Initial Factor Loadings of service quality of the GSM’s Academic Programmes (EFA)

Table A-1. Measurement Scales and Initial Factor Loadings of service quality of the GSM’s Academic Programmes (EFA) (Continued)

Table A-2. KMO and Bartlett's Test on Service Quality of the GSM’s Academic Programmes

Appendix B

Table B-1. Results of Covariances Correlation
