RGA: AN ANNUAL FACULTY PERFORMANCE EVALUATION SYSTEM FOR A SELECTED HIGHER EDUCATION INSTITUTION
PUBLISHED IN AD SAPIENTIAM VOLUME X 2016, COPYRIGHT: ISSN 2012-290X
ARNEL C. MAGHINAY, RODRIGO N. GANGOSO AND GIOVANI N. TEN
ABSTRACT
The ‘RGA: An Annual Performance Evaluation (APE) System for a Selected Higher Education Institution’ aimed to develop software that would automate and standardize the process of employee evaluation and provide feedback and reporting mechanisms. Specifically, this research aimed to achieve the following objectives: (1) design an automated system based on the criteria established in the faculty manual for the Annual Performance Evaluation (APE); (2) determine the level of importance of the system features in terms of Effectiveness, Efficiency, Quality, Timeliness and Productivity; (3) determine the level of satisfaction when using the (a) manual and (b) automated system in the Faculty Annual Performance Evaluation; (4) determine whether a significant difference exists in the level of satisfaction between the manual and automated systems, both overall and in terms of Effectiveness, Efficiency, Quality, Timeliness and Productivity; and (5) formulate a cost-benefit analysis.
The researchers followed the standard Software Development Life Cycle (SDLC) to develop the software. To achieve objectives 2, 3 and 4, a survey questionnaire was distributed to seventy-four (74) research participants to evaluate the level of importance of the evaluation criteria and the performance of the manual and automated systems.
Results showed that the users were moderately satisfied with the manual method of processing the APE, whereas they were highly satisfied with the automated method. There was a significant difference between their levels of satisfaction with the automated and manual systems. Moreover, the cost-benefit analysis projected that the system pays for itself, indicating that the automated APE is a worthwhile investment to undertake. The researchers further recommend a parallel test to further assess the correctness of the output, as well as a stress test of the system's integrity when multiple users access it simultaneously.
