Enhancement of teaching and learning quality through assessment for learning: a case in chemical engineering

Tiprawee Tongtummachat (Department of Chemical Engineering, Faculty of Engineering, Mahidol University, Bangkok, Thailand)
Attasak Jaree (Department of Chemical Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand)
Nattee Akkarawatkhoosith (Department of Chemical Engineering, Faculty of Engineering, Mahidol University, Bangkok, Thailand)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 6 February 2024


Abstract

Purpose

This article presents our experience in implementing the assessment for learning (AfL) process to enhance teaching–learning quality, which faces numerous challenges that impact educational quality. The effectiveness of this technique is demonstrated through a case study conducted in a core course of chemical engineering.

Design/methodology/approach

The article shares insights into the systematic course design and planning processes that were discussed and developed through AfL practices. Significant emphasis is placed on implementing formative and summative student self-assessment surveys as simple yet effective methods to meet this purpose. Quantitative data were collected and analyzed over three consecutive academic years (2020–2022) using various statistical parameters such as percentage, interquartile range and the program’s numerical goal (%G).

Findings

The AfL process via formative and summative surveys could significantly and effectively improve teaching–learning quality. These findings assist educators in identifying appropriate teaching methods and recognizing areas of weakness and strength, thereby facilitating continuous improvement in the teaching–learning quality. Validation methods, including quizzes and numerical grades, were employed to practically verify the outcome obtained from the questionnaires.

Practical implications

The AfL techniques demonstrated in this study can be directly implemented or adapted for various educational fields to enhance the teaching–learning quality.

Originality/value

The practical implementation of AfL in an engineering context has hardly been reported, particularly in chemical engineering. This work represents a practical implementation of AfL to enhance education in the engineering field.

Citation

Tongtummachat, T., Jaree, A. and Akkarawatkhoosith, N. (2024), "Enhancement of teaching and learning quality through assessment for learning: a case in chemical engineering", Journal of Research in Innovative Teaching & Learning, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JRIT-09-2023-0137

Publisher: Emerald Publishing Limited

Copyright © 2024, Tiprawee Tongtummachat, Attasak Jaree and Nattee Akkarawatkhoosith

License

Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The engineering field in higher education faces numerous challenges that impact the quality of education (Gillett, 2001; Nithyanandam, 2020). The rapid growth of novel technologies has led to the creation and continual updating of a vast body of knowledge. To provide high-quality education, lecturers must stay up to date with the latest knowledge and expertise required in engineering and related fields. It is also essential to integrate current research and industry trends into the curriculum to broaden the horizons of learners. Additionally, within the chemical engineering domain, various branches such as biotechnology, biomedical engineering and nanotechnology have emerged as prevalent applications (Varma and Grossmann, 2014). Foundational knowledge of these areas should be introduced when relevant to the subject matter under discussion (Voronov et al., 2017). Equipping learners with lifelong learning skills is crucial for their success in professional engineering careers (Dawe et al., 2021). Technological advancements have also affected educational practices; examples of this transformation include textbooks (both physical and digital) and different learning formats (classroom-based and e-learning) (Bascuñana et al., 2023; Díaz-Sainz et al., 2021). As documented in the literature, learners' learning style preferences have also been influenced (Li et al., 2019). Considering all these factors, student assessment plays a vital role in ensuring teaching quality and performance.

Different generations of learners have influenced the teaching and learning process in higher education courses (Rodríguez et al., 2019). Traditional teaching strategies are no longer effective and efficient for the new generation of learners, specifically Generation Z (Gen Z). Gen Z, a digital-first generation, possesses distinct characteristics that set them apart from previous generations (Generations X and Y). Surrounded by technology and digital devices, Gen Z exhibits a preference for a different learning style. For instance, they have a shorter attention span (Alvarado et al., 2020) and prefer active-based learning (Vizcaya-Moreno and Pérez-Cañaveras, 2020). Consequently, employing appropriate assessment techniques becomes crucial in evaluating the success of the teaching and learning process.

The teaching–learning system faces yet another challenge in the face of unpredictable situations, such as the COVID-19 pandemic. This global crisis has brought about a drastic change in the educational landscape, causing significant disruptions and transformations in traditional education methods. In response, e-learning platforms have emerged as the dominant force, replacing conventional approaches (Fülöp et al., 2023). The sudden shift to e-learning during the pandemic necessitated the continuation of academic curricula without proper preparation and guidance. Numerous e-learning issues have been extensively discussed in the literature across various educational fields (Mavengere et al., 2021), underscoring the pivotal role of assessment in educational development.

As mentioned earlier, lecturers face challenges in achieving course objectives and course learning outcomes (CLOs) due to the new generation of learners (Gen Z), rapid technological change, learner diversity and unpredictable situations such as the COVID-19 pandemic. Achieving these goals relies heavily on the effectiveness of the practical teaching and learning process (Th et al., 2022). Therefore, simple and efficient assessment methods are needed to monitor and evaluate teaching and learning performance.

Among the various educational assessment methods available (Villanueva et al., 2017), formative assessment has been widely acknowledged as a guideline for enhancing teaching plans and improving the quality of the learning process (Bennett, 2011; Schildkamp et al., 2020). Of the two formative assessment approaches, assessment for learning (AfL) and data-based decision-making (DBDM), AfL is recognized as relatively more practical for overcoming these issues (Taras, 2002; Van der Kleij et al., 2015). However, the success of this process depends on the process and quality of student feedback (Gikandi et al., 2011; Hernández, 2012), and the accuracy and validity of the results also need to be addressed (Tejeiro et al., 2012). In addition, the practical implementation of AfL in an engineering context has hardly been reported, particularly in chemical engineering, and needs further study.

In this study, the continuous improvement of teaching and learning quality through AfL was demonstrated experimentally via a case study, an approach that has rarely been studied in engineering.

AfL was carried out based on students' self-assessments, which were designed and implemented for this purpose. Two types of student self-assessment were conducted in this work: (1) formative student self-assessment and (2) summative student self-assessment. The findings from this study can assist lecturers in selecting appropriate teaching methods, identifying areas of weakness and strength and continuously improving the teaching–learning process to meet the criteria set by the Accreditation Board for Engineering and Technology (2023) (ABET) (https://www.abet.org/).

This study provides an illustrative example and comprehensive guidelines on applying AfL to enhance learning and teaching quality, addressing the challenges posed by factors such as Gen Z students, digital technologies and both e-learning and face-to-face platforms. Furthermore, the study demonstrates the continuous improvement of the course in line with the ABET requirements. The validation methods employed in the research were thoroughly performed and discussed. Specifically, these assessments were applied to evaluate the teaching and learning quality in the context of EGCH316 Chemical Engineering Economics and Cost Estimation, a required course in chemical engineering. We believe that the techniques employed in this study can be directly implemented or adapted in various engineering fields.

2. Methodology

2.1 Course details

The research focused on the case study of EGCH316 Chemical Engineering Economics and Cost Estimation, one of the chemical engineering courses offered by the Department of Chemical Engineering at Mahidol University. This annual course was taught once a week for a duration of three hours; detailed information about the course can be found in Table 1. The quantitative data for the research were gathered over three consecutive academic years, from 2020 to 2022. During the academic year 2020, the course was initially conducted in the traditional onsite teaching mode during the midterm period. However, due to the second outbreak of COVID-19 in the country, the teaching mode was switched to online for the remaining duration of the course. In the subsequent academic year, 2021, online instruction was the only available choice for teaching the course. Finally, in the academic year 2022, the entire course was conducted in person under the "new normal" environment, following the necessary protocols and guidelines.

2.2 Course design

The course design process we adopted for this study was centered on outcomes-based education (OBE), as depicted in Figure 1. The course design workflow commenced with the establishment of well-defined course learning outcomes (CLOs). These CLOs were carefully formulated and adjusted to align with criteria such as program learning outcomes (PLOs), course objectives and student expectations. Throughout the teaching–learning process, we employed a continuous approach to achieve the CLOs. Formative assessment (AfL) was used consistently during the course to evaluate the effectiveness of teaching and learning. Note that the AfL process was the primary focus of this work. At the outset of the process, the learning objectives for each lesson were identified based on the CLOs, with Bloom's taxonomy employed to define the levels of these objectives.

The course syllabus and lesson plans were thoughtfully crafted to facilitate the attainment of the learning objectives. Table 2 illustrates the outline of the lesson plan components and provides examples of lesson plans for the first and second weeks. Our lesson plans were designed to be clear, relevant and conducive to practical implementation. After each lesson was taught according to plan, formative student self-assessment was employed to assess student comprehension and to make any necessary adjustments to the lesson plans based on student needs. Summative student self-assessment was also administered in the middle of the course, specifically at the midterm exam, to evaluate overall teaching and learning performance. The results obtained from these assessments were used to revise the lesson plans and course syllabus. Additionally, a similar summative student self-assessment was conducted at the end of the course, specifically during the final exam, to evaluate students’ achievement. The findings from these assessments were utilized to identify improvement areas and support continuous enhancement of the course, ensuring compliance with the ABET requirements.

2.3 Assessment techniques

In this study, we employed the AfL process to enhance the learning and teaching quality. A variety of AfL approaches were utilized for formative assessment, including classwork, homework and interactive discussions (as outlined in Table 2). Notably, a formative student self-assessment (presented in Table 3) was given particular emphasis in this research, following the findings of Taras (2010) and Agricola et al. (2020). The questionnaire consisted of three sections: instruction, students’ self-assessment and students’ response.

In the students’ self-assessment section, students were asked to evaluate their understanding level based on the learning objectives for each lesson. They were requested to rate their level of understanding on a scale of 1–5, where 1 indicated a lack of understanding and 5 denoted a high level of knowledge. The student response section allowed students to offer suggestions and feedback, encouraging their active participation and sharing their needs and expectations. The questionnaire was administered after each completed lesson to evaluate student comprehension, assess the achievement of learning objectives, evaluate instructor teaching performance, review the effectiveness of the lesson plan and assess instructional strategies. To ensure real-time data and accurate information, it was crucial to administer the questionnaire immediately after each class.

For summative assessment, a summative students' self-assessment questionnaire (considered an indirect assessment) was developed to facilitate continuous improvement of the ongoing teaching–learning process. The summative survey for this course, presented in Table 4, encompassed sections on instruction, students' performance and course satisfaction, and students' feedback. The student performance and course satisfaction section involved rating satisfaction levels on a scale of 1–5 (with 1 indicating poor performance or disagreement and 5 denoting excellent performance or strong agreement). The survey was administered both at the midpoint of the course (during the midterm exam) and at the end of the course (during the final exam) to assess learning and provide valuable feedback.

The feedback results were shared and discussed with all students in the subsequent class to ensure accurate interpretation and understanding. Based on the feedback received, we communicated with the students to address any identified shortcomings or areas of improvement, thereby enhancing the teaching–learning quality as needed. This two-way communication was essential to gather reliable and meaningful feedback data. These guidelines helped to increase student engagement with the questionnaire, promoting an accurate and comprehensive assessment of their experiences.

2.4 Ethics approval and consent to participate

Before conducting the surveys, the authors informed the students about the research objectives, the questionnaires, the confidentiality of the data and their right to participate (or not participate). This research did not collect primary personal data (such as name, age, student ID or email) or have any access to individual-level data, meaning that the collected data could not be used to identify the participants, directly or indirectly. The research was also performed in established or commonly accepted educational settings involving standard academic practices. Hence, it did not require ethical approval from the Institutional Review Board (IRB).

2.5 Data analysis

The ordinal data collected from the questionnaires were analyzed using descriptive statistics to derive key metrics: the median (Md) and mean (x̄) for central tendency, the percentage (%), the standard deviation (SD) and the interquartile range (IQR). These measures were chosen to mitigate the influence of outliers and skewed data, ensuring the accuracy of the analysis. Outliers can introduce invalid data, for example when students select answers at random, compromising the reliability of the collected information. The reliability of the questionnaires was measured by internal consistency using Cronbach's alpha coefficient, with a minimum alpha of 0.7 set as the benchmark for acceptable internal consistency. The results of these statistical indicators were considered crucial in evaluating the overall performance of the teaching–learning process. By employing these measures, the impact of outliers was minimized, allowing a more robust assessment of the data and providing valuable insights into the effectiveness of the teaching methods.
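To make the analysis concrete, the following Python sketch computes the same descriptive statistics and Cronbach's alpha on a small set of illustrative Likert responses. The data values, array shapes and helper name cronbach_alpha are our own assumptions for illustration, not the study's actual records.

```python
import numpy as np

# Illustrative Likert responses (rows = students, columns = questionnaire items).
# The study's raw survey data are not published; these values are made up.
ratings = np.array([
    [4, 5, 4, 3, 4, 4],
    [3, 4, 4, 4, 3, 4],
    [5, 5, 4, 4, 5, 5],
    [4, 4, 3, 3, 4, 4],
    [2, 3, 3, 2, 3, 3],
])

# Per-item descriptive statistics used in the paper: median, mean, SD and IQR
for i, col in enumerate(ratings.T, start=1):
    q1, q3 = np.percentile(col, [25, 75])
    print(f"Item {i}: Md={np.median(col):.1f}, mean={col.mean():.2f}, "
          f"SD={col.std(ddof=1):.2f}, IQR={q3 - q1:.1f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f} (benchmark: 0.7)")
```

Medians and IQRs are reported alongside the mean and SD because, as noted above, they are robust to the occasional randomly selected answer.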

The achievement of the learning objectives was assessed against a numerical goal (%G) (Equation (1)), which determined whether students successfully met the objectives. For this course, the numerical goal was set at 70%, representing the passing threshold. This value was determined by our program, in line with the criteria set by the ABET. Specifically, it meant that at least 70% of students should rate their understanding above the midpoint (3.5) on the Likert scale. If the percentage fell below this 70% threshold, it indicated that an insufficient proportion of students had met the specified learning objectives, and meeting the course development requirements, including the design of lesson plans and teaching strategies, became mandatory to facilitate the attainment of these objectives.

$$\%G = \frac{Q}{N} \times 100\% \qquad (1)$$

where Q is the total number of Likert-scale ratings higher than 3.5 and N is the total number of respondents.
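A minimal sketch of Equation (1) in Python, assuming ratings are recorded as numbers on the 1–5 Likert scale; the sample ratings and the helper name numerical_goal are hypothetical.

```python
from typing import Sequence

def numerical_goal(ratings: Sequence[float], midpoint: float = 3.5) -> float:
    """Equation (1): %G = (Q / N) x 100, with Q = ratings above the midpoint."""
    q = sum(1 for r in ratings if r > midpoint)  # Q: ratings higher than 3.5
    n = len(ratings)                             # N: total number of respondents
    return q / n * 100

# Hypothetical ratings for one learning objective in one week
week_ratings = [5, 4, 4, 3, 5, 4, 2, 4, 5, 4]
g = numerical_goal(week_ratings)
print(f"%G = {g:.1f}% -> {'objective met' if g >= 70 else 'revise the lesson plan'}")
```

For the sample above, 8 of 10 ratings exceed the midpoint, giving %G = 80%, which clears the 70% threshold set by the program.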

3. Results and discussion

3.1 Formative students’ self-assessment

One of the challenges encountered in this course was effectively engaging Gen Z students, who constituted the newest cohort in our class. To address this, we developed and implemented simple and practical assessment methods. The first was the formative students' self-assessment questionnaire, administered after each class. The students were provided with a Google Form link through the course's Google Classroom and were required to complete the questionnaire before the end of the learning day; late submissions were excluded from the assessment.

The formative questionnaire (Table 3) yielded two sets of data. First, it provided insights into individual student learning progress, allowing us to assess how many students were meeting the learning objectives. Second, it offered valuable feedback on how to improve the lesson plans for subsequent classes, enabling us to modify the plans or adjust teaching strategies to better suit the majority of the class. The students' self-assessment data provided the first set of information, while the feedback provided the second.

The results shown in Figure 2(a), specifically for the first week of the academic year 2020, demonstrated an exceptionally high numerical goal value across all students, surpassing the 70% passing threshold with an average %G of 89.7%. This indicated that the lesson plans and teaching methods implemented in the classroom (as outlined in Table 2) effectively met the needs of most students. The data analysis also indicated low variability, as evident from the interquartile range (IQR) values, which ranged between 1 and 2 (data not shown). This was consistent with the standard deviation of each question in the questionnaire, which fell within the range of 0.54–0.83 (Figure 2(b)).

Selected comments and suggestions from the student feedback were reported and documented (as shown in Table 5). This feedback was instrumental in helping the lecturer identify areas for improvement to achieve the course learning outcomes (CLOs). The practical insights gained from the feedback were subsequently incorporated into the teaching–learning process, ensuring continuous improvement and enhanced student engagement.

In the second week, despite the implementation of improved teaching methods compared to the first week, the average %G dropped to 78.1% (see Figure 3, 2nd week in year 2020). Student feedback indicated that there was an overwhelming amount of content to catch up with, making it challenging for some students to comprehend the subject matter comfortably. This demonstrated that cognitive overload resulting from a large volume of content can reduce the quality of learning. The feedback played a crucial role in helping the lecturer identify the strengths and weaknesses of the teaching methods and lesson plans, leading to improvements aimed at raising the %G. The power of the formative assessment questionnaire was evident in monitoring real-time student performance; without it, the lecturer might not have been aware of the issues until it was too late to address them, resulting in a continued decline in %G throughout the course.

In the third week, the lesson plan was revised based on the formative assessment results, leading to an increased average %G of 82.8% (see Figure 3, year 2020). This demonstrated how the formative assessment was integrated into the lesson plan, and these processes were continuously performed throughout the course. Significant drops in the average %G were observed in the fourth and thirteenth weeks for three consecutive academic years, as these weeks covered some of the most challenging topics for students. Teaching by example was recommended for this situation. In the final week of the course (16th week), a recap of all the course contents was conducted to ensure a comprehensive understanding, and no formative assessment was administered.

During the academic year 2021, due to the COVID-19 pandemic, the course was conducted entirely online. Lesson plans and teaching strategies similar to those of 2020 were initially implemented in the first week. Figure 2 (year 2021) illustrates the formative assessment results, which showed a decrease in %G for each topic compared to the onsite teaching in 2020. The absence of practical face-to-face teaching methods and an active learning environment led to a significant reduction in student learning performance. The level of student participation was relatively low compared to the face-to-face mode, consistent with findings reported in the literature during the pandemic (Ghasem and Ghannam, 2021). These findings highlight the importance of measurement and assessment in the dynamics of educational change. Despite the relatively lower average %G in 2021, the use of formative assessment each week helped prevent a significant decline in %G; the average %G remained between 67.1% and 80% each week. It is worth noting that a sudden drop in %G can occur unexpectedly when assessment is not implemented.

Interestingly, different results were observed when comparing the academic years 2020 and 2021 during the final period (Weeks 9–15) (see Figure 3). The average %G in 2020 was higher than that in 2021 throughout these weeks, despite both years utilizing online teaching methods. The lower %G in 2021 may be attributed to a lack of foundational knowledge of course topics taught in the midterm period (Weeks 1–8). This knowledge was crucial for the later part of the course. In 2020, onsite teaching was conducted for the first half of the semester, allowing students to acquire a solid understanding of the basic knowledge. This indicates that the impact of online education on teaching and learning performances was significantly mitigated when students had a strong foundation of basic knowledge to build upon for subsequent content related to earlier topics.

3.2 Summative students' self-assessment

The next assessment method introduced was the summative students' self-assessment, which played a significant role in the development of the teaching–learning process and provided an overall understanding of student learning. In 2020 and 2021, the summative questionnaire (see Table 4) was conducted only at the end of the course to facilitate its continuous improvement. However, the formative assessment results from those years made it evident that the knowledge students acquired during the first half of the semester was crucial for their further learning in the course. Therefore, in 2022, the summative questionnaire was administered both in the middle and at the end of the course. The questionnaire assessed three aspects: course satisfaction, students' knowledge progression and student expectations. The findings were used to identify the strengths and weaknesses of the course, enabling the identification of areas for improvement. Conducting the questionnaire in the middle of the course ensured that students had acquired sufficient knowledge to continue their learning. Upon implementing this technique (data not shown), the average %G was used to evaluate its effectiveness. The results revealed no significant drop in the average %G (see Figure 3, year 2022), indicating the effectiveness of the summative assessment method. The average %G ranged from 77.1% to 84.1%, fulfilling all course learning objectives.

An example of the summative questionnaire results collected at the end of the course (year 2022) is presented in Figure 4. In Figure 4(a), students were asked to reflect on their knowledge before and after completing the course. The results showed that initially 65.5% of the students reported not understanding anything (scale 1), while the remaining students indicated familiarity with some concepts/topics/ideas from previous relevant courses or professional experience. By the end of the course, 72.4% of the students rated themselves very good or excellent, with only 10.3% rating themselves as fair. The IQR values for these results were 1 and 2, respectively, indicating low variability in the data. Acceptable standard deviation values (<0.8) were also observed. These findings supported the success of the teaching–learning process in the course and the students' acquisition of sufficient knowledge.

In Figure 4(b), students were asked to rate their agreement level with statements on a scale of 1–5 points (see Table 4). The majority of students (>70%) agreed with all the statements proposed in the questionnaire, indicating a well-organized teaching–learning process. The first statement received no disagreement responses as the learning objectives were always discussed at the beginning of each class. A few respondents expressed slight disagreement in the other statements. Feedback provided by the students helped identify the important issues for improvement. The feedback included various suggestions and comments, such as the need for a variety of examples/exercises with different difficulty levels, sufficient time to comprehend topics before moving on and a desire to reduce the number of course assignments. These findings demonstrated that the summative questionnaire was considered a reliable and informative method for enhancing educational quality.

3.3 Validity and reliability

Given the importance of accurate data in enhancing the teaching–learning quality, it is crucial to avoid misinterpreting the data/outcome collected from self-assessments and feedback. Instances of respondent failures, such as lack of willingness to participate, dishonest answers and response bias, can lead to invalid data, compromising the accuracy of the evaluation results (Andrade and Du, 2007). Therefore, the validation of the process becomes essential to ensure the validity of the evaluation outcomes. Various appropriate validation tools have been utilized to confirm the correctness of the data (Amiri et al., 2021). In this study, two validation techniques were employed. The quiz technique was chosen to corroborate the results of the formative questionnaire, while the student’s numerical grade was used to validate the results of the summative questionnaire. The student’s numerical grade was determined based on the cumulative scores of exams, quizzes, homework and classwork. Other methods, such as individual midterm or final scores, could also be utilized for effective validation of the outcome.

Figure 5 illustrates the comparison between the questionnaire and quiz outcomes obtained in the academic year 2020. During the 2nd week, a multiple-choice quiz (total score of ten points) was administered to assess students' understanding after the class, and the formative assessment was also conducted (refer to Table 3). The quiz scores were normalized to a five-point scale for visualization and analysis. Figure 5(a) shows that the passing rate based on the quiz results (above the passing threshold) was 94.1%, while the passing rate from the self-assessment was 78.1%; the median scores were 4.5 and 4, respectively. Most students passed the lesson of that week under both methods, and despite the slight disparity between the numbers obtained from the two techniques, no meaningful discrepancy was observed. This indicates the effectiveness of both approaches in evaluating students' learning performance and verifying the validity of the data. It should be noted that students' self-assessment scores may be subject to over- or under-assessment depending on their self-evaluation skills (Davey, 2015; Sharma et al., 2016). Hence, indirect assessments were primarily used to enhance the teaching–learning quality, while direct assessments, such as quizzes and exams, served as the primary sources of information for evaluating the course's success.
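The validation step described above can be sketched as follows, assuming hypothetical paired quiz and self-assessment data; the normalization to a five-point scale and the 3.5 midpoint follow the text, while the numbers themselves are invented for illustration.

```python
import numpy as np

# Hypothetical paired results for one week (the study's raw data are not published)
quiz_scores = np.array([9, 10, 8, 7, 9, 10, 6, 8, 9, 7], dtype=float)  # out of 10
self_ratings = np.array([4, 5, 4, 3, 4, 5, 3, 4, 4, 4], dtype=float)   # Likert 1-5

quiz_on_5 = quiz_scores / 10 * 5  # normalize the quiz to the five-point scale
midpoint = 3.5                    # passing threshold used for the Likert items

quiz_pass = (quiz_on_5 > midpoint).mean() * 100
self_pass = (self_ratings > midpoint).mean() * 100
print(f"Quiz: passing rate {quiz_pass:.1f}%, median {np.median(quiz_on_5):.1f}")
print(f"Self-assessment: passing rate {self_pass:.1f}%, median {np.median(self_ratings):.1f}")
```

Comparable passing rates and medians from the two instruments, as in Figure 5(a), suggest the self-assessment data are trustworthy; a large gap would flag over- or under-assessment.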

In another case, the results of the summative questionnaire were validated by comparing them with the students' numerical grades, as depicted in Figure 5(b); these results pertain to the year 2022. The relationship between the numerical student grades and the five-point Likert scale is as follows: five points (A), four points (B+ and B), three points (C+ and C), two points (D+ and D) and one point (F). The findings indicate that the students' numerical grades support the validity of the questionnaire data: the median scores were B+ (numerical grade) and four points (Likert scale), respectively, and no significant divergence was observed between the two methods (questionnaire and numerical grade).
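The grade-to-Likert mapping given above lends itself to a short comparison script. The mapping follows the text, while the cohort data below are hypothetical, for illustration only.

```python
import statistics

# Grade-to-points mapping as described in the text
GRADE_POINTS = {"A": 5, "B+": 4, "B": 4, "C+": 3, "C": 3, "D+": 2, "D": 2, "F": 1}

# Hypothetical cohort (illustrative only, not the 2022 class data)
grades = ["A", "B+", "B+", "B", "A", "C+", "B", "B+", "C", "B"]
summative_ratings = [5, 4, 4, 4, 5, 3, 4, 4, 3, 4]

grade_points = [GRADE_POINTS[g] for g in grades]
print("Median grade points:", statistics.median(grade_points))           # direct measure
print("Median summative rating:", statistics.median(summative_ratings))  # indirect measure
# Closely matching medians support the validity of the questionnaire outcomes.
```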

In this study, the formative questionnaire was validated twice during the course, in the 2nd and 12th weeks, whereas the summative assessment was validated only once, at the end of the course. It is important to note that excessive validation could overwhelm and burden the lecturer, so a balanced approach was adopted.

The reliability of the questionnaires was confirmed with Cronbach's alpha coefficient. The alpha values for the questionnaires in Tables 3 and 4 exceeded the minimum threshold of 0.7, falling in the range of 0.85–0.97 and reflecting a high level of reliability. Hence, our questionnaires were considered reliable and valid for analysis.

3.4 Usage of assessments

The effectiveness of the AfL process (formative and summative surveys) proposed in this study was further evaluated by applying it to other challenging and crucial courses in chemical engineering education: EGCH402 (Chemical Engineering Capstone Design) and EGCH390 (Computer Applications in Chemical Engineering). The integrated assessments were implemented in the academic year 2022 without any modification. Figure 6 provides an example of the summative results obtained from the EGCH402 course. The results showed that approximately 85% of the students expressed satisfaction with the knowledge they acquired during the course (Figure 6(a)). Furthermore, there was unanimous agreement among the students regarding their overall satisfaction with the course (Figure 6(b)), indicating that the teaching–learning process was well organized and effective. Similar results were observed for EGCH390 (Figure 7). Although not shown in the figures, feedback from the students was also collected and used for ongoing improvements to the courses. These findings demonstrate that the integrated assessment techniques proposed in this research can be directly implemented or adapted, as needed, to enhance the teaching–learning process in chemical engineering courses. Moreover, they hold potential for application in other courses across different fields, promoting continuous improvement in the overall educational experience.

4. Conclusion

This study showcased the improvement of teaching–learning quality through a case study in chemical engineering education. The focus was on AfL methods implemented through students' self-assessments (formative and summative surveys). The case study specifically examined EGCH316 Chemical Engineering Economics and Cost Estimation, a core course in the undergraduate chemical engineering program, which was conducted using these methods. The systematic course design and planning processes were shared and discussed, resulting in the synchronous development of the teaching–learning quality based on the assessment results.

Data collected from these assessments were analyzed using statistical parameters such as the mean, percentage, frequency, standard deviation and IQR, as well as the numerical goal (%G) set by the program. The findings revealed that the formative and summative questionnaires, which included students' self-assessment and feedback, were powerful and reliable tools for evaluating the teaching–learning quality. These questionnaires captured valuable data on student needs and expectations that could not be measured through direct assessments alone. By implementing these assessments, the study was able to identify and address improper teaching and learning techniques influenced by factors such as student generation, unforeseen circumstances and digital technology.

During the case study, in the academic year 2021, amid the challenges posed by the COVID-19 situation, the assessments were employed to identify teaching–learning weaknesses and prevent a decline in the student achievement rate (%G). The assessments demonstrated their effectiveness in facilitating the identification of areas for improvement, and the course was subsequently revised based on the students' needs and objectives. Validation methods, such as quizzes and numerical grades, were used in practice to verify the outcomes obtained from the questionnaires. The assessment techniques employed in this study can be directly implemented or adapted for use in various academic fields.

Figures

Figure 1. Course design with the teaching–learning process used in this work

Figure 2. Results obtained from the questionnaire administered during the first week: (a) numerical goal and (b) mean and standard deviation

Figure 3. Profiles of the numerical goal (%G) for the weekly classes throughout the academic years 2020–2022

Figure 4. Results of the summative questionnaire administered in the academic year 2022

Figure 5. Results of the validation process: (a) formative questionnaire vs quiz and (b) summative questionnaire vs numerical student grades

Figure 6. Results of the summative questionnaire administered for the EGCH402 course in the academic year 2022

Figure 7. Results of the summative questionnaire administered for the EGCH390 course in the academic year 2022

Course details used in this work

Course code and title: EGCH316 Chemical Engineering Economics and Cost Estimation
Semester/class level: 1st semester / 3rd-year undergraduate students
Number of students (a): 34, 40 and 31, respectively
Duration: 17 weeks (once a week for three hours)
Instruction modes (a): Blended, 100% online and 100% onsite, respectively

Note(s): (a) The entire class of students for academic years 2020–2022

Source(s): Table by authors

Lesson plan components and examples for Weeks 1 and 2

Week 1
Topic: Introduction to Chemical Engineering Economics and Cost Estimation
Contents: (1) Economics vs engineering economics; (2) decision-making process; (3) engineering economic decisions
Learning objectives: (1) Discuss the role of engineering economic analysis accurately; (2) explain the nature and types of engineering economic decisions exactly; (3) explain the ethical dimensions in engineering decision-making exactly
CLOs: (1) Apply mathematics and engineering principles in engineering economic analysis for chemical engineering practices (CLO1); (2) determine and obtain suitable data to solve economic problems using appropriate techniques (CLO2); (3) demonstrate the importance and impact of making appropriate economic decisions in the economic, environmental and safety context (CLO3)
Activities: Lecture; discussion; example; case study
Assessment (a): Classwork (b); self-assessment survey (c)
Materials and equipment: PowerPoint; handout; Google Workspace; coursebook

Week 2
Topic: Engineering costs and cost estimating
Contents: (1) Engineering cost concepts; (2) cost estimation type; (3) cost estimation methods; (4) cash flow diagram
Learning objectives: (1) Describe the various engineering cost concepts accurately; (2) explain the various cost estimation methods accurately; (3) be able to estimate the cost with various models accurately; (4) explain the concept of a cash flow diagram
CLOs: (1) Apply mathematics and engineering principles in engineering economic analysis for chemical engineering practices (CLO1); (2) determine and obtain suitable data to solve economic problems using appropriate techniques (CLO2); (3) demonstrate the importance and impact of making appropriate economic decisions in the economic, environmental and safety context (CLO3)
Activities: Lecture; discussion; case study; example
Assessment (a): Quiz (b); self-assessment survey (c)
Materials and equipment: PowerPoint; handout; Google Workspace; coursebook

Note(s): (a) Formative assessment used in the class; (b) direct assessment; (c) indirect assessment

Source(s): Table by authors

Questionnaire form for formative students’ self-assessment

Section 1 (Instruction): Week 1: Introduction to Chemical Engineering Economics and Cost Estimation. For each of the topics below, please check the box under the number that indicates your level of knowledge after completing the class:
1 = Not at all – have no knowledge of the content
2 = Slightly – know very little about the content
3 = Moderately – have basic knowledge
4 = Very – have good knowledge
5 = Extremely – consider myself very knowledgeable

Section 2 (Students' self-assessment): After learning, how do you rate your knowledge about the following topics (levels 1–5)?
1. The meaning of economics and engineering economics
2. The role and purpose of engineering economic analysis
3. The nature and types of engineering economic decisions
4. The difference between accounting and engineering economics
5. The rational decision-making process
6. The ethical dimensions in engineering decision-making

Section 3 (Students' response): Any slide/content that needs to be re-explained; suggestions/comments

Source(s): Table by authors

Questionnaire form for summative students’ self-assessment

Section 1: Please submit feedback regarding the course you have just completed, including feedback on course structure, content and instructor

Section 2:
2.1 Contribution to learning (levels: Poor / Fair / Satisfactory / Very good / Excellent)
- Level of skill/knowledge at start of course
- Level of skill/knowledge at end of course
2.2 Course content (levels: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
- Learning objectives were clear
- Course content was organized and well planned
- Course workload was appropriate
- Course organized to allow all students to participate fully

Section 3: Recommendation for development of this course

Source(s): Table by authors

Some students’ feedback (translated to English) received from 1st week and areas of improvement

Student feedback → Areas of improvement:
1. Some students were satisfied with the case study technique and suggested applying this method in other classes. → The case study, discussion, example and other techniques were continually used when available and appropriate for the contents.
2. Some students were satisfied with the teaching methods (discussion and case study). → (addressed as above)
3. Some technical terms used in the course were challenging to understand. → More examples involving those terms should be provided.
4. Some students found it difficult to understand the translated technical terms. → Some terms (in English) should be used without translation (to the primary language) to avoid misunderstanding.
5. Some students felt that the lecture was very fast (since there was much content to cover). → The lesson plan should be adjusted according to the students' capabilities.
6. Some students were satisfied with the handouts. → (no change required)

Source(s): Table by authors

Conflicts of interest: There are no conflicts to declare.

Declaration of generative AI and AI-assisted technologies in the writing process: During the preparation of this work, the author(s) used ChatGPT to check for grammatical errors and improve clarity. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

References

Agricola, B.T., Prins, F.J. and Sluijsmans, D.M.A. (2020), “Impact of feedback request forms and verbal feedback on higher education students' feedback perception, self-efficacy, and motivation”, Assessment in Education: Principles, Policy and Practice, Vol. 27 No. 1, pp. 6-25, doi: 10.1080/0969594X.2019.1688764.

Alvarado, M., Basinger, K., Lahijanian, B. and Alvarado, D. (2020), “Teaching simulation to generation Z engineering students: lessons learned from a flipped classroom pilot study”, 2020 Winter Simulation Conference (WSC), IEEE, pp. 3248-3259, doi: 10.1109/WSC48552.2020.9383950.

Amiri, A., Wang, J., Slater, N.K.H. and Najdanovic-Visak, V. (2021), “Enhancement of process modelling and simulation evaluation by deploying a test for assessment and feedback individualization”, Education for Chemical Engineers, Vol. 35, pp. 29-36, doi: 10.1016/j.ece.2021.01.001.

Andrade, H. and Du, Y. (2007), “Student responses to criteria-referenced self-assessment”, Assessment and Evaluation in Higher Education, Vol. 32 No. 2, pp. 159-181, doi: 10.1080/02602930600801928.

Bascuñana, J., León, S., González-Miquel, M., González, E.J. and Ramírez, J. (2023), “Impact of Jupyter Notebook as a tool to enhance the learning process in chemical engineering modules”, Education for Chemical Engineers, Vol. 44, pp. 155-163, doi: 10.1016/j.ece.2023.06.001.

Bennett, R.E. (2011), “Formative assessment: a critical review”, Assessment in Education Principles Policy and Practice, Vol. 18 No. 1, pp. 5-25, doi: 10.1080/0969594X.2010.513678.

Davey, K.R. (2015), “Student self-assessment: results from a research study in a level IV elective course in an accredited bachelor of chemical engineering”, Education for Chemical Engineers, Vol. 10, pp. 20-32, doi: 10.1016/j.ece.2014.10.001.

Dawe, N., Romkey, L., Bilton, A. and Khan, R. (2021), “A review of how lifelong learning is planned and enacted in Canadian engineering programs”, Proceedings of the Canadian Engineering Education Association (CEEA), pp. 1-10, doi: 10.24908/pceea.vi0.14950.

Díaz-Sainz, G., Pérez, G., Gómez-Coma, L., Ortiz-Martínez, V.M., Domínguez-Ramos, A., Ibañez, R. and Rivero, M.J. (2021), “Mobile learning in chemical engineering: an outlook based on case studies”, Education for Chemical Engineers, Vol. 35, pp. 132-145, doi: 10.1016/j.ece.2021.01.013.

Fülöp, M.T., Breaz, T.O., Topor, I.D., Ionescu, C.A. and Dragolea, L.L. (2023), “Challenges and perceptions of e-learning for educational sustainability in the ‘new normality era’”, Frontiers in Psychology, Vol. 14, doi: 10.3389/fpsyg.2023.1104633.

Ghasem, N. and Ghannam, M. (2021), “Challenges, benefits & drawbacks of chemical engineering on-line teaching during Covid-19 pandemic”, Education for Chemical Engineers, Vol. 36, pp. 107-114, doi: 10.1016/j.ece.2021.04.002.

Gikandi, J.W., Morrow, D. and Davis, N.E. (2011), “Online formative assessment in higher education: a review of the literature”, Computers and Education, Vol. 57 No. 4, pp. 2333-2351, doi: 10.1016/j.compedu.2011.06.004.

Gillett, J.E. (2001), “Chemical engineering education in the next century”, Chemical Engineering and Technology, Vol. 24 No. 6, pp. 561-570, doi: 10.1002/1521-4125(200106)24:6<561::AID-CEAT561>3.0.CO;2-X.

Hernández, R. (2012), “Does continuous assessment in higher education support student learning?”, Higher Education, Vol. 64 No. 4, pp. 489-502, doi: 10.1007/s10734-012-9506-7.

Li, J., Han, S.H. and Fu, S. (2019), “Exploring the relationship between students' learning styles and learning outcome in engineering laboratory education”, Journal of Further and Higher Education, Vol. 43 No. 8, pp. 1064-1078, doi: 10.1080/0309877X.2018.1449818.

Mavengere, N., Pondiwa, S., Tinashe, M.C., Manzira, F. and Mutanga, A. (2021), “The ‘new normal’ in higher education: innovative teaching and learning technologies and practices during a crisis”, Advances in Computing and Engineering, Vol. 1 No. 2, doi: 10.21622/ace.2021.01.2.037.

Nithyanandam, K. (2020), “A framework to improve the quality of teaching-learning process - a case study”, Procedia Computer Science, Vol. 172, pp. 92-97, doi: 10.1016/j.procs.2020.05.013.

Rodríguez, A., Díez, E., Díaz, I. and Gómez, J.M. (2019), “Catching the attention of generation Z chemical engineering students for particle technology”, Journal of Formative Design in Learning, Vol. 3 No. 2, pp. 146-157, doi: 10.1007/s41686-019-00034-1.

Schildkamp, K., Van der Kleij, F.M., Heitink, M.C., Kippers, W.B. and Veldkamp, B.P. (2020), “Formative assessment: a systematic review of critical teacher prerequisites for classroom practice”, International Journal of Educational Research, Vol. 103, 101602, doi: 10.1016/j.ijer.2020.101602.

Sharma, R., Jain, A., Gupta, N., Garg, S., Batta, M. and Dhir, S.K. (2016), “Impact of self-assessment by students on their learning”, International Journal of Applied and Basic Medical Research, Vol. 6 No. 3, pp. 226-229, doi: 10.4103/2229-516X.186961.

Taras, M. (2002), “Using assessment for learning and learning from assessment”, Assessment and Evaluation in Higher Education, Vol. 27 No. 6, pp. 501-510, doi: 10.1080/0260293022000020273.

Taras, M. (2010), “Student self-assessment: processes and consequences”, Teaching in Higher Education, Vol. 15 No. 2, pp. 199-209, doi: 10.1080/13562511003620027.

Tejeiro, R.A., Gomez-Vallecillo, J.L., Romero, A.F., Pelegrina, M., Wallace, A. and Emberley, E. (2012), “Summative self-assessment in higher education: implications of its counting towards the final mark”, Electronic Journal of Research in Educational Psychology, Vol. 10 No. 2, pp. 789-812, doi: 10.25115/ejrep.v10i27.1528.

Th, M., Schaer, E., Abildskov, J., Feise, H., Glassey, J., Liauw, M., Ó’Súilleabháin, C. and Wilk, M. (2022), “The importance/role of education in chemical engineering”, Chemical Engineering Research and Design, Vol. 187, pp. 164-173, doi: 10.1016/j.cherd.2022.08.061.

The Accreditation Board for Engineering and Technology (2023), available at: https://www.abet.org/ (accessed 1 August 2023).

Van der Kleij, F.M., Vermeulen, J.A., Schildkamp, K. and Eggen, T.J.H.M. (2015), “Integrating data-based decision making, assessment for learning and diagnostic testing in formative assessment”, Assessment in Education: Principles, Policy and Practice, Vol. 22 No. 3, pp. 324-343, doi: 10.1080/0969594X.2014.999024.

Varma, A. and Grossmann, I.E. (2014), “Evolving trends in chemical engineering education”, AIChE Journal, Vol. 60 No. 11, pp. 3692-3700, doi: 10.1002/aic.14613.

Villanueva, K.A., Brown, S.A., Pitterson, N.P. and Hurwitz, D.S. (2017), “Teaching evaluation practices in engineering programs: current approaches and usefulness”, International Journal of Engineering Education, Vol. 33 No. 4, pp. 1317-1334.

Vizcaya-Moreno, M.F. and Pérez-Cañaveras, R.M. (2020), “Social media used and teaching methods preferred by generation Z students in the nursing clinical learning environment: a cross-sectional research study”, International Journal of Environmental Research and Public Health, Vol. 17 No. 21, 8267, doi: 10.3390/ijerph17218267.

Voronov, R.S., Basuray, S., Obuskovic, G., Simon, L., Barat, R.B. and Bilgili, E. (2017), “Statistical analysis of undergraduate chemical engineering curricula of United States of America universities: trends and observations”, Education for Chemical Engineers, Vol. 20, pp. 1-10, doi: 10.1016/j.ece.2017.04.002.

Corresponding author

Nattee Akkarawatkhoosith can be contacted at: nattee.akk@mahidol.edu
