Diabetes Essay
Introduction
This series of tutorials focused on Type 2 Diabetes Mellitus in Primary Care: Diagnosis and Management
was conducted in a primary care clinic. Healthcare professionals, including general practitioners, nurses,
and medical students, were trained and updated on the latest guidelines and practices related to type 2
diabetes. As the medical tutor in the workshop, my role was to guide participants through the tutorials on
Type 2 Diabetes Mellitus in Primary Care. I facilitated learning by delivering presentations, sharing
information, and encouraging discussions. I ensured an interactive and engaging format, promoting active
participation and application of knowledge through case-based scenarios and group activities (Sturt et al.,
2008, Sturt et al., 2005). The educational strategy employed in these tutorials aimed to provide healthcare
professionals with the knowledge and skills necessary to effectively identify signs and symptoms,
interpret test results, and make informed decisions regarding diabetes medications in a primary care
setting (Emami et al., 2020, Anderson et al., 1991).
The Dreyfus model is a framework that describes different stages of skill acquisition and expertise
development. It identifies five levels: novice, advanced beginner, competent, proficient, and expert. In the
context of this module, the group of learners consisted of approximately 15 participants who were already
at the proficient level of the Dreyfus model. They had acquired a solid foundation of knowledge and skills
related to diabetes care, including a deep understanding of the pathophysiology, risk factors, and basic
management principles associated with type 2 diabetes (Spann et al., 2006, Ramsburg, 2010). The module
aimed to build upon their existing knowledge and elevate their expertise to enhance their ability to
provide optimal care for individuals with type 2 diabetes in a primary care context (Mauldin, 2021,
Carraccio et al., 2008).
In the primary care clinic, various resources were available to effectively implement the assessment
strategy for this module. The team of healthcare professionals, including physicians, nurses, and
educators, contributed their expertise and facilitated the assessment process. The clinic's facilities
included a well-equipped lecture room with audiovisual aids, such as projectors and screens, to facilitate
engaging presentations (Sperl-Hillen et al., 2013). Breakout rooms were utilized for group discussions,
interactive activities, and case-based learning exercises, promoting active participation and collaborative
learning (Almendingen et al., 2022). Additionally, comprehensive teaching materials, including medical
textbooks, guidelines, research articles, and online resources, supported the learning process
(Hall et al., 2007). Reliable testing and evaluation tools, such as online platforms, were used to assess the
learners' proficiency in diagnosing and managing type 2 diabetes in a primary care setting. The
assessment methods utilized align with the principles of adult learning, promoting active engagement and
critical thinking among the learners to manage T2DM effectively in the primary care setting (Mauldin,
2021, Deakin et al., 2005).
Task 1

Learning outcome: Formulate an individualized diabetes management plan for a patient, considering lifestyle modifications, medication regimens, and monitoring strategies.
Domain: Affective
Instructional method: Case studies, problem-solving activities
Assessment: Summative (written case study analysis and management plan)
When creating the learning outcomes (LOs) for the module on Type 2 Diabetes Mellitus in Primary Care,
I carefully considered the appropriate level of complexity, taking the learners' prior knowledge into
account and building upon their existing foundation of knowledge and skills related to diabetes
care (Kegels et al., 2008). By incorporating the Dreyfus model, which identifies levels of proficiency, I
ensured that the LOs were pitched at the proficient level, where learners have a deep understanding of the
subject matter and can apply their knowledge effectively (Benner, 2004).
To determine the learners' prior knowledge, I conducted a needs assessment and gathered information
about their educational background, professional experience, and familiarity with type 2 diabetes
management. This helped me gauge their baseline understanding and tailor the LOs to meet their specific
needs. By incorporating case studies, problem-solving activities, and discussions, I allowed learners to
draw on their prior knowledge and experiences, facilitating active engagement and deepening their
understanding (Kalkan et al., 2007).
In creating SMART LOs, I considered the characteristics and needs of the learners, their Dreyfus learner
level, and the domains of expected learning (cognitive, psychomotor, and affective). To determine where
to pitch the LOs within the three domains of learning (cognitive, psychomotor, and affective), I drew
upon Miller's Pyramid, Bloom's Taxonomy, and the Revised Bloom's Taxonomy (Wheeler, 2005).
Miller's Pyramid describes a progression in the assessment of clinical competence from "knows" and
"knows how" to "shows how" and "does" (Rhind et al., 2021). Bloom's Taxonomy provides a framework for classifying
educational objectives and encompasses six levels: remembering, understanding, applying, analyzing,
evaluating, and creating (Chandio et al., 2016). The Revised Bloom's Taxonomy further expands on these
levels, incorporating action verbs and linking them to cognitive processes (Conklin, 2005).
Based on these models, I crafted LOs that encompassed different levels of cognitive complexity, such as
identifying signs and symptoms (remembering), interpreting laboratory test results (applying and
analyzing), evaluating appropriate medication timing (evaluating), and formulating an individualized
management plan (creating) (Mauldin, 2021). By incorporating hands-on activities, case studies, and
group discussions, I ensured that the LOs also addressed the psychomotor and affective domains,
enabling learners to practice skills and engage in critical thinking and decision-making processes (Kalkan
et al., 2007, Vég, 2006).
By aligning the LOs with these frameworks and taxonomies, I aimed to create a balanced and
comprehensive learning experience for the learners. It allowed them to progress from foundational
knowledge to higher-order thinking skills, developing a deeper understanding of type 2 diabetes
management in primary care (Anderson et al., 1991). Studies have shown that learner-centered
approaches, such as incorporating case studies and problem-solving activities, enhance engagement,
critical thinking, and knowledge retention (Susan Sandstrom, 2006, Trento et al., 2004). Additionally,
these frameworks provided guidance in selecting appropriate action verbs and instructional methods that
facilitated the achievement of the LOs and the learners' growth within the cognitive, psychomotor, and
affective domains (Nguyen and Gu, 2013, Van den Arend et al., 2000).
Task 2

Blueprint: Assessment Methods

Learning outcome: Evaluate the appropriate timing for initiating diabetes medications based on patient characteristics and guideline recommendations (✓ ✓)

Learning outcome: Formulate an individualized diabetes management plan for a patient, considering lifestyle modifications, medication regimens, and monitoring strategies (✓ ✓ ✓)
The Utility Index is a framework used to analyze and evaluate the effectiveness and appropriateness of an
assessment technique in terms of its validity, reliability, educational impact, acceptability, and cost
(Chandratilake and Davis, 2010). It allows for a comprehensive assessment of the utility and value of an
assessment method within a specific context. Multiple-choice questions (MCQs) were selected as the
assessment technique for this module on Type 2 Diabetes Mellitus in Primary Care for several reasons.
Validity: MCQs demonstrate high content validity as they can be designed to align closely with
the learning outcomes and content covered in the module (Hess and Davis, 1983). Research
supports the validity of MCQs, stating that well-constructed MCQs can effectively assess
knowledge and understanding (Yeong et al., 2020, Khafagy et al., 2016).
Reliability: MCQs can achieve high reliability when designed with clear and unambiguous
questions, well-defined answer choices, and consistent scoring criteria (Khafagy et al., 2016). A
study showed that MCQs can have high inter-rater reliability when scoring is standardized
(McCoubrie, 2004, Norcini et al., 1985).
Educational Impact: MCQs have a positive educational impact as they require learners to actively
retrieve information, apply concepts, and make decisions within the given options. This process
aids in knowledge consolidation and retention. Studies highlight the educational benefits of
retrieval practice, which is inherent in MCQ assessments (Norcini et al., 1985, Chan et al., 2011).
Acceptability: MCQs are widely accepted and familiar to learners, as they are a commonly used
assessment format in medical education. Learners perceive MCQs as fair and objective,
contributing to their acceptability. A study by Simbak (2014) found that medical students rated
MCQs as highly acceptable and relevant (Simbak et al., 2014).
Cost: MCQ assessments are cost-effective compared to resource-intensive methods such as
OSCEs or simulations. They require minimal resources in terms of question development,
scoring, and administration. The automated scoring feature of MCQs reduces the time and cost
associated with manual scoring (Pugh et al., 2020).
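In the spirit of the utility formula behind this index, the trade-off between the five components can be sketched as a weighted product. The ratings and equal weights below are assumptions invented purely for illustration, not values derived from this module:

```python
# Hypothetical utility-index sketch, after the multiplicative utility formula
# (utility = reliability x validity x educational impact x acceptability x
# cost-effectiveness). All ratings and weights are invented for illustration.

def utility(components, weights):
    """Weighted multiplicative utility: product of rating ** weight."""
    u = 1.0
    for name, rating in components.items():
        u *= rating ** weights[name]
    return u

# Ratings on a 0-1 scale for an MCQ paper (assumed values, not measurements).
mcq = {"reliability": 0.9, "validity": 0.8, "impact": 0.7,
       "acceptability": 0.9, "cost": 0.9}
# Equal weighting across the five components (an assumption, not a rule).
weights = {k: 1.0 for k in mcq}

print(round(utility(mcq, weights), 3))  # overall utility on a 0-1 scale
```

A weakness in any one component (here, educational impact at 0.7) drags the whole product down, which is the point of the multiplicative form: no amount of reliability compensates for an assessment nobody accepts or learns from.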
Task 3
Choice of Standard Setting Method: Modified Angoff Method for MCQs/SBA Exam
Among the assessment techniques used in the module, the Multiple-Choice Questions (MCQs)/Single
Best Answer (SBA) exam was selected for further analysis. The appropriate standard setting method for
this assessment technique is the Modified Angoff method. The Modified Angoff method is a widely used
standard setting approach that involves expert judges estimating the probability of a minimally competent
candidate correctly answering each test question (Ferdous and Plake, 2005). These estimates are then
averaged to determine the cut-off score or passing mark (Ricker, 2006).
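The arithmetic of the Modified Angoff method can be sketched as follows; the judges' estimates here are invented purely for illustration and do not come from this module's panel:

```python
# Hypothetical Modified Angoff worked example (illustrative figures only):
# each judge estimates the probability that a minimally competent candidate
# answers each MCQ item correctly; item means are summed to give the cut score.

# Rows = judges, columns = 5 MCQ items (assumed estimates, not real data).
estimates = [
    [0.60, 0.75, 0.50, 0.80, 0.65],  # judge 1
    [0.55, 0.70, 0.45, 0.85, 0.60],  # judge 2
    [0.65, 0.80, 0.55, 0.75, 0.70],  # judge 3
]

n_judges = len(estimates)
# Average the judges' estimates for each item.
item_means = [sum(judge[i] for judge in estimates) / n_judges
              for i in range(len(estimates[0]))]

# The cut score is the sum of the item means (the expected mark of a
# minimally competent candidate), often expressed as a percentage.
cut_score = sum(item_means)
pass_mark_pct = 100 * cut_score / len(item_means)

print(round(cut_score, 2), round(pass_mark_pct, 1))
```

With these invented estimates the minimally competent candidate is expected to score 3.3 of 5 items, giving a pass mark of 66%; in practice the panel would also discuss outlying estimates before the final average is taken.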
The choice of the Modified Angoff method is justified for several reasons. Firstly, it allows for the
involvement of subject matter experts who possess the necessary knowledge and expertise in type 2
diabetes management. Their judgments are crucial in determining the level of knowledge and competency
required to pass the examination (Yim, 2018). Secondly, the Modified Angoff method aligns well with
the resources available in the setting. It does not require extensive resources in terms of time, manpower,
or financial investment. It can be implemented with a panel of expert judges who can provide their
estimates independently and subsequently reach a consensus (Hurtz and Auerbach, 2003).
Other standard setting methods, such as the Bookmark method or the Borderline Group method, may not
have been as suitable for the MCQs/SBA exam in this setting. The Bookmark method requires extensive
statistical analysis and may not be feasible in resource-constrained environments (Ananthanarayanan and
Abhilash, 1999). The Borderline Group method relies on comparing the performance of borderline
candidates, which may not provide a sufficiently objective and accurate standard in this context (Homer
et al., 2017). Evidence from the literature supports the use of the Modified Angoff method in various
settings and disciplines. A study by Mubuuke (2017) examined different standard setting methods and
found the Modified Angoff method to be reliable and practical (Mubuuke et al., 2017). Another study by
Hurtz and Auerbach (2003) highlighted the use of expert judgment in the Modified Angoff method and its
applicability to high-stakes assessments (Hurtz and Auerbach, 2003). By utilizing this method, we can
establish a fair and valid passing score, ensuring that learners are appropriately assessed in their
knowledge and understanding of type 2 diabetes management.
The Borderline Regression Method is the most appropriate standard setting method for the Objective
Structured Clinical Exam (OSCE) in this module on Type 2 Diabetes Mellitus (Geddes and Piasentin,
2016, Homer et al., 2017). Supported by evidence from Fidment (2012) and Shulruf (2015), this method
analyzes the relationship between exam scores and expert examiners' judgments of borderline candidates'
performance. It ensures a reliable determination of the pass/fail threshold, distinguishing between
competent and non-competent candidates based on their skills and competencies in diabetes management
(Shulruf et al., 2015, Fidment, 2012).
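As a sketch of the calculation, assuming hypothetical checklist scores and examiner global ratings for one station (the figures are invented for illustration, not drawn from this module), the pass mark falls out of an ordinary least-squares fit:

```python
# Hypothetical borderline regression sketch (illustrative data only): each
# candidate receives a station checklist score plus an examiner global rating
# (1 = fail, 2 = borderline, 3 = pass, 4 = good). The checklist score is
# regressed on the global rating, and the station pass mark is the predicted
# checklist score at the borderline rating.

ratings = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4]           # global ratings
scores = [8, 10, 12, 13, 14, 16, 17, 18, 19, 21]   # checklist scores out of 25

n = len(ratings)
mean_x = sum(ratings) / n
mean_y = sum(scores) / n

# Ordinary least squares: slope and intercept of score on rating.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ratings, scores))
         / sum((x - mean_x) ** 2 for x in ratings))
intercept = mean_y - slope * mean_x

BORDERLINE = 2  # the global-rating value representing a borderline candidate
pass_mark = intercept + slope * BORDERLINE
print(round(pass_mark, 2))  # predicted checklist score at the borderline grade
```

Because the regression uses the whole cohort rather than only the borderline candidates, the resulting threshold is more stable than the Borderline Group method when few candidates are rated borderline.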
Task 4
For the OSCE, one assessment bias that may be encountered is the halo effect. The halo effect refers to
the tendency of examiners to let their overall impression of a candidate influence their scoring across
multiple stations, rather than objectively assessing each station separately (Nisbett and Wilson, 1977).
To avoid the halo effect bias in the Objective Structured Clinical Exam (OSCE), strategies such as
examiner training and calibration, standardized patient encounters, and blind scoring can be implemented.
Examiner training and calibration sessions ensure a clear understanding of assessment criteria and scoring
rubrics. Standardized patients provide consistency in case portrayal, while clear checklists and rating
scales minimize subjective impressions (Schleicher et al., 2017). Blind scoring, which involves removing
candidate identification information from scoring sheets, has been shown in a study by Uchida (2021) to
effectively reduce biases and ensure fair assessments in OSCEs (Uchida, 2021). By eliminating
preconceived notions or biases based on candidate identity, blind scoring allows examiners to focus solely
on candidate performance and responses.
For the MCQ exam, one assessment bias that may be encountered is construct-irrelevant variance. This
bias occurs when factors unrelated to the construct being assessed influence the scores. For example,
poorly written or ambiguous questions may introduce irrelevant variability in the responses (Zhai et al.,
2021).
To minimize this bias, questions should be rigorously developed and reviewed so that content coverage
and item difficulty accurately reflect type 2 diabetes knowledge and competencies. Careful item
selection and validation processes that ensure a representative range of content and difficulty levels in the
MCQ exam can minimize the impact of irrelevant factors on the scores, ensuring that the assessment
accurately reflects the knowledge and competencies required in the primary care setting (Hess and Davis,
1983). Evidence from the literature supports these strategies, highlighting the importance of rigorous item
development and emphasizing content coverage and item difficulty as factors influencing MCQ exam
validity and reliability (McCoubrie, 2004, Haladyna, 2004).
By implementing these strategies, such as examiner training, calibration, standardized patient encounters,
blind scoring for the OSCE, and rigorous question development, review processes, and content coverage
for the MCQ exam, the biases associated with each assessment technique can be minimized, leading to
more valid and reliable assessments in the context of type 2 diabetes in primary care.
Task 5
To fulfill this goal, my SMART action item for future professional development is to enroll in a
certification program in assessment and evaluation within the next six months. This action item is
Specific (enrolling in a certification program), Measurable (completion of the program and obtaining a
certification), Achievable (accessibility of relevant programs), Relevant (enhancing assessment
expertise), and Time-bound (within the next six months). This certification program will provide
comprehensive training, allowing me to refine my assessment skills, stay updated with best practices, and
make informed decisions in my role as an educator.
Appendix
Formative Assessment
Week 1: Identify signs and symptoms
LO1: Recognize common signs and symptoms of type 2 diabetes
Instructional method: Presentation and group discussion
Formative assessment: MCQ (40%)

Week 2: Identify signs and symptoms
LO1: Recognize common signs and symptoms of type 2 diabetes
Instructional method: Standardized patient encounters
Formative assessment: OSCE (20%)

Week 3: Interpreting test results
LO3: Interpret laboratory test results commonly used in diagnosing and monitoring type 2 diabetes
Instructional method: Case-based scenarios and discussion
Formative assessment: MCQ (40%)

Week 4: Interpreting test results
LO3: Analyze the significance of test results and their implications for type 2 diabetes
Instructional method: Standardized patient encounters
Formative assessment: OSCE (20%)

Week 5: Diabetes medications
LO4: Understand the indications, contraindications, and side effects of diabetes medications
Instructional method: Lecture and case studies
Formative assessment: MCQ (40%)
References:
1. ALMENDINGEN, K., SKOTHEIM, T. & MAGNUS, E. M. 2022. Breakout Rooms Serve as a
Suitable Tool for Interprofessional Pre-Service Online Training among Students within Health,
Social, and Education Study Programs. Education Sciences, 12, 871.
3. ANDERSON, R. M., FUNNELL, M. M., BARR, P. A., DEDRICK, R. F. & DAVIS, W. K. 1991.
Learning to empower patients: results of professional education program for diabetes educators.
Diabetes care, 14, 584-590.
4. BENNER, P. 2004. Using the Dreyfus model of skill acquisition to describe and interpret skill
acquisition and clinical judgment in nursing practice and education. Bulletin of science,
technology & society, 24, 188-199.
5. CARRACCIO, C. L., BENSON, B. J., NIXON, L. J. & DERSTINE, P. L. 2008. From the
educational bench to the clinical bedside: translating the Dreyfus developmental model to the
learning of clinical skills. Academic Medicine, 83, 761-767.
6. CHAN, C. K. Y., TAM, V. W. & LI, C. Y. V. 2011. A comparison of MCQ assessment delivery
methods for student engagement and interaction used as an in-class formative assessment.
International Journal of Electrical Engineering Education, 48, 323-337.
8. CHANDRATILAKE, M. & DAVIS, M. 2010. Evaluating and designing assessments for medical
education: the utility formula.
9. CONKLIN, J. 2005. A taxonomy for learning, teaching, and assessing: A revision of Bloom's
taxonomy of educational objectives complete edition. JSTOR.
10. DEAKIN, T. A., MCSHANE, C. E., CADE, J. E. & WILLIAMS, R. 2005. Group based training
for self‐management strategies in people with type 2 diabetes mellitus. Cochrane database of
systematic reviews.
11. DOWNING, S. M. 2002. Construct-irrelevant variance and flawed test questions: Do multiple-
choice item-writing principles make any difference? Academic Medicine, 77, S103-S104.
12. EMAMI, Z., KOUHKAN, A., KHAJAVI, A. & KHAMSEH, M. E. 2020. Knowledge of
physicians regarding the management of type two diabetes in a primary care setting: the impact of
online continuous medical education. BMC Medical Education, 20, 1-9.
13. FERDOUS, A. A. & PLAKE, B. S. 2005. The use of subsets of test questions in an Angoff
standard-setting method. Educational and Psychological Measurement, 65, 185-201.
14. FIDMENT, S. 2012. The objective structured clinical exam (OSCE): A qualitative study
exploring the healthcare student’s experience. Student engagement and experience journal, 1, 1-
18.
15. GEDDES, M. & PIASENTIN, K. Assessing the Validity and Reliability of Passing Standards for
Performance-Based Assessments: Borderline Regression Method. ITC 2016 Conference, 2016.
16. HALADYNA, T. M. 2004. Developing and validating multiple-choice test items, Routledge.
17. HALL, D. L., DRAB, S. R., CAMPBELL, R. K., MEYER, S. M. & SMITH, R. B. 2007. A Web-
based interprofessional diabetes education course. American journal of pharmaceutical
education, 71.
18. HESS, G. E. & DAVIS, W. K. 1983. The validation of a diabetes patient knowledge test.
Diabetes Care, 6, 591-596.
19. HOMER, M., PELL, G. & FULLER, R. 2017. Problematizing the concept of the “borderline”
group in performance assessments. Medical Teacher, 39, 469-475.
21. KALKAN, M., CERIT, A. G. & ZORBA, Y. 2007. How Problem–Based Discussion Sessions
Are Used To Promote Cognitive, Affective, and Psychomotor Domains: A Case Study at a
Maritime Higher Education and Training Institution.
22. KEGELS, E., VANDEKERCKHOVE, M., REMMEN, R., GIJBELS, D. & PETEGEM, P. V.
2008. Learning approaches in a traditional curriculum at senior student level may be responsive
to practice-based learning in the primary care setting. Education for Primary Care, 19, 624-631.
23. KHAFAGY, G., AHMED, M. & SAAD, N. 2016. Stepping up of MCQs’ quality through a
multi-stage reviewing process. Education for Primary Care, 27, 299-303.
24. MAULDIN, B. 2021. A novel teaching strategy in nursing pharmacology: Learning using
cognitive load theory. Nursing Education Perspectives, 42, E158-E160.
25. MCCOUBRIE, P. 2004. Improving the fairness of multiple-choice questions: a literature review.
Medical teacher, 26, 709-712.
26. MUBUUKE, A., MWESIGWA, C. & KIGULI, S. 2017. Implementing the Angoff method of
standard setting using postgraduate students: Practical and affordable in resource-limited settings.
African journal of health professions education, 9, 171-175.
28. NISBETT, R. E. & WILSON, T. D. 1977. The halo effect: Evidence for unconscious alteration of
judgments. Journal of personality and social psychology, 35, 250.
29. NORCINI, J., SWANSON, D., GROSSO, L. & WEBSTER, G. 1985. Reliability, validity and
efficiency of multiple choice question and patient management problem item formats in
assessment of clinical competence. Medical education, 19, 238-247.
30. PUGH, D., DE CHAMPLAIN, A., GIERL, M., LAI, H. & TOUCHIE, C. 2020. Can automated
item generation be used to develop high quality MCQs that assess application of knowledge?
Research and Practice in Technology Enhanced Learning, 15, 1-13.
31. RAMSBURG, L. 2010. An initial investigation of the applicability of the Dreyfus skill
acquisition model to the professional development of nurse educators, Marshall University.
32. RHIND, S. M., MACKAY, J., BROWN, A. J., MOSLEY, C. J., RYAN, J. M., HUGHES, K. J. &
BOYD, S. 2021. Developing Miller’s pyramid to support students’ assessment literacy. Journal
of Veterinary Medical Education, 48, 158-162.
33. RICKER, K. L. 2006. Setting cut-scores: A critical review of the Angoff and modified Angoff
methods. Alberta journal of educational research, 52.
34. SCHLEICHER, I., LEITNER, K., JUENGER, J., MOELTNER, A., RUESSELER, M.,
BENDER, B., STERZ, J., SCHUETTLER, K.-F., KOENIG, S. & KREUDER, J. G. 2017.
Examiner effect on the objective structured clinical exam–a study at five medical schools. BMC
medical education, 17, 1-7.
35. SHULRUF, B., POOLE, P., JONES, P. & WILKINSON, T. 2015. The objective borderline
method: A probabilistic method for standard setting. Assessment & Evaluation in Higher
Education, 40, 420-438.
36. SIMBAK, N. B., AUNG, M. M. T., ISMAIL, S. B., JUSOH, N. B. M., ALI, T. I., YASSIN, W.
A. K., HAQUE, M. & REBUAN, H. M. A. 2014. Comparative study of different formats of
MCQs: multiple true-false and single best answer test formats, in a new medical school of
Malaysia. International Medical Journal, 21, 562-566.
37. SPANN, S. J., NUTTING, P. A., GALLIHER, J. M., PETERSON, K. A., PAVLIK, V. N.,
DICKINSON, L. M. & VOLK, R. J. 2006. Management of type 2 diabetes in the primary care
setting: a practice-based research network study. The Annals of Family Medicine, 4, 23-31.
38. SPERL-HILLEN, J., O'CONNOR, P., EKSTROM, H., RUSH, W., ASCHE, S., FERNANDES,
O., APPANA, D., AMUNDSON, G. & JOHNSON, P. 2013. Using simulation technology to
teach diabetes care management skills to resident physicians. Journal of diabetes science and
technology, 7, 1243-1254.
39. STURT, J., HEARNSHAW, H., BARLOW, J. & HAINSWORTH, J. 2005. Supporting a
curriculum for delivering Type 2 diabetes patient self-management education: a patient-needs
assessment. Primary Health Care Research & Development, 6, 291-299.
40. STURT, J., WHITLOCK, S., FOX, C., HEARNSHAW, H., FARMER, A., WAKELIN, M.,
ELDRIDGE, S., GRIFFITHS, F. & DALE, J. 2008. Effects of the Diabetes Manual 1: 1
structured education in primary care. Diabetic Medicine, 25, 722-731.
41. SUSAN SANDSTROM, M. 2006. Use of case studies to teach diabetes and other chronic
illnesses to nursing students. Journal of Nursing Education, 45, 229.
42. TRENTO, M., PASSERA, P., BORGO, E., TOMALINO, M., BAJARDI, M., CAVALLO, F. &
PORTA, M. 2004. A 5-year randomized controlled study of learning, problem solving ability,
and quality of life modifications in people with type 2 diabetes managed by group care. Diabetes
care, 27, 670-675.
43. UCHIDA, H. 2021. Reducing and widening disparities with blind evaluations: evidence from a
field experiment. Available at SSRN 3767565.
44. VAN DEN AREND, I., STOLK, R., RUTTEN, G. & SCHRIJVERS, G. 2000. Education
integrated into structured general practice care for Type 2 diabetic patients results in sustained
improvement of disease knowledge and self‐care. Diabetic Medicine, 17, 190-197.
45. VÉG, A. 2006. Teaching and Learning in Type 2 Diabetes: The Importance of Self-Perceived
Roles in Disease Management. Acta Universitatis Upsaliensis.
46. WHEELER, D. 2005. A taxonomy for learning, teaching and assessing. Revista Brasileira de
Aprendizagem Aberta e a Distância, 1.
47. YEONG, F. M., CHIN, C. F. & TAN, A. L. 2020. Use of a competency framework to explore the
benefits of student-generated multiple-choice questions (MCQs) on student engagement.
Pedagogies: An International Journal, 15, 83-105.
48. YIM, M. 2018. Comparison of results between modified-Angoff and bookmark methods for
estimating cut score of the Korean medical licensing examination. Korean journal of medical
education, 30, 347.
49. ZHAI, X., HAUDEK, K. C., WILSON, C. & STUHLSATZ, M. A framework of construct-
irrelevant variance for contextualized constructed response assessment. Frontiers in Education,
2021. Frontiers, 751283.