When Professor Izadi is interested in determining the association between two variables in educational research, the workflow extends far beyond running a basic correlation test. For Dr. Leila Izadi, a tenured educational researcher at Midwestern State University, this inquiry focuses on the relationship between weekly peer feedback engagement in online STEM courses and end-of-semester course satisfaction scores for non-traditional undergraduate students. The following sections break down the full research design, statistical validation steps, and real-world applications of Izadi’s work, offering actionable insights for students, new researchers, and educators looking to conduct their own association studies.
Introduction
Dr. Leila Izadi has spent the past 12 years studying retention and engagement gaps in postsecondary STEM education, with a specific focus on marginalized student populations. When Professor Izadi is interested in determining the association between understudied variables, she prioritizes research questions that have direct, actionable implications for classroom practice — a core tenet of her work that sets it apart from purely theoretical association studies.
The current project, launched in 2023, addresses a critical gap in existing literature: while most studies on online learning engagement focus on traditional 18–22-year-old undergraduates, fewer than 15% examine how non-traditional students (defined as those over 25, working full-time, or with dependent care responsibilities) interact with peer feedback tools. Izadi hypothesizes a positive association between consistent peer feedback participation and course satisfaction, but notes that confounding variables like prior STEM experience, internet reliability, and work schedule flexibility could skew results if not properly controlled.
Steps Professor Izadi Follows to Determine Association
When Professor Izadi is interested in determining the association between two core variables, she follows a standardized, peer-reviewed workflow to ensure results are reproducible and statistically valid. The full process includes six core steps:
- Define clear operational variables: First, Izadi specifies exactly how each variable will be measured to avoid ambiguity. For the current study, "peer feedback engagement" is operationalized as the number of substantive comments (≥50 words, referencing course material) left on classmates’ discussion posts per week. "Course satisfaction" is measured via a validated 10-item scale administered in the final week of the semester, with scores ranging from 1 (highly dissatisfied) to 5 (highly satisfied).
- Conduct a power analysis: Before recruiting participants, Izadi uses G*Power software to calculate the minimum sample size needed to detect a medium effect size (Cohen’s d = 0.5) with 80% statistical power. For this study, the calculation returned a minimum of 128 participants, so she recruited 150 non-traditional undergraduate students enrolled in online biology and chemistry courses across three public universities.
- Control for confounding variables: To isolate the association between peer feedback and satisfaction, Izadi collects data on potential confounders via a pre-semester survey. These include prior college STEM credits, weekly hours spent working, reliability of home internet access, and caregiving responsibilities. This data is later included as covariates in regression models to reduce bias.
- Collect longitudinal data: Unlike cross-sectional studies that measure variables at a single point in time, Izadi collects engagement data weekly throughout the 16-week semester, then administers the satisfaction survey at week 17. This longitudinal approach helps establish temporal precedence — a key requirement for inferring that engagement may influence satisfaction, rather than the reverse.
- Run preliminary correlation tests: Before moving to complex models, Izadi calculates Pearson’s r correlation coefficients to check for initial linear associations. For the current study, preliminary results show a moderate positive correlation (r = 0.42, p < 0.01) between average weekly peer feedback comments and final satisfaction scores.
- Validate results with multiple models: To ensure robustness, Izadi runs three separate statistical models: a simple linear regression, a multiple linear regression with covariates, and a hierarchical linear model to account for clustering (students nested within courses). If all three models show a significant association, she proceeds to interpret results.
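The power analysis in step 2 can be approximated in Python. Note this is an illustrative sketch, not Izadi's actual procedure: the study used G*Power, which applies an exact small-sample t correction, while the function below uses the standard normal approximation for a two-sample t-test. The inputs (d = 0.5, α = 0.05, power = 0.80) come from the study description.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sample t-test.

    n per group = 2 * (z_{1 - alpha/2} + z_{1 - beta})^2 / d^2
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(0.5)   # medium effect, Cohen's d = 0.5
print(n, 2 * n)        # 63 per group, 126 total under the normal approximation
```

The normal approximation gives 63 per group (126 total); the small-sample t correction used by G*Power nudges this up slightly, consistent with the minimum of 128 participants reported above.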
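Steps 5 and 6 might look like the following sketch. Izadi's actual analysis uses R and SPSS; this version uses Python's statsmodels on synthetic data, and every variable name (avg_comments, work_hours, course) is invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 150
df = pd.DataFrame({
    "course": rng.integers(0, 3, n),                  # students nested in 3 courses
    "avg_comments": rng.poisson(3, n).astype(float),  # weekly peer-feedback comments
    "work_hours": rng.uniform(0, 40, n),              # covariate: hours worked per week
})
# Synthetic satisfaction (1-5) loosely driven by engagement, minus a work-hours penalty
df["satisfaction"] = np.clip(
    2.5 + 0.3 * df["avg_comments"] - 0.02 * df["work_hours"] + rng.normal(0, 0.5, n),
    1, 5,
)

# Step 5: preliminary linear association
r, p = pearsonr(df["avg_comments"], df["satisfaction"])

# Step 6: three models, from simplest to most structured
simple = smf.ols("satisfaction ~ avg_comments", df).fit()
with_covariates = smf.ols("satisfaction ~ avg_comments + work_hours", df).fit()
hierarchical = smf.mixedlm("satisfaction ~ avg_comments + work_hours",
                           df, groups=df["course"]).fit()

# The association counts as robust only if the engagement coefficient
# is significant in all three models.
for model in (simple, with_covariates, hierarchical):
    print(round(model.params["avg_comments"], 3),
          round(model.pvalues["avg_comments"], 4))
```

The hierarchical model (`mixedlm` with `groups=`) is what accounts for students being clustered within courses; with real data, the three engagement coefficients would be compared for sign, size, and significance before interpretation.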
Scientific Explanation of Association vs. Causation
A common misconception in educational research is that finding an association between two variables means one causes the other. When Professor Izadi is interested in determining an association, she is careful to note that association only describes a relationship, not a causal link. For example, the moderate positive correlation found in her preliminary data does not prove that peer feedback increases satisfaction — it only shows that students who engage more with peer feedback tend to report higher satisfaction. The reverse could equally be true (more satisfied students may simply comment more), which is one reason the longitudinal design in step 4 matters.
Key Statistical Metrics for Association Studies
To determine association, researchers rely on several key statistical metrics:
- Pearson’s r: Measures the strength and direction of a linear relationship between two continuous variables, ranging from -1 (perfect negative association) to +1 (perfect positive association). A value of 0 indicates no linear association.
- Cohen’s d: Measures effect size, or the practical significance of an association, rather than just statistical significance. A d of 0.2 is considered small, 0.5 medium, and 0.8 large.
- p-value: Indicates the probability of observing an association at least as strong as the one found if no true association exists. A p-value of <0.05 is the standard threshold for statistical significance in educational research.
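To make the effect-size metric concrete, Cohen's d for two groups is the difference in means divided by the pooled standard deviation. The sketch below uses made-up satisfaction scores, not the study's data:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
        / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Two hypothetical satisfaction-score samples (1-5 scale)
high_engagers = [4.2, 4.5, 3.9, 4.8, 4.1]
low_engagers = [3.6, 3.9, 3.4, 4.0, 3.5]
print(round(cohens_d(high_engagers, low_engagers), 2))  # → 2.0
```

A d of 2.0 would be far beyond Cohen's "large" benchmark of 0.8 — an artifact of the tiny, low-noise toy samples, which is exactly why effect sizes are reported alongside p-values rather than in place of them.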
Avoiding Spurious Associations
Izadi also emphasizes the importance of ruling out spurious associations — relationships that appear real but are actually driven by a third, unmeasured variable. Here's one way to look at it: if students with higher prior STEM knowledge both engage more with peer feedback and report higher satisfaction, prior knowledge is a confounding variable that creates a spurious association between engagement and satisfaction. This is why step 3 of her workflow (controlling for confounders) is critical.
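A quick simulation makes the point. Below, a hypothetical confounder (standing in for prior STEM knowledge) drives both engagement and satisfaction, producing a sizeable raw correlation that largely disappears once the confounder is partialled out. All data here is simulated for illustration, not drawn from the study:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 500
prior_knowledge = rng.normal(size=n)                 # the unmeasured third variable
engagement = prior_knowledge + rng.normal(size=n)    # driven by the confounder
satisfaction = prior_knowledge + rng.normal(size=n)  # also driven by it

raw_r, _ = pearsonr(engagement, satisfaction)

# Partial correlation: correlate the residuals left over after regressing
# each variable on the confounder (i.e., after removing its influence).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r, _ = pearsonr(residuals(engagement, prior_knowledge),
                        residuals(satisfaction, prior_knowledge))

print(round(raw_r, 2), round(partial_r, 2))  # raw ~0.5, partial ~0
```

Controlling for the confounder (here via residualization; in the study, via covariates in regression models) is what separates a real association from a spurious one.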
Frequently Asked Questions
- Why does Izadi focus on non-traditional students for this association study? Non-traditional students make up 37% of all undergraduate enrollments in the U.S., but are 22% less likely to persist in STEM majors than their traditional peers. When Professor Izadi is interested in determining the association between engagement and satisfaction, she prioritizes populations where small improvements in satisfaction could have large impacts on retention and graduation rates.
- Can this association be generalized to traditional undergraduate students? Izadi notes that the current study’s results may not apply to traditional students, as their daily schedules, financial pressures, and academic supports differ significantly. She plans to launch a parallel study for traditional undergraduates in 2025 to compare association patterns across populations.
- How long does it take to complete a full association study? For longitudinal studies like Izadi’s, the full process from variable definition to result publication takes 18–24 months. Cross-sectional association studies, which measure variables at a single point in time, can be completed in 3–6 months but are more prone to bias.
- What tools does Izadi use to analyze association data? She primarily uses R and SPSS for statistical modeling, Qualtrics for survey administration, and Learning Management System (LMS) analytics tools to automatically track peer feedback engagement without manual data entry.
Conclusion
When Professor Izadi is interested in determining the association between educational variables, her work demonstrates that rigorous methodology, careful variable definition, and transparent reporting are far more valuable than flashy statistical results. The current study’s preliminary findings suggest that even small increases in peer feedback engagement could meaningfully improve course satisfaction for non-traditional STEM students — a result that could inform LMS design, instructor training, and institutional retention policies nationwide.
For students and early-career researchers looking to conduct their own association studies, Izadi’s workflow offers a replicable template: start with a clear, actionable research question, control for confounders, collect longitudinal data when possible, and always distinguish between association and causation in reporting. As educational research continues to prioritize equity and real-world impact, studies like Izadi’s prove that even seemingly small associations can drive large-scale improvements in student outcomes.