Case Study: “What If” Scenario Analyses for Likert-Scale Data in a Public Health Dissertation


1. Study Context & Motivation

In a Public Health PhD dissertation, 413 healthcare workers rated their perceptions of operational protocols on a 10-item, 5-point Likert scale. Responses were collapsed and aggregated into 3-point composites, and the core analyses used chi-square tests on a derived categorical "high agreement" variable. Stakeholders then asked:

  • What if we defined “high agreement” more stringently?
  • What if we included the mid-category (“neutral”) as “agreement”?
  • What if we analyzed composite scores as numeric rather than categorical?

To inform robust decision-making, the research team ran three “What If” scenario analyses on the Likert pipeline.


2. Defining the Scenarios

  1. Stringent Threshold
    • Original: Only composite scores equal to 3 (“Agree”) qualify as “high agreement.”
    • Variation: Require composite = 3 and key item-level scores ≥ 4 on the original 5-point scale (i.e., checked before collapsing).
  2. Inclusive Threshold
    • Original: Composite = 3 only.
    • Variation: Treat both composite = 2 (“Neutral”) and composite = 3 as “high agreement,” increasing the number of respondents counted as agreeing in the inferential tests.
  3. Continuous Composite Analysis
    • Original: Composite collapsed to categorical {1,2,3}.
    • Variation: Model the composite directly, as an ordered outcome in ordinal logistic regression and as a numeric score (the raw composite mean, ranging 1–3) in linear regression, preserving the scale information.

3. Software & Workflow

3.1 Baseline Pipeline Recap

  • Data Prep: likert_transform.R collapses the 5-point items to 3 points, computes composites, and exports likert_masterchart.csv (sketched after this list).
  • Inferential: inferential_tests.R runs chi-square on high_agreement vs. hospital_type.
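For orientation, the collapse-and-composite step looks roughly like the sketch below. It is illustrative, not the original likert_transform.R: the input file name likert_raw.csv, the collapse rule (1–2 → Disagree, 3 → Neutral, 4–5 → Agree), and the _c3/_mean column names are assumptions consistent with the composite coding used later.

# likert_transform.R (illustrative sketch, not the original script)
raw <- read.csv("likert_raw.csv")  # hypothetical raw export with items Q1..Q10 coded 1-5
items <- paste0("Q", 1:10)

# Collapse each 5-point item to 3 points: 1-2 -> 1 (Disagree), 3 -> 2 (Neutral), 4-5 -> 3 (Agree)
collapse3 <- function(x) cut(x, breaks = c(0, 2, 3, 5), labels = FALSE)
collapsed <- as.data.frame(lapply(raw[items], collapse3))
names(collapsed) <- paste0(items, "_c3")

# Raw composite mean (ranges 1-3) and a categorical version rounded to {1, 2, 3}
raw$composite_protocol_mean <- rowMeans(collapsed)
raw$composite_protocol      <- round(raw$composite_protocol_mean)

write.csv(cbind(raw, collapsed), "likert_masterchart.csv", row.names = FALSE)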

3.2 Implementing “What If” Variations

All variations were orchestrated in an expanded likert_whatif.R script:

df <- read.csv("likert_masterchart.csv")

# Scenario 1: Stringent threshold
df$stringent_agree <- with(df, ifelse(composite_protocol == 3 & 
                                       Q1>=4 & Q4>=4 & Q7>=4, 1, 0))

# Scenario 2: Inclusive threshold
df$inclusive_agree <- with(df, ifelse(composite_protocol >= 2, 1, 0))

# Scenario 3: Continuous composite
# Use composite_protocol as numeric predictor

For each scenario, we ran the following tests (a consolidated, runnable sketch follows this list):

  • Chi-Square: chisq.test(table(df$<scenario>_agree, df$hospital_type))
  • Ordinal Logistic Regression: MASS::polr(factor(composite_protocol) ~ hospital_type + covariates, data=df)
  • Linear Regression: lm(composite_protocol ~ hospital_type + covariates, data=df)
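Putting these pieces together, the fragments above can be run as a single pass over the data frame. The sketch below is illustrative rather than a copy of likert_whatif.R: the covariate names years_of_service and age are assumptions based on the covariates mentioned in Section 4.

library(MASS)  # polr() for the ordinal logistic regression

# Chi-square test of each binary agreement flag against hospital type
for (s in c("stringent", "inclusive")) {
  flag <- df[[paste0(s, "_agree")]]
  print(chisq.test(table(flag, df$hospital_type)))
}

# Scenario 3: model the composite itself, ordinal and linear
ord_fit <- polr(factor(composite_protocol) ~ hospital_type + years_of_service + age,
                data = df, Hess = TRUE)  # Hess = TRUE so summary() reports standard errors
lin_fit <- lm(composite_protocol ~ hospital_type + years_of_service + age, data = df)
summary(ord_fit)
summary(lin_fit)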

Results and diagnostic plots (mosaic plots, residuals, proportional-odds assumption checks) were exported to /whatif_results/.
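A minimal sketch of that export step, assuming the ord_fit and lin_fit objects from the block above; the brant package is one common way to test the proportional-odds assumption, though the original scripts may use a different check:

dir.create("whatif_results", showWarnings = FALSE)  # relative path used here for portability

# Mosaic plot of one scenario flag against hospital type
png("whatif_results/mosaic_inclusive.png")
mosaicplot(table(df$inclusive_agree, df$hospital_type),
           main = "Inclusive threshold vs. hospital type")
dev.off()

# Standard residual diagnostics for the linear model
png("whatif_results/lm_residuals.png")
par(mfrow = c(2, 2))
plot(lin_fit)
dev.off()

# Proportional-odds assumption check for the ordinal model
print(brant::brant(ord_fit))  # install.packages("brant") if not already available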


4. Key Findings

Scenario | “High Agreement” Rate (Public) | “High Agreement” Rate (Private) | Statistical Outcome
Baseline | 42.2 % | 60.1 % | χ²(1)=…, p<0.001
Stringent Threshold | 28.6 % | 48.9 % | χ²(1)=…, p<0.01 (stronger effect size)
Inclusive Threshold | 75.5 % | 82.3 % | χ²(1)=…, p=0.05 (effect attenuated)
Continuous Composite | N/A (mean scores) | N/A (mean scores) | OR=2.1 (95 % CI 1.4–3.2) in ordinal model; β=0.35, p<0.001 in linear model

  • Stringent Threshold magnified group differences, highlighting the robustness of private-hospital staff’s stronger agreement.
  • Inclusive Threshold reduced statistical significance, suggesting mid-scale responders dilute the effect.
  • Continuous Analysis preserved scale information, yielding consistent effect estimates and enabling adjustment for covariates (e.g., years of service, age).
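For reference, the odds ratio and 95 % confidence interval reported for the ordinal model can be pulled from a fitted polr object along these lines (a sketch, assuming the ord_fit object from Section 3):

# Exponentiate coefficients and profile-likelihood CIs to get odds ratios
or <- exp(coef(ord_fit))
ci <- exp(confint(ord_fit))  # confint() on a polr fit profiles the likelihood
cbind(OR = or, ci)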

5. Implications & Recommendations

  • Threshold Selection Matters: Choice of collapsing rule can substantially alter effect sizes and p-values—researchers should pre-register threshold criteria or perform sensitivity checks.
  • Mid-Category Insights: Including “neutral” responses may reflect genuine ambivalence rather than partial agreement; consider reporting both versions.
  • Utility of Continuous Models: Treating composites as numeric preserves information and allows for covariate adjustment via regression, offering nuanced insights beyond categorical splits.

6. Takeaways for “What If” Analyses

  1. Parameterize Pipelines: Build flexible scripts (likert_whatif.R) that accept threshold parameters for easy reruns (see the sketch after this list).
  2. Automate Diagnostics: For each scenario, automate goodness-of-fit and assumption checks (e.g., proportional-odds test for ordinal models).
  3. Document All Variations: Maintain a logbook of scenarios tested, their rules, and resulting metrics to support reproducibility and transparency.
  4. Visualize Comparisons: Use side-by-side bar charts and overlaid density plots to illustrate how scenario definitions shift distributions.
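As an example of Takeaway 1, a small helper of the kind likert_whatif.R could expose makes a new threshold rule a one-line rerun rather than a copy-pasted block. The function name and arguments below are illustrative, not taken from the original script:

# Hypothetical helper: derive a binary agreement flag from a threshold rule.
#   composite_cut : minimum composite category counted as agreement
#   item_cols     : optional item columns that must also reach item_cut (original 1-5 scale)
derive_agreement <- function(data, composite_cut = 3, item_cols = NULL, item_cut = 4) {
  flag <- data$composite_protocol >= composite_cut
  if (!is.null(item_cols)) {
    flag <- flag & apply(data[item_cols] >= item_cut, 1, all)
  }
  as.integer(flag)
}

# The baseline rule and both threshold scenarios become one call each
df$baseline_agree  <- derive_agreement(df, composite_cut = 3)
df$stringent_agree <- derive_agreement(df, composite_cut = 3, item_cols = c("Q1", "Q4", "Q7"))
df$inclusive_agree <- derive_agreement(df, composite_cut = 2)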

This “What If Variations” case study demonstrates how sensitivity analyses on Likert-scale transformations can strengthen the credibility and applicability of findings in a Public Health dissertation.



Want to explore more PhD-level case studies? Check out our Comprehensive Case Studies on PhD Statistical Analysis guide page.

