/r/statistics

/r/Statistics is going dark from June 12-14th as an act of protest against Reddit's treatment of 3rd party app developers.

This community will not grant access requests during the protest. Please do not message asking to be added to the subreddit.


Guidelines:

  1. All Posts Require One of the Following Tags in the Post Title! If you do not tag your post, AutoModerator will delete it:

Tag            Abbreviation
[Research]     [R]
[Software]     [S]
[Question]     [Q]
[Discussion]   [D]
[Education]    [E]
[Career]       [C]
[Meta]         [M]
  • This is not a subreddit for homework questions. They will be swiftly removed, so don't waste your time! Please kindly post those over at: r/homeworkhelp. Thank you.

  • Please try to keep submissions on topic and of high quality.

  • Just because it has a statistic in it doesn't make it statistics.

  • Memes and image macros are not acceptable forms of content.

  • Self posts with throwaway accounts will be deleted by AutoModerator


    Useful resources for learning R:

  • r-bloggers - blog aggregator with statistics articles generally done with R software.

  • Quick-R - great R reference site.


  • Related Software Links:

  • R

  • RStudio

  • SAS

  • Stata

  • EViews

  • JMP

  • SPSS

  • Minitab


  • Advice for applying to grad school:

  • Submission 1


  • Advice for undergrads:

  • Submission 1



    /r/statistics

    588,891 Subscribers

    1

    [Q] resources for brushing up on experimental design?

    I have an internship interview at a biopharma company. I've been out of school for two years in a non-statistics job and I'm quite rusty. I remember the experimental design class I took was incredibly difficult for me. Does anyone have any resources for brushing up on experimental design, especially mixed effects and contrasts?

    My apologies if this isn’t an appropriate post, I didn’t see anything against it in the sub rules.

    0 Comments
    2025/02/01
    21:31 UTC

    0

    [Q] which math course will be more helpful in the long run as a stats major?

    I'm a former math major and fulfilled most of my lower-division requirements (calculus 1-4, discrete math 1-2, linear algebra, diff eqs, a course using Maple, and an upper-division biological math course), but I couldn't stand the proof-based upper-division math courses, which is why I am making the change to statistics.

    Originally I was going to take 2 statistics courses in the upcoming semester, but unfortunately I am only allowed to take one, so I'm figuring out what to fill the second slot with. I'm debating between a course in Set Theory and one in Discrete Mathematics. Although I have already seen content from both courses, I figured this would be a good opportunity to brush up on my proof-writing skills, as it is my understanding that statistics programs still require proofs (although not as rigorous as those in a math program).

    On the one hand, I think Set Theory would be better for practicing proofs, as set theory is the basis for all math. On the other hand, Discrete Mathematics focuses on combinatorics and counting, which I believe is essential for probability (and even though I already took Discrete Math, I'm terrible at counting, so it would be a good refresher too). Do you guys have any advice on the conundrum I find myself in?

    4 Comments
    2025/02/01
    19:30 UTC

    3

    [Q] What to do when a great proportion of observations = 0?

    I want to run an OLS regression, where the dependent variable is expenditure on video games.

    The data is normally distributed and perfectly fine apart from one thing: about 16% of observations = 0 (i.e., 16% of households don't buy video games). 1100 observations.

    This creates a huge spike to the left of my data distribution, which is otherwise bell curve shaped.

    What do I do in this case? Is OLS no longer appropriate?

    I am a statistics novice, so this may be a simple question, or I may have said something naive.
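
    One option often suggested for semicontinuous outcomes like this is a two-part (hurdle) model rather than a single OLS fit. A minimal sketch in R, assuming a data frame d with columns spend, income, and age (hypothetical names):

        # Part 1: who buys at all? (logistic regression on the zero/positive split)
        d$buys <- as.numeric(d$spend > 0)
        part1 <- glm(buys ~ income + age, data = d, family = binomial)

        # Part 2: how much, among buyers? (OLS on the positive spenders,
        # often on the log scale to tame right skew)
        part2 <- lm(log(spend) ~ income + age, data = d, subset = spend > 0)

        summary(part1); summary(part2)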

    14 Comments
    2025/02/01
    18:43 UTC

    1

    [Q] Logistic regression likelihood vs probability

    How can the logistic regression curve represent both the likelihood and the probability?

    I understand from a continuous normal distribution perspective that probability represents the area under the curve. I also understand that likelihood represents a single observation. So on a normal distribution you can find the probability by calculating the area under the curve and you can find the likelihood of a particular observation by observing the value of the y-axis with respect to a single observation.

    However, it gets strange when I look at a logistic regression curve, I guess because the area is being calculated differently? So, for logistic regression, you are measuring the probability of a binary on the y axis. However, this can also represent the likelihood, especially if you pick an observation and trace it over to the y axis.

    So how is probability different (or the same) for a logistic regression curve compared to a continuous normal distribution? Is probability still measured in the sense that you can compute an area between two points (would it be over the curve instead of under)?
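
    For reference, the two objects can be written side by side. The fitted curve gives a probability at each x, and the likelihood reuses those same values as a function of the parameters:

        P(Y = 1 \mid x) = p(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}

        L(\beta_0, \beta_1) = \prod_{i=1}^{n} p(x_i)^{y_i} \bigl(1 - p(x_i)\bigr)^{1 - y_i}

    No area under the logistic curve is involved: a binary outcome has a probability mass rather than a density, so the height of the curve at x is already a probability, and the likelihood is a product of those heights (or one minus them) evaluated at the observed data.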

    4 Comments
    2025/02/01
    17:43 UTC

    3

    [Q] Messed up on how I approach my dissertation for my Biostatistics PhD (wasted first semester) - Question on how to move forward

    I am a third-year deaf PhD student transitioning from coursework to research on my thesis. My advisor gave me a research problem and the statistical method to address it, and I was also assigned a postdoc to work with.

    I am not the smartest person, and I have very bad social skills.

    I thought the manuscript was supposed to be written at the end, not as you go (proving properties, writing the background, and formulating simulation studies). I spent the first semester coding the method and trying some random simulation studies rather than proving the properties, which is what my advisor and postdoc had suggested. I did not take writing the manuscript very seriously at first and treated it as a bunch of notes.

    I think I frustrated my advisor and postdoc (who are more tutors than collaborators), may have damaged those relationships, and may have delayed the completion of my degree by who knows how long. The postdoc did say my project was straightforward, as it is concrete and the results should be easy to visualize. I did have another, applied project on which I was able to make progress, though there were some hiccups (some not on my side, as the other person did not provide data).

    I am just wondering how to move forward. What should I expect for the simulation studies and real data analysis? I can now visualize the steps of the simulation studies on my own.

    My topic has elements of high-dimensional statistics.

    5 Comments
    2025/02/01
    15:05 UTC

    4

    [Q] I have to take an MCQ test in a few weeks and need some statistics for this. This is not a homework problem.

    Suppose there is a test where 4 marks are awarded for each correct answer and 1 mark is deducted for each incorrect answer. No marks are given for unattempted questions. There are four choices for every MCQ and only one is correct.

    If I only know the answers to a few questions, should I guess on the rest or leave them unattempted?
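
    For a single blind guess among four choices, this marking scheme gives a positive expected score:

        E[\text{blind guess}] = \tfrac{1}{4}(+4) + \tfrac{3}{4}(-1) = +0.25

    and eliminating even one option raises it to \tfrac{1}{3}(+4) + \tfrac{2}{3}(-1) = +\tfrac{2}{3} marks per question.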

    4 Comments
    2025/02/01
    14:45 UTC

    1

    [Q] Any good (non-textbook) book that gives a brief introduction to the major fields of statistics?

    I know Wikipedia is a source, but nothing beats a well-written book from experts in the field. Every day I come across new statistical terminology and subfields, and I would love to know what's going on there.

    4 Comments
    2025/02/01
    00:43 UTC

    28

    [C] How to internalize what you learn to become a successful statistician?

    For context, I'm currently pursuing an MSc in Statistics. I usually hear statisticians on the job saying things like "people usually come up to me for stats help" or "I can't believe people at my work do X and Y, goes to show how little people know about statistics". Even though I'm a master's student, I don't feel like I have a solid grasp of statistics in a practical sense. I'm killer with all the math-y stuff; I got an A+ in my math stats class. It may have been due to the fact that I skipped the Regression Analysis course in undergrad, where one would work on more practical problems. I'm currently an ML research intern and my stats knowledge is not proving to be helpful at all; I don't even know where to apply what I'm learning.

    I'm going to try to go through the book "Regression and Other Stories" by Gelman to get a better sense of regression, which should cover my foundation for applied problems. Are there any other resources or tips you have for becoming a well-rounded statistician who could be useful in a variety of different fields?

    9 Comments
    2025/01/31
    23:49 UTC

    46

    [D] US publicly available datasets going dark

    0 Comments
    2025/01/31
    23:08 UTC

    2

    [Question] French game show's similarity to the Monty Hall scenario

    There is an old French TV game show that just restarted after a long time off the air. During the final, a contestant was currently winning a pack of cards and was given 4 screens to choose from. The host explained: one screen was "keep your pack of cards," one was a crappy thing, one was a decent thing, and one was a car.

    So at that point I got strong Monty Hall vibes watching this. The contestant initially thought screen 1, but asked his friends in the audience to join him and discuss, and afterwards he hesitated between screens 1 and 4. Then the contestant asked the host if he could start by ditching screens 2 and 3, and the host said sure, why not. The two eliminated screens turned out to be the pack of cards and the crappy thing, leaving the decent thing and the car. The contestant then followed his friends' advice, chose screen 4, and got the car.

    I'm wondering how applicable Monty Hall logic is here.

    • The contestant was never officially given a choice to switch; he was hesitating because of his friends.
    • The contestant, not the host, chose which screens to eliminate. It could have been the car, but it wasn't, so technically it played out like the "two goat reveal" of the Monty Hall problem.
    • At this point, does Monty Hall logic apply, and did he have a better chance by choosing screen 4 like he did? It feels to me like yes: because the crap got eliminated we are back to a Monty Hall setup, and he had a 1/4 chance of picking the correct screen at the beginning, so switching is better. Can someone who knows more about probability confirm? I don't know whether any of these events change the probability distribution compared to a standard Monty Hall.
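
    A quick Monte Carlo sketch in R of this exact variant (the contestant holds screen 1, screens 2 and 3 are eliminated blindly, and only games where the car survives the elimination are kept):

        set.seed(1)
        n <- 1e6
        car <- sample(1:4, n, replace = TRUE)   # the car is placed uniformly at random
        # contestant's initial pick is screen 1; screens 2 and 3 are ditched blindly
        survived <- car != 2 & car != 3          # keep games where the car was not eliminated
        mean(car[survived] == 1)                 # P(car behind original pick | car survived)
        mean(car[survived] == 4)                 # P(car behind screen 4 | car survived)

    Both estimates come out near 1/2 rather than the 1/4 vs 3/4 of a standard Monty Hall: when the eliminator does not know where the car is, conditioning on the car surviving makes the two remaining screens equally likely.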

    5 Comments
    2025/01/31
    22:38 UTC

    1

    [Question] Hayes Process Model 7 Moderated Mediation Analysis (Insignificant moderation effect, but significant mediation effect- how to report?)

    Hello,

    I am currently working on a paper. I have already done a multiple mediation analysis with 3 mediators.

    I decided to add sex as a moderator, as in my descriptive stats sex indicated a significant difference between scores.

    The index of moderated mediation is non-significant, so I know that sex does not moderate the X > Med > Y relationship. Would I report the normal a/b pathways as I would in a multiple mediation analysis, OR would I report the interaction pathways as I would in a moderated mediation?

    Please note that using the usual pathways keeps my mediation effect significant (as it was before adding a moderator); if I use the interaction pathways, it is no longer significant... So I assume we would not use the interaction pathways, as the moderation is not significant?

    Please let me know!!!!

    3 Comments
    2025/01/31
    22:16 UTC

    0

    [Q] statistics is hard

    [Q] 2010-2020 unemployment rate for Phoenix, AZ is given under the attached

    1. Find the average unemployment rate
    2. Find the standard deviation of the data set
    3. Find the five-number summary for the data and construct a box-plot

    And they provide me with the longest list of unemployment rates. How am I supposed to find those three with so many numbers? Help please.
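
    Assuming the rates end up in a numeric vector, the three pieces are one-liners in R (illustrated with hypothetical values):

        x <- c(9.1, 8.4, 7.6, 6.8, 6.1, 5.3, 4.7, 4.4, 4.2, 4.6, 7.9)  # hypothetical rates
        mean(x)      # average unemployment rate
        sd(x)        # standard deviation
        fivenum(x)   # five-number summary: min, Q1, median, Q3, max
        boxplot(x)   # box plot built from that same summary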

    11 Comments
    2025/01/31
    21:46 UTC

    6

    [D] Analogies are very helpful for explaining statistical concepts, but many common analogies fall short. What analogies do you personally use to explain concepts?

    I was looking at, for example, this set of 25 analogies (PDF warning), but frankly I find many of them extremely lacking. For example:

    The 5% p-value has been consolidated in many environments as a boundary for whether or not to reject the null hypothesis with its sole merit of being a round number. If each of our hands had six fingers, or four, these would perhaps be the boundary values between the usual and unusual.

    This, to me, reads as not only nonsensical but also as failing to get at any underlying statistical idea, and it certainly bears no relation to the origin or initial purpose of the figure.

    What (better) analogies or mini-examples have you used successfully in the past?

    11 Comments
    2025/01/31
    20:24 UTC

    2

    [R] Layers of predictions in my model

    Current standard in my field is to use a model like this

    Y = b0 + b1x1 + b2x2 + e

    In this model, x1 and x2 are used to predict Y, but there's a third predictor x3 that isn't used, simply because it's hard to obtain.

    Some people have had some success predicting x3 from x1:

    x3 = a*x1^b + e (I’m assuming the error is additive here but not sure)

    Now I’m trying to see if I can add this second model into the first:

    Y = b0 + b1x1 + b2x2 + a*x1^b + e

    So here now, I’d need to estimate b0, b1, b2, a and b.

    What would be your concerns with this approach? What are some things I should be careful about when doing this? How would you advise I handle my error terms?
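
    For concreteness, the combined model can be fit in one shot by nonlinear least squares. A minimal sketch in R, assuming a data frame d with columns y, x1, x2 (hypothetical names) and a single additive error:

        # Fit Y = b0 + b1*x1 + b2*x2 + a*x1^b + e by nonlinear least squares.
        # Start b away from 1: at b = 1 the term a*x1^b is perfectly collinear
        # with b1*x1, so the gradient is singular and the fit cannot separate them.
        fit <- nls(y ~ b0 + b1*x1 + b2*x2 + a*x1^b,
                   data  = d,
                   start = list(b0 = 0, b1 = 1, b2 = 1, a = 1, b = 0.5))
        summary(fit)

    One thing to watch: whenever the estimated b drifts near 1, a and b1 are only weakly identified, so standard errors on those parameters can blow up even when the overall fit looks fine.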

    5 Comments
    2025/01/31
    19:32 UTC

    4

    [Education] Interactive Explanation to ROC AUC Score

    Hi Community,

    I worked on an interactive tutorial on the ROC curve, AUC score and the confusion matrix.

    https://maitbayev.github.io/posts/roc-auc/

    Any feedback appreciated!

    Thank you!

    3 Comments
    2025/01/31
    19:14 UTC

    4

    [Q] Determining Periodicity on Granular Time Series data

    TLDR: any resources or suggestions on how to decompose time series data logged at the millisecond level?

    I am trying to apply practical methods of time series decomposition to server memory telemetry (e.g., % of memory used over time). The data is captured at the millisecond level, spans ~1.5 days, and is stationary. However, I am having a very hard time settling on the best approach to decomposing it using something like STL. From the plotted data I can see there is certainly seasonality (and/or perhaps more irregular cyclicality) that I would like to remove, but determining the correct periodicity to use is hindering my work. Due to the granularity of the data, it's nearly impossible to eyeball even a rough guess at the periodicity.

    In yearly, monthly, or weekly time series you have a sense of the periodicity to work from, but I don't really have that sense given the scale of this data. I've done some basic ACF/PACF to look at lagged values; the plots show steady drop-offs in correlation over time before stabilizing. I've also done some very elementary frequency testing to try to establish the ideal period to use, but I need to be more methodical. Most of the resources I've found online don't cover advanced cases of time series decomposition, and certainly not in the context of very granular time intervals.

    My ultimate goal in decomposition is to detrend the data and analyze the residuals so that I can compare multivariate data across memory usage, swap usage, and other telemetry time series.
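
    One methodical route is to let the periodogram nominate the period instead of eyeballing. A minimal sketch in R, assuming the series is in a numeric vector x sampled at regular intervals (with millisecond data you would likely aggregate, e.g. to seconds, first):

        # Estimate the dominant frequency with a raw periodogram,
        # then hand the implied period to STL as the seasonal frequency.
        sp     <- spec.pgram(x, taper = 0, plot = FALSE)
        period <- 1 / sp$freq[which.max(sp$spec)]   # samples per cycle at the spectral peak

        # Assumes the tallest spectral peak is the seasonality you care about
        fit <- stl(ts(x, frequency = round(period)), s.window = "periodic")
        plot(fit)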

    5 Comments
    2025/01/31
    14:25 UTC

    1

    [Q] Calculations on Nested ANOVA

    Excuse me for asking this.

    So, before the question, here's the scenario: it's an experiment with a nested design.

    There are 2 groups, say A and B. Under A there are 2 subgroups, W and X, each of which has 2 samples. Under B there are 2 subgroups, Y and Z, but here Y has 3 samples under it and Z has 1.

    The experiment is carried out in 3 replications. Each replication has 15 tests per sample.

    Now, when applying nested ANOVA after collecting the raw data, we calculate means.

    First, we calculate the mean for each sample across the 3 replications. For example, if 11, 10, and 12 out of 15 tests were positive for W1, the mean for W1 is MW1 = 11. We calculate similarly for all 8 samples.

    Next, we calculate the mean of each subgroup. Say MW1 = 10 and MW2 = 12; then the mean of subgroup W is 11. And so on for the other subgroups X, Y, and Z.

    Now, as we go to the mean of each group, there's a confusion.

    For example, if we calculate the mean of group B from its subgroups Y and Z: mean of Y = avg(MY1, MY2, MY3), mean of Z = avg(MZ1), and mean of B = avg(mean of Y, mean of Z). But if we were to calculate the mean of B from the individual sample values, i.e., mean of B = avg(MY1, MY2, MY3, MZ1), we would get a different value.

    That is obviously because of the different number of samples under each subgroup.

    But the question is: which one is more appropriate to use in the nested ANOVA calculation?

    The same thing happens when calculating the overall mean: overall mean = avg(mean of A, mean of B) {following the same nested order}, or overall mean = avg(MW1, MW2, MX1, MX2, MY1, MY2, MY3, MZ1) {calculated from the individual values}.

    The overall mean is used in calculating the sums of squares, so it's confusing which way is correct.
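
    For what it's worth, in R the nested ANOVA is usually fit from the individual values, so you can see both candidate means side by side and let aov handle the unbalanced nesting. A minimal sketch with hypothetical sample means:

        # Hypothetical sample means (one row per sample); B is unbalanced: Y has 3 samples, Z has 1
        d <- data.frame(
          group    = c("A","A","A","A","B","B","B","B"),
          subgroup = c("W","W","X","X","Y","Y","Y","Z"),
          m        = c(10, 12, 11, 13, 9, 10, 11, 14)
        )

        # Unweighted (mean of subgroup means) vs weighted (mean of all samples) for group B:
        with(d, mean(tapply(m[group == "B"], subgroup[group == "B"], mean)))  # unweighted: 12
        mean(d$m[d$group == "B"])                                             # weighted:   11

        # Nested ANOVA on the individual values; group/subgroup means subgroup nested in group
        summary(aov(m ~ group / subgroup, data = d))

    The sums of squares in the fitted model come from the individual observations (the weighted version); the two hand-computed means only coincide in balanced designs.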

    0 Comments
    2025/01/31
    13:51 UTC

    5

    [E] NSF Workshop: Advancing AI with Math and Stats Foundations 📊🤖

    The NSF is hosting a workshop on using mathematical and statistical foundations to advance AI! This event will explore how cutting-edge math and stats can drive innovation in AI, from theory to applications.

    📅 When: February 22–23, 2025

    📍 Where: Virtual

    The focus will be on:

    Strengthening AI’s theoretical underpinnings

    Addressing challenges in explainability, fairness, and robustness

    Bridging the gap between pure math/stats and practical AI tools

    Researchers, educators, and industry pros are encouraged to attend. Registration is free, but spots are limited!

    Details & registration: NSF Event Page

    1 Comment
    2025/01/31
    07:57 UTC

    7

    [Q] Self-learning statistics as an undergraduate science major

    Hello, I’m a second year undergraduate student majoring in neuroscience and minoring in mathematics. I’m doing a neuropsychology research internship at a hospital and I expressed a lot of interest in learning how to do anything mathematical/statistical and my supervisor said that I could potentially be involved in that. However, I don’t have much knowledge in statistics and I’ve never taken a statistics class.

    What are some good resources to efficiently self-learn statistics, especially statistics relating to biomedical research? I have money to buy textbooks but of course I prefer resources that are free. Also, I’ve taken math up to Calculus II and I’m currently taking Linear Algebra if that helps.

    12 Comments
    2025/01/31
    02:14 UTC

    34

    [Q] In his testimony, potential U.S. Health and Human Services secretary RFK Jr. said that 30 million American babies are born on Medicaid each year. What would that mean the population of the US is?

    By my calculation, 23.5% of Americans are on Medicaid (79 million out of 330 million). I believe births in the US as a percentage of population are about 1.1% (3.6 million out of 330 million). So, would RFK's math mean the U.S. has 11.6 billion people?

    Essentially: (30 million babies / .011 babies per person of U.S. population) / .235 (Medicaid population to total population).
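
    Checking that arithmetic in R:

        30e6 / 0.011           # ~2.73 billion people implied if 30M births are 1.1% of the population
        30e6 / 0.011 / 0.235   # ~11.6 billion once those births must all also be on Medicaid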

    17 Comments
    2025/01/31
    01:21 UTC

    6

    Summer before starting PhD [Education]

    Prep for Qualifying Exams

    I was accepted into a decent stats PhD program. While it's not my top choice due to funding concerns, department size, and research fit, it's the only acceptance I have and I am very grateful. I would like to prepare myself to pass a stats PhD program's quals.

    I am reasonably confident in my mathematical analysis training. I am taking measure theory at a grad level in my final semester of undergrad, which goes over Stein and Shakarchi. I also took some other grad math classes (I was a math major and I focused more heavily on machine learning and applied math than traditional parametric statistics).

    However, I fear that because I have not extensively practiced statistics and probability since I took the courses, I’m a little rusty on distributions and whatnot. I’ve been only taking math classes based on proofs for the last 1-2 years, and apart from basic integrals and derivatives, I’ve done few computations with actual numbers.

    Here and there, I did some questions on derivations of moments for transformations of Gaussian random variables, but I honestly forgot a lot of the formulas.

    Should I end up at this program, I will find an easier summer job so I can grind Casella and Berger this summer. I'm mainly fearful because a nontrivial number of the domestic students admitted fail the quals.

    Please, guys, do you have any recommendations / advice?

    4 Comments
    2025/01/30
    17:42 UTC

    1

    [Q] How do I get prevalence after adjusting for gender and age?

    12 Comments
    2025/01/30
    06:49 UTC

    29

    [Q] How exactly does one calculate and compare probabilities of getting bitten by Luis Suarez compared to a shark?

    During the 2014 World Cup, Uruguayan soccer player Luis Suarez bit an opposing player, his third career biting incident. Later, some news sources (reputable and non-reputable) cited a statistical estimate that one has a higher likelihood of being bitten by Suarez, at 1 in 2,000, than by a shark (at the time, 1 in 3.7 million).

    How the hell does one estimate this? Seems like an odd thought experiment

    13 Comments
    2025/01/30
    04:50 UTC

    19

    [Q] Going for a masters in applied statistics/biostatistics without a math background, is it achievable?

    I've been planning on going back to school and getting my masters, and I've been strongly considering applied statistics/biostatistics. I have my bachelor’s in history, and I've been unsatisfied with my career prospects (currently working in retail). I took an epidemiology course as part of a minor I took during undergrad (which sparked my interest in stats in the first place) and an introductory stats course at my local community college after graduation. I'm currently enrolled in a calculus course, since I will have to satisfy a few prerequisites. I'm also currently working on the Google Data Analytics course from Coursera, which includes learning R, and I have a couple projects lined up down the road upon completion of the course.

    Is it feasible to apply for these programs? I know that I've made it a little more difficult on myself by trying to jump into a completely different field, but I'm willing to put in the work. Or am I better off looking elsewhere?

    17 Comments
    2025/01/29
    19:57 UTC

    2

    [Q] Lin. Regression: Interpreting coefficient in mediation model

    Hello everyone,

    I have a question about interpreting the beta coefficient of a mediating variable from a regression table. The model consists of a categorical independent variable (sex; coded 1 = male, 2 = female), a numeric dependent variable (income), and a numeric mediating variable (workhours). The table reports a coefficient of 35.67 €/hour. I am wondering: is this the average increase in income per additional work hour for BOTH males AND females, or ONLY for males? Does this depend on the coding of sex? I struggle with the phrase "keeping sex constant".

    Help me understand this problem more generally. I want to get better at interpreting mediator coefficients.
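
    If it helps intuition, here is a quick simulation sketch in R (hypothetical numbers): with no sex-by-workhours interaction in the model, the workhours coefficient is a single slope shared by both sexes; only the intercepts differ.

        set.seed(1)
        n         <- 1000
        sex       <- sample(1:2, n, replace = TRUE)           # 1 = male, 2 = female
        workhours <- rnorm(n, mean = 35, sd = 8)
        income    <- 200 + 35.67 * workhours - 150 * (sex == 2) + rnorm(n, sd = 100)

        fit <- lm(income ~ factor(sex) + workhours)
        coef(fit)  # the workhours slope (~35.67) applies to both sexes;
                   # "keeping sex constant" means comparing people of the same sex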

    Thank you!

    3 Comments
    2025/01/29
    15:41 UTC

    3

    [Q] Question related to the Bernoulli distribution

    Let's say a coin flip comes up heads with probability p. Then after N flips I can expect, with 95% confidence, that the proportion of heads will lie in the interval (p - 2*sqrt(p*(1-p)/N), p + 2*sqrt(p*(1-p)/N)), right?

    Now suppose I have a number M much larger than N, on the order of 10 times as large, and an unknown p.

    I can estimate p by counting the number of successes in N trials, but how do I account for the uncertainty in p when forming a 95% range for a new batch of M coin flips? As I understand it, in the formula (p - 2*sqrt(p*(1-p)/N), p + 2*sqrt(p*(1-p)/N)) the value of p is known and certain; if I have to estimate p, how do I account for that extra uncertainty in the interval?
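
    One common normal-approximation fix is to fold the estimation error from the first N flips into the predictive variance for the new batch. A sketch in R with hypothetical counts:

        # Prediction interval for the proportion of heads in M future flips,
        # when p itself is estimated from N past flips. The predictive variance
        # adds the estimation variance p(1-p)/N to the sampling variance p(1-p)/M.
        N    <- 1000
        x    <- 612                      # hypothetical heads observed in the N flips
        M    <- 10 * N
        phat <- x / N
        se   <- sqrt(phat * (1 - phat) * (1/N + 1/M))
        c(lower = phat - 2*se, upper = phat + 2*se)   # ~95% interval for the future proportion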

    9 Comments
    2025/01/29
    12:40 UTC

    0

    [Q] What would be a good model for single subject intense longitudinal data?

    I've been tracking some of my own data on a daily basis over the last two years. It's mostly habits and biometric data such as step count for a total of 18 variables. I'd like to be able to make some inferences from the data but want to do so in a way that's not just looking at graphs.

    I've looked into intensive longitudinal DSEM, but the examples I've seen track only a very small number of parameters and focus on within-person and between-person effects, neither of which really fits my application.

    On the other hand, I do have some ideas and a path model I would like to investigate, but my main issue is that my data violates the independence assumption. This is a characteristic of the tools I used to record the data. Basically, the data outputs for these habits (besides step count) are either booleans for each day (these should be fine to use) or a "trend" type of score that changes depending on sustained recurrence of daily habits, with a decay function built in.

    Does anyone here know what I could look into to analyse the data?

    4 Comments
    2025/01/29
    12:16 UTC

    60

    [E] Recast - Why R-squared is worse than useless

    I don't know if I fully agree with the overall premise that R2 is useless or worse than useless, but I do agree it's often misused and misinterpreted, and the article was thought-provoking and a useful reference.

    https://getrecast.com/r-squared/

    Here are a couple of academics making the same point:

    http://library.virginia.edu/data/articles/is-r-squared-useless

    https://www.stat.cmu.edu/~cshalizi/mreg/15/lectures/10/lecture-10.pdf
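
    One point from the linked critiques is easy to reproduce: R-squared tracks the spread of the predictor, not the correctness of the model. A quick sketch in R:

        # Same true model (y = 2x + noise, sd = 1) fit twice; only the spread of x changes.
        # The model is equally correct in both cases, but R^2 differs wildly.
        set.seed(1)
        x1 <- runif(1000, 0, 1);  y1 <- 2*x1 + rnorm(1000)
        x2 <- runif(1000, 0, 10); y2 <- 2*x2 + rnorm(1000)
        summary(lm(y1 ~ x1))$r.squared  # ~0.25: "bad" R^2, correctly specified model
        summary(lm(y2 ~ x2))$r.squared  # ~0.97: "great" R^2, same model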

    36 Comments
    2025/01/29
    12:07 UTC

    2

    [Q] Comparing differences in preference with 3 values including neutral

    Scenario: Analyzing preference data with 3 values (For example: Which do you prefer: Football, Baseball, or no preference (i.e., neutral)?)

     

    Primary Research Questions:

    1. Is either greater than Neutral?
    2. Is Football preferred over Baseball or vice versa?

     

    Strangely, I'm not seeing many strong recommendations for this scenario, unlike for continuous data.

    My question is

    • What statistic is most appropriate to analyze preference data with 3 values (e.g., Football, neutral, Baseball)?
      • If the answer relies on "making up" expected values (e.g., splitting the responses evenly across the 3 values: 33%/33%/33%), then what values (or calculations) do you propose? (Note: I'm not a fan of 33/33/33, since preferring neither is a different outcome from preferring one or the other; see the primary research questions above.)

    Additional caveats:

    • Just pointing out that it's NOT a factorial design (as it would be if we were comparing 2 success rates, like team 1's wins/losses vs team 2's wins/losses), so we cannot calculate expected values by "averaging" successes across Football and Baseball.
    • The number of responses can be really low in my dataset. In one instance a chi square was significant when Football = 12 and Baseball = 13.
    • McNemar test: Sauro/MeasuringU recommends this test. Although it's for nominal variables, it's for 2x2 designs with paired samples (repeated measures), so the recommendation seems aimed at other scenarios.

     

    Options I've considered:  

    Option A1 - Neutral expected = observed

    First, eyeball (or use confidence intervals on) the differences between Neutral and Football/Baseball.

    Second, set the Neutral expected value EQUAL to the observed value. Split the remaining expected values across Football and Baseball (a 50/50 split) to "remove" Neutral but maintain the sample size. For example:

                  Observed    Expected
        Football  36          (58/2) = 29
        Neutral   42          42
        Baseball  22          (58/2) = 29

    One problem seems to be the statistic itself, because it's really wonky to interpret. It's like: "after removing the effect of neutral responses, participants' preferences differed (or did not differ) between Football and Baseball."
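
    If it helps, Option A1 is mechanical to run in R with custom expected proportions (using the counts from the table above):

        obs <- c(Football = 36, Neutral = 42, Baseball = 22)
        chisq.test(obs, p = c(29, 42, 29) / 100)  # expected counts as proportions of n = 100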

     

    Option A2. Neutral vs. others along with Neutral expected = observed

    Instead of the first step above, either (A2a) take the larger of Football and Baseball, (A2b) add Football and Baseball together and see whether, combined, they differ from Neutral, or (A2c) take the average of Football and Baseball and see whether that average differs from Neutral.

     

    One problem is the interpretability of A2a, A2b, and A2c: they are hard to interpret and/or take a lot of language to explain.

     

    Then use the second step from A1, which brings the same interpretability problem as A1.

     

    Option B1 - Confidence intervals' overlap across expected values

    [incomplete solution]

    Calculate confidence intervals and compare them to EXPECTED values. Same problem as above: how do you calculate expected values that are meaningful across the 3 values (33/33/33 is NOT, in my opinion)? So what expected values?

     

    Option B2 - Confidence Intervals' overlap across the 3 observed values

    Similar to using confidence intervals to eyeball differences between continuous data

     

    Option C. Your suggestions!

     

    Thoughts, opinions, suggestions? Thank you!

    4 Comments
    2025/01/29
    03:06 UTC

    1

    [Q] Computing Likert data in a mediation analysis

    I am running a mediation analysis for my thesis project and have a couple of questions. My research consisted of administering a questionnaire via Qualtrics where everything is Likert data. I tried running a CFA in JASP and R and came across the issue of R treating my data as continuous while JASP was treating it as ordinal. I believe the SEM class I took only handled continuous data, which was something I did not realize at the time.

    Now I am trying to figure out whether I should treat my data as ordinal or continuous. For example, depressive symptoms were assessed using the DASS-21 subscale, where the final score is calculated by summing the responses to the relevant items, so in my head this can technically be continuous if I use the total score. Luckily, I can get JASP to treat my items as continuous so I can run my analysis with the ML estimator, but I am wondering if this compromises my model fit in any way and whether I should be treating my data as ordinal from beginning to end.

    I am clearly very new at this and just need some guidance outside of what my advisor and committee are offering me.
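
    In case it's useful, lavaan can treat the items as ordinal directly: declaring them as ordered switches it to a polychoric-based, WLSMV-style estimator, so you can compare both treatments side by side. A minimal sketch, assuming a data frame df with items i1 through i4 loading on one factor (hypothetical names):

        library(lavaan)

        # One latent factor measured by four Likert items
        model <- 'dep =~ i1 + i2 + i3 + i4'

        # Ordinal treatment: ordered items trigger a categorical-friendly estimator
        fit_ord <- cfa(model, data = df, ordered = c("i1", "i2", "i3", "i4"))

        # Continuous treatment with plain ML, as JASP was coaxed into doing
        fit_cont <- cfa(model, data = df, estimator = "ML")

        # Compare fit indices across the two treatments
        fitMeasures(fit_ord,  c("cfi", "rmsea"))
        fitMeasures(fit_cont, c("cfi", "rmsea"))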

    6 Comments
    2025/01/29
    03:03 UTC
