/r/probabilitytheory

For everything probability theory related! No matter if you're in high school or a stochastic wizard; if you want to learn something about this subject, you're most welcome here.

This subreddit is meant for probability theory-related topics. Regardless of your level of education or your familiarity with probability theory; if you would like to learn more or just talk about this fascinating subject, you are welcome.

Please keep topics of discussion related to probability theory. Interdisciplinary topics are fine, but if the focus is a different area, the post may be removed as off-topic. Titles should be a short summary; use the text box for the content of your question or topic.

Remain civil/polite in your conversations. If you think somebody is mistaken, focus on the content, not the user. Treat them as you might treat a student. Insults or snide remarks may result in removed comments or locked comment chains.

Post Flair Please use one of the following flairs for your post. Links will filter to that flair.

  • Homework: Questions about homework.
  • Applied: Regarding a concrete application of probability. If you're not sure which flair to use, it's probably this one.
  • Education: Questions/discussion about learning probability theory / resources regarding such (not homework).
  • Discussion: Other conceptual topics about probability. Should not be homework or an applied problem. Not just a catch-all.
  • Research: Talking about novel research in probability theory.
  • Meta: About the sub.

Filter out homework posts: Anti-homework

Regarding homework: When asking a homework question, please be sure to: (1) Clearly state the problem; (2) Describe/show what you have tried so far; (3) Describe where you are getting stuck or confused.

To see what probability theory is, see the probability theory wiki. To understand the difference between statistics and probability theory, see this discussion.

Unfortunately, there is no universal LaTeX plugin for Reddit, so we have to make do with normal fonts.

/r/probabilitytheory

15,395 Subscribers

1

Probability of a sequence not occurring

A die with 100 numbers: a 97% chance to win and a 3% chance to lose (roll under 97 is a win, roll over 97 is a loss). Every time you lose, you increase your bet 4x, and a streak of 12 wins is required to reset the bet. This makes a losing sequence 1 loss + 11 wins, and a winning sequence 1 loss + 12 wins. With a bankroll large enough to cover 6 losses, the 7th loss being a bust (lose all), what are the odds of having 7 losses within a maximum span of 73 games?

The shortest bust sequence is 7 games (1L+1L+1L+1L+1L+1L+1L), and that probability is (1/33.33)^7, or about 1 in 45 billion. The longest bust sequence is 7 losses in 73 games (1L+11W+1L+11W+1L+11W+1L+11W+1L+11W+1L+11W+1L).

The probabilities of win streaks shorter than 12 do not matter, since the maximum number of games to bust is 73: it could be 6 losses in a row and then 12 wins. The only failure point is reaching 7 losses before a 12-win streak, and the longest such string is 73 games.

So the question is: what is the probability of losing 7 times within 73 games without ever reaching a 12-win streak? I can't figure that one out, if anyone can help me out with it. I only know the odds can't be longer than 1:45 billion, since the rarest bust sequence is 7 losses in a row.
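The threshold question ("7 losses before any 12-win streak") is well suited to simulation. A minimal Monte Carlo sketch, assuming the rules as described above; the 73-game cap falls out automatically, since a bust can take at most 1 + 6×12 = 73 games. The function name and parameters are mine, not from the thread:

```python
import random

def bust_probability(trials=1_000_000, p_win=0.97, max_losses=7, reset_streak=12, seed=1):
    """Estimate P(reaching the 7th loss before any 12-win streak),
    starting from the first loss (losses = 1, streak = 0)."""
    rng = random.Random(seed)
    busts = 0
    for _ in range(trials):
        losses, streak = 1, 0
        while True:
            if rng.random() < p_win:
                streak += 1
                if streak == reset_streak:   # bet resets: the sequence survives
                    break
            else:
                losses += 1
                streak = 0
                if losses == max_losses:     # 7th loss: bust
                    busts += 1
                    break
    return busts / trials
```

Analytically, each loss is followed by a 12-win streak with probability 0.97^12 ≈ 0.694, so a bust needs 6 straight "another loss comes first" events: (1 − 0.97^12)^6 ≈ 8.2 × 10⁻⁴, roughly 1 in 1,200 sequences that start with a loss; the simulation agrees.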

9 Comments
2024/04/09
21:40 UTC

1

Question about soccer probability

If we take all soccer matches in the world, shouldn't the probability of a team: win = draw = lose ≈ 1/3 ?

14 Comments
2024/04/09
16:53 UTC

1

Could clever counting of rolls increase odds of winning in roulette

For example, suppose I know the history of roulette spins, and bet on red only after seeing 10 black spins in a row.

Can you provide the math explaining why this kind of strategy is or isn't advantageous?
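For intuition, here's a quick simulation sketch you can run yourself; I'm assuming a European wheel (18 red, 18 black, 1 green), and the function name is mine. Because spins are independent, the conditional frequency of red after 10 blacks should match the unconditional one (18/37 ≈ 0.486):

```python
import random

def red_after_black_run(spins=2_000_000, run=10, seed=42):
    """Compare P(red) overall vs. P(red | previous 10 spins were all black)."""
    rng = random.Random(seed)
    colors = ['red'] * 18 + ['black'] * 18 + ['green']   # European wheel (assumed)
    streak = 0                     # current run of consecutive blacks
    after_run = red_after_run = reds = 0
    for _ in range(spins):
        c = rng.choice(colors)
        if streak >= run:          # this spin follows >= `run` straight blacks
            after_run += 1
            red_after_run += (c == 'red')
        reds += (c == 'red')
        streak = streak + 1 if c == 'black' else 0
    return reds / spins, red_after_run / max(after_run, 1)
```

Both returned frequencies come out statistically indistinguishable, which is the math against the strategy: the wheel has no memory, so conditioning on past spins does not move the probability, and the green zero keeps every bet at negative expectation regardless.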

2 Comments
2024/04/09
15:26 UTC

1

Applied. My employer publishes an “On Call” list every year.

Each week (54 weeks in the schedule), 2 employees are chosen. There are 25 employees on the list. There are 10 holidays on the schedule.

What are the chances to be chosen for 1, 2, or 3 holidays?

Some employees are selected 3 times in a year. What are the chances an employee is chosen 3 times?

Assume a random selection of 25 employees is chosen until there are no names left, starting Week 1. Then all names go back in the hat for the next round. Repeat until all weeks are filled.

It's funny how some employees get "randomly" selected for 3 holidays a year, several years in a row, while some have never had to work a holiday or been picked for a 3rd week.

This year, 1 poor guy got picked 3 times and each time happens to be a holiday.

This is way too complex for me to tackle. Any help would be appreciated.

2 Comments
2024/04/08
22:06 UTC

0

what's the probability?

probability of a wordle ladder happening

1 Comment
2024/04/08
12:18 UTC

3

General definition of expectation

I have been doing questions based on the general definition of expectation and the convergence of expectations. Each statement I see is pretty much trivial for a simple random variable, but for each question it takes me a big leap of faith to make assumptions about things I feel uncomfortable with, like treating infinity as a value for extended random variables, and a lot of extra stuff. Is there any way to build up rigour from simple to general random variables?

2 Comments
2024/04/04
12:07 UTC

2

Rules for making assumptions through symmetry

Frequently I encounter problems where symmetry is used to obtain key info for finding a solution, but here I ran into a problem where the assumption I made led to a different result from the textbook.

Job candidates C1, C2,... are interviewed one by one, and the interviewer compares them and keeps an updated list of rankings (if n candidates have been interviewed so far, this is a list of the n candidates, from best to worst). Assume that there is no limit on the number of candidates available, that for any n the candidates C1, C2,...,Cn are equally likely to arrive in any order, and that there are no ties in the rankings given by the interview.

Let X be the index of the first candidate to come along who ranks as better than the very first candidate C1 (so CX is better than C1, but the candidates after 1 but prior to X, if any, are worse than C1). For example, if C2 and C3 are worse than C1 but C4 is better than C1, then X = 4. All 4! orderings of the first 4 candidates are equally likely, so it could have happened that the first candidate was the best out of the first 4 candidates, in which case X > 4.

What is E(X) (which is a measure of how long, on average, the interviewer needs to wait to find someone better than the very first candidate)? Hint: find P(X>n) by interpreting what X>n says about how C1 compares with other candidates, and then apply the result of the previous problem.

This is the 6th question that can be found here (Introduction to Probability).

My thought is that, since we know nothing about C1 and Cx other than one is strictly better, there is equal probability that Cx is better or worse (this is my symmetry assumption). And since there are infinitely many candidates, the probability that Cx is better than C1 is independent from the probability that Cy is better than C1.

Hence I concluded that after meeting the 1st candidate, the expected # of candidates to be interviewed to find a better one follows that of an r.v. ~ Geom(1/2). Therefore 3 is the solution. Essentially every interview after the first is an independent Bernoulli trial with p=1/2 (from symmetry): we either find a better candidate, or we don't, there is no reason why we should assume one is more likely than the other.

The book argues that any of the first n candidates have equal probability to be the best (this is the book's symmetry assumption), hence there is 1/n chance that the first is the best and thus X > n. Therefore there is a 1/2 chance that X > 2, 1/3 chance that X > 3, ... etc., and E(X) is 1+1/2+1/3+1/4+... = infinity (solution is also available at the link above).

I am having some difficulty identifying why my assumption is wrong and the book right, and in general how to avoid making more of the same mistakes. If anyone could shed some light on it I would be very grateful.
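A quick way to adjudicate between the two symmetry arguments is to simulate the book's claim that P(X > n) = P(C1 is the best of the first n) = 1/n. A minimal sketch (function name mine), using random reals as candidate scores:

```python
import random

def p_first_is_best(n, trials=100_000, seed=3):
    """Estimate P(X > n), i.e. P(C1 ranks best among the first n candidates),
    for a uniformly random arrival order."""
    rng = random.Random(n * 1000 + seed)
    hits = 0
    for _ in range(trials):
        scores = [rng.random() for _ in range(n)]
        hits += (max(scores) == scores[0])   # C1 is still the best, so X > n
    return hits / trials
```

The estimates track 1/n (0.5, 0.2, 0.1 for n = 2, 5, 10), not the (1/2)^(n-1) that the Geom(1/2) model predicts. The flaw in the Geom(1/2) argument is the independence assumption: the events "C_k beats C1" for different k all involve the same C1, so they are positively correlated (a weak C1 tends to be beaten by everyone).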

2 Comments
2024/04/04
10:11 UTC

1

Probability of Specific numbers when tossing an unfair die

If I have an unfair die where odd numbers are weighted differently than even numbers, how could I calculate the probability of getting a specific outcome? For example, if the probability of getting each odd number is 1/9 and each even number is 2/9, then when I toss the die 12 times (independent trials), what's the probability of getting each number exactly twice? I think using the binomial theorem would work, but I don't know if it accounts for the fact that each time I toss the die I have fewer trials left to get my desired outcome.
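Reading "1/9" and "2/9" as the per-face probabilities (three odd faces at 1/9 plus three even faces at 2/9 sums to 1), the tool here is the multinomial distribution rather than the binomial: the coefficient 12!/(2!)^6 counts the arrangements, which is exactly the "fewer trials left" bookkeeping. A sketch (function name mine):

```python
from math import factorial

def p_each_exactly_twice():
    """Multinomial probability that each of the 6 faces appears exactly twice
    in 12 rolls, with P(odd face) = 1/9 and P(even face) = 2/9 per face."""
    arrangements = factorial(12) // factorial(2) ** 6   # 12! / (2!)^6
    return arrangements * (1 / 9) ** 6 * (2 / 9) ** 6   # three odd + three even faces, twice each
```

This comes out to about 0.0017.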

4 Comments
2024/04/03
14:56 UTC

1

Answering exam questions

Hello! I’m about to take an aptitude exam for law school and it will be a multiple choice type of exam. It is inevitable that there will be some questions that I do not know the answer to.

My question is: what is the probability that I will get a higher score if I choose the same letter of choice for the questions that I do not know the answer to?

Or is there a higher probability to get a higher score if I choose a random letter for every question that I do not know the answer to?

Thanks a lot!

2 Comments
2024/04/02
16:45 UTC

1

Probability for card draws after a shuffle

Say there’s 4 copies of a card I want randomly scattered throughout my deck.

I decide to look at the top 3 or so cards and then discard them because they were not the card I wanted.

This would probably bring me much closer to drawing one of the copies I want, but what if I then shuffle the deck?

It feels like I would lose a lot of the progress I made towards getting the card I want, but I assume probability would still be the same?

2 Comments
2024/04/02
16:29 UTC

1

Suitcase locks

On a suitcase that has two locks, each with three cylinders that have 10 options (0-9), how many combinations are there? The two locks do not have the same combo.

I'm of the belief that all 6 numbers need to line up, giving us the equation 10 × 10 × 10 × 10 × 10 × 10 for 1,000,000 possible combinations.

Is there something I'm missing?

2 Comments
2024/03/31
23:31 UTC

1

My girlfriend came with an interesting question

What is the probability of an American with a nipple piercing getting struck by lightning? I tried to do the math but I got lost… I based my assumptions on the following: as of December 2017, 13% of Americans had a nipple piercing; about 300 Americans get struck by lightning every year; and about 40,000,000 lightning bolts strike America per year. Please help.

7 Comments
2024/03/30
10:46 UTC

2

Using probability and expectation to prove existence, clarification needed

This is from Blitzstein and Hwang's Introduction to Probability, 4.9. The original statement is as follows:

The good score principle: Let X be the score of a randomly chosen object. If

E(X) >= c, then there is an object with a score of at least c.

I think there may have been some context I've missed, because here is a counterexample: Let X be the number shown on top of a fair D6, and let 10 dice, rolled and unobserved, be the objects. The expected score of each die is 3.5, but there is no guarantee that one of them has a score greater than 1.

Suppose the missing context is "the expected score is calculated through observing the objects, and their configurations are thoroughly known"; then the example given in the same chapter still doesn't work out in my head. Here is the example problem:

A group of 100 people are assigned to 15 committees of size 20, such that each person serves on 3 committees. Show that there exist 2 committees that have at least 3 people in common.

The book concluded that, since the expected number of shared members on any two committees is 20/7 (much like the expected roll of a fair D6 is 3.5), there must be two committees that share at least 3 members in common.

If I then add the context that "these committees are observed empirically to have 20/7 common members between any given 2", then I think the problem is trivialized.

So is the original statement legit? Or did the textbook fail to mention some important conditions? Thanks in advance.
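For what it's worth, the committee example needs no empirical observation; the 20/7 average is pure counting, which is the heart of the probabilistic-method argument. A sketch of that arithmetic (function name mine):

```python
from math import comb

def average_committee_overlap(people=100, committees=15, per_person=3):
    """Average number of shared members over all pairs of committees.
    Each person sits on C(3,2) = 3 pairs of their own committees, so summing
    shared members over all C(15,2) = 105 committee pairs counts 100*3 = 300."""
    shared_total = people * comb(per_person, 2)   # 300 shared-membership incidences
    pairs = comb(committees, 2)                   # 105 pairs of committees
    return shared_total / pairs                   # 300/105 = 20/7, about 2.857
```

Since the average over the 105 pairs is 20/7 > 2, at least one pair must be at or above the average, hence shares at least 3 members (overlaps are integers). No pair needs to be observed: an average can never exceed every term it averages, which is all the "good score principle" asserts.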

5 Comments
2024/03/30
06:29 UTC

1

Infinite trolley problem

Suppose that you have a typical trolley problem, where the player must decide whether to pull the lever or not. It goes as follows:

-If the player pulls the lever the trolley will change its direction, killing one person.

-If the player doesn't pull the lever, the trolley won't kill anyone, but it will go through a portal, and that portal will create two separate problems. Of course, if in the next two problems both players decide NOT to pull the lever, both trolleys will go through their respective portals, each one creating two separate problems, resulting in four (and so on; the problem can grow exponentially).

The question is, if the players decided randomly whether to pull the lever or not, what is the expected value of the number of victims? Is it infinite? If not, what does it converge to?
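With lever pulls at probability 1/2, this is a critical branching process: each trolley yields 1 victim with probability 1/2, or 2 new trolleys otherwise, so every "generation" contributes an expected 1/2 victim and the total expectation diverges. A simulation sketch with a generation cap (the names and the cap are mine):

```python
import random

def expected_victims(max_generations, trials=20_000, p_pull=0.5, seed=11):
    """Monte Carlo estimate of E[victims] when the cascade is cut off after
    `max_generations` portals; the capped expectation is max_generations/2."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        trolleys = 1
        for _ in range(max_generations):
            pulls = sum(rng.random() < p_pull for _ in range(trolleys))
            total += pulls                      # each pulled lever kills 1 person
            trolleys = 2 * (trolleys - pulls)   # each unpulled lever spawns 2 trolleys
    return total / trials
```

The estimate grows linearly with the cap (about 5 victims at 10 generations, about 10 at 20), so the uncapped expected value is infinite, even though any single run ends with a finite count with probability 1: the process is exactly critical, with the expected number of live trolleys staying at 1 in every generation.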

P.S. If I did not explain myself properly, I apologize; English is not my first language.

2 Comments
2024/03/29
04:59 UTC

0

Rule of at least one adjusted

Suppose you are trying to find the probability that an event won't/did not occur.

In this scenario there are 4 independent probabilities that show the event won't/didn't happen.

They each have a value of 50%. So, 4 × 50% probabilities to refute/show the event does not or did not occur.

Now let's assume you are only 90% certain that each probability is valid.

They now have a value of 45% each.

So there is a 90.85% probability this event didn't/won't happen.

For the rule of at least one, would that be factored into this equation at all, i.e., into the 90% certainty that the probabilities are valid? (Let's assume the uncertainty is due to second-guessing yourself in this hypothetical, fictional scenario.)

Would you take the 10% uncertainty × 4 to get a 34.39% chance that one of these probabilities is invalid, thereby changing the overall probability that the event did not occur to 88.27%?

Or am I way off base here?
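If it helps to pin down the arithmetic being described: with independent events, "at least one" is the complement of "none". A sketch (function name mine):

```python
def p_at_least_one(probs):
    """P(at least one of several independent events) = 1 - product of (1 - p_i)."""
    p_none = 1.0
    for p in probs:
        p_none *= 1 - p
    return 1 - p_none

p_event = p_at_least_one([0.45] * 4)    # 1 - 0.55^4 = 0.90849375  (the ~90.85% figure)
p_invalid = p_at_least_one([0.10] * 4)  # 1 - 0.90^4 = 0.3439      (the 34.39% figure)
```

Note that "10% × 4" would literally give 40%; the 34.39% quoted is exactly the at-least-one value 1 − 0.9⁴. Whether that 34.39% should then be folded back into the 90.85% depends on how the 90% confidences are meant to interact with the events themselves, which the scenario would need to spell out.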

9 Comments
2024/03/28
18:48 UTC

1

Is expectation always the mean?

For a simple random variable it is, but would it be true in the general case?

3 Comments
2024/03/28
09:57 UTC

0

Dice probability for my DnD game

The other day I was playing a game of DnD online. Before the game, our players purge dice through an automatic dice roller. 2 people got the same number in a row. I am curious about the odds of it. Here's the info…

Rolls (all at the same time): 4-sided × 5, 6-sided × 5, 8-sided × 5, 10-sided × 10 (because of the percentage die), 12-sided × 5, 20-sided × 5.

308 was the total rolled by 2 people in a row.
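The pool's exact total distribution is small enough to convolve directly, and from it you can get both P(a single player totals 308) and P(two players in a row match). A sketch (names mine; I'm assuming the pool is 5d4 + 5d6 + 5d8 + 10d10 + 5d12 + 5d20 summed, as listed):

```python
from collections import defaultdict

def total_distribution(dice):
    """Exact pmf of the grand total, convolving one die at a time.
    `dice` is a list of (count, sides) pairs; totals range 35..350 here."""
    dist = {0: 1.0}
    for count, sides in dice:
        for _ in range(count):
            new = defaultdict(float)
            for total, p in dist.items():
                for face in range(1, sides + 1):
                    new[total + face] += p / sides
            dist = dict(new)
    return dist

dist = total_distribution([(5, 4), (5, 6), (5, 8), (10, 10), (5, 12), (5, 20)])
p_308 = dist.get(308, 0.0)                    # P(one player totals exactly 308)
p_match = sum(p * p for p in dist.values())   # P(next player repeats any given total)
```

The mean total is 192.5 with a standard deviation of about 19, so 308 itself is an extreme outlier; but the headline event "two people in a row tie on some total" is just p_match, about 1.5%, which is not rare at all over many sessions.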

5 Comments
2024/03/27
04:41 UTC

4

Need help with checking my work for probability of drawing a pair in a certain condition. My approach is in the body.

I have a problem which I want to verify my work for. Let's say I have 5 cards in my hand from a standard deck of 52 cards that are all completely unrelated (e.g., 2, 4, 6, 8, 10). Assuming I discard these cards, and they are not placed back in the deck, and I draw 5 new cards from the deck (which now has 47 cards because I originally had 5 and discarded them), what are the odds of drawing only a pair and 3 random unrelated cards? E.g., drawing a hand like (3, 3, 5, 7, 9) or (Jack, Jack, Queen, King, Ace) or (6, 6, 9, 10, Ace). I cannot count three of a kind, four of a kind, or full houses as part of the satisfying condition of drawing a pair.

I believe I'm supposed to use the combination formula but I'm not sure if I am approaching this problem correctly. I have as follows:

(8c1 * 4c2 + 5c1 * 3c2) * ((7c3 * (4c1)^3) + (5c3 * (3c1)^3))+ (8c3 * (4c1)^3) + (4c3 * (3c1)^3)) / 47c5

My thought is to calculate the combinations of pairs and then calculate the combinations of valid ways to draw 3 singles and multiply them together to get total combinations that satisfy the requirement of drawing a pair and 3 random singles that don't form a pair. Then I divide this by the total number of combinations possible (47 c 5) to get the final probability. Please let me know if I am approaching this right or if I am missing something.

Any input would be greatly appreciated!
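A Monte Carlo cross-check is a good way to validate a combinatorial formula like this. A sketch under the setup described (47-card deck missing one card each of ranks 2, 4, 6, 8, 10; "exactly one pair" excludes trips, quads, full houses, and two pair); the function name is mine:

```python
import random
from collections import Counter

def p_exactly_one_pair(trials=200_000, seed=5):
    """Estimate P(exactly one pair) in 5 cards drawn from a 47-card deck
    that is missing one card each of ranks 2, 4, 6, 8, 10."""
    rng = random.Random(seed)
    ranks = [r for r in range(2, 15) for _ in range(4)]  # values 2..14, four copies each
    for gone in (2, 4, 6, 8, 10):
        ranks.remove(gone)                               # the discarded hand
    hits = 0
    for _ in range(trials):
        counts = sorted(Counter(rng.sample(ranks, 5)).values())
        hits += (counts == [1, 1, 1, 2])                 # one pair, three singletons
    return hits / trials
```

If the closed-form answer and this estimate disagree by more than a few tenths of a percent at 200k trials, the formula (most likely the parenthesization of the three-singleton term) needs another look.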

2 Comments
2024/03/25
20:32 UTC

2

Probability and children's card games

I am trying to calculate the odds of drawing at least one of 18 two-card combinations in a Yu-Gi-Oh! deck. I am making a spreadsheet to learn more about using probability in deck building in the Yu-Gi-Oh! card game. In my deck there are 9 unique cards, with population sizes varying from 4 to 1, which make up 18 desirable 2-card combinations to draw in your opening hand (sample of 5). The deck size is 45 cards. I have calculated the odds of drawing each of these 18 2-card combinations individually, but want to know how I can calculate a "total probability" of drawing at least one of any of these 18 two-card combinations. I have attached a screenshot of a spreadsheet I have made with the odds I calculated.
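Summing the 18 individual probabilities overcounts hands that contain more than one combo, so "at least one" needs inclusion-exclusion or, much more simply, simulation over the whole deck. A sketch with a made-up 3-card, 2-combo example standing in for the real 9-card, 18-combo list (all card names and counts below are hypothetical):

```python
import random

def p_any_combo(deck_counts, combos, hand_size=5, deck_size=45, trials=100_000, seed=9):
    """Estimate P(opening hand holds at least one listed 2-card combo).
    `deck_counts` maps card name -> copies; the rest of the deck is filler."""
    rng = random.Random(seed)
    deck = [name for name, k in deck_counts.items() for _ in range(k)]
    deck += ['filler'] * (deck_size - len(deck))
    hits = 0
    for _ in range(trials):
        hand = set(rng.sample(deck, hand_size))
        hits += any(a in hand and b in hand for a, b in combos)
    return hits / trials

# hypothetical stand-in: 3 unique cards, 2 combos (the real deck has 9 and 18)
counts = {'A': 4, 'B': 3, 'C': 1}
combos = [('A', 'B'), ('A', 'C')]
p_total = p_any_combo(counts, combos)
```

Drop the real 9 cards and 18 pairs in, and the same function gives the spreadsheet's "total probability" directly.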

8 Comments
2024/03/25
07:13 UTC

2

Probability paradox or am I just stupid?

Let's imagine 3 independent events with probabilities p1, p2 and p3, taken from a discrete sample space.

Therefore P = (1 - p1).(1 - p2).(1 - p3) will be the probability of the scenario in which none of the three events occur. So, the probability that at least 1 of them occurs will be 1 - P.

Suppose that a researcher, carrying out a practical experiment, tests the events with probabilities p1 and p2, verifying that both occurred. Will the probability of the third event occurring be closer to p3 or to 1 - P?

5 Comments
2024/03/24
14:42 UTC

2

Combined Monte Carlo P50 higher than sum of P50s

Hi everyone,
Sorry if I'm posting in the wrong sub.

I'm working on the cost estimate of a project for which I have three datasets :

  • One lists all the components of CAPEX and their cost. I let each cost vary based on a triangular law from -10% to +10% and sum the result to get a CAPEX estimate.
  • One lists all perceived event-driven risks and associates both a probability of occurrence and a cost to each event. I let each event-driven cost vary like in the first dataset but also multiply them by their associated Bernoulli law to trigger or not the event. I sum all costs to get an event-driven risk allocation amount.
  • The last one lists all the schedule tasks and their minimal/modal/maximum durations. I let each task duration vary via a triangular law using the mode, bounded by the min and max durations. I sum all durations and multiply by an arbitrary cost per hour to get the total cost associated with delays.

I'm using an Excel addon to run the simulations, using 10k rolls at least.

From what I understood, I should see a 50th percentile for the "combined" run that is less than the sum of the 50th percentiles of each dataset's simulation run separately.
My 50th percentile, however, is slightly higher than the sum of P50s, and I'm struggling to understand why.

Could it be because of the values? Or is such a model always supposed to respect this property?
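Percentiles are not additive, and for right-skewed inputs (like Bernoulli-triggered risk costs) the P50 of a sum is routinely above the sum of the P50s, so the result is not necessarily a bug. A toy sketch of the effect (the numbers are mine, not from your model):

```python
import random

def p50(values):
    """Empirical 50th percentile (median) of a sample."""
    return sorted(values)[len(values) // 2]

def demo(trials=100_000, seed=13):
    """Two independent risk events, each 40% likely and costing 100:
    each component's P50 is 0, yet the P50 of the combined cost is 100."""
    rng = random.Random(seed)
    a = [100 * (rng.random() < 0.4) for _ in range(trials)]
    b = [100 * (rng.random() < 0.4) for _ in range(trials)]
    return p50(a), p50(b), p50([x + y for x, y in zip(a, b)])
```

Each event alone misses more often than it fires (so its P50 is 0), but the chance that at least one fires is 64%, pushing the combined P50 to 100. Your event-driven risk dataset has exactly this shape, so a combined P50 above the sum of separate P50s is plausible behavior, not a modelling error. Means are additive; and it is typically the high percentiles (P80/P90) of aggregated independent costs that come in below the sum of component percentiles, thanks to diversification.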

1 Comment
2024/03/24
14:19 UTC

1

Odds of winning after n consecutive losses

Hi ! I'm trying to solve a probability problem but I'm not sure about the solution I found. I'm looking for some help / advice / insight. Let's get right to it, here's the problem :

I) The problem

  • I toss a coin repeatedly. If It hits head, I win, if it hits tails, I lose.
  • We know the coin is weighed, but we don't know how much it's weighed. Let's note p the probability of success of each individual coin toss. p is an unknown in this problem.
  • We've already tossed the coin n times, and it resulted in n losses and 0 wins.
  • We assume that each coin toss doesn't affect the true value of p. The tosses are hence all independent, and the probability law for getting n consecutive losses is memoryless. It's memoryless, but ironically, since we don't know the value of p, we'll have to make use of our memory of our last n consecutive losses to find p.

What's the probability of winning the next coinflip ?

Since p is the probability of winning each coinflip, the probability of winning the next one, like any other coinflip, is p. This problem could hence be equivalent to finding the value of p.

Another way to see this is that p might take any value that respect certain conditions. Given those conditions, what's the average value of p, and hence, the value we should expect ? This problem could hence be equivalent to finding the expected value of p.

II) Why the typical approach seems wrong

The typical approach is to take the frequency of successes as equal to the probability of success. This doesn't work here, because we've had 0 successes, and hence the probability would be p=0, but we can't know that for sure.

Indeed, if p were low enough, relative to the number of coin tosses, then we might just not be lucky enough to get at least 1 success. Here's an example :

If p=0.05, and n=10, the probability that we had gotten to those n=10 consecutive losses is :
P(N≥10) = (1-p)^(n) = 0.95^(10) ≈ 0.6

That means we had about 60% chances to get to the result we got, which is hence pretty likely.

If we used the frequency approach, and assumed that p = 0/10 = 0 because we had 0 successes out of 10 tries, then the probability P(N≥10) of 10 consecutive losses would be 100% and we would have observed the same result of n consecutive losses, than in the previous case where p=0.05.

But if we repeat that experiment again and again, eventually, we would see that on average, the coinflip succeeds around p=5% of the time, not 0.

The thing is, with n consecutive losses and 0 wins, we still can't know for sure that p=0, because the probability might just be too low, or we might just be too unlucky, or the number of tosses might be too low, for us to see the success occur in that number of tosses. Since we don't know for sure, the probability of success can't be 0.

The only way to assert a 0% probability through pure statistical observation of repeated results, is if the coinflip consistently failed 100% of the time over an infinite number of tosses, which is impossible to achieve.

This is why I believe this frequency approach is inherently wrong (and also in the general case).

As you'll see below, I've tried every method I could think of : I struggle to find a plausible solution that doesn't show any contradictions. That's why I'm posting this to see if someone might be able to provide some help or interesting insight or corrections.

III) The methods that I tried

III.1) Method 1 : Using the average number of losses before a win to get the average frequency of wins as the probability p of winning each coinflip

Now let's imagine, that from the start, we've been tossing the coin until we get a success.

  • p = probability of success at each individual coinflip = unknown
  • N = number of consecutive losses until we get a success

{N≥n} = "We've lost n consecutive times in n tries, with, hence, 0 wins"
It's N≥n and not N=n, because once you've lost n times, you might lose some extra times on your next tries, increasing the value of N. After n consecutive losses, you know for sure that the number of tries before getting a successful toss is going to be n or greater.
*note : {N≥n} = {N>n-1} ; {N>n} = {N≥n+1}

  • Probability distribution : N↝G(p) is a geometrical distribution :

∀n ∈ ⟦0 ; +∞⟦ : P(N=n) = p.(1-p)^(n) ; P(N≥n) = (1-p)^(n) ; P(N<0) = 0 ; P(N≥0) = 1

  • Expected value :

E(N) = ∑_{n=0}^{+∞} P(N>n) = ∑_{n=0}^{+∞} P(N≥n+1) = ∑_{n=0}^{+∞} (1-p)^(n+1) = (1-p)/p
E(N) = 1/p - 1

Let's assume that we're just in a normal, average situation, and that hence, n = E(N) :
n = E(N) = 1/p - 1

⇒ p = 1/(n+1)
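Method 1's geometric identity E(N) = 1/p − 1 is easy to sanity-check numerically before building on it. A sketch (function name mine):

```python
import random

def mean_losses_before_win(p, trials=200_000, seed=17):
    """Average number of losses before the first win in repeated Bernoulli(p)
    tosses; the geometric-distribution identity says this equals 1/p - 1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 0
        while rng.random() >= p:   # count losses until the first win
            n += 1
        total += n
    return total / trials
```

Note that the step n = E(N) is the substantive assumption in Method 1: it treats the observed run as exactly average, which is a different (and stronger) statement than averaging over the plausible values of p as Method 2 attempts.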

III.2) Method 2 : Calculating the average probability of winning each coinflip knowing we've already lost n times out of n tries

For any random variable U, we'll note its probability density function (PDF) "f{U}", such that :
P( U ∈ I ) = ∫_{u∈I} f{U}(u).du (*)

For 2 random variables U and V, we'll note their joint PDF f{U,V}, such that :
P( (U;V) ∈ I × J ) = P( { U ∈ I } ⋂ { V ∈ J } ) = ∫_{u∈I} ∫_{v∈J} f{U,V}(u;v).du.dv

Let's define X as the probability to win each coinflip, as a random variable, taking values between 0 and 1, following a uniform distribution : X↝U([0;1])

  • Probability density function (PDF) : f(x) = f{X}(x) = 1 ⇒ P( X ∈ [a;b] ) = ∫_{x∈[a;b]} f(x).dx = b-a
  • Total probability theorem : P(A) = ∫_{x∈[0;1]} P(A|X=x).f(x).dx = ∫_{x∈[0;1]} P(A|X=x).dx ; if A = {N≥n} and x=t : ⇒ P(N≥n) = ∫_{t∈[0;1]} P(N≥n|X=t).dt (**) (that will be useful later)
  • Bayes theorem : f{X|N≥n}(t) = P(N≥n|X=t) / P(N≥n) (***) (that will be useful later)
    • Proof : (you might want to skip this part)
    • Let's define Y as a continuous random variable, of density function f{Y}, a continuous stair function with steps of width 1, such that :

∀(n;y) ∈ ⟦0 ; +∞⟦ × [0 ; +∞[ : P(N≥n) = P(⌊Y⌋=⌊y⌋), and f{Y}(y) = f{Y}(⌊y⌋) :
P(N≥n) = P(⌊Y⌋=n) = ∫_{t∈[n ; n+1]} f{Y}(t).dt = ∫_{t∈[n ; n+1]} f{Y}(⌊t⌋).dt = ∫_{t∈[n ; n+1]} f{Y}(n).dt = f{Y}(n) (1)

  • Similarly : P(N≥n|X=x) = P(⌊Y⌋=n|X=x) = ∫_{t∈[n ; n+1]} f{Y|X=x}(t).dt = ∫_{t∈[n ; n+1]} f{Y|X=x}(⌊t⌋).dt = ∫_{t∈[n ; n+1]} f{Y|X=x}(n).dt = f{Y|X=x}(n) (2)

  • f{X,Y}(x;y) = f{Y|X=x}(y) . f{X}(x) = f{X|Y=y}(x) . f{Y}(y) ⇒ f{X|Y=y}(x) = f{Y|X=x}(y) . f{X}(x) / f{Y}(y) ⇒ f{X|N≥n}(x) = f{Y|X=x}(n) . f{X}(x) / f{Y}(n) ⇒ using (1) and (2) :

f{X|N≥n}(x) = P(N≥n|X=x) . f{X}(x) / P(N≥n), and since f{X}(x) = 1 : f{X|N≥n}(x) = P(N≥n|X=x) / P(N≥n).
Replace x with t and you get (***) (End of proof)

We're looking for the expected probability of winning each coinflip, knowing we already have n consecutive losses over n tries : p = E(X|N≥n) = ∫_{x∈[0;1]} P(X>x | N≥n).dx

  • P(X>x | N≥n) = ∫_{t∈[x;1]} f{X|N≥n}(t).dt by definition (*) of the PDF of {X|N≥n}.
  • f{X|N≥n}(t) = P(N≥n|X=t) / P(N≥n) by Bayes theorem (***), where :
    • P(N≥n|X=t) = (1-t)^(n)
    • P(N≥n) = ∫_{t∈[0;1]} P(N≥n|X=t).dt by the total probability theorem (**)

⇒ p = E(X|N≥n) = ∫_{x∈[0;1]} ∫_{t∈[x;1]} (1-t)^(n).dt.dx / P(N≥n)
= [ ∫_{x∈[0;1]} ∫_{t∈[x;1]} (1-t)^(n).dt.dx ] / ∫_{t∈[0;1]} (1-t)^(n).dt where :

  • ∫_{t∈[x;1]} (1-t)^(n).dt = -∫_{u∈[1-x;0]} u^(n).du = [-u^(n+1)/(n+1)]_{u∈[1-x;0]} = -0^(n+1)/(n+1) + (1-x)^(n+1)/(n+1) = (1-x)^(n+1)/(n+1)
  • ∫_{x∈[0;1]} ∫_{t∈[x;1]} (1-t)^(n).dt.dx = ∫_{x∈[0;1]} (1-x)^(n+1)/(n+1).dx = 1/(n+1) . ∫_{t∈[0;1]} (1-t)^(n).dt = 1/(n+1)²
  • ∫_{t∈[0;1]} (1-t)^(n).dt = 1/(n+1)

⇒ p = 1/(n+1)

III.3) Verifications :

Cool, we've found the same result through 2 different methods, that's comforting.

With that result, we have : P(N≥n) = (1-p)^(n) = [1 - 1/(n+1)]^(n)

  • P(N≥0) = (1-p)^(0) = 1 ; [1 - 1/(0+1)]^(0) = 1 ⇒ OK
  • P(N≥+∞) = 0 ; lim_{n→+∞} [1 - 1/(n+1)]^(n) = lim_{n→+∞} [1/(1+1/n)]^(n) = lim_{n→+∞} e^(n.ln(1/[1+1/n])) = lim_{n→+∞} e^(-n.ln(1+1/n)) = lim_{i=1/n→0+} e^(-[ln(1+i) - ln(1+0)]/(i-0)) = lim_{x→0+} e^(-ln'(x)) = lim_{x→0+} e^(-1/x) = lim_{y→-∞} e^(y) = 0 ⇒ OK
  • n=10 : p≈9.1% ; n=20 : p≈4.8% ; n=30 : p≈3.2% ⇒ The values seem to make sense
  • n=0 ⇒ p=1 ⇒ Doesn't make sense. If I haven't even started tossing the coin, p can have any value between 0 and 1, there is nothing we can say about it without further information. If p follows a uniform, we should expect an average of 0.5. Maybe that's just a weird limit case that escape the scope where this formula applies ?
  • n=1 ⇒ p = 0.5 ⇒ Doesn't seem intuitive. If I've had 1 loss, I'd expect p<0.5.

III.4) Possible generalisation :

This approach could be generalised to every number of wins over a number of n tosses, instead of the number of losses before getting the first win.

Instead of the geometrical distribution we used, where N is the number of consecutive losses before a win, and n is the number of consecutive losses already observed :
N↝G(p) ⇒ P(N≥k) = (1-p)^(k)

... we'd then use a binomial distribution where N is the number of wins over n tosses, and n the number of tosses, where p is the probability of winning :
N↝B(n,p) ⇒ P(N=k) = n! / [ k!(n-k)! ] . p^(k).(1-p)^(n-k)

But I guess that's enough for now.

4 Comments
2024/03/23
19:27 UTC

2

How do you calculate the probability of rolling an exact number a set amount of times?

My current question revolves around a Magic: The Gathering card. It states that you roll a number of 6-sided dice based on how many copies of this card you have. If you roll the number 6 exactly 7 times in your group of dice, then you win.

How do you calculate the probability that exactly 7 6's are rolled in a group of 7 or more dice?
Since I am playing a game with the intention of winning, I'd like to know when it is best to drop this method in favor of another during my gameplay.

For another similar question: how would you calculate the chance of rolling a given number or higher with one or more dice? For example, I play Vampire: The Masquerade, which requires you to roll 1 or more 10-sided dice with the goal of rolling a 6-10 on a set amount of those dice or more.

I'd like to know my chances of success in both.

Finally, is there a good website where I can read up on probabilities and the like?
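Both questions are binomial computations: n independent dice, each a "success" with probability p (1/6 for rolling a 6; 1/2 for rolling 6-10 on a d10). A sketch (function names mine):

```python
from math import comb

def p_exact(n, k, p):
    """P(exactly k successes among n independent trials): C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def p_at_least(n, k, p):
    """P(k or more successes among n trials), summing the exact terms."""
    return sum(p_exact(n, i, p) for i in range(k, n + 1))
```

For example, p_exact(7, 7, 1/6) is (1/6)^7, about 3.6e-6 with exactly 7 dice, and the chance of exactly 7 sixes grows as you add dice before falling off again; p_at_least(5, 3, 0.5) gives the VtM chance of at least 3 successes on 5 d10s, which is exactly 0.5.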

4 Comments
2024/03/22
20:08 UTC

0

Why are two coin flips independent events

I am doing an experiment with two identical coins. For both coins, the probability of getting heads is p and the probability of getting tails is 1-p. Now prove to me that getting heads on the 1st coin is independent of getting heads on the 2nd coin, from the definition of independent events (P(A and B) = P(A)*P(B)).

And don't give this kind of un-useful answer:

To prove that getting heads on the first coin is independent of getting heads on the second coin, we need to show that:

P(Head on first coin) * P(Head on second coin) = P(Head on first coin and Head on second coin)

Given that the probability of getting heads on each coin is 'p', and the probability of getting tails is '1-p', we have:

P(Head on first coin) = p
P(Head on second coin) = p

Now, to find P(Head on first coin and Head on second coin), we multiply the probabilities:

P(Head on first coin and Head on second coin) = p * p = p^2

Now, we need to verify if P(Head on first coin) * P(Head on second coin) = P(Head on first coin and Head on second coin):

p * p = p^2

Since p^2 = p^2, we can conclude that getting heads on the first coin is indeed independent of getting heads on the second coin, as per the definition of independent events.

I called this an un-useful answer because: how can you write P(Head on first coin and Head on second coin) = p * p = p^(2) without knowing that "Head on first coin" and "Head on second coin" are independent events?

If anyone feels offended, or if there are any errors, recommend an edit and I will make it, because I am new to math.stackexchange. Please don't downvote this question; if you feel this is a stupid question, like my prof did, then don't answer it (and tell me why the question is stupid).

Thanks in advance to whoever answers this.

I asked this question on math.stackexchange and got 8 downvotes:

https://math.stackexchange.com/q/4885063/1291983

14 Comments
2024/03/22
05:26 UTC

1

Drawing cards probability

Hi, if I draw 5 cards from a deck of 52 cards, what is the probability that 4 of them are from the same suit? I think it's 13C4 × 4C1, but I don't know how to account for the fifth card. Should it be 48C1 or 13C1 × 3C1? I think it should be the second one, otherwise a fifth card from the same suit could be selected.
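The second option is the right instinct: after fixing the suit (4 ways) and its four cards (13C4), the fifth card must come from the 39 cards of the other three suits, which is exactly 13C1 × 3C1. A sketch of both readings of "4 of the same suit" (function names mine):

```python
from math import comb

def p_exactly_four_one_suit():
    """Exactly 4 cards of one suit: choose the suit, its 4 cards, then
    a 5th card from the 39 cards of the other suits."""
    return 4 * comb(13, 4) * 39 / comb(52, 5)

def p_at_least_four_one_suit():
    """At least 4 of one suit, if 5-card flushes should count too."""
    return (4 * comb(13, 4) * 39 + 4 * comb(13, 5)) / comb(52, 5)
```

Using 48C1 for the fifth card would double-count: a hand with 5 hearts would be generated five times, once for each heart left out. The "exactly 4" version comes to about 4.29%.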

8 Comments
2024/03/21
07:24 UTC

3

Question about Probability Theory and Infinity

I’m currently a senior in high school. My math background is that I’m currently in AP stats and calc 3, so please take that into consideration when replying. I’m no expert on statistics and definitely not any sort of expert on probability theory. I thought about this earlier today:

Imagine a perfectly random 6 sided fair die, every side has exactly a 1/6 chance of landing face up. The die is of uniform density and thrown in such a way that it’s starting position has no effect on its landing position. There is a probability of 0 that the die lands on an edge (meaning that it will always land on a face).

If we define two events, A: the die lands with the 1 face facing upwards, and B: the die does not land with the 1 face facing upwards, then P(A) = 1/6 ≈ 0.1667 and P(B) = 5/6 ≈ 0.8333.

Now imagine I have an infinite number of these dice and I roll each of them an infinite number of times. I claim that if this event is truly random, then at least one of these infinitely many dice will land with the 1 facing up every single time. Meaning that in a 100% random event, the least likely event occurred an infinite number of times.

Another note on this: if there really is an infinite number of dice, then an infinite number of them should reach this same conclusion, where event A occurs 100% of the time; it would just be a smaller infinity than the total number of dice.

I don’t see anything wrong with this logic and it is my understanding of infinity and randomness that this conclusion is possible. Please let me know if anything above was illogical. However, the real problem occurs when I try to apply this idea:

My knowledge of probability suggests that if I roll one of these dice many, many times, the proportion of rolls that result in event A will approach 1/6 and the proportion that result in event B will approach 5/6. However, if I apply the thought process above, it suggests there is an incredibly tiny chance that if I were to take this die in real life and roll it many, many times, it would land with 1 facing up every single time. If this is true, it would imply that anything completely random has a small chance of the most unlikely outcome occurring every single time. If that is true, it would mean that probability couldn't (ethically) be used as evidence to prove guilt (or innocence), or to prove anything really.

This has long been my problem with probability; this is just the best illustration of it that I've had. What I don't understand is how, in a court case, someone could end up in prison (or, more likely, a company having to pay a large fine) because of a tiny probability of something happening. If there is a 1 in TREE(3) chance of something occurring, what's to say we're not in a world where it did occur? Maybe I'm misunderstanding probability or infinity or both, but this is the problem I have with probability and one of the many, many problems I have with statistics. At the end of the day, unless the probability of an event is 0 or 1, all it can tell you is "this event might occur."

Am I misunderstanding?

My guess is that if I’m wrong, it’s because I’m, in a sense, dividing by infinity so the probability of this occurring should be 0, but I’m really not sure and I don’t think that’s the case.
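For what it's worth, the finite version of the worry can be computed directly: the probability that a fair die shows 1 on every one of n rolls is (1/6)^n, which is never exactly 0 for any finite n but shrinks astronomically fast. "Probability 0 in the limit" is not the same as "impossible", which is roughly where the intuition about infinity gets slippery. A quick sketch:

```python
# Probability that a fair die shows 1 on all of n rolls is (1/6)**n:
# positive for every finite n, but it collapses toward 0 very quickly.
probs = {n: (1 / 6) ** n for n in (1, 10, 100)}
for n, p in probs.items():
    print(n, p)
```

Already at n = 100 the probability is below 10^-77, so any fixed all-ones streak is possible but vanishingly unlikely.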

6 Comments
2024/03/19
23:28 UTC

3

Help with simple probability problem

There are 3 bags.

Bag A contains 2 white marbles

Bag B contains 2 black marbles

Bag C contains 1 white and 1 black

You pick a random bag and you take out a white marble.

What is the probability of the second marble from the same bag being white?

Can someone show me the procedure for solving this kind of problem? Thanks
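For reference, this is essentially Bertrand's box paradox, and the standard answer is 2/3, not 1/2: a white first draw is twice as likely to have come from the all-white bag as from the mixed bag. A Monte Carlo sketch (assuming each bag is chosen with probability 1/3 and the marbles in a bag are drawn in random order):

```python
import random

random.seed(1)
bags = [["W", "W"], ["B", "B"], ["W", "B"]]

first_white = 0
both_white = 0
for _ in range(100_000):
    bag = random.choice(bags)[:]   # pick a bag uniformly; copy before shuffling
    random.shuffle(bag)            # random draw order within the bag
    first, second = bag
    if first == "W":               # condition on the observed white marble
        first_white += 1
        if second == "W":
            both_white += 1

# Conditional relative frequency should approach 2/3
print(both_white / first_white)
```

The same number falls out of Bayes' rule: P(bag A | first white) = (1/3 · 1) / (1/3 · 1 + 1/3 · 0 + 1/3 · 1/2) = 2/3, and only bag A yields a second white marble.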

17 Comments
2024/03/18
16:43 UTC

2

Distribution of random variables: Have been struggling with this problem for a while. Any help please.

2 Comments
2024/03/15
20:02 UTC

1

Cumulative distribution function of probability law

I feel dumb because I've been stuck the whole day on the power law and I think I completely misunderstand it. I've read the paper by Gopikrishnan et al. (1999) about the inverse cubic law distribution of stock price fluctuations, and it states that α ≈ 3. It also gives P(g > x) ≈ 1/x^α, as stated in the paper: "For both positive and negative tails, we find a power-law asymptotic behavior P(g>x) ≈ 1/x^α" (page 5/12). However, if I plug in a plausible stock price variation for x, say 2%, I get a number way greater than 1, which should be impossible for a probability.

What do I misunderstand to fail that bad?

2 Comments
2024/03/13
15:41 UTC

5

Certainly an easy and definite question for most of you but I just can't convince myself.

Are independent probabilities definitely independent?

Hi, like I said in the title, this question might be very easy and obvious for most of you, but I couldn't convince myself. Let me describe what I am trying to figure out. Let's say we do 11 coin tosses. Without knowing any of their results, the eleventh toss would be 50/50 for sure. But if I know that the first ten of them were heads, would the eleventh toss still be exactly 50/50?
I know it would, but I feel like it just shouldn't be. I feel like knowing the results of the first ten tosses should make a difference, maybe just a tiny one.

PS. English is not my native language and I learned most of these terms in my native language, so forgive me if I made any mistakes.
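The feeling described here is the gambler's fallacy: for independent fair tosses, the first ten outcomes carry no information about the eleventh. A simulation sketch (assuming a fair coin) that keeps only the runs which happen to open with ten heads and then checks the eleventh toss:

```python
import random

random.seed(2)
eleventh_heads = 0
streaks = 0
for _ in range(2_000_000):
    # flip 10 coins; keep only the runs that are all heads
    if all(random.random() < 0.5 for _ in range(10)):
        streaks += 1
        if random.random() < 0.5:   # the eleventh toss
            eleventh_heads += 1

print(streaks, eleventh_heads / streaks)  # frequency stays near 0.5
```

Roughly 1 run in 1024 survives the ten-heads filter, and among those survivors the eleventh toss is still heads about half the time; the coin has no memory.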

4 Comments
2024/03/13
11:47 UTC

Back To Top