/r/math
This subreddit is for discussion of mathematics. All posts and comments should be directly related to mathematics, including topics related to the practice, profession and community of mathematics.
Please read the FAQ before posting.
Rule 1: Stay on-topic
All posts and comments should be directly related to mathematics, including topics related to the practice, profession and community of mathematics.
In particular, any political discussion on /r/math should be directly related to mathematics - all threads and comments should be about concrete events and how they affect mathematics. Please avoid derailing such discussions into general political discussion, and report any comments that do so.
Rule 2: Questions should spark discussion
Questions on /r/math should spark discussion. For example, if you think your question can be answered quickly, you should instead post it in the Quick Questions thread.
Requests for calculation or estimation of real-world problems and values are best suited for the Quick Questions thread, /r/askmath or /r/theydidthemath.
If you're asking for help learning/understanding something mathematical, post in the Quick Questions thread or /r/learnmath. This includes reference requests - also see our list of free online resources and recommended books.
Rule 3: No homework problems
Homework problems, practice problems, and similar questions should be directed to /r/learnmath, /r/homeworkhelp or /r/cheatatmathhomework. Do not ask or answer this type of question in /r/math. If you ask for help cheating, you will be banned.
Rule 4: No career or education related questions
If you are asking for advice on choosing classes or career prospects, please post in the stickied Career & Education Questions thread.
Rule 5: No low-effort image/video posts
Image/Video posts should be on-topic and should promote discussion. Memes and similar content are not permitted.
If you upload an image or video, you must explain why it is relevant by posting a comment providing additional information that prompts discussion.
Rule 6: Be excellent to each other
Do not troll, insult, antagonize, or otherwise harass. This includes not only comments directed at users of /r/math, but at any person or group of people (e.g. racism, sexism, homophobia, hate speech, etc.).
Unnecessarily combative or unkind comments may result in an immediate ban.
This subreddit is actively moderated to maintain the standards outlined above; as such, posts and comments are often removed and redirected to a more appropriate location. See more about our removal policy here.
If you post or comment something breaking the rules, the content may be removed - repeated removal violations may escalate to a ban, but not without some kind of prior warning; see here for our policy on warnings and bans. If you feel you were banned unjustly, or that the circumstances of your ban no longer apply, see our ban appeal process here.
Recurring Threads and Resources
What Are You Working On? - every Monday
Discussing Living Proof - every Tuesday
Quick Questions - every Wednesday
Career and Education Questions - every Thursday
This Week I Learned - every Friday
A Compilation of Free, Online Math Resources.
Click here to chat with us on IRC!
Using LaTeX
To view LaTeX on reddit, install one of the following:
MathJax userscript (userscripts need Greasemonkey, Tampermonkey or similar)
TeX all the things Chrome extension (configure inline math to use [; ;] delimiters)
[; e^{\pi i} + 1 = 0 ;]
Post the equation above like this:
`[; e^{\pi i}+1=0 ;]`
Using Superscripts and Subscripts
x*_sub_* makes x_sub
x*`sup`* and x^(sup) both make x^sup
x*_sub_`sup`* makes x_sub^sup
x*`sup`_sub_* makes x^sup_sub
Useful Symbols
Basic Math Symbols
≠ ± ∓ ÷ × ∙ – √ ‰ ⊗ ⊕ ⊖ ⊘ ⊙ ≤ ≥ ≦ ≧ ≨ ≩ ≺ ≻ ≼ ≽ ⊏ ⊐ ⊑ ⊒ ² ³ °
Geometry Symbols
∠ ∟ ° ≅ ~ ‖ ⟂ ⫛
Algebra Symbols
≡ ≜ ≈ ∝ ∞ ≪ ≫ ⌊⌋ ⌈⌉ ∘∏ ∐ ∑ ⋀ ⋁ ⋂ ⋃ ⨀ ⨁ ⨂ 𝖕 𝖖 𝖗 ⊲ ⊳
Set Theory Symbols
∅ ∖ ∁ ↦ ↣ ∩ ∪ ⊆ ⊂ ⊄ ⊊ ⊇ ⊃ ⊅ ⊋ ⊖ ∈ ∉ ∋ ∌ ℕ ℤ ℚ ℝ ℂ ℵ ℶ ℷ ℸ 𝓟
Logic Symbols
¬ ∨ ∧ ⊕ → ← ⇒ ⇐ ↔ ⇔ ∀ ∃ ∄ ∴ ∵ ⊤ ⊥ ⊢ ⊨ ⫤ ⊣
Calculus and Analysis Symbols
∫ ∬ ∭ ∮ ∯ ∰ ∇ ∆ δ ∂ ℱ ℒ ℓ
Greek Letters
α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω
Applied Calculus for the Managerial, Life and Social Sciences Tenth Edition by Tan, S. T.
6e Calculus by Edwards & Penney. Not sure if that’s the actual title, check here
Found these books in the library today as I was looking for resources to practice calculus. Does anyone have any experience with them? Are they any good?
I know that they’re homology equivalence classes, but I was wondering if there’s something shorter? Can I call them cycles? I know that they’re only cycles in one dimension but is that cool and hip with the topologists to do it casually for all dimensions?
It’s been a while since I studied this in undergrad so apologies for the lack of rigor and specificity.
When I was in Calc 3, I remember learning about curvature, which I believe was denoted with "k". We were taught that a small circle is curvier than a larger circle with a larger radius.
I understood at the time why this was defined as such; however, I felt there should also be some notion of curvature that is normalized for size, depending only on the shape of the closed curve.
That way all circles would have the same curvature regardless of size, but we could still compare the curvature of shapes, i.e. a circle is curvier than a triangle.
In the same way that Usain Bolt can run really fast for his size, but a Giant 50ft troll running at 30mph would be quite average, if not slow.
Then, I’m sure we could extend this to higher dimensions, where there would probably be different ways to quantify curvature when normalized for the size of the closed N-dimensional shape.
Has anyone come across this line of thinking? I’m sure it’s probably been explored before.
Hi everyone!
I’m currently taking a course in functional analysis, and for the oral exam, I need to present a 20-minute seminar on a topic related to the course. The topic should not overlap with the material already covered but still stay within the realm of functional analysis.
Some of the main areas we’ve covered include:
• Normed and Banach spaces, linear operators, and dual spaces.
• Hilbert spaces and orthogonal projections.
• Hahn-Banach theorem, topologies (weak, weak*), and convexity.
• Spectral theory for bounded operators and compact operators.
• Elements of distribution theory and spaces of sequences.
I’m particularly interested in topics that connect functional analysis to probability theory or ergodic theory, as these are fields I’d like to explore further.
Do you have any suggestions for seminar topics that fit these criteria?
Thanks in advance!
I particularly like popscience books. Of course, they won't teach you the content itself, but they're great for stimulating creativity and imagination, and for arousing the curiosity of the lay public.
I really like Ian Stewart's books, and James Gleick's “Chaos”.
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on this week. This can be anything, including:
All types and levels of mathematics are welcomed!
If you are asking for advice on choosing classes or career prospects, please go to the most recent Career & Education Questions thread.
Are there any philosophical views that encourage certain foundations? Are there any foundations that encourage certain philosophical views? If we take a set-theoretic foundation with sets as a primitive object, would a platonist differentiate between the existence of sets and of objects constructed from sets, or would they be treated the same? Are there any good articles on this relation?
Following the steps of a recent discussion in the community, I have read a Wiki page on the Feit–Thompson theorem. There I found the following:
The final paper is 255 pages long. <...>
Perhaps the most revolutionary aspect of the proof was its length: before the Feit–Thompson paper, few arguments in group theory were more than a few pages long and most could be read in a day. Once group theorists realized that such long arguments could work, a series of papers that were several hundred pages long started to appear. Some of these dwarfed even the Feit–Thompson paper; the paper by Michael Aschbacher and Stephen D. Smith on quasithin groups was 1,221 pages long.
How do authors come up with such long proofs? I don't believe there are brilliant insights every few pages. If there were, these gigantic papers would be valued as highly as the FLT proof, but they are not. I imagine there should be some machinery that produces arguments more or less mechanically, but still, these arguments are not so standard that they could just be omitted from the paper.
But that's just my guess. If it is true, what does this machinery look like? If it is not, how are behemoth-long proofs actually made?
Been noticing this. I guess a lot of dopamine is rolling round in the build up and I just feel kind of lost the days after. Wonder if anyone has similar experiences and how you deal with it.
Hello! Forgive the improper terminology.
If you are coming up with a system of classifying some set of things, you want the system to be:
Complete, meaning every item in the set falls into one class or category, i.e., there are no items that cannot fit into some category or another, and
Unambiguous, meaning every item falls into one and only one category; no item could be said to fit into more than one category.
I'm pretty sure there are actual mathematical terms for those two properties, but I don't know what they are. Any wisdom for me?
Incidentally, I'm sure this issue comes up in about a billion different settings, like library science, cladistics, law and legal statutes, etc.
Thanks!
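For what it's worth, the two properties are easy to pin down executably; here's a tiny Python sketch (the function names are my own):

```python
def is_complete(universe, classes):
    # "complete": every item falls into at least one class
    return all(any(x in c for c in classes) for x in universe)

def is_unambiguous(universe, classes):
    # "unambiguous": no item falls into more than one class
    return all(sum(x in c for c in classes) <= 1 for x in universe)

items = {1, 2, 3, 4, 5, 6}
evens_odds = [{2, 4, 6}, {1, 3, 5}]   # both properties hold: a partition
overlap = [{1, 2, 3, 4}, {4, 5, 6}]   # complete but ambiguous (4 is in both)
print(is_complete(items, evens_odds), is_unambiguous(items, evens_odds))  # True True
print(is_complete(items, overlap), is_unambiguous(items, overlap))        # True False
```

A system with both properties is exactly a partition of the set.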
I’m thinking of proofs that were strictly speaking incorrect but were subsequently “patched-up” and we still attribute the proof to the original author
The paragraph below is from https://www.reddit.com/user/fredarietem/ , who posted four years ago in "Is it normal to be struggling to “get” the Yoneda lemma or is Category Theory just not for me?", https://www.reddit.com/r/math/comments/n9a761/is_it_normal_to_be_struggling_to_get_the_yoneda/ . The post is archived, unfortunately, which means I can't discuss it with the original discussants. But the paragraph says exactly what I want to start from. To go further, I want to ask: in what way can Hom(C,-) and Hom(D,-) fail to be naturally isomorphic, and how do such failures correspond to failure of C and D being isomorphic?
One way to appreciate the Yoneda lemma is through its corollaries. There's one particular special case which I find particularly enlightening: the Yoneda embedding. The idea is that we can think of an object C as being represented by a hom functor Hom(C,-), i. e. by all its relations to other objects in the category (in the form of arrows from C). Then, an arrow from C to D corresponds exactly to a natural transformation between the corresponding hom functors (this is because the Yoneda embedding is fully faithful, courtesy of the Yoneda lemma). In particular, if Hom(C,-) and Hom(D,-) are naturally isomorphic then C and D are isomorphic, which essentially justifies the "it's the arrows that are important" perspective of category theory. Note that we could just as well look at Hom(-,C) instead of Hom(C,-).
I've never seen this discussed, but is there a concept of "C and D are isomorphic except for these cases"? Of, as it were, defects or deformations in isomorphism? As a programmer, I find it natural (sorry!) to study such points of failure. If I could do so for Yoneda, I think it would be enlightening. Because it's easy to see how a natural transformation can fail. Some components may not be there. But what kind of damage does that do to the wanna-be isomorphism arrows between C and D?
By the way, I attach a picture of a category with objects C and D in, also X, X', X'', etc. It also shows the functors Hom(C,-) and Hom(D,-), with just visible hom-sets indicating their effect on some of the X's, and a natural transformation. Maybe someone can see how to complete it to make the source of C and D's isomorphism obvious.
In physics, the B2FH paper is basically a monument: it provides the origins of the chemical elements and is foundational to cosmology. I'm not a physicist, but I was wondering: within mathematics, what are the similarly monumental, field-defining papers?
When one first learns real analysis/rigorous calculus, part of the experience is coming up with pathological functions to act as counterexamples. For example:
"Every function ℝ → ℝ is locally bounded somewhere."
Counterexample: The function f(x) = 0 when x is irrational, and f(x) = q where x=p/q is rational and written in lowest terms.
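One way to see the failure concretely: Python's `fractions.Fraction` keeps rationals in lowest terms, so this f is easy to sample (a sketch over rationals only, since floats can't represent irrationals anyway):

```python
from fractions import Fraction

def f(x: Fraction) -> int:
    # f(p/q) = q for a rational p/q in lowest terms; Fraction reduces automatically
    return x.denominator

# Rationals approaching 1/2 with growing denominators: every interval contains
# rationals of arbitrarily large denominator, so f is unbounded on every interval.
samples = [Fraction(k, 2 * k + 1) for k in range(1, 6)]  # 1/3, 2/5, 3/7, 4/9, 5/11
print([(str(x), f(x)) for x in samples])
```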
Even if the pathological function is well known, it can still be interesting to see what statements they are a counterexample for. For example:
"Every removable discontinuity of a Riemann integrable function ℝ → ℝ is isolated."
Counterexample: The removable discontinuities of Thomae's function are dense in the reals.
What are your favorite counterexamples?
It’s clear to me why the quaternion rotation formula works. The exps commute through the parallel component, cancelling and leaving it unchanged, but they anticommute with the orthogonal part, each performing a half-rotation. And we use exp because exp of a 90-degree rotation (e.g. all unit pure quaternions) generates all rotations.
What doesn’t make sense to me is how this relates to the double-covering of SO(3) by Spin(3). I understand algebraically that the sign of the quaternion cancels in the rotation formula. I also understand that the theta/2 in the quaternion rotation formula requires you to take a 720deg path around Spin(3) to make a full loop. But I don’t understand what’s really special about a full 720deg walk. So what if we need that to take a loop around Spin(3)? What makes Spin(3) a truer group of rotations than SO(3)? I know it’s the universal cover, but why is that relevant in this case?
And of course, there’s Dirac’s belt trick. I “get” what’s going on there, but I fail to see the significance of it.
What’s so special about 720deg rotations?? This is driving me crazy
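For the algebraic point that q and -q implement the same rotation, here is a small dependency-free Python check (hand-rolled quaternion product, nothing library-specific):

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # conjugation v -> q v q^{-1}; for unit q the inverse is the conjugate
    w, x, y, z = q
    qc = (w, -x, -y, -z)
    return qmul(qmul(q, (0.0, *v)), qc)[1:]

theta = math.pi / 2
q = (math.cos(theta / 2), math.sin(theta / 2), 0.0, 0.0)  # 90 deg about the x-axis
neg_q = tuple(-c for c in q)
v = (0.0, 1.0, 0.0)
print(rotate(q, v))      # ~ (0, 0, 1): y-axis rotated to z-axis
print(rotate(neg_q, v))  # identical: q appears twice, so the sign cancels
```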
I feel it is hard to distinguish real understanding from just being able to do symbolic manipulation and get the right answer consistently. For me, visualization is a big part of understanding, but so many concepts require higher dimensions to be "seen". So sometimes, even if I can do a proof, I feel like there is something about that proof I couldn't explain.
An example: Right now, I'm learning about duality in optimization. And it's interesting, and I can do some proofs, but I don't really see why two polyhedra would be dual to each other or what that means.
I'm trying to find what a concept is called and references to learn more about it. My best guess is to call it a coproduct of random variables, but this leads nowhere.
Here's a description (handwaving measure theory to keep things short):
Given two random variables x : Ω → S_x and y : Ω → S_y, we can form the joint random variable (x,y) : Ω → S_x × S_y by taking (x,y)(ω) = (x(ω), y(ω)). This is a product in the category of random variables over (Ω, Σ, μ). This raises the question: is there a coproduct?
Yes, there is. We can prove its existence using Zorn's lemma, or explicitly construct it by taking the coequaliser of the maps x : Ω → S_x ∪ S_y and y : Ω → S_x ∪ S_y, i.e. the random variable x∨y : Ω → E, where E = (S_x ∪ S_y)/~ and ~ is the smallest equivalence relation such that x(ω) ~ y(ω) for all ω.
Another characterisation of this object is that the sub-σ-algebra of Σ induced by (x∨y)^(-1) is the intersection of the sub-σ-algebras induced by x^(-1) and y^(-1).
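To make the construction concrete in the finite case, here is a union-find sketch of that coequaliser in Python (the name `coproduct` and the dict encoding of the maps are my own; it assumes Ω is finite and the values are hashable, with equal values identified in S_x ∪ S_y):

```python
def coproduct(omega, x, y):
    # Union-find on the values of S_x ∪ S_y; glue x(w) ~ y(w) for every w,
    # then x∨y sends w to the representative of its equivalence class.
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for w in omega:
        parent[find(x[w])] = find(y[w])
    return {w: find(x[w]) for w in omega}

omega = [1, 2, 3]
x = {1: 'a', 2: 'a', 3: 'b'}
y = {1: 'u', 2: 'v', 3: 'v'}
xy = coproduct(omega, x, y)
# a~u, a~v, b~v chain everything into one class, so x∨y is constant,
# matching a trivial intersection of the induced sub-sigma-algebras.
print(xy)
```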
Some questions:
What is x∨y called in the literature and how is it usually denoted?
Are there references discussing this construction?
What interesting results are there regarding x∨y? Can we express its distribution function nicely? Does it satisfy an entropy relation?
If, for example, you are studying linear algebra and don't understand why anyone would come up with a notion of a vector space in the first place or if you see an axiom so obvious you don't understand why it exists, and so on, how do you get out of that feeling? Do you just battle through in hopes you'll get it later?
I think this is the hardest part for me, when I am trying to self-study. Sometimes the maths I am trying to understand seems too detached or too abstract, unmotivated, sort of artificial. Then Mathematics stops sparking curiosity in me and I get stuck. Do you ever feel the same? How do you deal with it?
Let’s say we want to compute a square root. There are different methods or algorithms to compute a square root function, but can we say or prove something about them in general?
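As one concrete instance, Newton's method on f(x) = x² - a is the classic such algorithm; a minimal Python sketch (assuming a > 0):

```python
def newton_sqrt(a, tol=1e-12):
    # Newton's method on f(x) = x^2 - a: iterate x <- (x + a/x) / 2.
    # Convergence is quadratic: correct digits roughly double each step.
    x = max(a, 1.0)  # any positive starting point works
    while abs(x * x - a) > tol * max(a, 1.0):
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))  # ~ 1.41421356...
```

Statements one can prove "in general" about such methods include convergence order, basins of attraction, and error bounds per iteration.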
By Bertrand's Postulate, for every n>1, there is at least one prime p such that n<p<2n. However, this result has been greatly improved upon. For example, Pierre Dusart showed that if n>=89693, there is at least one prime p in the interval n<p<=(1+1/ln³(n))n. I was wondering about the number of primes between n and 2n. This is approximately n/ln(n) for large n, but my question is what is a good lower bound on the number of primes between n and 2n? With the improvements to Bertrand's Postulate, there should be a good lower bound on this that always holds.
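As a quick empirical check of how the count compares to n/ln(n), here is a small sieve in Python (my own sketch, not a bound from the literature):

```python
import math

def count_primes_between(n):
    # Count primes p with n < p < 2n using a simple Eratosthenes sieve up to 2n.
    limit = 2 * n
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, math.isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return sum(sieve[n + 1:limit])  # strict inequalities n < p < 2n

for n in (100, 1000, 10000):
    print(n, count_primes_between(n), round(n / math.log(n)))
```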
I believe there are more clever ways to graphically represent a symmetric binary relation between two objects than simply drawing a line between them or using incidence/adjacency matrices.
I haven't been exposed to much higher level math yet (only Olympiad style math, analysis, and linear algebra), so my scope to be creative might be limited, this is where people with different experiences could come in with a fresh perspective.
I will start: take a finite graph G. We represent the n points on some affine line (equally spaced), and if a pair of these points is adjacent, we draw a circle whose diameter is the segment between them. Now project this line onto the real projective plane, so that the circles become ellipses. The pairs of adjacent points then correspond to the foci of the ellipses.
I have no idea what the right subreddit for this is but I wanted to share it since I feel like it might be interesting and/or useful.
For a short description of the problem: You are hosting a random matchmaking queue for a game where two players face off on one of multiple possible maps. Most players don't want to learn every map, so you offer map bans. But you still want to ensure that any two players will be able to find a map that neither of them has banned. What is a good way to allow your average joe to ban as many maps as possible without making things feel too complicated or too restrictive?
The format that is typically used in games (e.g. AoE2) asymptotes at a map pool size of 2n maps if each player has to leave n maps unbanned.
My previously favourite format asymptotes at 4n maps for n unbanned.
This new format that I just found asymptotes at n²/2 maps for n unbanned. EDIT: As pointed out by u/mfb- and u/bartekltg, the n²/2 asymptote has the downside of not allowing the maximal amount of n-1 arbitrary bans, but there is a separate asymptote of n²/2.25 that does.
I think the theoretical limit (Fano plane, anyone?) is somewhere between n²/2 and n², but it seems way too unwieldy and restrictive to make that into a user-friendly system.
So here's my new system:
There are x*y maps arranged in x columns, where x is odd. Each player selects one of the x columns as their "home column"; they can't ban any maps in their home column.
If these rules are followed, three scenarios can happen when attempting to match two random players A and B:
a) Both players have selected the same column. In this case, any map from that column can be randomly selected and played (fully random, or prioritizing favourited maps, or something else, we don't need to care).
b) Player A has selected (wLoG) column 1, and player B has selected an odd column. Then player A has left open one of the maps in player B's home column, so that map can be played.
c) Player A has selected (wLoG) column 1, and player B has selected an even column. Then player B has left open one of the maps in player A's home column, so that map can be played.
Example: There are 5x4 maps in the pool. Picture 1 shows the full pool; player A's bans leave open the maps in picture 2, and player B's bans leave open the maps in picture 3:

    Picture 1:   Picture 2:   Picture 3:
    A B C D E    - B - D -    - - - - E
    F G H I J    - G - - -    - G - - J
    K L M N O    K L - - -    - - - N O
    P Q R S T    - Q - - -    - - - - T

Both players have left map G open, so map G will be played.
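In code terms the matchmaking step is just a set intersection, which the column rules guarantee is nonempty; a Python sketch of the example above:

```python
# The two players' open (unbanned) maps from the example; matchmaking
# simply picks any map from the intersection.
open_a = {'B', 'D', 'G', 'K', 'L', 'Q'}   # picture 2
open_b = {'E', 'G', 'J', 'N', 'O', 'T'}   # picture 3
common = open_a & open_b
print(sorted(common))  # ['G']
```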
There are a number of threads on this subreddit and others asking for resources for matrix calculus. There isn't much attention given to differentiating with respect to matrices. For example, there's not much of this in The Matrix Cookbook. I'm looking for this sort of info (all in numerator layout as that's what I need to use):
This post suggests Matrix Differential Calculus with Applications in Statistics and Econometrics by Magnus and Neudecker. This is certainly the most thorough discussion I've found, but it doesn't seem to excel as a reference and focuses mostly on differentials rather than derivatives (like the items listed above), which are what I'm running into.
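For a flavour of how such identities can at least be sanity-checked numerically, here is a finite-difference check in plain Python of d tr(AX)/dX_pq = A_qp (layout conventions differ between sources, so treat the gradient layout below, which gives A^T when shaped like X, as my own assumption):

```python
# Finite-difference check: the derivative of tr(AX) w.r.t. the entry X_pq is A_qp.
def trace_AX(A, X):
    n = len(A)
    return sum(A[i][k] * X[k][i] for i in range(n) for k in range(n))

A = [[1.0, 2.0], [3.0, 4.0]]
X = [[0.5, -1.0], [2.0, 0.25]]
h = 1e-6
grad = [[0.0, 0.0], [0.0, 0.0]]
for p in range(2):
    for q in range(2):
        Xh = [row[:] for row in X]
        Xh[p][q] += h
        grad[p][q] = (trace_AX(A, Xh) - trace_AX(A, X)) / h
print(grad)  # numerically A^T = [[1, 3], [2, 4]]
```

Since tr(AX) is linear in X, the finite difference is exact up to float rounding.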
Thanks!
Hi everyone,
I'm working on a problem involving subsets of ℝ³, and I'm trying to figure out how to determine whether one set is contained in another. The first set is defined by a polynomial inequality and describes a closed volume. The second involves a functional inequality with one real positive parameter. I've tried numerical methods (using integrals) and visualization, and it seems that the first set is contained in the second.
Has anyone tackled problems like this before or can suggest an effective approach? Any thoughts on a more rigorous analytical argument would be very helpful.
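One cheap first step is Monte-Carlo falsification: sample points in the first set and test the second inequality. The defining functions below are placeholders of my own (a unit ball inside a scaled cross-polytope), not your actual sets:

```python
import random

def f(p):                        # placeholder polynomial inequality: set A = {f <= 0}
    x, y, z = p
    return x**2 + y**2 + z**2 - 1.0

def g(p, t=1.0):                 # placeholder functional inequality with parameter t
    x, y, z = p                  # set B = {g <= 0}
    return abs(x) + abs(y) + abs(z) - 2.0 * t

def sample_counterexample(trials=100_000, box=1.5):
    # Look for a point in A but not in B; None is evidence (not proof) that A is in B.
    for _ in range(trials):
        p = tuple(random.uniform(-box, box) for _ in range(3))
        if f(p) <= 0 and g(p) > 0:
            return p
    return None

print(sample_counterexample())
```

If a counterexample turns up, containment fails outright; if not, it points at the analytic argument to try, e.g. maximizing the second functional over the boundary of the first set (the placeholder pair is settled by noting |x|+|y|+|z| ≤ √3 < 2 on the unit sphere).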
Thanks in advance for your insights!
Hey all,
I graduated recently, and I've had a lot more time on my hands with an FT job. I really enjoyed learning math, but my job doesn't really use those skills (for context, I'm just a simple technician at an insurance company that assists actuaries and underwriters). The last classes I remember taking were some upper division statistics, real analysis, abstract algebra, and a bit of topology and number theory. I wholly enjoyed real analysis and made an attempt to read through Rudin, but it was definitely a harder book to get through. Any recommendations as to what to do next or go through again?