/r/pystats
A place to discuss the use of python for statistical analysis.
Welcome to /r/pystats, a place to discuss the use of python in statistical analysis and machine learning.
Related Subreddits
Where to start
If you're brand new to python, first go and check out the /r/learnpython wiki, or the official Beginner's Guide.
The best way to install python packages is using pip:
pip install <package>
Recommended packages:
Some of these packages have dependencies: most require numpy, and some require scipy. Check the links for details.
For a good overview of what stats packages are available for python, check out http://stats.stackexchange.com/q/1595
Hello! I'm running an analysis using Python's statsmodels repeated-measures ANOVA. I have a two-way repeated measures ANOVA and a series of one-way repeated measures ANOVAs, and I want to calculate the effect sizes.
Since there isn't a direct function for retrieving the partial eta squared measure, I figured I would have to calculate it myself. But to do that I need the sum of squares values, and as far as I can tell, I can't retrieve those either.
So my questions are:
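One workaround worth sketching: partial eta squared can be recovered from the F statistic and its degrees of freedom alone, with no sums of squares needed (statsmodels' AnovaRM table has columns along the lines of "F Value", "Num DF", and "Den DF" — worth double-checking against your version). A minimal sketch:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its degrees
    of freedom, using the identity
        eta_p^2 = (F * df_effect) / (F * df_effect + df_error),
    which avoids needing the raw sums of squares."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# e.g. F(2, 18) = 5.0  ->  eta_p^2 = 10 / 28 ≈ 0.357
print(partial_eta_squared(5.0, 2, 18))
```

The identity follows from F = (SS_effect / df_effect) / (SS_error / df_error), so the SS terms cancel out of the eta-squared ratio.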
Excited to release ryp, a Python package for running R code inside Python! ryp makes it a breeze to use R stats packages in your Python projects.
Hey, so I've been fascinated with getting into coding. I personally know little to no code;
I can make a simple bot, but that's about it. Are there any websites/apps I could use that are compatible with Mac systems?
Hello, I wanted some suggestions on how to implement a mixed effects multinomial logistic regression model for my data.
A little context on my data: I am trying to predict how people categorize an object (4 possible options here, so the outcome is categorical) based on 2 types of inputs (both inputs are categorical variables with 4 categories each).
Initially, I thought a normal multinomial logit model would work, but it was brought to my attention that I have repeated measures in my data, which had me looking up mixed effects models.
But mixed effects multinomial logistic regression for categorical variables sounds... complicated.
Any suggestions on how to implement this (python packages/code samples etc) or any better/easier alternatives for this type of data, would be welcome.
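Mixed-effects multinomial models are indeed thin on the ground in Python (Bayesian tools such as PyMC, or calling R packages via rpy2, are options worth investigating; neither is verified here). As a baseline that deliberately ignores the repeated measures, a plain fixed-effects multinomial (softmax) logit can be sketched in pure NumPy:

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, n_iter=2000):
    """Fixed-effects multinomial (softmax) logistic regression fit by
    gradient descent on the mean cross-entropy.
    X: (n, p) design matrix (e.g. dummy-coded categorical inputs);
    y: (n,) integer class labels. Note: this ignores any repeated-measures
    structure -- it is only a baseline, not a mixed-effects model."""
    n, p = X.shape
    W = np.zeros((p, n_classes))
    Y = np.eye(n_classes)[y]                         # one-hot targets
    for _ in range(n_iter):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)            # softmax probabilities
        W -= lr * X.T @ (P - Y) / n                  # gradient step
    return W

# Toy data: the class label simply equals the single categorical input,
# so the baseline should classify perfectly.
rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=400)
X = np.column_stack([np.ones(400), np.eye(4)[y]])    # intercept + dummies
W = fit_multinomial_logit(X, y, n_classes=4)
pred = (X @ W).argmax(axis=1)
print((pred == y).mean())
```

Comparing this fixed-effects baseline against a mixed-effects fit (in R or PyMC) would show how much the repeated measures actually matter for your data.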
I am using the Python statsmodels GLM function with family=sm.families.NegativeBinomial.
class statsmodels.genmod.families.family.NegativeBinomial(link=None, alpha=1.0, check_link=True)
I want to learn what I should think about and how I should think when setting the alpha value.
Should I use a value for alpha that:
a. Gets the ratio Df Residuals / Pearson chi2 as close as possible to one?
b. Maximizes the log-likelihood?
c. Is a "compromise" between a and b?
d. Something else?
Thanks!
Here is documentation: https://www.statsmodels.org/devel/generated/statsmodels.genmod.families.family.NegativeBinomial.html#statsmodels.genmod.families.family.NegativeBinomial
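One way to reason about option (b): with the fitted means held fixed, the NB2 log-likelihood can be profiled over alpha and maximized directly. Below is a stdlib-only toy sketch of that profiling idea (this is not statsmodels' API; note also that statsmodels' discrete NegativeBinomial model estimates alpha jointly by maximum likelihood, which may be the simpler route, though worth verifying in the docs):

```python
import math

def nb2_loglik(y, mu, alpha):
    """NB2 log-likelihood of counts y given fitted means mu and dispersion
    alpha (variance = mu + alpha * mu**2), parameterized via r = 1/alpha."""
    r = 1.0 / alpha
    ll = 0.0
    for yi, mi in zip(y, mu):
        ll += (math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
               + r * math.log(r / (r + mi)) + yi * math.log(mi / (r + mi)))
    return ll

# Toy overdispersed counts (mean 3, sample variance ~8.4) with a common
# fitted mean; scan a grid of alpha values and keep the likelihood maximizer.
y = [0, 1, 2, 5, 9, 3, 0, 7, 2, 1]
mu = [3.0] * len(y)
alphas = [0.05 * k for k in range(1, 41)]        # grid 0.05 .. 2.0
best = max(alphas, key=lambda a: nb2_loglik(y, mu, a))
print(best)
```

A moment-based starting point for the scan comes from var ≈ mu + alpha·mu², i.e. alpha ≈ (var − mu) / mu², which for this toy data suggests alpha near 0.6.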
I'm trying to seasonally adjust a time series in Python using X13-ARIMA-SEATS, but I'm not able to use the statsmodels module. So I was trying to find an alternative to it, or even another methodology to seasonally adjust time series. It would be amazing if someone could help me with this.
Here is a link to a new GitHub repository introducing new Python functions that use the delta method or parametric bootstrap to estimate confidence intervals for predicted values, and prediction intervals for new data, in nonlinear regression:
https://github.com/gjpelletier/delta_method
These new functions extend the capabilities of the Python packages scipy and lmfit, applying the delta method or parametric bootstrap to obtain confidence intervals and prediction intervals:
The first step is to use either scipy or lmfit to find the optimum parameter values and the variance-covariance matrix of the model parameters. The user may specify any expression for the nonlinear regression model.
The second step is to estimate the confidence intervals and prediction intervals using a new python function that applies either the delta-method or parametric bootstrap.
Three examples are provided:
The user may build any expression for the nonlinear relationship between observed x and y for the nonlinear regression using either scipy.optimize.curve_fit or the ExpressionModel function of lmfit.
To estimate the confidence intervals and prediction intervals, we use new Python functions that apply either the delta method or parametric bootstrap, as described in detail in Section 5 of this MAP566 online lecture by Julien Chiquet from Institut Polytechnique de Paris:
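The essence of the delta method is easy to see in one dimension, sketched below with a numerical derivative (the repository's version generalizes this to the gradient and the full parameter covariance matrix, se² ≈ ∇gᵀ Σ ∇g; the function name here is illustrative, not the repo's API):

```python
import math

def delta_method_se(theta_hat, se_theta, g, h=1e-6):
    """First-order delta-method standard error of g(theta_hat):
    se(g) ≈ |g'(theta_hat)| * se(theta_hat), with g' approximated by a
    central finite difference of step h."""
    gprime = (g(theta_hat + h) - g(theta_hat - h)) / (2 * h)
    return abs(gprime) * se_theta

# Example: theta_hat = 2.0 with standard error 0.1, g(theta) = exp(theta).
# The exact first-order answer is exp(2) * 0.1 ≈ 0.7389.
se = delta_method_se(2.0, 0.1, math.exp)
print(se)
```

A delta-method confidence interval for g(theta) is then g(theta_hat) ± z · se, which is what the repository computes pointwise along the fitted curve.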
The groupby function in Pandas divides a DataFrame into groups based on one or more columns. You can then perform aggregation, transformation, or other operations on these groups. Here’s a step-by-step breakdown of how to use it: Getting Started with Pandas Groupby
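A minimal sketch of that split-apply-combine idea, on toy data (not taken from the guide):

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["Rome", "Rome", "Milan", "Milan", "Milan"],
    "sales": [10, 20, 5, 10, 10],
})

# Split the rows into groups by city, then aggregate each group's sales.
totals = df.groupby("city")["sales"].sum()
print(totals["Rome"], totals["Milan"])   # 30 25
```

The same groups can take other aggregations (`mean`, `agg(["min", "max"])`) or group-wise transformations via `transform`.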
Pivoting is a neat process in the Pandas Python library that transforms a DataFrame into a new one by converting selected columns into new columns based on their values. The following guide discusses some of its aspects: Pandas Pivot Tables: A Comprehensive Guide for Data Science
The guide shows hands-on what pivoting is and why you need it, as well as how to use pivot and pivot_table in Pandas to restructure your data and make it easier to analyze.
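The long-to-wide reshaping the guide describes can be sketched on toy data like this:

```python
import pandas as pd

# Long format: one row per (date, city) observation.
long = pd.DataFrame({
    "date": ["d1", "d1", "d2", "d2"],
    "city": ["Rome", "Milan", "Rome", "Milan"],
    "temp": [18, 15, 20, 16],
})

# Pivot to wide format: one row per date, one column per city.
wide = long.pivot(index="date", columns="city", values="temp")
print(wide.loc["d1", "Rome"])   # 18
```

`pivot` requires unique (index, columns) pairs; `pivot_table` relaxes that by aggregating duplicates (mean by default).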
Flask SQLAlchemy is a popular ORM tool tailored for Flask apps. It simplifies database interactions and provides a robust platform to define data structures (models), execute queries, and manage database updates (migrations).
The tutorial shows how Flask combined with SQLAlchemy offers a potent blend for web devs aiming to seamlessly integrate relational databases into their apps: Flask SQLAlchemy - Tutorial
It explains setting up a conducive development environment, architecting a Flask application, and leveraging SQLAlchemy for efficient database management to streamline the database-driven web application development process.
The article explores list comprehension, along with the definitions, syntax, advantages, some use cases as well as how to nest them - for easier creation process and avoiding the complexities of traditional list-generating methods: Python List Comprehension | CodiumAI
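The constructs the article covers can be sketched in a few lines: a filtered comprehension, then a nested one flattening a matrix:

```python
# Squares of the even numbers 0..9, using a condition inside the comprehension.
squares = [n * n for n in range(10) if n % 2 == 0]

# A nested comprehension: flatten a list of rows into a single list.
matrix = [[1, 2], [3, 4], [5, 6]]
flat = [x for row in matrix for x in row]

print(squares)  # [0, 4, 16, 36, 64]
print(flat)     # [1, 2, 3, 4, 5, 6]
```

The nested form reads left to right like the equivalent for-loops: `for row in matrix` first, then `for x in row`.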
I just published a Python library, chess-analytica, that aims to make data analytics of chess games a lot easier. It's pretty niche, so I didn't expect much to come of it, but I've checked pystats and another site that track pip downloads, and they say I have anywhere between 1k and 3k. What should I assume is actually true? Is it actually more like 200?
The short guide discusses the advantages of using Python for statistical modeling, covers the three most popular Python libraries for it, and walks through several examples of their use: Statistical Modeling with Python: How-to & Top Libraries
These libraries can be used together to perform a wide range of statistical modeling tasks, from basic data analysis to advanced machine learning and Bayesian modeling - that's why Python has become a popular language for statistical modeling and data analysis.
I'm thrilled to share with you my latest creation - 'AnalytiXHero,' a cutting-edge Python3 library. With just a few lines of code, this library simplifies exploratory data analysis and preprocessing. It covers all aspects of data preprocessing, including outlier handling, minimizing skewness/kurtosis, handling null spaces, plotting outliers, calculating variance, and performing various transformations. This library comes equipped with pre-defined state-of-the-art features to make your data preprocessing tasks a breeze.
To get started, simply install 'AnalytiXHero' in either Python's global environment or a virtual environment by executing the following command in your terminal: `pip install analytixhero`. For those interested in diving into the source code, you can find it at this link: https://github.com/thesahibnanda/AnalytiXHero
To explore the library's documentation, visit: https://github.com/thesahibnanda/AnalytiXHero/blob/main/DOCUMENTATION/0.%20Documentation%20Index.md
If you're interested in contributing, please refer to the contribution guidelines found here: https://github.com/thesahibnanda/AnalytiXHero/blob/main/CONTRIBUTION%20GUIDELINES.md
Official PyPI Link: https://pypi.org/project/analytixhero/
I created this library that can be useful to anyone analyzing Italian data. It gives you access to Italian administrative, geographic and demographic data, taken from the Italian Institute of Statistics (2022), allowing you to easily draw geographic graphs (docs here).
It can also be used as a pandas accessor.
I'd love to hear suggestions or ideas for improvement from anyone who tries it.
Anyone who would like to contribute is very welcome.
Hi All,
I'm not new to stats, but I am new to python. Something I'm struggling with is when to use the syntax df.method() versus the syntax method(df).
For example, I see I can get the length of a dataframe with len(df) but not df.len(). I'm sure there's a reason, but I haven't come across it yet! In contrast, I can see the first five lines of a dataframe with df.head() but not head(df).
What am I missing? I'm using Codecademy, and they totally glossed over this. I've searched for similar posts and didn't see any.
Thanks for your help!
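The short version: built-ins like len() are generic functions that delegate to a special ("dunder") method the object defines, here __len__, while head() is an ordinary method that pandas happens to define on DataFrame; there is no built-in head() function. A toy class makes the mechanics visible:

```python
class Bag:
    """A toy container. len(bag) works because Python's built-in len()
    dispatches to the object's __len__ method."""
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        # Called automatically by the built-in len()
        return len(self._items)

    def head(self, n=2):
        # An ordinary method, called as bag.head(); there is no
        # built-in head() function to call as head(bag).
        return self._items[:n]

bag = Bag([1, 2, 3, 4])
print(len(bag))    # 4   -- built-in function, delegates to bag.__len__()
print(bag.head())  # [1, 2] -- method defined by the class
```

So df.method() works whenever pandas defined that method on DataFrame, and func(df) works whenever func is a function (built-in or imported) that accepts a DataFrame, sometimes via a dunder protocol like __len__.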
Hi everyone. I wrote a Python script to fit a curve for preorders. You can see by the dots that as the release date gets closer, the preorders increase significantly. The problem is I can't figure out why I can't shade the second curve. I believe the issue is with params_upper and params_lower, where the sigma is applied. For some reason it just returns zero when passing it through. How can I fix this? Any help would be greatly appreciated.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Define the exponential function
def exponential(x, a, b, c):
    return a * np.exp(b * (x - c))

# Define a polynomial function to fit the curve to
def polynomial(x, a, b, c):
    return a * x**2 + b * x + c

# Define the combined (piecewise) function
def combined(x, a1, b1, c1, a2, b2, c2):
    polynomial_range = (x >= 0) & (x <= 27)
    exponential_range = (x > 27) & (x <= 37)
    # Force a float output even when x is an integer array; note that
    # any x outside [0, 37] falls in neither range and stays at 0.
    y = np.zeros_like(x, dtype=float)
    y[polynomial_range] = polynomial(x[polynomial_range], a1, b1, c1)
    y[exponential_range] = exponential(x[exponential_range], a2, b2, c2)
    return y
# Load data from a Pandas dataframe
x_data = preorders_AF['rank'].values
y_data = preorders_AF['running_total'].values
# Fit the combined function to the x and y data
params, covariance = curve_fit(combined, x_data, y_data)
# Calculate a 1-sigma interval on the fitted parameters
sigma = np.sqrt(np.diag(covariance))
params_upper = params + 1 * sigma
params_lower = params - 1 * sigma
# Generate the curve using the fitted parameters
x_curve = np.linspace(min(x_data), max(x_data) + 6, 37)
y_curve = combined(x_curve, *params)
y_upper = combined(x_curve,*params_upper)
y_lower = combined(x_curve,*params_lower)
fig, ax = plt.subplots()
# Plot the data points and the curve
ax.plot(x_data, y_data, 'o', label='Data')
ax.plot(x_curve, y_curve, label='Curve')
ax.fill_between(x_curve, y_upper, y_lower, alpha=0.2, label='Range')
# Add labels for the last data points
last_y1 = y_curve[-1].astype(int)
last_y2 = y_upper[-1].astype(int)
last_y3 = y_lower[-1].astype(int)
ax.annotate(f'{last_y1}', xy=(x_curve[-1], y_curve[-1]), xytext=(x_curve[-1]+0.5, y_curve[-1]), fontsize=12, color='orange')
ax.annotate(f'{last_y2}', xy=(x_curve[-1], y_upper[-1]), xytext=(x_curve[-1]+0.5, y_upper[-1]), fontsize=12, color='lightblue')
ax.annotate(f'{last_y3}', xy=(x_curve[-1], y_lower[-1]), xytext=(x_curve[-1]+0.5, y_lower[-1]), fontsize=12, color='lightblue')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.legend(loc='center right')
fig = plt.gcf()
fig.set_size_inches(13, 10)
plt.ylim(bottom=0)
plt.show()
Hello all,
I am trying to perform multiple linear regression using statsmodels OLS in Python; i.e., my goal is to fit a set of measured data points to a linear combination of two or more predefined sets of values. The measured data has a measurement error on it, but I can't find anywhere how to include this uncertainty in the model, and I need to do so in order to get correct errors on the regression coefficients. Is there any way to do this?
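One common route (worth verifying against the statsmodels docs) is weighted least squares with weights 1/σᵢ², which statsmodels exposes as WLS. The underlying algebra is small enough to sketch directly in NumPy:

```python
import numpy as np

def wls(X, y, sigma):
    """Weighted least squares with per-observation measurement errors sigma:
    minimizes sum(((y - X @ beta) / sigma)**2), i.e. weights w = 1/sigma**2.
    Returns the coefficients and their covariance (known-sigma case)."""
    w = 1.0 / np.asarray(sigma) ** 2
    XtW = X.T * w                      # X.T @ diag(w), via broadcasting
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    cov = np.linalg.inv(XtW @ X)      # covariance of beta
    return beta, cov

# Data lying exactly on y = 1 + 2x, so WLS must recover (1, 2)
# regardless of the (positive) measurement errors supplied.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
beta, cov = wls(X, y, sigma=[0.5, 1.0, 2.0, 1.0])
print(beta)   # ≈ [1. 2.]
```

With measurement errors known exactly, the parameter covariance is (XᵀWX)⁻¹ as above; if the σᵢ are only known up to a scale factor, the usual practice is to rescale by the reduced chi-square of the fit.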
Hello,
I wanted to calculate the chance that I inhale at least one molecule of Caesar's words (see here). I thought to calculate the chance of inhaling zero molecules and subtract this value from 1, i.e. 1 - binom(0, n, p).
I used this code
from scipy.stats import binom

def calculate(n, p, r):
    print(f"{n=} {p=} {r=}")
    print(f"PMF The chance that you inhale {r} molecules {binom.pmf(r, n, p)}")
    print(f"CDF The chance that you inhale {r} molecules {binom.cdf(r, n, p)}")

n = 25.0*10**21
p = 1.0*10**-21
r = 0
calculate(n, p, r)
My output is
PMF The chance that you inhale 0 molecules 1.0
CDF The chance that you inhale 0 molecules 1.388794386496407e-11
When I use normal values, the PMF and CDF outputs agree:
n=10 p=0.1 r=0
PMF The chance that you inhale 0 molecules 0.3486784401000001
CDF The chance that you inhale 0 molecules 0.34867844009999993
How is this possible?
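One sanity check, as a stdlib-only sketch: with n this huge and p this tiny, Binomial(n, p) is essentially Poisson(λ = np), so P(X = 0) ≈ exp(-np). That reproduces the CDF value above, which suggests the CDF result is the trustworthy one and the pmf returning 1.0 is a numerical artifact at such extreme parameters:

```python
import math

# Poisson limit of the binomial: for large n and small p,
# P(X = 0) ≈ exp(-n*p). Here n*p = 25 molecules expected.
lam = (25.0 * 10**21) * (1.0 * 10**-21)
p_zero = math.exp(-lam)
print(p_zero)        # ≈ 1.3888e-11, matching the CDF output
print(1 - p_zero)    # chance of inhaling at least one molecule ≈ 1
```

So the chance of inhaling at least one molecule is, for all practical purposes, 1.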
Does anyone know if there is documentation for the SEC EDGAR API? There doesn't seem to be any information available. Please help!
I have a csv file containing article titles and article content. I'm trying to find a way to take a new title as input and use a model trained on that data to generate content. I've found a bunch of resources on how to use GPT-2 or transformer pipelines to complete sentences, etc., but I'd like to be able to provide my own data/model instead of using something from e.g. HuggingFace.
Can anyone point me in the right direction?
So there's this dating show where there are 12 guys and 12 girls. Each person has a "perfect pair" and they're supposed to try to find out who it is. So every trial they match up with someone and then we find out how many of those pairs are correct (but not which ones they are). Also one of the pairs is randomly chosen, and we find out if they are a pair or not.
I basically want to build a python app using that data, and show how many possible combinations there are after each trial.
I've only done one intro to stats course in college, so I don't really know where to begin. I know this is a super broad question, but can anyone give me any advice on how to start? Maybe some formulas or concepts I should look into? Thanks!
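The core counting step can be sketched by brute force: represent a matching as a permutation and keep only the ones consistent with each trial's revealed number of correct pairs. The toy below uses 4 couples; with 12 couples there are 12! ≈ 479 million permutations, so a full-size version needs care (e.g. filtering one surviving set of candidates trial by trial rather than re-enumerating):

```python
from itertools import permutations

def count_consistent(n, observed_correct):
    """Count matchings of n couples (as permutations of range(n)) in which
    exactly `observed_correct` people are paired with their true match,
    i.e. permutations with that many fixed points."""
    count = 0
    for perm in permutations(range(n)):
        fixed = sum(1 for i, j in enumerate(perm) if i == j)
        if fixed == observed_correct:
            count += 1
    return count

# With 4 couples: 9 of the 24 matchings have zero correct pairs
# (the derangements), and exactly 1 matching has all 4 correct.
print(count_consistent(4, 0))   # 9
print(count_consistent(4, 4))   # 1
```

The same filtering idea handles the per-episode "truth booth" reveal: drop every candidate permutation that disagrees with the revealed pair, then report how many candidates survive.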