/r/Python
If you have questions or are new to Python use r/LearnPython
News about the dynamic, interpreted, interactive, object-oriented, extensible programming language Python
You can find the rules here.
If you are about to ask a "how do I do this in python" question, please try r/learnpython, the Python discord, or the #python IRC channel on Libera.chat.
Please don't use URL shorteners. Reddit filters them out, so your post or comment will be lost.
Posts require flair. Please use the flair selector to choose your topic.
Posting code to this subreddit:
Add 4 extra spaces before each line of code
    def fibonacci():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b
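For example, the generator above can be consumed with itertools.islice:

```python
from itertools import islice

def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Take the first ten terms of the infinite generator.
print(list(islice(fibonacci(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```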
Online Resources
Five life jackets to throw to the new coder (things to do after getting a handle on python)
PyMotW: Python Module of the Week
Online exercises
programming challenges
Asking Questions
Try Python in your browser
Docs
Libraries
Related subreddits
Python jobs
Newsletters
Screencasts
/r/Python
Link: https://youtu.be/2ZqaRIZnAso
I'm testing out making YouTube videos; if you have any comments or suggestions on what I can improve, please do let me know!
I’m on the hunt for some developers who are good with Manim or MoviePy for a project I have in mind. If you’ve got experience with either and want to chat about it, feel free to DM me!
Looking forward to hearing from you!
I have a Notion doc specifying the objective. Let me know if anyone needs it. Basically it's a job to convert data described in the JSON (shapes, text) into animation.
Hi, reposting as I did not get any responses earlier. I'm looking to make a switch from Java to Python programming. I have finished a Udemy course for beginners and want to work on some real-world projects to get more hands-on experience. Any suggestions or repos I can refer to for such use cases? Basically I want to gain more confidence with hands-on coding before I start interviewing. Any inputs, suggestions, or pointers highly appreciated.
Thanks in advance
After many months of procrastination, I have finally managed to release version 0.1.10 of my package Arrest.
What it does
It is a package that lets you declaratively write a REST service client, configured with the sets of resources, routes, and methods you want to call, providing Pydantic models for requests and responses so they are parsed automatically during the HTTP calls. Arrest also provides retry mechanisms, exception handling, automatic code generation from the OpenAPI specification, and much more.
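For readers unfamiliar with the pattern, here is a tiny illustration of Pydantic-based response parsing. Note this is not Arrest's actual API; the model and payload below are made up.

```python
# Illustrative only -- NOT Arrest's real API, just the general pattern of
# parsing an HTTP response body into a typed Pydantic model.
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

# Pretend this dict came back from an HTTP call made by the client.
payload = {"id": 1, "name": "alice"}
user = User(**payload)  # validation and parsing happen here
print(user.name)  # alice
```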
Target audience
Primarily backend developers working on communicating with multiple web services from a Python client. It can also be useful in a microservice architecture where you have to write API bindings for all the dependent services of another service.
Comparison
There are packages that do similar things, which I learned about from this subreddit after my initial post. For example:
The key highlights of the new version are:
There are many more; you can check them out under "what's new". Do check out the docs and GitHub, and if this sounds interesting to you, please give it a try and let me know if you face any issues.
For those who might already be familiar with it and encountered any issues, I hope the new version fixes them for you. For new people, I'd love to know your thoughts and suggestions. Thank you to everyone here in the Python community who showed their support and provided their feedback in my earlier posts!
P.S. I am also open to contributions. If you have ideas that Arrest could benefit from, feel free to raise a PR!
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
Share the knowledge, enrich the community. Happy learning! 🌟
I was asked to write a short list of good Python defaults at work, to align all teams. This is what I came up with. Do you agree?
First, an apology. I posted this project here a few days ago. The project showcased an idea but did not show anything substantial or interesting, and I had only invested a few hours into it. To make matters worse, I generated the post via ChatGPT, which in hindsight looked like total garbage and was generally a dick move.
Second, about me. I'm T, a security researcher at Microsoft. A lot of my work revolves around identifying user behavior in our Azure cloud infrastructure. Naturally, this happens through mountains of logs, which I query on our platform. However, I always felt like viewing this data in the form of a boring gray table is a missed opportunity. I think many good insights can be gained from viewing bland data in creative ways. So, I came up with log4view as a single-evening project just to show it around my office.
Fast forward to now, it's Friday night and I've spent most of my weekend working on features and improvements. I think this is a really cool and fun project, and I would genuinely love to hear your thoughts and ideas.
So, third, my project.
What Log4View Does
Log4view is a tool for technical people who work with logs to view their data in a more visually stimulating way - in the hopes of bringing new insights and ideas. Log4view will generate up to 25 nodes per page, across a potentially endless number of pages. This number of nodes is hardcoded, but you can edit the acceptable_number_of_nodes_in_page variable. Ideally you will work with up to a couple hundred logs, but if you raise that variable, the sky's the limit.
Log4view accepts a file path to your data, and a secondary key. The tool will then create main nodes made up of secondary keys, and sub-nodes of the main outer key of your data structure.
The output is a color-coded collection of pages of network graphs, each featuring nodes and edges, with more data about each node shown when you hover your cursor over it.
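To make the grouping idea concrete, here is a rough stdlib-only sketch. This is not log4view's actual code; the log records and the "user" key are made up for illustration.

```python
from collections import defaultdict

# Hypothetical log records; "user" plays the role of the secondary key.
logs = [
    {"event": "login", "user": "alice"},
    {"event": "login", "user": "bob"},
    {"event": "upload", "user": "alice"},
]

# Main nodes are the distinct secondary-key values; sub-nodes are the
# remaining keys of each record, attached to their main node by an edge.
graph = defaultdict(set)
for record in logs:
    main_node = record["user"]
    for key, value in record.items():
        if key != "user":
            graph[main_node].add((key, value))

print(dict(graph))
```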
Target Audience
My target audience is people who view mountains of logs as I do, and who try to glean insights from them. I can't even imagine how many professions this includes, but I reckon many in IT, Data Science, some Engineering, etc.
Comparison
I checked out a few other commercial tools which claim to be log visualizers, but the closest I've found is SolarWinds, which creates a real-time view of logs with a few charts and colors.
This further emphasizes my point. Creative insights require creative views. I genuinely think the more creative ways you can view and think about your data, the better you'll understand it.
I hope I'm right.
Anyway, here's the link. Hope you like it, and if you don't, hope you're willing to share your thoughts with me :)
What it does
pg_mooncake brings a columnstore table to Postgres with DuckDB execution. These tables are written as Iceberg and Delta tables (Parquet files + metadata) to your object store.
Query them outside of Postgres with DuckDB, Polars, Pandas, or Spark directly, without complex pipelines, stitching together ad-hoc files, or dataframe wrangling.
Target audience
Product engineers, data engineers, and data scientists.
Comparison
You can use psycopg2 / sqlalchemy today. But the approach here is fundamentally different. You're writing data to an s3 bucket. You can share that bucket to your data science, engineering, analyst team without giving them access to your Postgres.
There are some Parquet exporters in Postgres (pg_duckdb, pg_parquet, pg_analytics). pg_mooncake actually exposes table semantics inside of Postgres (updates, deletes, transactions). And table semantics outside of Postgres (Iceberg/Delta).
Story time!
I'm one of the founders of Mooncake Labs. We are building the simple lakehouse without the complex pipelines / data engineering infra.
Modern apps are built on Postgres, and we want to bring Python processing and analytics closer to this ecosystem.
Postgres and Python are all you need.
I have a set of notebooks (actually they are jupytext files). They are broken up into logical units and exchange data via the file system. I am building a processing script to run the notebooks in order and render the plotting notebook to html.
This seems to work, but before I make this a production script I wanted to hear your thoughts.
Previously I was using one large file, but that got unwieldy. I did like using papermill for parameter injection, though; now I have to do that via a config file.
I have tried breaking out parts of the script into functions in a module, but that seems to give the worst of both worlds: I can't easily modify the code, and I still have to work with notebooks.
How are you all handling this?
We're excited to announce that the PyCon US 2025 website and call for (talk) proposals are officially live!
Please help us spread the word, and if you're interested in giving a talk read the guidelines and submit one!
I’ve been experimenting with an AI tool to generate / deploy Python apps in your browser. It has a lot of glaring issues (minimal logs, slow deployments, only supports FastAPI) but I’m curious to learn if anyone thinks this could be potentially useful before I go deeper on implementation.
You can try it here: ai.launchflow.com
The source code is here: github.com/launchflow/launchflow-ai
Here’s an example HTMX + FastAPI app I just generated with it: Todo App
What it does
Talk to an AI agent (Claude) to generate new FastAPI apps, edit them in your browser, then deploy to a serverless runtime.
Target audience
Any Python user that wants to prototype a new app.
Comparison
This project is a fork of bolt.new with the webcontainers swapped out for a remote, serverless runtime that better supports Python. (webcontainers only support wasm Python apps, so most packages will not work)
Disclaimer: It does require an account to use, but everything (even deployments) is 100% free. The account is only used to enforce rate limiting so we don’t burn through all our Anthropic credits!
Is this something you would use if it was more polished?
TLDR: clean up your inbox quickly at CleanMail. Code is over at https://github.com/BharatKalluri/cleanmail
What it does
Lets you bulk delete & unsubscribe from emails grouped by sender, so that you can quickly clean up all the cruft from your inbox!
Target audience
Personal side project; I think people may find it useful.
Comparison
Tidy mail exists, but unfortunately it was last updated 5 years ago and the website does not seem to work for me. I wanted a low-maintenance, simple app.
Story time!
I started this morning with 1847 emails in my Gmail inbox. After some preliminary analysis, I found that more than 70% of all my emails were marketing junk.
I searched around for some time and found that there are a lot of companies charging a pretty significant amount for something so straightforward.
So I wrote an open-source email cleaning solution: it groups emails by sender ID and gives you the option to both unsubscribe and delete all emails from that sender email ID.
After doing all this, I was down to around 180 emails, which I could quickly scan and archive or delete.
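The group-by-sender step can be sketched with the stdlib alone. This is not CleanMail's actual code; the "From" headers below are made up.

```python
from collections import Counter
from email.utils import parseaddr

# Hypothetical "From" headers pulled from an inbox.
froms = [
    "Shop <promo@shop.example>",
    "Shop <promo@shop.example>",
    "Alice <alice@example.com>",
]

# Group messages by sender address, biggest senders first -- the ones
# most worth unsubscribing from and bulk-deleting.
counts = Counter(parseaddr(f)[1] for f in froms)
print(counts.most_common())  # [('promo@shop.example', 2), ('alice@example.com', 1)]
```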
Please feel free to raise issues or share feedback!
I work for an organization where I will start to do some lightweight data analysis & dataviz, with a workflow where I make static charts and then hand them off to a designer to be jazzed up in Adobe Illustrator.
Does anyone have thoughts on the best visualization library to use for this? What I'd like is something that A) allows me to create somewhat good looking charts off the bat and B) can export these charts in a clean SVG format, so that a designer can concentrate mostly on adding visual flair without spending a lot of time tidying up things first.
Any reason to recommend, say, Plotly, Seaborn, or Altair over the others? Or something else entirely?
Last month I asked this community if anyone would be willing to take my new course on how to code Python and give me some feedback in return.
The response was overwhelming and I am so grateful! Loads of people took the course and I got tonnes of feedback which I was able to implement. I'm really pleased to share that since then I have now had over 300 enrolments on the course and a small amount of income coming my way.
This is massive for me, since this was my first course and I am now going forward onto making more courses - this time on the topic of simulation in Python.
So as a thank you, I'd like to give away 100 complimentary vouchers for the course, just for this community: https://www.udemy.com/course/python-for-engineers-scientists-and-analysts/?couponCode=THANKSREDDIT
Please take one of the vouchers if you feel you might benefit from the course. It is aimed at people with some kind of existing technical skillset (e.g. engineers, scientists, etc) so has a focus on data, statistics and modelling. The main libraries covered are numpy, pandas and seaborn.
Thanks again r/Python
Hi all,
I’ve been tasked with implementing a dashboard which will update monthly from a database which needs to show key analysis metrics, have user authentication, and ideally run super smooth. I have been looking at using libraries such as Django and combining it with plotting libraries but I’ve only used Streamlit in the past which required no JavaScript or HTML knowledge.
Are there any other solutions which would allow me to have greater control than Streamlit but without losing the ease and speed of deploying such dashboards? Extra points if the libraries are MIT licensed!
Three diverging colormaps have been added: "berlin", "managua", and "vanimo". They are dark-mode diverging colormaps, with minimum lightness at the center, and maximum at the extremes. These are taken from F. Crameri's Scientific colour maps version 8.0.1 (DOI: https://doi.org/10.5281/zenodo.1243862).
import numpy as np
import matplotlib.pyplot as plt

vals = np.linspace(-5, 5, 100)
x, y = np.meshgrid(vals, vals)
img = np.sin(x * y)

_, ax = plt.subplots(1, 3)
ax[0].imshow(img, cmap=plt.cm.berlin)
ax[1].imshow(img, cmap=plt.cm.managua)
ax[2].imshow(img, cmap=plt.cm.vanimo)
Already available in Matplotlib v3.10.0rc1.
I am exploring various tools and libraries for data extraction from documents like PDFs. One tool I've looked into is img2table, which has been effective at extracting tables and works as a wrapper around different OCR tools. However, I noticed that PyMuPDF is a requirement for img2table, and I’ve read that if you build with PyMuPDF, you must make your source code open-source in line with its AGPL license. Does this requirement still apply if I use a project where PyMuPDF is a dependency, even if I don’t directly interact with the library myself?
You can find it here:
FPV is a file path validation and cleaning library that consolidates all the quirky file path rules from different operating systems and cloud storage providers. It's designed to help automate compliance with various platform-specific file naming rules, especially when working with cloud storage services or syncing data across multiple systems.
While some built-in OS libraries can validate or clean file paths, they don’t generally cover complex scenarios—like cross-platform checks or cloud provider restrictions. FPV aims to address specific constraints unique to services like SharePoint, Box, OneDrive, and more.
Sure, but FPV organizes these rules into classes so that each supported platform has predefined validations and cleaning methods, saving you the time it would take to code all these restrictions individually. FPV can validate and clean file paths based on the platform’s unique restrictions, with modular classes for each service.
FPV can be a handy tool for:
Installation
pip install file-path-validator
Here’s a quick example of how FPV is used:
# example.py
from FPV import FPV_Windows, FPV_MacOS, FPV_Linux, FPV_Dropbox, FPV_Egnyte, FPV_OneDrive, FPV_SharePoint, FPV_ShareFile
# Example path with potential issues
example_path = "C:/ Broken/ **path/to||file . txt"
# Creating a validator object for Windows
FPVW = FPV_Windows(example_path, relative=True)
# Original path
print("Original Path:", FPVW.original_path)
# Clean the path
cleaned_path = FPVW.clean()
print("Cleaned Path:", cleaned_path)
# Validate the path
try:
FPVW.validate()
print("Path is valid!")
except ValueError as e:
print(f"Validation Error: {e}")
# Auto-cleaning upon instantiation
FPVW_auto_clean = FPV_Windows(example_path, auto_clean=True, relative=True)
print("Automatically Cleaned Path:", FPVW_auto_clean.path)
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
Let's keep the conversation going. Happy discussing! 🌟
Hi all!
I work freelance as an Analytics Engineer. My role with one of my major clients has taken somewhat of a turn lately, as I have been building a couple of internal Streamlit apps to automate some of their internal functions in the company. This is all fine and dandy; we have been hosting some on a local server, and in other cases I merely installed Python on their PC and made them a quick shortcut that boots up the server.
They want to make some of these apps available to their international offices.
It is VERY low traffic (we would go from about 5 daily users to about 30-40 daily users, each using the app for approximately 1-2 hours a day), so some sort of serverless solution seems obvious.
So what do you think would be a suitable solution going forward?
Deploy on some sort of cloud solution? (It seems you can host it in a serverless fashion, which looks like the obvious fit given the low traffic.)
Switch framework? (Taipy looks quite promising)
Ditch the fullstack Python idea and rebuild it with a proper separate frontend? (My frontend development capabilities are VERY limited.)
Something entirely different?
Thank you
Hi, I am looking for bot developers interested in deploying discord bots to a server that mainly builds Python projects and solutions in a community with around 10k users. The idea is to boost and expand engagement while implementing new features, so you'll be part of the server transformation journey. If you do have the experience or you are just starting but believe that your portfolio can provide meaningful value, we can start discussing the details.
just DM me to know more.
I'm trying to print streaming output using Rich. It works flawlessly without console.status; however, console.status causes the previous line to be overwritten.
Eg:
Iteration One Output,
Hello
Iteration Two Output,
There.
Expectation,
Iteration One Output,
Hello
Iteration Two Output,
Hello There.
Again, this happens only if I introduce console.status. Any suggestions? Sharing the code below.
with console.status("") as status:
    for chunk in ai.query_llm(user_input):
        console.print(f"{chunk.content}", end="")
        sleep(0.1)
    console.print()
    sleep(0.1)
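One possible workaround (assuming a recent Rich version): accumulate the chunks in a Text object and render it with Live, so earlier chunks stay on screen instead of being overwritten. The chunks list below stands in for ai.query_llm(user_input).

```python
from rich.console import Console
from rich.live import Live
from rich.text import Text

console = Console()
buffer = Text()
chunks = ["Hello", " ", "There."]  # stand-in for ai.query_llm(user_input)

with Live(buffer, console=console, refresh_per_second=10):
    for chunk in chunks:
        buffer.append(chunk)  # Live re-renders the growing buffer
```

Status.update() is another option if you only want to keep a spinner message current while streaming elsewhere.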
You can find it here:
What My Project Does
Scrunkly is a zero-dependency script runner that fits my needs.
pyproject.toml
I use this for things like deploying and SSHing, so a pyproject.toml isn't as portable.
Why not use X?
I can't add features to it that cater to my needs.
We've been using it in production at the startups I've worked with for quite some time.
Example
# run.py
import scrunkly
from scrunkly import with_env, py
dev_env = with_env({
"DEBUG": "1",
"MONGO_DB_URI": "mongodb://localhost:27017",
"MESSAGING_URL": "mongodb://localhost:27017",
"MONGO_DB_NAME": "test",
"AWS_REGION": "ap-southeast-2",
"AWS_S3_BUCKET_NAME": "test-...",
"AWS_ACCESS_KEY_ID": "AKI...", # these only have access to test buckets
"AWS_SECRET_ACCESS_KEY": "eyFi7...",
})
prod_env = with_env({
"DEBUG": "0",
"MONGO_DB_NAME": "prod",
"AWS_REGION": "ap-southeast-2",
"AWS_S3_BUCKET_NAME": "prod-...",
})
scrunkly.scripts({
"api:dev": [dev_env, f"""{py} -m watchfiles --filter python "uvicorn api.api:app --port 8001" ."""],
"api:prod": [prod_env, f"{py} -m uvicorn api.api:app --host --port 8080"],
"reqs:generate": f"{py} -m pipreqs.pipreqs . --force",
"worker": f"{py} ./run_worker.py",
"install:dev": f"{py} -m pip install -r dev-requirements.txt",
"install:app": f"{py} -m pip install -r requirements.txt",
"load-data": f"{py} ./scripts/part_data_import.py --force",
"install": ["install:dev", "install:app", "load-data"],
"api:compose:rebuild": "docker-compose up -d --no-deps --build api",
"worker:compose:rebuild": "docker-compose up -d --no-deps --build worker",
"up:prod": "docker-compose up -d --scale worker=10",
"up:prod:full": "docker-compose up -d --scale worker=10 --build",
})
Then you can run it with
scrunkly api:dev
or if for some reason you don't have scripts installed
python3 run.py api:dev
You've probably heard of Cadwyn before, and I know it's been mentioned here previously but this video has the creator, Stanislav Zmiev, giving a full overview and demo of how to implement it for advanced API versioning (like DB migrations!) in Python/FastAPI projects:
Video: https://youtu.be/9-WPvMsTjj8
Cadwyn: https://github.com/zmievsa/cadwyn
Hello party people,
a while ago I started a project called confluent to generate code for different programming languages from a language-neutral YAML configuration, to make updating constants files for different languages easier. As time moved on, I found some flaws in how I implemented the project (especially the name bugged me). So today I'm proud to finally release it under its new name: ninja-bear 🥷🐻
It uses the same configuration principles but adds more flexibility for developers to add their own stuff by offering a plugin-system.
Let's say you only want to generate files for C and TypeScript. No problem: install ninja-bear, ninja-bear-language-c, and ninja-bear-language-typescript and you're ready to go.
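The core idea can be sketched in a few lines. This is not ninja-bear's actual code, and it uses JSON to stay stdlib-only where ninja-bear itself uses YAML; the constant names are made up.

```python
import json

# Language-neutral config mapping constant names to values.
config = json.loads('{"MAX_RETRIES": 3, "API_URL": "https://example.com"}')

# Emit a tiny TypeScript constants file from the shared config;
# a C generator would do the same with #define lines instead.
lines = [f"export const {name} = {json.dumps(value)};" for name, value in config.items()]
print("\n".join(lines))
```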
Here's a short demo on how to use it: https://youtu.be/bya_exGrS68
Let me know what you think :)
Hi, I'm trying to remove metadata from a file in Python with PyExifTool. I'm calling execute() with the parameters needed to remove metadata, like with the original exiftool tool.
In windows, for example, to remove metadata:
exiftool -all= -overwrite_original /path/to/file
So I wrote this function in Python:
def remove_metadata_file(filepath):
    try:
        with exiftool.ExifTool() as et:
            result = et.execute("-all=", "-overwrite_original", filepath)
            if "0 image files updated" in result:
                return f"Couldn't remove metadata from file: {os.path.basename(filepath)}"
            else:
                return f"File: {os.path.basename(filepath)} metadata has been removed correctly"
    except Exception as e:
        messagebox.showerror("Error", f"Error removing metadata from a file: {e}")
And I've done a lot of testing, printing result and filepath, and it is always:
0 image files updated
1 image files unchanged
I tried deleting the '=' in "-all=", but that command just prints all the metadata from the file.
I had an interesting experience with trying to run a million empty tests. It showed me some things about how Python works that were not obvious to me before.
Hey everyone,
I'm a backend/data engineer with 10 years of experience, and I'm hitting a roadblock with the UI for a multi-tenant web app I’m building. My client isn’t satisfied with the current Streamlit-based UI, even after adding custom React components.
The backend is solid—I’ve set up all the necessary queries and table schemas, and I know exactly how the visuals should look. The app is designed to allow admins to manage CRUD operations for users and metrics, with the ability to view all users' data, while individual users can only see their own information. For authentication, I'm using AWS and Cognito to handle login and user management.
I recently came across Django/react templates, which seem like a great fit for my needs, but I’m finding component libraries a bit overwhelming. I also checked out Reflex.dev, though it feels somewhat clunky.
At this point, I'm open to simplifying the stack, even if that means dropping multi-tenancy. I’d really appreciate any recommendations on an easy way to layer a UI over my database and queries, particularly one that works well with AWS and Cognito.
Thanks in advance.
I've just released version 0.3.1 of IcedPyGui: Rust bindings for Iced, using PyO3 and built with Maturin.
IPG has many widgets now and more will be added each month. If you have ever used dearpygui, you'll find the syntax similar.
There are a ton of examples at https://github.com/icedpygui/IcedPyGui-Python-Examples
These examples will easily get you started.
The rust repository is https://github.com/icedpygui/IcedPyGui
The Iced repository is https://github.com/iced-rs/iced
Hello, I have recently published a new book that focuses on structural pattern matching in Python. You can find it at https://a.co/d/95C84J6. If you find this book interesting and would like me to arrange a free copy, please send me a direct message.