/r/madeinpython
A subreddit for showcasing the things you made with the Python language! Use us instead of flooding r/Python :)
Hey check out r/madeinjs for JavaScript and Typescript!
Are you tired of spending hours perfecting your resume for every job application? Say hello to Resume_Builder_AIHawk, a powerful Python tool designed to make creating visually stunning resumes fast and effortless!
Help us enhance the visual appeal of our resume templates by creating custom CSS styles. Your unique designs could become part of our project!
To learn how to contribute, follow the guidelines for designers.
Ready to elevate your job application game? Check out Resume_Builder_AIHawk on GitHub and start creating standout resumes today!
Feel free to ask any questions or provide feedback. Your input is invaluable!
I have been working on this tool for the past few weeks. Its goal is very simple: checking whether a URL is still working or not. The real challenge was handling the different edge cases: redirects, 4XX, 5XX, connection timeouts, read timeouts, etc. Among the features: it uses a HEAD request instead of GET to save some bandwidth. The tool is available on PyPI and the source code on GitHub. Let me know if you have any suggestions or feedback, I would be happy to read them!
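A minimal stdlib sketch of the kind of logic involved (this is not the package's actual code, and the real tool handles more edge cases; the status-code buckets below mirror the ones mentioned above):

```python
import socket
import urllib.request
from urllib.error import HTTPError, URLError

def classify(status: int) -> str:
    """Map an HTTP status code to a link-health verdict."""
    if 200 <= status < 300:
        return "ok"
    if 300 <= status < 400:
        return "redirect"
    if 400 <= status < 500:
        return "client error (4XX)"
    if 500 <= status < 600:
        return "server error (5XX)"
    return "unknown"

def check(url: str, timeout: float = 5.0) -> str:
    """Probe a URL with a HEAD request and classify the outcome."""
    req = urllib.request.Request(url, method="HEAD")  # HEAD saves bandwidth vs GET
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify(resp.status)
    except HTTPError as e:       # 4XX / 5XX responses raise here
        return classify(e.code)
    except socket.timeout:
        return "timeout"
    except URLError:
        return "connection error"
```

Note that `urlopen` follows redirects by default, so a production checker that wants to report them would need to install a non-following redirect handler.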
Scraper code here: https://github.com/JoeSchiff/Joes_Jorbs
Website here: http://joesjorbs.com/index.html
I've been working on this for a few years to help my wife find a librarian job (she did!). Maybe it can help someone else.
Search for a specific job title. NOT for browsing all available jobs.
You can search in Civil Service, schools, and universities.
You can limit your search to a geographic area.
The website will return a match if the job title is found anywhere on the page, so false positives sometimes occur.
I created an AI bot that:
And all of this while I was sleeping! In just one month, this method helped me secure around 50 interviews. The tailored CVs and cover letters, customized based on each job description, made a significant difference.
Artificial intelligence is rapidly reshaping the recruiting landscape:
This method is incredibly effective at passing through automated screening systems. By generating CVs and cover letters tailored to each job description, my script significantly increases the chances of getting noticed by both AI and human recruiters.
In a world of AI-optimized applications:
Could become the real distinguishing factors.
Observing this technological revolution, I can't help but reflect on the profound implications for the world of work. While efficient, the automation of job applications raises questions about the very nature of professional relationships. We face a paradox: as we seek to optimize the selection process, we risk losing the human element that often makes a difference in a work environment. The challenge ahead is not just technological, but also ethical and social. We'll need to find a delicate balance between the efficiency of artificial intelligence and the richness of human interactions. Only then can we build a future of work that is not just productive, but also fulfilling and meaningful for everyone.
With the growing use of AI by candidates, will recruiters need to return to conducting interviews personally instead of relying on stupid automated screenings?
Here's what it does:
Curious? Try it here: GitHub Project
(My project is completely free and open source, unlike other similar services that cost a lot and offer very little value. Since it's still in beta, every star on GitHub is a huge encouragement to keep developing it!)
P.S. Remember: with the great power of AI comes great responsibility. Let's use it ethically!
It checks approximately 130 security items. The assessment criteria are based on the CIS Benchmark RHEL Security Guidelines.
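For illustration, one hypothetical check in this style (not taken from the project) might flag world-writable files, a condition CIS-style benchmarks commonly warn about:

```python
import os
import stat

def world_writable(path: str) -> bool:
    """Flag files that grant write permission to 'other' users,
    the kind of condition a CIS benchmark check looks for."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH)
```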
I hope it is helpful to those who need it.
Pythonista Scene OpenGL fire demo
I went down a procedural-particle-effect rabbit hole making some torch flames in Pythonista using the scene module, but quickly ran into performance issues with my naive approach of changing each particle's color by changing the fill_color of each shape node. This resulted in less than 10 fps when trying to render 810 particle objects.
Switching to some fancy fragment shader OpenGL code resulted in a full 60 fps for the same 810 objects.
Here's the full code repo: code
You'll see in there that the naive approach is commented out; it involved a small list of hex colors that would change over time and be set in the draw method by resetting the fill_color.
I also learned about some interesting gotchas: assigning the Shader class and code to a variable and passing it that way resulted in a global change to all objects using that shader, but instantiating the Shader class individually, and making sure each object had its own increment and progress values, allowed for the correct staggered behavior.
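That gotcha generalizes beyond Pythonista: sharing one mutable object between nodes means a change shows up everywhere. A toy stand-in class (the real scene.Shader has more to it) makes the difference visible:

```python
class Shader:
    """Toy stand-in for Pythonista's scene.Shader, only to show the sharing gotcha."""
    def __init__(self, code):
        self.code = code
        self.progress = 0.0  # per-shader uniform driving the flame animation

shared = Shader('fire.fsh')
flames_shared = [shared] * 3          # every node points at the SAME shader...
flames_shared[0].progress = 0.5       # ...so this "one" change affects all three

flames_own = [Shader('fire.fsh') for _ in range(3)]  # one instance per node
flames_own[0].progress = 0.5          # only the first flame is affected
```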
Hello everyone!
I would like to share a side project which will help you boost your LinkedIn connections.
Quick backstory: I just did not want to pay $15/month for other software that does the same.
What My Project Does:
Target Audience - people who want to expand their network on LinkedIn
Comparison - all available alternatives are paid. This one is open source!
Why am I posting this on Reddit?
1st - I don't want you to pay $15 or more for other software.
2nd - I invite you to practice and master your Python skills by contributing to the project (fork the project and open a pull request on a feature branch). Maybe add some functionality which will help everyone who uses the repo; for example, implement a feature for liking the posts in your feed. Have fun and enjoy!
3rd - I would like to connect with you on GitHub (it's also automated: you follow me, my GitHub follows you back) and LinkedIn to expand my network and yours.
Important: don't get banned by LinkedIn! Limit yourself to sending no more than 100 connection requests per week.
GitHub RepoĀ -Ā https://github.com/OfficialCodeVoyage/LinkedIn_Auto_Connector_Bot/tree/master
Ask me any questions you have! Tell me what I should do next! Have fun!
Discover how to perform image segmentation using the K-means clustering algorithm.
In this video, you will first learn how to load an image into Python and preprocess it using OpenCV to convert it to a suitable format for input to the K-means clustering algorithm.
You will then apply the K-means algorithm to the preprocessed image and specify the desired number of clusters.
Finally, you will demonstrate how to obtain the image segmentation by assigning each pixel in the image to its corresponding cluster, and you will show how the segmentation changes when you vary the number of clusters.
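The core of the method can be sketched in plain NumPy (the video works with OpenCV's `cv2.kmeans`; this toy version only illustrates the assignment/update loop on a synthetic two-tone "image"):

```python
import numpy as np

def kmeans_segment(pixels, k, iters=10):
    """Cluster pixel colors; returns a cluster label per pixel and the k centers."""
    # deterministic init: spread the initial centers across the pixel array
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assignment step: each pixel goes to its nearest center in RGB space
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# synthetic "image": 50 dark pixels and 50 bright pixels, flattened to N x 3
pixels = np.vstack([np.full((50, 3), 30.0), np.full((50, 3), 220.0)])
labels, centers = kmeans_segment(pixels, k=2)
segmented = centers[labels]  # segmentation: replace every pixel by its cluster center
```

Varying `k` changes how many flat color regions the segmented image contains, which is exactly the effect demonstrated at the end of the video.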
You can find more similar tutorials on my blog posts page here: https://eranfeit.net/blog/
Check out this tutorial: https://youtu.be/a2Kti9UGtrU&list=UULFTiWJJhaH6BviSWKLJUM9sg
QualityScaler is a Windows app powered by AI to enhance, upscale and denoise photos and videos.
NEW
Video upscale STOP&RESUME
- It is now possible to stop and resume the video upscale process at any time
- When restarting (with the same settings), the app checks which files have already been upscaled and resumes from the interrupted point
- NOTE: if the video's temporary files are deleted, upscaling will start over again
User settings save
- The app will now remember all of the user's options (AI model, GPU, GPU VRAM etc.)
- NOTE: in case of problems, delete the file QualityScaler_UserPreference.json in the Documents folder
Antivirus problem fix
- After contacting Microsoft, Avast and AVG
- QualityScaler will finally no longer be flagged as malware by these antivirus products
IRCNN AI improvements
- The IRCNN implementation is now divided into 2 separate models
- IRCNN_Mx1 (medium denoise)
- IRCNN_Lx1 (high denoise)
BUGFIX / IMPROVEMENTS
Under-the-hood updates
- Updated Python to version 3.12 (improved performance)
- Updated FFMPEG to version 7.0.1 (bugfixes)
- Updated Exiftool to the latest available version
AI upscale improvements
- Improved upscaled image/video quality and "temporal stability"
- Better support for images with transparent backgrounds
- Improved memory usage and performance
AI multithreading improvements
- Multithreaded video upscale is now more stable
- Fixed a problem that could lead to losing some upscaled frames
General improvements
- Bug fixes, code cleaning, performance improvements
- Updated dependencies
Just need an all-inclusive Python, Selenium and OpenCV on iPhone, no computers.
This article provides an overview of various tools that can help developers improve their testing processes - it covers eight different automation tools, each with its own strengths and use cases: Python Automation Tools for Testing Compared - Guide
I am trying to convert a CSV file into a PDF using a Python script. I can generate the PDF with all the columns the CSV file has, but I only need some of the columns.
Please guide me on how to customize it.
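The column-selection half can be done with the standard csv module before any PDF rendering (the column names below are made up for illustration; feed the filtered rows into whichever PDF library you are already using):

```python
import csv
import io

def select_columns(csv_text, wanted):
    """Keep only the wanted columns from CSV text, row by row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{name: row[name] for name in wanted} for row in reader]

# hypothetical sample data with three columns; we keep only two of them
sample = "name,age,email\nAda,36,ada@example.com\nAlan,41,alan@example.com\n"
rows = select_columns(sample, ["name", "email"])
```

If you are using pandas, `pd.read_csv(path, usecols=[...])` achieves the same filtering in one call.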
Started building this game to develop my SQLAlchemy fluency and was really pleased with how this screen turned out. The 1000-line script allows you to create a callable inventory-swap widget for any NPC in just one line. The only external library/framework in use is SQLAlchemy.
Have you heard about T Coronae Borealis (T CrB)? No? Well, no surprise, since this binary star is very, very faint and not visible to the naked eye... YET.
Roughly every 80 years the white dwarf of this binary star system accumulates enough hydrogen from its red giant companion to spark nuclear fusion on its surface. A nova occurs, releasing a large amount of energy. Since this nova is "kinda close by", the brightness increases to "naked eye visibility".
But where is T CrB? Well, of course one can use Stellarium, but using Python and some coding of your own is a great way to understand how these coordinates are computed and displayed.
Thus I created a small Python script + tutorial to create the following red-eye friendly sky map, where the white "+" is the position of the star.
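As a taste of the computation involved, the classic altitude formula (from declination, observer latitude, and hour angle) fits in a few lines; the actual script of course does more (time handling, projection, plotting):

```python
import math

def altitude_deg(dec_deg, lat_deg, hour_angle_deg):
    """Altitude of a star above the horizon, from its declination,
    the observer's latitude, and the star's local hour angle."""
    dec, lat, ha = (math.radians(v) for v in (dec_deg, lat_deg, hour_angle_deg))
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    sin_alt = max(-1.0, min(1.0, sin_alt))  # guard against float rounding
    return math.degrees(math.asin(sin_alt))

# sanity check: at hour angle 0, an observer whose latitude equals the star's
# declination sees the star pass straight through the zenith (altitude 90 deg)
alt_zenith = altitude_deg(26.0, 26.0, 0.0)
```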
But WHEN is it happening?
Well... no one really knows. Potentially in the next weeks or months. So keep your eyes up :)
YouTube Link: https://youtu.be/ocklQipgPEY
Cheers,
Thomas
What if we asked our deep neural network to draw its best image for a trained model?
What would it draw? What is the optimized image for each model category?
We can discover that using the class maximization method on the VGG16 model.
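The idea behind class maximization is gradient ascent on the input rather than the weights. A toy NumPy version, with a linear scorer standing in for VGG16 (the real method backpropagates through the network and adds regularization):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in for "how strongly each input pixel excites the class neuron"
x = np.zeros(64)          # start from a blank "image"

def class_score(x):
    return float(w @ x)   # stand-in for the network's pre-softmax class score

before = class_score(x)
for _ in range(50):
    grad = w              # d(score)/dx for this linear stand-in
    x = x + 0.1 * grad    # ascend: nudge the input to excite the class more
after = class_score(x)
# x is now the input this toy "model" finds most class-like
```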
You can find more similar tutorials on my blog posts page here: https://eranfeit.net/blog/
You can find the video tutorial here: https://youtu.be/5J_b_GxnUBU&list=UULFTiWJJhaH6BviSWKLJUM9sg
Enjoy
Eran
LightRAG is a light, modular, and robust library, covering RAG, agents, and optimizers.
Links here:
LightRAG github: https://github.com/SylphAI-Inc/LightRAG
LightRAG docs: https://lightrag.sylph.ai/
Discord: https://discord.gg/ezzszrRZvT
We are excited to share with you LightRAG, an open-source library that helps developers build LLM applications with high modularity and 100% understandable code!
LightRAG was born from our efforts to build a challenging LLM use case: a conversational search engine specializing in entity search. We decided to gear up the codebase as it had become unmanageable and insufficient. With an understanding of both AI research and the challenge of putting LLMs into production, we realized that researchers and product teams do not share libraries the way PyTorch formed a smooth transition between research and production. We decided to dive deeper and open-source the library.
After two months of incredibly hard yet fun work, the library is now open to the public. Here are our efforts to unite research and production:
- 3 Design Principles: We share a similar design philosophy to PyTorch: simplicity and quality. We emphasize optimizing as the third principle, as we notice that building product-grade applications requires multiple iterations and a rigorous process of evaluating and optimizing, similar to how developers train or retrain models.
- Model-agnostic: We believe research and production teams need to use different models in a typical dev cycle, such as large context LLMs for benchmarking, and smaller context LLMs to cut down on cost and latency. We made all components model-agnostic, meaning when using your prompt or doing your embedding and retrieval, you can switch to different models just via configuration without changing any code logic. All these integration dependencies are formed as optional packages, without forcing all of them on all users.
- Ensure developers can have 100% understanding of the source code: LLMs are like water; they can be shaped into any use case. The best developers seek 100% understanding of the underlying logic, as customization can be unavoidable in LLM applications. Our tutorials not only demonstrate how to use the code but also explain the design of each API and potential issues, with the same thoroughness as a hands-on LLM engineering book.
The result is a light, modular, and robust library, covering RAG, agents, and optimizers. It is for:
LLM researchers who are building new prompting or optimization methods for in-context learning
Production teams seeking more control and understanding of the library
Software engineers who want to learn the AI way to build LLM applications
Feedback is much appreciated as always. Come and join us! Happy building and optimizing!
Sincerely,
The LightRAG Team
A secure password manager built in Python with the cryptography package
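The post doesn't show code, but a typical pattern with the cryptography package is Fernet symmetric encryption. This is a guess at the approach, not the project's actual code; a real manager would derive the key from a master password (e.g. with PBKDF2) rather than generating it randomly:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in a real manager, derive this from a master password
vault = Fernet(key)

token = vault.encrypt(b"hunter2")  # store the opaque token, never the plaintext
plain = vault.decrypt(token)       # decryption requires the same key
```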
Hey everyone,
have you seen Saturn through a telescope? If not: you should! You can easily see the great rings with your own eyes. But... currently we see them "edge on", leading to a less stunning image, as shown below for the current year and 2028:
Now in my "Compressed Cosmos" coding tutorial series, where I try to create Python snippets in less than 100 lines of code, I created a small script to compute this tilt angle's evolution over time. Currently it is almost 0°, but the angle is increasing. The following plot shows this angle vs. time, as produced by my script (a negative angle indicates the view "from below"):
Now if you'd like to understand how I did it, check out my current notebook on my GitHub repo. I also made a short video about it on YouTube.
Hope you can learn something from it :). I'll continue to create space-related coding videos that cover different topics.
Best,
Thomas
GraphingLib is a Matplotlib wrapper that integrates data analysis in an object-oriented API, with the ability to create custom figure styles.
Quick links:
GraphingLib Style Editor's Github
I'm excited to share a project my friends and I have been working on: GraphingLib, an open-source data visualization library wrapped around Matplotlib and designed to make creating and styling figures as easy as possible.
Our target audience is the scientific community, though GraphingLib is versatile enough for other purposes as well. Our go-to model user is someone making measurements in a lab who wants a working visualization script on the spot, as quickly as possible, without having to do much more afterwards to make it publication-ready.
We've put a lot of effort into documenting GraphingLib extensively. Check out the "Quickstart" section to learn how to install and import the library. The "Handbook" has detailed guides on using different features, the "Reference" section provides comprehensive details on objects and their methods, and the "Gallery" has tons of examples of GraphingLib in action.
We want your feedback! GraphingLib is still in development, and we'd love your help to make it better. There are very few people using it right now, so there are definitely plenty of things we haven't thought of, and that's why we need you.
In an attempt to anticipate some of your comments, here are a few things that GraphingLib was deliberately not meant to be:
GraphingLib is still evolving, so you might run into some bugs or missing features. Thanks for your patience and support as we continue to improve the library. We're looking forward to hearing your feedback!
Cheers,
The GraphingLib community
In this short series I'd like to show space-science-related Python code, compressed into less than 100 lines, that answers a dedicated scientific question.
What My Project Does:
The Alibaba-CLI-Scrapper project is a Python package that provides a dedicated command-line interface (CLI) for scraping data from Alibaba.com. The primary purpose of this project is to extract products and their related supplier information from Alibaba, based on keywords provided by the user, and store them in a local database such as SQLite or MySQL.
Target Audience:
The project is primarily aimed at developers and researchers who need to gather data from Alibaba for various purposes, such as market analysis or product research. The CLI makes the tool accessible to users who prefer a command-line-based approach over web-based scraping tools.
Comparison:
While there are other Alibaba scraping tools available, the Alibaba-CLI-Scrapper stands out in several ways:
Asynchronous Scraping: The use of Playwright's asynchronous API allows the tool to handle a large number of requests efficiently, which is a key advantage over synchronous scraping approaches.
Database Integration: The ability to store the scraped data directly in a database, such as SQLite or MySQL, makes the tool more suitable for structured data analysis and management compared to tools that only provide raw data output.
User-Friendly CLI: The command-line interface provides a more accessible and automation-friendly way of interacting with the scraper, compared to web-based or API-driven tools.
Planned Enhancements: The project roadmap includes valuable features like data export to CSV and Excel, integration of a Retrieval Augmented Generation (RAG) system for natural language querying, and support for PostgreSQL, which can further enhance the tool's capabilities and make it more appealing to a wider range of users.
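The concurrency pattern behind the asynchronous scraping point above can be sketched with plain asyncio (`fetch` here is a stand-in for an async Playwright page scrape, not the project's actual code):

```python
import asyncio

async def fetch(page_id):
    """Stand-in for scraping one result page with an async browser API."""
    await asyncio.sleep(0.01)      # simulate network latency
    return f"page-{page_id}"

async def scrape_all(n):
    # all n scrapes are in flight at once instead of running one after another,
    # so total time is close to the slowest single request
    return await asyncio.gather(*(fetch(i) for i in range(n)))

results = asyncio.run(scrape_all(5))
```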
Here is the GitHub repository: https://github.com/poneoneo/Alibaba-CLI-Scrapper
And the PyPI link: https://pypi.org/project/aba_cli_scrapper/
Looking forward to your reviews and suggestions to enhance this project.
Feel free to roast it. How would you do it better?
from msvcrt import getch, kbhit
from os import system
from time import sleep


class PAINT:
    '''Console text decoration.'''
    reset = '\033[0;0m'

    def clear():
        '''Clear the console.'''
        system('cls || clear')

    class TXT:
        '''Text color control.'''
        black = {
            1: '\u001b[38;5;232m',
        }
        yellow = {
            1: '\u001b[38;5;226m',
            2: '\u001b[38;5;3m',
        }

    class BG:
        '''Background color control.'''
        black = {
            1: '\u001b[48;5;0m',
        }
        yellow = {
            1: '\u001b[48;5;3m',
            2: '\u001b[48;5;11m',
        }
        gray = {
            1: '\u001b[48;5;233m',
            2: '\u001b[48;5;234m',
        }


class MENU:
    '''Create a new menu object.'''

    class EVENT:
        '''Subclass for handling events.'''

        def key_press(menu: object):
            key = getch()
            if key == b'K':  # left arrow
                for b in range(len(menu.menu)):
                    if menu.menu[b]['selected']:
                        if b - 1 >= 0:
                            menu.menu[b]['selected'] = False
                            menu.menu[b - 1]['selected'] = True
                        return
            elif key == b'M':  # right arrow
                for b in range(len(menu.menu)):
                    if menu.menu[b]['selected']:
                        if b + 1 < len(menu.menu):
                            menu.menu[b]['selected'] = False
                            menu.menu[b + 1]['selected'] = True
                        return
            elif key == b'\r':  # enter key
                for button in menu.menu:
                    if button['selected']:
                        button['action']()

    def __init__(self):
        self.active = True
        self.selected = []
        self.menu = [
            {
                'type': 'exit',
                'text': '[EXIT]',
                'selected': True,
                'action': exit
            },
            {
                'type': 'clr sel',
                'text': '[CLEAR]',
                'selected': False,
                'action': self.clear_selected
            },
            {
                'type': 'example',
                'text': '[BUTTON 1]',
                'selected': False,
                'action': self.example_bttn,
                'value': 'Button #1'
            },
            {
                'type': 'example',
                'text': '[BUTTON 2]',
                'selected': False,
                'action': self.example_bttn,
                'value': 'Button #2'
            },
            {
                'type': 'example',
                'text': '[BUTTON 3]',
                'selected': False,
                'action': self.example_bttn,
                'value': 'Button #3'
            },
        ]

    def clear_selected(self):
        self.selected.clear()

    def example_bttn(self):
        for button in self.menu:
            if button['selected']:
                self.selected.append({
                    'value': f"{button['value']} "
                })
                return

    def draw_buttons(self):
        i = '\n\n'.center(50)
        for button in self.menu:
            if button['selected']:
                i += (
                    PAINT.BG.black[1] + PAINT.TXT.yellow[1] +
                    button['text'] + PAINT.reset
                )
            else:
                i += (
                    PAINT.BG.gray[1] + PAINT.TXT.black[1] +
                    button['text'] + PAINT.reset
                )
        print(i)

    def draw_selected(self):
        i = '\n'.center(50)
        for sel in self.selected:
            i += sel['value']
        print(i)

    def render(self):
        while self.active:
            if kbhit():
                self.EVENT.key_press(self)
            else:
                PAINT.clear()
                self.draw_buttons()
                self.draw_selected()
                sleep(0.025)


menu = MENU()
menu.render()