/r/IPython
If you have a question about IPython (now Jupyter), the interactive computing environment written by scientists for scientists with an eye toward presentation, we want you here. If you have tips, Notebooks you want to share, or you want feedback, we want you here. We welcome posts about all versions of the IPython IDE, plus Markdown and LaTeX. We discuss the popular libraries Matplotlib, SciPy, NumPy, & SymPy. If you want to know about features like embedded video or animation, check us out.
IPython (now Jupyter) was originally started by Fernando Pérez as a way to improve the Python workflow for scientific computing. Since then it has grown in popularity, and gaining the ability to make XKCD-styled plots using Matplotlib hasn't hurt. With additions like the IPython Notebook, which runs in a browser, and the Notebook Viewer, IPython is a scientist's best friend.
Related subreddits
Useful Libraries
Cloud Services
Official IPython Sites
Official Example Notebooks
Additional Good Examples
Installation
Other Educational Resources
NBViewer Browser Extensions
Additional References
Comment Guidelines
The visitors to /r/IPython come from very different backgrounds, and some have little programming experience. Since this subreddit exists primarily to provide help with IPython and to host discussions about current and future features, make sure it is clear how your comments relate to the original post or the previous comment.
I am taking a Python class at my college, and as part of the class we installed Anaconda and the Jupyter Notebook to write our code in. Whenever I try to open Jupyter Notebook, it opens Photoshop on my laptop.
The TA for my class had me uninstall Photoshop, which got Jupyter Notebook to run. However, I need to keep Photoshop on my laptop for my internship, so I would prefer not to have to uninstall and reinstall it.
Do y'all know a potential work around for this?
I appreciate any advice, thank you all.
Edit:
I just realized that I misspelled Jupyter, my bad.
Hi everyone! Can anyone help with the keyboard shortcut to clear cell output in Jupyter Notebook v7.2.2? A simple Google search yields "O" / "Shift+O" as the shortcut in various articles, but neither works in v7.2.2, and the only way to do it now seems to be "right click -> Clear Cell Output", which isn't optimal.
#jupyternotebook #python
Hey everyone, I made an extension that lets you chat with AI within IPython so that you can understand, debug, and write better code faster. It uses relevant context from your session to suggest the best responses to your questions. You can choose between gpt-4o and claude-3.5-sonnet, and I'm planning to add local models soon. You can check out the code on GitHub and install it from PyPI using pip install ipychat.
Here's a demo:
Hi Pythonistas! 👋
Ever needed to share your Jupyter Notebook as a professional-looking PDF but got stuck fiddling with nbconvert or other complex tools? I’ve found a super simple solution: rare2pdf.com/ipynb-to-pdf/.
✅ Just upload your .ipynb file and it converts to a neat PDF in seconds. Perfect for presentations, sharing with non-tech folks, or archiving your work.
I’d love to hear if this saves you some time! Give it a try and let me know what you think. 😊
I'm running the JupyterHub single-user image and noticing spikes in CPU usage every ~30s or so.
Is this normal for JH? Looking at the output of top in the container shows the python3 process is being called periodically. Any thoughts on how I can troubleshoot this assuming it's not normal behavior? Thanks.
I'm working in AWS SageMaker, doing my analyses using Jupyter on an EC2 instance in the cloud. I've been using JupyterLab for a while now, but I've noticed that when I close my tabs, my Jupyter processes end as well. I tested the same with classic Jupyter Notebook, and those processes stay active even when I close my tabs. Is this to be expected, and is there a way to keep JupyterLab running even after closing my tabs? I'm not sure if working in SageMaker/the cloud makes a difference compared to working locally.
I'm trying to get back into machine learning. I tried six months ago, but my operating system crashed and I had to reinstall it completely, which was a bit of a shame!
The software might have had some updates since then, which is probably why I'm having trouble. I'm trying to select a kernel with Visual Studio Code, but I'm unsure if I'm doing it right. I followed the method given by VSCode, but I'm still stuck on kernel selection.
I'm happy to say that installing the extensions and creating the Conda environment went well! However, when I select the kernel, I get this message:
I thought I'd share the list of extensions I've installed in case it helps:
I've done a lot of research online, but sadly none of the solutions I found worked.
Hi,
I'm trying to run Jupyter Enterprise Gateway (JEG) on my Windows Server 2019 machine to connect my laptop to the kernels on the server.
The connection works fine and the kernels start, but they close after a WebSocket timeout.
Here is what I can see in the JEG console:
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] Launching kernel: 'Python 3 (ETL)' with command: ['C:\Users\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] BaseProcessProxy.launch_process() env: {'KERNEL_LAUNCH_TIMEOUT': '', 'KERNEL_WORKING_DIR': '', 'KERNEL_USERNAME': '', 'KERNEL_GATEWAY': '', 'KERNEL_ID': '', 'KERNEL_LANGUAGE': '', 'EG_IMPERSONATION_ENABLED': ''}
[I 2024-11-17 18:54:53.273 EnterpriseGatewayApp] Local kernel launched on 'ip', pid: 16132, pgid: 0, KernelID: c66b786d-403c-493f-84f4-458b61a41541, cmd: '['C:\Users\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']'
[D 2024-11-17 18:54:53.274 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61198
[D 2024-11-17 18:54:53.281 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61195
[I 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel started: c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel args: {'env': {'KERNEL_LAUNCH_TIMEOUT': '40', 'KERNEL_WORKING_DIR': 'a path on my laptop', 'KERNEL_USERNAME': 'Laptop username'}, 'kernel_headers': {}, 'kernel_name': 'etl-env'}
[I 241117 18:54:53 web:2348] 201 POST /api/kernels (ip) 29.00ms
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Requesting kernel info from c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.346 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61194
[I 241117 18:54:53 web:2348] 200 GET /api/kernels (ip) 0.00ms
[D 2024-11-17 18:54:53.367 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.368 EnterpriseGatewayApp] Waiting for pending kernel_info request
[D 2024-11-17 18:54:53.378 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[W 2024-11-17 18:54:53.379 EnterpriseGatewayApp] Replacing stale connection: c66b786d-403c-493f-84f4-458b61a41541:66351527-a8ee-422a-9305-f3b432ee58df
[D 2024-11-17 18:54:53.380 EnterpriseGatewayApp] Found kernel ds-env in C:\Users*\AppData\Roaming\jupyter\kernels
[D 2024-11-17 18:54:53.380 EnterpriseGatewayApp] Found kernel etl-env in C:\Users*\AppData\Roaming\jupyter\kernels
[W 2024-11-17 18:54:53.381 EnterpriseGatewayApp] Native kernel (python3) is not available
[I 241117 18:54:53 web:2348] 200 GET /api/kernelspecs (ip) 3.00ms
Traceback (most recent call last):
  File "C:\ProgramData\Python\Python311\Lib\runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\ProgramData\Python\Python311\Lib\runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel_launcher.py", line 16, in <module>
    from ipykernel import kernelapp as app
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel\__init__.py", line 7, in <module>
    from .connect import *  # noqa: F403
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel\connect.py", line 12, in <module>
    import jupyter_client
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_client\__init__.py", line 4, in <module>
    from .connect import *
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_client\connect.py", line 28, in <module>
    from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_core\paths.py", line 24, in <module>
    from .utils import deprecation
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_core\utils\__init__.py", line 5, in <module>
    import asyncio
  File "C:\ProgramData\Python\Python311\Lib\asyncio\__init__.py", line 42, in <module>
    from .windows_events import *
  File "C:\ProgramData\Python\Python311\Lib\asyncio\windows_events.py", line 8, in <module>
    import _overlapped
OSError: [WinError 10106] The requested service provider could not be loaded or initialized
Thanks for your help
Hi, I'm in a course on data analytics. Our teacher keeps saying that we will find our niche within the spectrum of visualisation, machine learning, or coding. I'm not sure how that works. How are we supposed to get better at visualisation without mastering coding? At times he says coding is important if you are interested in becoming a junior data analyst. How does the job market work? Can someone explain it to me? I'm not sure where my strength lies.
Hi guys, can you suggest the best way to create a database for our simulation team?
We want to be able to access it whenever we need to check the properties of a material.
And once a new material is validated, we want to be able to import it as well.
Anyone out there able to help me out?
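Without knowing your stack, one minimal starting point is SQLite from Python's standard library: a single shared database file the whole team can query, with inserts gated on a validation flag. This is only a sketch under those assumptions; the table and column names here are made up for illustration:

```python
import sqlite3

# In-memory DB for the demo; a team would use a shared file path instead
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE materials (
        name      TEXT PRIMARY KEY,
        density   REAL,           -- kg/m^3
        youngs_E  REAL,           -- GPa
        validated INTEGER DEFAULT 0
    )
""")

def add_material(name, density, youngs_E, validated):
    # Only import materials that have passed validation
    if not validated:
        raise ValueError(f"{name} has not been validated yet")
    conn.execute(
        "INSERT INTO materials VALUES (?, ?, ?, 1)",
        (name, density, youngs_E),
    )

add_material("steel_s235", 7850.0, 210.0, validated=True)
row = conn.execute(
    "SELECT density, youngs_E FROM materials WHERE name = ?", ("steel_s235",)
).fetchone()
print(row)  # → (7850.0, 210.0)
```

If the team later needs concurrent writers or a web front end, the same schema moves to PostgreSQL with few changes.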
So basically, I want the terminal launched within Jupyter (specifically jupyter-lab) to be zsh instead of bash. If I haven't expressed my query clearly, the attached screenshots might help. ss-1: default zsh shell with the 'ml0' conda env. ss-2: a terminal launched from jupyter-lab uses bash by default and also loses the conda env. My main motive is to preserve, inside the Jupyter terminal, the conda environment from which Jupyter was launched.
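One commonly cited approach (a sketch; verify the option against your jupyter-server version, since it changes the launched shell rather than copying the conda env by itself) is to point Terminado's shell command at zsh in your Jupyter server config file:

```python
# ~/.jupyter/jupyter_server_config.py
# Make the built-in JupyterLab terminal launch zsh as a login shell
c.ServerApp.terminado_settings = {"shell_command": ["/bin/zsh", "-l"]}
```

Because a login zsh re-sources your ~/.zshrc, conda's init hook there can then reactivate the environment you expect.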
I like to use IPython notebooks to store experimental code and debugging results, but it's a pain to look at them under version control.
So I wrote some pre-commit hooks that make it easy to diff IPython notebooks in Git. They auto-generate a copy of the file with just the Python code, so that you can inspect code changes directly.
I wrote a bit more about why here, along with instructions on how to use them: https://blog.moonglow.ai/diffing-ipython-notebook-code-in-git/
And the Git repo for the hooks (MIT-licensed) is here: https://github.com/moonglow-ai/pre-commit-hooks
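The idea behind such hooks can be sketched with the standard library alone, since an .ipynb file is just JSON. This is an illustration of the approach, not the linked hooks' actual code:

```python
import json

def extract_code(nb_json: str) -> str:
    """Return only the source of code cells from a notebook's JSON,
    so diffs ignore outputs, metadata, and markdown."""
    nb = json.loads(nb_json)
    cells = [
        "".join(cell["source"])
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    ]
    return "\n\n".join(cells)

# A tiny in-memory notebook for demonstration
notebook = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# Experiment notes"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)"], "outputs": []},
    ],
    "nbformat": 4,
    "nbformat_minor": 5,
})

print(extract_code(notebook))  # prints "x = 1" then "print(x)"
```

A pre-commit hook would run something like this over each staged .ipynb and commit the generated .py alongside it, so `git diff` shows only code changes.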
Excited to release ryp, a Python package for running R code inside Python! ryp makes it a breeze to use R packages in your Python projects, and includes out-of-the-box support for inline plotting in Jupyter notebooks.
Converting Jupyter notebooks to PDF can be quite handy, especially when you want to share your analyses with others who may not have Jupyter installed. However, navigating the various options for conversion can be a challenge. I've recently put together a blog post that reviews two popular methods: nbconvert and Quarto.
In the post, I break down the setup process, features, and limitations of each method to help you decide which one might be the best fit for your needs.
nbconvert is the official library from the Jupyter team that's designed for this task. It offers versatility by letting you convert notebooks into formats like PDF through two approaches: WebPDF and the traditional PDF via LaTeX.
The WebPDF method is simpler to set up, while the LaTeX route tends to yield higher-quality documents—ideal for complex mathematical content but comes with more installation hurdles.
On the other hand, Quarto provides a comprehensive solution for converting Jupyter notebooks into PDFs, but it does require a bit more effort to get everything working. It’s feature-rich and offers great customization, though the learning curve can be a bit steep.
In my experience, many users start out with nbconvert using WebPDF for quick needs and then graduate to using XeTeX as their requirements grow more sophisticated. Quarto, while powerful, is often suited for those with very specific document formatting needs.
For anyone interested in learning more about these options and their respective setups, you can check out the full details in my blog post here: Converting Jupyter Notebooks to PDF
Converting a notebook using the LaTeX-based converter and hiding the code
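For reference, the routes described above look roughly like this on the command line (flag names as in recent nbconvert and Quarto releases; the notebook filename is a placeholder):

```shell
# Browser-based PDF: simpler setup, renders via a headless browser
jupyter nbconvert --to webpdf notebook.ipynb

# LaTeX-based PDF: requires a TeX distribution (e.g. XeTeX);
# --no-input hides the code cells, keeping only text and outputs
jupyter nbconvert --to pdf --no-input notebook.ipynb

# Quarto's equivalent, once the quarto CLI and a TeX engine are installed
quarto render notebook.ipynb --to pdf
```

Both tools read the same .ipynb, so it is easy to try one and switch later.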
Hi r/IPython,
Two years ago, I announced here a tool to convert Jupyter notebooks to PDF for free.
The tool has now converted more than 10,000 notebooks! So I figured I'd add some extra features.
The tool is available at https://convert.ploomber.io
A few ideas I have:
Let me know what other things might be useful!
Current methods for extracting structured outputs from LLMs often rely on libraries such as DSPy, OpenAI Structured Outputs, and Langchain JSON Schema. These libraries typically use Pydantic models to create JSON schemas representing classes, enums, and types. However, this approach can be costly, since many LLMs treat each element of the JSON schema (e.g., {}, :, "$") as separate tokens, leading to increased costs due to the numerous tokens present in JSON schemas.
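To make the token-overhead point concrete, here is a toy comparison of a JSON-schema description versus a plain textual signature for the same function. This is illustrative only and is not Semantix's actual format; the function and field names are invented:

```python
import json

# A JSON-schema-style description of one function's parameters
schema = json.dumps({
    "type": "object",
    "properties": {
        "city": {"type": "string", "description": "City to look up"},
        "units": {"type": "string", "enum": ["metric", "imperial"]},
    },
    "required": ["city"],
})

# A textual description carrying the same information
textual = 'get_weather(city: str "City to look up", units: "metric" | "imperial")'

# The schema is dominated by structural characters ({}, :, ") that
# LLM tokenizers typically split into separate tokens
print(len(schema), len(textual))
assert len(textual) < len(schema)
```

Character count is only a proxy for token count, but the gap is large enough here that any common tokenizer would show the same ordering.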
Semantix offers a different and more cost-effective solution. Instead of using JSON schemas, Semantix represents classes, enums, and objects in a more textual manner, reducing the number of tokens and lowering inference costs. Additionally, Semantix leverages Python's built-in typing system with minor modifications to provide meaning to parameters, function signatures, classes, enums, and functions. This approach eliminates the need for unnecessary Pydantic models and various classes for different prompting methods. Semantix also makes it easy for developers to create GenAI-powered functions.
Semantix is designed for developers who have worked with libraries like Langchain and DSPy and are tired of dealing with Pydantic models and JSON schemas. It is also ideal for those who want to add AI features to existing or new applications without learning extensive new libraries.
Semantix supports multimodal inputs, allowing you to use images and videos effortlessly. Unlike other libraries, Semantix requires minimal code changes to achieve excellent results.
Ready to give it a try? Check out our Colab notebook here and explore our GitHub repository here for more details.