/r/IPython
If you have a question about IPython (now Jupyter), the interactive computing environment written by scientists for scientists with an eye towards presentation, we want you here. If you have tips, Notebooks you want to share, or you want feedback, we want you here. We welcome posts about all versions of the IPython IDE, plus Markdown and LaTeX. We discuss the popular libraries Matplotlib, SciPy, NumPy, & SymPy. If you want to know about features like embedded video or animation, check us out.
IPython (now Jupyter) was originally started by Fernando Perez as a way to improve the Python workflow for scientific computing. Since then it has grown in popularity, and gaining the ability to make XKCD-styled plots using matplotlib hasn't hurt. With additions like the IPython Notebook, which runs in a browser, and the Notebook Viewer, IPython is a scientist's best friend.
Related subreddits
Useful Libraries
Cloud Services
Official IPython Sites
Official Example Notebooks
Additional Good Examples
Installation
Other Educational Resources
NBViewer Browser Extensions
Additional References
Comment Guidelines
The visitors to /r/IPython come from very different backgrounds, and some have little programming experience. Since this subreddit is primarily here to provide help with IPython and to host discussions about current and future features, make sure it is clear how your comments are relevant to the original post or the previous comment.
I'm working in AWS SageMaker, doing my analyses with Jupyter on an EC2 instance in the cloud. I've been using JupyterLab for a while now, but I've noticed that when I close my tabs, my Jupyter processes end as well. I tested the same with classic notebooks/Jupyter, and those processes stay active even when I close my tabs. Is this to be expected, and is there a way to keep JupyterLab running even after closing my tabs? I'm not sure whether working in SageMaker/the cloud makes a difference compared to working locally.
I'm trying to get back into machine learning. I tried six months ago, but my operating system crashed and I had to reinstall it completely, which was a bit of a shame!
The software might have had some updates since then, which is probably why I'm having trouble. I'm trying to select a kernel with Visual Studio Code, but I'm unsure if I'm doing it right. I followed the method given by VSCode, but I'm still stuck on kernel selection.
I'm happy to say that installing the extensions and creating the Conda environment went well! However, when I select the kernel, I get this message:
I thought I'd share the list of extensions I've installed in case it helps:
I've done a lot of research online, but sadly none of the solutions I found worked.
Hi,
I'm trying to run Jupyter Enterprise Gateway (JEG) on my Windows Server 2019 machine so I can connect my laptop to the kernels on the server.
The connection works fine and the kernels start, but they close after a WebSocket timeout.
Here is what I can see in the JEG console:
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] Launching kernel: 'Python 3 (ETL)' with command: ['C:\Users\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] BaseProcessProxy.launch_process() env: {'KERNEL_LAUNCH_TIMEOUT': '', 'KERNEL_WORKING_DIR': '', 'KERNEL_USERNAME': '', 'KERNEL_GATEWAY': '', 'KERNEL_ID': '', 'KERNEL_LANGUAGE': '', 'EG_IMPERSONATION_ENABLED': ''}
[I 2024-11-17 18:54:53.273 EnterpriseGatewayApp] Local kernel launched on 'ip', pid: 16132, pgid: 0, KernelID: c66b786d-403c-493f-84f4-458b61a41541, cmd: '['C:\Users\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']'
[D 2024-11-17 18:54:53.274 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61198
[D 2024-11-17 18:54:53.281 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61195
[I 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel started: c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel args: {'env': {'KERNEL_LAUNCH_TIMEOUT': '40', 'KERNEL_WORKING_DIR': 'a path on my laptop', 'KERNEL_USERNAME': 'Laptop username'}, 'kernel_headers': {}, 'kernel_name': 'etl-env'}
[I 241117 18:54:53 web:2348] 201 POST /api/kernels (ip) 29.00ms
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Requesting kernel info from c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.346 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61194
[I 241117 18:54:53 web:2348] 200 GET /api/kernels (ip) 0.00ms
[D 2024-11-17 18:54:53.367 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.368 EnterpriseGatewayApp] Waiting for pending kernel_info request
[D 2024-11-17 18:54:53.378 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[W 2024-11-17 18:54:53.379 EnterpriseGatewayApp] Replacing stale connection: c66b786d-403c-493f-84f4-458b61a41541:66351527-a8ee-422a-9305-f3b432ee58df
[D 2024-11-17 18:54:53.380 EnterpriseGatewayApp] Found kernel ds-env in C:\Users*\AppData\Roaming\jupyter\kernels
[D 2024-11-17 18:54:53.380 EnterpriseGatewayApp] Found kernel etl-env in C:\Users*\AppData\Roaming\jupyter\kernels
[W 2024-11-17 18:54:53.381 EnterpriseGatewayApp] Native kernel (python3) is not available
[I 241117 18:54:53 web:2348] 200 GET /api/kernelspecs (ip) 3.00ms
Traceback (most recent call last):
  File "C:\ProgramData\Python\Python311\Lib\runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\Python\Python311\Lib\runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel_launcher.py", line 16, in <module>
    from ipykernel import kernelapp as app
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel\__init__.py", line 7, in <module>
    from .connect import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users*\venvs\etl-env\Lib\site-packages\ipykernel\connect.py", line 12, in <module>
    import jupyter_client
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_client\__init__.py", line 4, in <module>
    from .connect import *
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_client\connect.py", line 28, in <module>
    from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_core\paths.py", line 24, in <module>
    from .utils import deprecation
  File "C:\Users*\venvs\etl-env\Lib\site-packages\jupyter_core\utils\__init__.py", line 5, in <module>
    import asyncio
  File "C:\ProgramData\Python\Python311\Lib\asyncio\__init__.py", line 42, in <module>
    from .windows_events import *
  File "C:\ProgramData\Python\Python311\Lib\asyncio\windows_events.py", line 8, in <module>
    import _overlapped
OSError: [WinError 10106] The requested service provider could not be loaded or initialized
Thanks for your help
Hi, I'm in a course on data analytics, and our teacher keeps saying that we will each find our niche within the spectrum of visualisation, machine learning, or coding. I'm not sure how that works: how are we supposed to get better at visualisation without mastering coding? At times he says coding is important if you are interested in becoming a junior data analyst. How does the job market work? Can someone explain it to me? I'm not sure where my strength lies.
Hi guys, can you suggest the best way to create a database for our simulation team?
The idea is that we can access it whenever we want to check the properties of a material.
Once a new material is validated, we also want to be able to import it.
Is anyone out there who can help me out?
So basically I want the terminal launched within Jupyter (specifically JupyterLab) to be zsh instead of bash. If I haven't expressed my query clearly, the attached screenshots might help. ss-1: default zsh shell with the 'ml0' conda env. ss-2: the terminal launched from JupyterLab uses bash by default and also loses the conda env. My main motive is to preserve, in that terminal, the conda environment from which JupyterLab was launched.
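(One approach that may help with the shell part, offered as a sketch rather than a verified fix: jupyter-server lets you tell Terminado which shell to spawn via its config file. The config path and the zsh location below are assumptions for a typical Linux setup.)

    # ~/.jupyter/jupyter_server_config.py  (generate one with `jupyter server --generate-config` if needed)
    # Ask JupyterLab's built-in terminal (Terminado) to spawn zsh as a login shell instead of bash.
    c.ServerApp.terminado_settings = {"shell_command": ["/bin/zsh", "-l"]}

(Whether the conda environment survives into that terminal then depends on your shell startup files, e.g. having run `conda init zsh` and activating the env in ~/.zshrc.)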
I like to use IPython notebooks to store experimental code and debugging results, but it's a pain to use version control to look at them.
So I wrote some pre-commit hooks that make it easy to diff IPython notebooks in Git. They auto-generate a copy of the file with just the Python code, so that you can inspect code changes on their own.
I wrote a bit more about why here, along with instructions on how to use them: https://blog.moonglow.ai/diffing-ipython-notebook-code-in-git/
And the git repo for the hooks (MIT-licensed) is here: https://github.com/moonglow-ai/pre-commit-hooks
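(For anyone curious what the code-only copy boils down to, here is a minimal, hypothetical sketch of the core step; it is not the actual hook from the repo above, just the idea, since a notebook is plain JSON.)

    import json
    import sys

    # Read a notebook and write a sibling .py file containing only the code cells,
    # so git diffs show code changes without output and metadata noise.
    nb_path = sys.argv[1]  # e.g. analysis.ipynb
    with open(nb_path, encoding="utf-8") as f:
        nb = json.load(f)

    code = "\n\n".join(
        "".join(cell["source"])  # each cell's source is stored as a list of lines
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    )

    with open(nb_path.replace(".ipynb", ".py"), "w", encoding="utf-8") as f:
        f.write(code + "\n")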
Excited to release ryp, a Python package for running R code inside Python! ryp makes it a breeze to use R packages in your Python projects, and includes out-of-the-box support for inline plotting in Jupyter notebooks.
Converting Jupyter notebooks to PDF can be quite handy, especially when you want to share your analyses with others who may not have Jupyter installed. However, navigating the various options for conversion can be a challenge. I've recently put together a blog post that reviews two popular methods: nbconvert and Quarto.
In the post, I break down the setup process, features, and limitations of each method to help you decide which one might be the best fit for your needs.
nbconvert is the official library from the Jupyter team that's designed for this task. It offers versatility by letting you convert notebooks into formats like PDF through two approaches: WebPDF and the traditional PDF via LaTeX.
The WebPDF method is simpler to set up, while the LaTeX route tends to yield higher-quality documents, ideal for complex mathematical content, but it comes with more installation hurdles.
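(For reference, the two routes correspond to two nbconvert exporters. A minimal sketch, assuming nbconvert is installed with the relevant extras and using a placeholder notebook name:)

    from nbconvert import PDFExporter, WebPDFExporter

    # WebPDF route (equivalent to `jupyter nbconvert --to webpdf notebook.ipynb`):
    # renders the notebook in a headless browser; needs `pip install "nbconvert[webpdf]"`.
    pdf_bytes, _ = WebPDFExporter().from_filename("notebook.ipynb")
    with open("notebook_webpdf.pdf", "wb") as f:
        f.write(pdf_bytes)

    # LaTeX route (equivalent to `jupyter nbconvert --to pdf notebook.ipynb`):
    # goes through XeTeX, so a TeX distribution must be installed.
    pdf_bytes, _ = PDFExporter().from_filename("notebook.ipynb")
    with open("notebook_latex.pdf", "wb") as f:
        f.write(pdf_bytes)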
On the other hand, Quarto provides a comprehensive solution for converting Jupyter notebooks into PDFs (typically a single quarto render notebook.ipynb --to pdf once everything is installed), but it does require a bit more effort to get working. It's feature-rich and offers great customization, though the learning curve can be a bit steep.
In my experience, many users start out with nbconvert using WebPDF for quick needs and then graduate to the LaTeX/XeTeX route as their requirements grow more sophisticated. Quarto, while powerful, is often best suited to those with very specific document formatting needs.
For anyone interested in learning more about these options and their respective setups, you can check out the full details in my blog post here: Converting Jupyter Notebooks to PDF
Converting a notebook using the LaTeX-based converter and hiding the code
Hi r/IPython,
Two years ago, I announced here a tool to convert Jupyter notebooks to PDF for free.
The tool has now converted more than 10,000 notebooks! So I figured I'd add some extra features.
The tool is available at https://convert.ploomber.io
A few ideas I have:
Let me know what other things might be useful!
Current methods for extracting structured outputs from LLMs often rely on libraries such as DSPy, OpenAI Structured Outputs, and Langchain JSON Schema. These libraries typically use Pydantic models to create JSON schemas representing classes, enums, and types. However, this approach can be costly, since many LLMs treat each element of the JSON schema (e.g., {}, :, "$") as separate tokens, leading to increased costs due to the numerous tokens present in JSON schemas.
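(To make the token-overhead point concrete, here is a rough illustration, assuming Pydantic v2; the plain-text form at the end is only a stand-in for the idea, not Semantix's actual prompt format.)

    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    # The JSON-schema route carries lots of structural tokens ({, }, ":", "required", ...)
    # that the model has to read on every call.
    print(Person.model_json_schema())

    # A plainer textual description of the same type uses far fewer tokens:
    print("Person(name: str, age: int)")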
Semantix offers a different and more cost-effective solution. Instead of using JSON schemas, Semantix represents classes, enums, and objects in a more textual manner, reducing the number of tokens and lowering inference costs. Additionally, Semantix leverages Python's built-in typing system with minor modifications to provide meaning to parameters, function signatures, classes, enums, and functions. This approach eliminates the need for unnecessary Pydantic models and various classes for different prompting methods. Semantix also makes it easy for developers to create GenAI-powered functions.
Semantix is designed for developers who have worked with libraries like Langchain and DSPy and are tired of dealing with Pydantic models and JSON schemas. It is also ideal for those who want to add AI features to existing or new applications without learning extensive new libraries.
Semantix supports multimodal inputs, allowing you to use images and videos effortlessly. Unlike other libraries, Semantix requires minimal code changes to achieve excellent results.
Ready to give it a try? Check out our Colab notebook here and explore our GitHub repository here for more details.
When I try to run this command:

    for label in labels:
        !mkdir {'Tensorflow/workspace/images/collectedimages//'+label}
        cap = cv2.VideoCapture(0)
        print('Collecting images for {}'.format(label))
        time.sleep(5)
        for imgnum in range(number_imgs):
            ret, frame = cap.read()
            imagename = os.path.join(IMAGES_PATH, label, label+'.'+'{}.jpg'.format(str(uuid.uuid1())))
            cv2.imwrite(imgname, frame)
            cv2.imshow('frame', frame)
            time.sleep(2)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    cap.release()
It shows this error: The syntax of the command is incorrect.
Does anyone know why?
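(In case it helps: on Windows, that particular message usually comes from cmd.exe's mkdir choking on the forward slashes in the !mkdir line, not from the Python code itself. A platform-independent sketch that avoids the shell entirely, assuming the same variables as above:)

    import os

    # Create the per-label folder without shelling out to cmd.exe,
    # so the same cell works on Windows, macOS and Linux.
    os.makedirs(os.path.join('Tensorflow', 'workspace', 'images', 'collectedimages', label), exist_ok=True)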
Hey r/ipython! Today, I'm launching nb2dash, a tool to convert Jupyter notebooks into dashboards, and I'd love to get your feedback.
https://nb2dash.ploomberapp.io
You can see a sample dashboard here: https://nb2dash.ploomberapp.io/notebook/bb8086c0
https://reddit.com/link/1f9rajw/video/hr40j3dht0nd1/player
Data practitioners who want to easily share an interactive analysis, machine learning model or any other interactive app.
When people want to share a notebook, they often convert it into HTML or PDF. However, this hinders interactivity. Alternatively, you might use Voila to deploy it as a web app, but that requires paying a hosting provider. Voici uses WASM, meaning your notebook becomes a static site and all compute happens locally in the browser, reducing cost by a huge margin. Note that WASM and Voici are still early technologies and there are packages that won't work.
I'd really appreciate it if you could try it out and let me know what you think. Any feedback, feature requests, or bug reports are welcome!
(Note: This is a side project, so please be patient if there are any hiccups. I'm actively working on improvements!)
In Jupyter Notebook, when I try to save a photo for object detection, it shows this error:
<string>:1: SyntaxWarning: invalid escape sequence '\w'
even when I try to set variables.
Does anyone know the reason?
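(A guess at the cause, for context: this warning typically comes from a Windows-style path written as a normal string literal, where \w is not a valid escape sequence. A minimal illustration with a made-up path, not necessarily the poster's exact code:)

    # Triggers "SyntaxWarning: invalid escape sequence '\w'":
    path = 'C:\workspace\images\photo.jpg'

    # Either of these avoids the warning:
    path = r'C:\workspace\images\photo.jpg'   # raw string keeps backslashes literal
    path = 'C:/workspace/images/photo.jpg'    # forward slashes also work on Windows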
I've created Penify CLI, a tool that generates semantic commit messages in Git.
Here's a breakdown:
What My Project Does
Penify CLI is a command-line tool that:
Key features:
penify-cli commit: Commits code with an auto-generated semantic message for staged files.
penify-cli doc-gen: Generates documentation for specified files/folders.
Installation: pip install penify-cli
Target Audience
Penify CLI is aimed at developers who want to:
Comparison
GitHub Copilot, aicommit:
Note: Currently requires signup at Penify (we're working on Ollama integration for local use).
Check it out:
I'd love to hear your thoughts and feedback!