/r/dip
A community for digital image processing
News and discussion about digital image processing
Hello everyone!
As a developer who frequently works with images, I recently encountered a common problem: the need to precisely adjust image file sizes. Sometimes I need to increase file sizes to meet minimum requirements for certain platforms, and other times I need to compress files for email attachments or to improve website loading speeds.
While searching for a solution, I stumbled upon a really useful online tool: https://increasekb.com. It allows users to easily adjust image file sizes to specific KB or MB values.
Here are a few features I found particularly helpful:
Supports multiple formats: JPG, JPEG, PNG, and even PDF.
Clean and intuitive interface, making it effortless to use.
Ability to precisely increase or decrease file sizes.
Fast processing and free to use.
Most importantly, this tool maintains image quality as much as possible while adjusting file sizes. For those who frequently work with images, this is a real time-saver.
I thought this tool might be useful for many of you, so I wanted to share it here. If you often need to adjust image sizes, it's worth giving it a try.
Feel free to check it out and share your experiences if you do!
Has anyone else found similar tools they'd recommend? I'm always interested in learning about new resources for image processing.
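For anyone who would rather script this than use a web tool, here is a rough Python/Pillow sketch of the shrinking half of the idea: binary-search the JPEG quality setting until the output lands just under a target size. The file names and the target value are placeholders, and growing a file up to a minimum size needs a different trick (e.g. appending metadata), which this does not cover.

    from io import BytesIO
    from PIL import Image

    def compress_to_target(path, target_kb, out_path):
        """Binary-search JPEG quality so the file lands just under target_kb."""
        img = Image.open(path).convert("RGB")
        lo, hi, best = 5, 95, None
        while lo <= hi:
            q = (lo + hi) // 2
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=q)
            if buf.tell() / 1024 <= target_kb:
                best = buf.getvalue()   # fits under the target, try higher quality
                lo = q + 1
            else:
                hi = q - 1              # too big, lower the quality
        if best is None:
            raise ValueError("cannot reach the target size even at the lowest quality")
        with open(out_path, "wb") as f:
            f.write(best)

    # compress_to_target("photo.jpg", target_kb=200, out_path="photo_200kb.jpg")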
Looking for directions on creating a digital image for a building proposal: the building on its own, to scale, and, if possible, placed into a real photo of the area.
Free app or software out there?
Any help would be greatly appreciated!
For example:
Is there a tool that will divide a 40x40 cm drawing into 1x1 cm squares and tell me how many squares the black spot covers when I separate the colors and upload them as in the image?
Or it could be like this:
When I load the image in color, a tool that can calculate the percentage of the area covered by each color would also be useful to me.
Actually, my goal here is to calculate how many grams of paint I will spend while making this drawing.
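In case a script is an option, here is a rough Python sketch (Pillow + NumPy) of the second idea: load the scanned drawing, count the pixels darker than a threshold, and convert that fraction into area on the 40x40 cm sheet. The file name, the threshold, and the assumption that "painted" means "dark" are all placeholders to adjust for your actual scan; for several separate colours you would build one mask per colour (within a tolerance) instead of a single dark/light threshold.

    import numpy as np
    from PIL import Image

    IMAGE_PATH = "drawing.png"      # assumed scan of the drawing
    CANVAS_CM2 = 40 * 40            # physical drawing area: 40 x 40 cm

    gray = np.asarray(Image.open(IMAGE_PATH).convert("L"))   # values 0..255

    # Treat anything darker than the threshold as painted (the black spot)
    THRESHOLD = 128
    painted = gray < THRESHOLD

    fraction = painted.mean()                # fraction of pixels covered
    covered_cm2 = fraction * CANVAS_CM2      # same fraction of the physical sheet

    print(f"coverage: {fraction * 100:.1f} % of the drawing")
    print(f"roughly {covered_cm2:.0f} of the 1600 one-centimetre squares")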
Hi, I have an idea for making a warm bruschetta dip, including chopped Roma tomatoes (maybe roasted?), roasted garlic cloves, pesto, and caramelized onion, served on toasted sourdough bread. I would start by chopping up the Roma tomatoes, removing the seeds, and roasting them in the oven. I would probably spend close to 40 to 45 minutes caramelizing some sweet or cooking onions. I'm still trying to figure out what kind of pesto to use. I welcome any wonderful ideas you all might have on ingredients and execution of the recipe!
Hello, image processing noob here. I am trying to do a project where I am comparing two images of someone's arm with a scrape (or some type of wound). Image one is just the arm at a certain distance and angle. Image two is the same arm but possibly at a different angle, position, and distance than the original. The goal is to look at the scrape and be able to tell if it has shrunk in size.
I don't really know how to apply image matching in this situation, or if that's even the right method. Can someone give me some direction on how to approach this problem? (Maybe give me some keywords to google.)
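Keywords worth googling: image registration, feature matching (SIFT/ORB), homography estimation, and image segmentation. As a very rough sketch of the registration step only, assuming OpenCV is available and the two photos overlap enough to find matching features (file names are placeholders):

    import cv2
    import numpy as np

    img1 = cv2.imread("arm_before.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("arm_after.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe keypoints in both images
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a homography and warp the second photo into the first one's frame
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    aligned = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))

    # After alignment, segment the wound in both images (e.g. colour thresholding)
    # and compare the pixel areas of the two masks.

One caveat: pixel areas only compare fairly if both photos are taken from a similar distance or include something of known size (a ruler, coin, or sticker) next to the scrape.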
tldr: How do I merge snippets of pictures based on content, without having to go through every picture manually?
Hi, I am currently working on merging a bunch of images. I have hundreds of images that look like this:
My goal is to have something like this:
And afterwards clean it up to have just the dark parts left.
In the end, the goal is to align the dark areas with each other along the horizontal lines running through them. The result should be one continuous wave, consisting of each section of the sample pictures. There is probably still a lot of work to use it for what I have in mind, but I want to avoid having to manually move and align hundreds of these.
Does anyone have any ideas on how I could best approach that? I do have access to Photoshop, but I am also very open to other tools.
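Without seeing the actual images it is hard to be specific, but one scripted alternative to hand-aligning in Photoshop is: threshold each snippet to isolate the dark parts, find the vertical position of the dark band in each one, shift every snippet so those positions line up, and paste them side by side. A rough Python/NumPy sketch under those assumptions (the file pattern, the threshold, and the "centre of the dark band" heuristic are all placeholders):

    import glob
    import numpy as np
    from PIL import Image

    THRESHOLD = 100                                # pixels darker than this count as "dark"
    files = sorted(glob.glob("snippets/*.png"))    # assumed file naming / order

    panels, offsets = [], []
    for path in files:
        gray = np.asarray(Image.open(path).convert("L"))
        dark_rows = np.where((gray < THRESHOLD).any(axis=1))[0]
        # vertical centre of the dark band in this snippet
        centre = int(dark_rows.mean()) if len(dark_rows) else gray.shape[0] // 2
        panels.append(gray)
        offsets.append(centre)

    # Shift every panel so its dark band sits on the same row, then concatenate
    target = max(offsets)
    height = max(p.shape[0] + target - off for p, off in zip(panels, offsets))
    canvas = np.full((height, sum(p.shape[1] for p in panels)), 255, np.uint8)

    x = 0
    for panel, off in zip(panels, offsets):
        y = target - off
        canvas[y:y + panel.shape[0], x:x + panel.shape[1]] = panel
        x += panel.shape[1]

    Image.fromarray(canvas).save("merged.png")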
Hi, I do image processing with huge files in Photoshop. I need to speed it up. I have two choices for a computer: an Intel i5-11400H CPU with an Nvidia Quadro T1200 (4 GB graphics memory), or an Intel i7-1355U CPU with an Nvidia GeForce MX550 (2 GB graphics memory).
Same RAM and hard drive; other specs are comparable.
So, which one should I choose?
Not really trying to resolve anything except my curiosity here. My laptop has a broken screen: vertical lines across the LCD. You can give it some specific acupressure (haha) in an area of the screen to get the lines to go away. I recently screencapped it while the lines were there, assuming it would save a clean image; just like when using an external display, the image data isn't corrupted, only the physical display, right? Anyway, that's not the case, for some odd reason. Further, I noticed when assembling this graphic in Photoshop that the preview image in the sidebar is intact, and also that I can scale the layer without affecting the "rainbow", so it's getting weirder the more I explore. Can someone explain this to me?
Link a photo of the screen, the correct screen, & the screencap while the bands are showing.
https://www.researchgate.net/publication/350075026_Image_Processing_Masterclass_with_Python
Code available here: https://github.com/sandipan/Solving-Image-Processing-Problems-with-Python-Part1
Some images from the book:
Hey guys,
I am a Comp Science student, taking a DIP course this semester.
I was wondering if anyone could give some tips/suggestions/resources on what I could make for our term project, or how I should approach it?
We'll have roughly two months to complete it, and so far we have learnt some basic concepts in class such as histogram equalisation and specification, filters, contrast stretching, thresholding, transformation functions, etc.
Thanks a lot!
Hey all, I want to use the classic baboon image for some research I'm doing, but I can't seem to find source information on it. USC SIPI lists it as a
Scan from a magazine picture. Copyright belongs to original publisher or photographer.
but I can't for the life of me find the original publisher or photographer. Wikipedia lists it as public domain in the USA, but I'm in Canada so I'm unsure if that applies to me or not. Anyone know the original source of it?
I'm doing a project in which I'm using microscopic blood smear images to identify leukemia (WBC cancer) by cell shape, size, and texture. How should I do it? I'm very new to this. Please help.
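Useful search terms: white blood cell segmentation, morphological features, texture features (e.g. GLCM), and blast cell classification. As a very first step, here is a rough OpenCV sketch that segments the darkly stained nuclei and prints basic shape features per cell; the file name and the HSV threshold are assumptions, and real smears usually need tuning plus a learned classifier (SVM, random forest, or a CNN) on top of such features.

    import cv2
    import numpy as np

    img = cv2.imread("smear.jpg")                  # assumed blood smear image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Stained WBC nuclei are typically dark purple; threshold roughly in HSV
    mask = cv2.inRange(hsv, (110, 50, 50), (170, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 200:                             # skip small specks
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2)
        print(f"cell: area={area:.0f}px, circularity={circularity:.2f}")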
Hi everyone,
I want to know if there is a way to improve the quality of a picture with a resolution of 1000x1500 pixels in order to make an 11.7 x 16.5 inch poster. I know the quality of the printed poster depends on the dpi of the picture. My picture is 72 dpi. Is there an AI-based or other way to improve it? With which tool?
Thank you in advance
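For reference, the 72 dpi stored in the file does not matter by itself; the effective print resolution is just pixels divided by print size, so 1000 x 1500 px over 11.7 x 16.5 inches works out to roughly 85-91 dpi. Classic resampling can bring the pixel count up, and AI upscalers such as Real-ESRGAN generally invent more convincing detail, but here is a rough Python/Pillow sketch of the plain (non-AI) route just to show the arithmetic; the file name and the 150 dpi target are assumptions:

    from PIL import Image

    PRINT_W_IN, PRINT_H_IN = 11.7, 16.5     # poster size in inches
    TARGET_DPI = 150                        # a common minimum for large prints

    img = Image.open("poster_source.jpg").convert("RGB")    # assumed 1000 x 1500 px
    print("current dpi:", img.width / PRINT_W_IN, img.height / PRINT_H_IN)

    # Classic upscale with Lanczos resampling to reach the target dpi
    new_size = (round(PRINT_W_IN * TARGET_DPI), round(PRINT_H_IN * TARGET_DPI))
    upscaled = img.resize(new_size, Image.LANCZOS)
    upscaled.save("poster_150dpi.jpg", quality=95, dpi=(TARGET_DPI, TARGET_DPI))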
If this isn't the right subreddit for this question, could someone suggest a better one? Thank you!
I'm trying to make a photo collage that's under 20 MB to upload to a custom puzzle website (https://www.venuspuzzle.com/). A png is coming out as 22 MB now.
I think the question I need to ask is, will I get better quality by scaling down the collage and saving it as a png, or by saving it as a jpg from its original size? I use GIMP for photo editing. The collage is made from 3 jpgs taken on a newer iphone (10 Pro I think) and the png was coming out 22 MB at the size I made it. Scaling the image to 93% of original size makes the png just under 20 MB. A jpg with the quality setting at 100% is 12 MB.
When I start with 3 photos that are 2-3 MB each, and I make a collage of them in GIMP xcf format, why does the PNG save out as 22 MB and a 100% quality jpg as 12 MB? What is the extra data being saved, just interpolated pixels from when the jpgs get uncompressed into layers? Is there a way I can avoid that?
Background if it helps, or if anyone has ideas about a better way to do this:
I started testing by uploading a single photo that's 3024 x 4032 pixels. The interactive tool on the Venus website says it's good enough quality to print as a 1000-piece puzzle. However, the site also says they print at approximately 1200 dpi, and the puzzle will be 19 x 27 inches, so does it make sense that my photo is good enough? Is there something in how jpgs are compressed that makes it look ok even though it doesn't have as many pixels as the printer capability? (I will refer to this approved-as-good-enough photo as the "first photo" in the rest of this post.)
Now I want to put the first photo into a collage of 3 photos, in a similar design to the Shutterfly 3-photo puzzle. (I prefer to order from Venus because I've worked on custom puzzles from both sites and prefer the quality at Venus - thicker cardboard and the pieces fit together better.) So I figure if my first photo is part of a collage, it will print smaller and therefore still good enough quality. By the way, Shutterfly also says my 3 photos are good enough quality when I make a test 3-photo collage on their site, and the Shutterfly puzzle I've done was definitely made from iphone photos.
Where I'm getting confused is how to make the collage myself in GIMP without losing resolution but also not ending up with an enormous file, because 19" x 27" x 1200 dpi results in a multi-GB file, and obviously way more pixels than I'll get from 3 iphone photos.
So instead I calculated how many pixels I need in the collage to hold my photos without scaling them any larger than their original resolution. I ended up with 5149 x 3623 pixels. (I can show my math if anyone cares. I didn't use the full area of the first photo, so I took the pixel dimensions for the area I wanted, used it to set the height, and used the puzzle's aspect ratio to get the width.)
I guess what I'm doing is taking compressed images, uncompressing them, and then if I recompress the collage into a jpg, I do lose information/quality. So is the downsized collage + png better? Or is there a better way to stitch jpgs? Maybe I would be better off using Shutterfly for the photo quality, even though their puzzle pieces aren't as easy to put together.
Sorry if this post is way too long. Thank you for any advice or explanation you can offer.
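For what it's worth on the "extra data" question: a 5149 x 3623 RGB image is about 56 MB uncompressed (5149 x 3623 x 3 bytes), so the 22 MB PNG is simply lossless compression doing poorly on photographic content, while JPEG discards information you mostly cannot see and compresses far better. A quick Python/Pillow sketch to compare the two exports from the same flattened collage (the file name is a placeholder); a high-quality JPEG of the full-resolution collage is usually a better trade-off than downscaling just to squeeze a PNG under the limit, though that is a judgment call:

    import os
    from PIL import Image

    collage = Image.open("collage_export.png").convert("RGB")   # flattened collage
    w, h = collage.size
    print(f"uncompressed: {w * h * 3 / 1e6:.0f} MB")

    collage.save("collage_test.png")                # lossless
    collage.save("collage_q95.jpg", quality=95)     # mild, usually invisible loss

    for name in ("collage_test.png", "collage_q95.jpg"):
        print(name, round(os.path.getsize(name) / 1e6, 1), "MB")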
Are traditional sharpening filters linear? I'm confused because of the laplacian thingy.
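One way to look at it: standard Laplacian sharpening computes g = f - ∇²f, which collapses into a single convolution kernel, and convolution is a linear operation (the only non-linear step, if any, is the final clipping back to [0, 255]). A quick NumPy/SciPy check of that equivalence, as a sketch:

    import numpy as np
    from scipy.ndimage import convolve

    # Laplacian kernel and the equivalent "sharpen" kernel (identity minus Laplacian)
    laplacian = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], float)
    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], float)

    f = np.random.rand(64, 64)
    g1 = f - convolve(f, laplacian)      # "subtract the Laplacian" formulation
    g2 = convolve(f, sharpen)            # single-kernel formulation
    print(np.allclose(g1, g2))           # True: the same linear operator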
I'm trying to rewrite bilinear interpolation in python but I'm running into some issues. The best explanation I've found about bilinear interpolation is here, but when the person says "Specifically, if you want to compute f5, this means that (x,y) = (1.5,1.5) because we're halfway in between the 100 and 50 due to the fact that you're scaling the image by two." I get lost. How did he get these values? And how would I use these values to interpolate on an image?
Thanks in advance!
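The fractional coordinates come from mapping each output pixel back into the input image: when you scale up by 2, the new sample positions fall between the original pixels (at half-integer positions like (1.5, 1.5)), and the value there is a weighted average of the four surrounding source pixels, weighted by how close the sample is to each of them. Here is a minimal NumPy sketch of one common mapping convention (real libraries differ slightly in how they place the sample grid, e.g. half-pixel centres):

    import numpy as np

    def bilinear_resize(img, new_h, new_w):
        """Resize a 2-D grayscale array with bilinear interpolation."""
        h, w = img.shape
        out = np.zeros((new_h, new_w), dtype=float)
        for i in range(new_h):
            for j in range(new_w):
                # Map the output pixel back to a (possibly fractional) input position
                y = i * (h - 1) / (new_h - 1)
                x = j * (w - 1) / (new_w - 1)
                y0, x0 = int(np.floor(y)), int(np.floor(x))
                y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                dy, dx = y - y0, x - x0
                # Weighted average of the four neighbouring source pixels
                out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx) +
                             img[y0, x1] * (1 - dy) * dx +
                             img[y1, x0] * dy * (1 - dx) +
                             img[y1, x1] * dy * dx)
        return out

    small = np.array([[100.,  50.],
                      [ 50., 100.]])
    print(bilinear_resize(small, 4, 4))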
Hi All,
I'm trying to manually rescale a grayscale image in Matlab in the same way histogram equalisation may operate. I have a range of grey-level values, say [50, 80], that I wish to rescale into a different range such as [0, 40]. As the ranges are different lengths, I've tried using Matlab's rescale() to help out, but I have yet to succeed in applying the rescale to the image values I want to update, i.e. taking the 31 values in [50, 80] and rescaling them to values between [0, 40]. Any thoughts on the matter? Am I approaching it correctly? Please let me know if I have been unclear with my wording and I will update as required.
Many thanks :)
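In case the underlying formula helps, the mapping is just linear: a value x in [50, 80] goes to (x - 50) * (40 - 0) / (80 - 50), which is essentially what Matlab's rescale() computes. The trick for an image is to apply it only to the pixels that fall inside the source range, using logical indexing. A NumPy sketch of that idea (the same logical-indexing pattern carries over directly to Matlab):

    import numpy as np

    def rescale_range(img, in_lo, in_hi, out_lo, out_hi):
        """Linearly remap pixel values in [in_lo, in_hi] to [out_lo, out_hi]."""
        out = img.astype(float).copy()
        mask = (img >= in_lo) & (img <= in_hi)
        out[mask] = (img[mask] - in_lo) * (out_hi - out_lo) / (in_hi - in_lo) + out_lo
        return out

    img = np.random.randint(0, 256, (4, 4))
    print(rescale_range(img, 50, 80, 0, 40))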
Hey, does there exist an invertible image transformation F such that if I apply F to image X I get Y, and Y looks as noisy and random as possible? If I then apply a blur to Y to get Y', and take F^(-1)(Y') to get X', then X and X' should be as similar as possible (like a blurred version of X).
I would really appreciate any help that I can get
To calculate the difference between two grayscale images of the same size, can I represent each image as 512 x 512 pixels, where each pixel value can be anywhere between 0 and 255?
Now, can I use the cityblock distance or the mean squared difference formula to calculate the difference between these two arrays, each 512 x 512 (without any headers)? Is there any flaw in that? Also, what would be the physical size of each image (or what physical size is advisable)? Can I apply the above method to JPEG, PNG, or other image formats? Please let me know for which image formats the above method can be used to calculate the difference/similarity.
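Yes: once both files are decoded to 512 x 512 arrays of 0-255 values, the cityblock (L1) distance and the mean squared error are both valid measures. The file format (JPEG, PNG, ...) only matters in that lossy compression changes the pixel values before you compare them, and the physical size / DPI is irrelevant to the computation itself. A short NumPy/Pillow sketch (file names are placeholders):

    import numpy as np
    from PIL import Image

    a = np.asarray(Image.open("img1.png").convert("L"), dtype=float)   # 512 x 512
    b = np.asarray(Image.open("img2.png").convert("L"), dtype=float)

    diff = a - b
    l1 = np.abs(diff).sum()         # cityblock / Manhattan distance
    mse = (diff ** 2).mean()        # mean squared error
    print(f"L1 = {l1:.0f}, MSE = {mse:.2f}")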
Please help me solve this problem.
A 6x6 image f(x,y) is filtered with a 3x3 vertical Prewitt kernel g(x,y), each as follows:
Calculate the output of the filtered image, where h(x,y) = f(x,y) * g(x,y).
Thank you
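The actual 6x6 image and kernel are not reproduced here, but for checking a hand computation, the same convolution can be done numerically; mode="valid" gives the 4x4 result where the kernel fits entirely inside the image, and mode="same" gives a zero-padded 6x6 output. The kernel below is one common form of the vertical Prewitt kernel, and the random image is just a stand-in for the one in the assignment:

    import numpy as np
    from scipy.signal import convolve2d

    # One common form of the vertical Prewitt kernel (responds to vertical edges)
    g = np.array([[-1, 0, 1],
                  [-1, 0, 1],
                  [-1, 0, 1]])

    f = np.random.randint(0, 10, (6, 6))      # replace with the actual 6x6 image

    h_valid = convolve2d(f, g, mode="valid")  # 4x4: kernel fully inside the image
    h_same = convolve2d(f, g, mode="same")    # 6x6: zero-padded at the borders
    print(h_valid)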
Working on an assignment about real-life applications of DIP.
I'm researching computer methods for the problem of gender determination from fingerprints. I'd be glad to have your opinions on this.
1- Partial prints
2- Full prints
3- What algorithms are used? AI methods (deep learning, etc)
Thanks
Hi everyone,
I will try to make a simple image processing project using Python. I want to detect the contours of a book. I will use an A4 sheet of paper as the background. I have two problems.
1- I will use a webcam. How many megapixels do I need? Is this webcam enough?
2- I want to use an LED strip. Where should I put it? Around the camera?
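On the hardware side, almost any 720p-or-better webcam has plenty of resolution for finding the outline of a book on an A4 sheet; even, diffuse lighting matters more than megapixels, so placing the LED strip around or slightly behind the camera (angled to avoid glare and hard shadows) is a reasonable starting point. A minimal OpenCV sketch of the contour-detection part, assuming the book is darker than the white A4 background:

    import cv2

    cap = cv2.VideoCapture(0)                     # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        # Otsu thresholding separates the dark book from the white A4 background
        _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            book = max(contours, key=cv2.contourArea)   # assume the largest blob is the book
            cv2.drawContours(frame, [book], -1, (0, 255, 0), 2)
        cv2.imshow("book contour", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()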
I am a first-year student with an interest in imaging/signals, and DIP seems like a cool thing to explore. Any advice on projects or pursuits to get started in DIP? I know how to use Python if that helps.