/r/TransQualityGifs


r/HighQualityGifs, but trans!

Welcome to Trans Quality .gifs!

It's basically r/HighQualityGifs, but trans!

Rules

  1. All gifs must be high-quality, trans-themed gifs. Please review our criteria

  2. Original gifs only please. Trying to pass someone else's off as your own may result in post removal and/or ban.

  3. Personal attacks, racism, bigotry, homophobia, transphobia, ableism, witch hunts, etc. will not be tolerated. This includes politically polarized gifs that attack people who share different views than you.

  4. Any criticisms you have for submissions must be constructive. Complaints about source choices or gif quality will be removed, and multiple offenses may result in a ban. The giffers here work hard to provide you with their creations. Don't be a dick.

  5. Please tag NSFW posts when appropriate. No nudity, no gifs from porn videos, and no real-life gore.

  6. Direct links are required, meaning the URL must end in .gif, .gifv, .webm, .mp4, etc. For gfycat webm links, use the format http://gfycat.com/name; do not include .webm

  7. Cross posts are encouraged. Made a new gif for /r/gssp? /r/traa? Post it here!

If you have any questions about any rules, please message one of the moderators.

Pro-tip: You can also use r/TQG to get here.

Giffing Tutorials

  • Coming Soon

Other Trans Meme Subs


6,512 Subscribers

  • 566 points · My Partner Helping Me Feel Better · 11 Comments · 2020/11/19 17:03 UTC
  • 314 points · Me Vs Dysphoria (X-Post from r/transclones) · 2 Comments · 2020/05/24 05:12 UTC
  • 321 points · It be like that (X-Post from r/transclones) · 6 Comments · 2020/05/21 07:42 UTC
  • 75 points · If anyone has any ideas on how to block the recent spam posts we've been getting, please message me 🤔 · 4 Comments · 2019/12/11 06:20 UTC
  • 212 points · When you're trans bf is coming to visit, so you gotta give the family "The Talk" · 0 Comments · 2019/01/18 18:51 UTC
  • 523 points · The weight of casual disrespect · 5 Comments · 2019/01/18 14:32 UTC
  • 205 points · Best sailor scout reveals her true colors! · 8 Comments · 2018/10/18 00:31 UTC
  • 244 points · If you wanted me to stop coming home. · 22 Comments · 2018/09/09 15:15 UTC
  • 106 points · Transgirl finally getting her boobs pills caught on tape (rare one) · 2 Comments · 2018/08/10 23:08 UTC
  • 235 points · When your parents tell you boys don't cry · 13 Comments · 2018/08/10 17:26 UTC

[Tutorial] Let's make a simple gif step-by-step! (75 points)

Good evening catgirls, catboys, nonbinary catpersons and the "totally cis, honestly" amongst you.

We're going to make this gif using nothing but open-source software. We won't be doing anything fancy this time: no complicated animation, tracking, masking, particles, looping, or anything else that majorly detracts from the core gif. We'll add a drop shadow to get used to the compositor, since it's so important to this kind of work, but mostly we'll just be getting used to the general gifmaking process.

We'll be using three bits of software:

  • youtube-dl to download the source video from YouTube
  • ffmpeg to extract the clip we want and to convert between formats, including making the final webm for upload
  • Blender to make the gif itself.

I'll try to assume little prior knowledge of Blender, but in all honesty this requires at least a small amount of familiarity. If you start to lose track of what's happening, feel free to take a break from the tutorial and come back later after you've experimented or followed some "intro to Blender" tutorials. I highly recommend Blender 3D: Noob to Pro, which will give you a very strong foundational knowledge of Blender that you can apply to any kind of project you want to do in it, including making gifs.

I'm a Linux user, so I do much of my work from the command line. Command-line input here starts with $; don't type the $. The commands themselves are the same across operating systems. There are some GUI tools that offer frontends to ffmpeg, but it's worth learning the commands, since the GUI tools are usually at least a few versions behind and don't offer access to everything ffmpeg can do.

The last thing I want to say before we begin is that the ways I'm demonstrating things here aren't the only ways of doing things, and in some cases not even the optimal way, just a way that I think makes a good and useful introduction. You should experiment and find ways of doing things that work for you. There's no wrong way to be creative.


#Step 1: Getting the Clip

The first thing we want to do is grab a short segment of video that we can use to create the gif. youtube-dl will, by default, download the best available quality that contains both video and audio, so if you do:

$ youtube-dl https://www.youtube.com/watch?v=j1dJ8whOM8E

then you will get a nice, easy-to-use version of the video, but at 720p quality. While uploading in 720p would be fine, when working with video we want to maintain the highest possible quality through the entire process and only downsample at the very end. This makes it easy to do postprocessing work, and to adjust and reupload different quality versions with a single command. So to figure out the highest available video quality we do:

$ youtube-dl --list-formats https://www.youtube.com/watch?v=j1dJ8whOM8E

where we can see that the highest-quality video format has code 137, but it doesn't come with audio. To make it easier to work with, we'll add an audio format into the mix; 171 seems like a good choice on this occasion. We don't need the highest quality audio because gifs with sound aren't called gifs, they're called videos. We just want to be able to accurately get the timestamps of the dialogue we want to extract, for which any audio format will suffice. So let's do:

$ youtube-dl -f 137+171 -o contra https://www.youtube.com/watch?v=j1dJ8whOM8E

At this point, if ffmpeg is set up correctly, it will perform one of its many roles and mux the two streams into a single container file (which we'll call contra.mkv - the extension is added for us) containing both the audio and the video. Now we need to watch the video and figure out the timestamps. The line "why is nobody talking about the mouthfeel" occurs at about 9m21s into the video and is over by 9m24s, so extracting 3s of video from 9m21s gives us a nice buffer with some extra frames to work with:

$ ffmpeg -ss 09:21 -i contra.mkv -t 00:03 -c copy mouthfeel.mkv

Read up on seeking with ffmpeg for more information on what's happening here, but the gist is that we're using keyframe seeking, which is approximate (but very fast), to extract a perfect copy of a chunk of the video. Because Blender lets us be precise with start and end frames, we don't need to waste time on accurate seeking in ffmpeg, which, especially on a long video, could take forever. We don't want to transcode the video because, again, we want to maintain the highest possible quality at every stage.

Now we have the video clip that we'll make a gif from and we can move on to...


#Step 2: Making the Gif

##Part 1: Importing the Video and Setting it up

Open up Blender, make sure you're using the Cycles renderer along the top (by default the dropdown is set to Blender Internal) and delete the default cube (right-click it, press X on the keyboard, then left-click the confirmation dialog). Set the view to the Front view (Numpad 1) and align the camera to that view (Ctrl+Alt+Numpad 0). The grey box around the camera view helps differentiate what is inside vs outside the camera view - if it goes away you've probably navigated away from the camera perspective, and you can return to it by pressing Numpad 0. It's not necessary to use the front view; some people prefer the top view instead, but I prefer the front view since it means the bottom of the screen is "down" in the world, which is useful when working with advanced effects like particles and physics.
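
If you'd rather script these steps than click through them, here's a rough equivalent you could paste into Blender's Python console. It's only a sketch written against the 2.7x-era API this tutorial's UI describes (the object names Cube and Camera are Blender's defaults), not part of the original workflow:

    import math
    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'          # switch away from Blender Internal

    # Delete the default cube if it's still in the scene
    cube = bpy.data.objects.get('Cube')
    if cube is not None:
        bpy.data.objects.remove(cube, do_unlink=True)

    # Park the default camera in front of the origin, looking along +Y,
    # which matches "front view, then align camera to view"
    cam = bpy.data.objects.get('Camera')
    if cam is not None:
        cam.location = (0.0, -10.0, 0.0)
        cam.rotation_euler = (math.radians(90.0), 0.0, 0.0)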

Along the top row, switch the view layout from "Default" to "Motion Tracking" - this rearranges the windows to be suitable for Motion Tracking, which we're not going to do on this gif, but it's good to know where to find it later. Near the bottom you'll see "Open" - use it to navigate to where you stored mouthfeel.mkv and open it. We now have our clip imported into Blender, but we're going to make a small adjustment before we start editing things by making a Proxy. This has Blender generate a low-quality version of our clip so that Blender runs faster while we're editing, but Blender will still use the full-quality version of the clip when we export the final animation.

Directly next to the now-imported video, on the right-hand side, there is a list of options. Scroll down it, check the "Proxy/Timecode" option, expand the section, ensure 50% is selected and the others are deselected, then press Build Proxy/Timecode. After this is done we have our Proxy ready to use, so set the Proxy Render Size to 50% like so. Now Blender is only rendering a small version of the video instead of the full thing, which saves a lot of memory and time and stops Blender getting bogged down. In this case there's not much (if any) improvement, but especially on larger videos the difference is huge.
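
For reference, the clip import and proxy flags can also be set from Python. This is just a sketch (the path is a placeholder, and actually building the proxy with bpy.ops.clip.rebuild_proxy() needs a Movie Clip Editor context, so the button in the UI is usually easier):

    import bpy

    # Load the clip (use the real path to your mouthfeel.mkv here)
    clip = bpy.data.movieclips.load("/path/to/mouthfeel.mkv")

    # Enable proxies and only build the 50% version
    clip.use_proxy = True
    clip.proxy.build_25 = False
    clip.proxy.build_50 = True
    clip.proxy.build_75 = False
    clip.proxy.build_100 = False

    # The actual "Build Proxy/Timecode" step is an operator that expects a
    # Movie Clip Editor context, so it's simplest to press the button in the UI.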

Now we want to chop out the useless frames we got from ffmpeg. Scrub the timeline back and forth until you find a start point you're happy with, then at the very bottom of the window go to Frame > Set Start Frame, and do the same for the end point. In this case I've chosen to start at frame 15 and end at frame 68. This retains the zoom in on Contra's face but cuts out the changes in camera view.
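
The frame range is just two scene properties, so the equivalent of those two menu entries is simply (15 and 68 being the frames chosen for this particular clip):

    import bpy

    scene = bpy.context.scene
    scene.frame_start = 15   # keep the zoom in on Contra's face
    scene.frame_end = 68     # stop before the camera view changes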

Swap back to the Default view using the same dropdown at the top you used earlier.

Press N to bring up the property panel at the right-hand side of the window. Check Background Images and expand the section. Set the background to Movie Clip, uncheck Camera Clip, set the Proxy Size to 50% to match what we generated earlier, and set the Opacity to 1.0 like so. Now the background of the camera is the movie clip we imported earlier, but only as a reference view - this clip isn't part of the scene proper yet.

##Part 2: Adding The Text

Time to add some text: with your cursor over Contra's face, press Shift+A and select Text. The text faces upwards by default, so to face it towards the camera press R, then X, then 90, then Enter. This is a quick way to Rotate the selected object around the X axis by 90 degrees.

Press Tab to enter Edit Mode, replace the text with what we want - "Why is nobody talking about the mouthfeel?" - and press Tab again to return to Object Mode. In this case I'm using line breaks to make it more visually interesting rather than a single continuous line of text, but you're free to alter it as you see fit - it's your gif. Under the Text's Font properties on the right-hand side, scroll down to Paragraph and change the Horizontal Alignment and Vertical Alignment to Center. Press G to grab the text object, move it around, and press Enter when you're happy. It'll still be a bit big, so press S, move your mouse cursor to scale the object down, and again press Enter when you're happy. Continue grabbing and scaling until the text appears the way you want it to.
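
The same text setup in script form might look something like this - a sketch only, and the location and scale values are just example numbers to tweak, not the ones from my gif:

    import math
    import bpy

    # Add a text object already rotated 90 degrees around X so it faces the front camera
    bpy.ops.object.text_add(rotation=(math.radians(90.0), 0.0, 0.0))
    txt = bpy.context.object

    txt.data.body = "Why is nobody\ntalking about\nthe mouthfeel?"
    txt.data.align_x = 'CENTER'   # Paragraph > Horizontal Alignment
    txt.data.align_y = 'CENTER'   # Paragraph > Vertical Alignment

    # Rough placement over Contra's face - adjust to taste (example values)
    txt.location = (0.0, 0.0, 0.5)
    txt.scale = (0.4, 0.4, 0.4)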

Change the font if you want to using the Font options and adjust its position/scale to compensate. I'm going to leave it at the default, but you'll need to change it if you want to add in bold or italic text which the default Blender font doesn't support.

Now to alter the text's appearance: if we render right now, the text will be dark and hard to read because it doesn't have a suitable material applied. Switch to the Text object's Materials tab, add a New Material and change its Surface to Emission. As the name implies, this means the material itself emits light, which gives us a consistent bright appearance across the text without worrying about lighting. It also means the light emitted by the material can bounce off other objects in our scene, but since we don't have any other objects we don't need to worry about the implications right now. We can see our changes if we set the viewport shading to Material along the bottom; now the text in the viewport is a nice bright white.
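
In Python, the equivalent is to create a node-based material with an Emission shader and attach it to the text object. Again a sketch: txt is the text object from before, and the white color and strength of 1.0 are just the defaults:

    import bpy

    mat = bpy.data.materials.new("TextEmission")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    # Cycles normally creates a Diffuse BSDF + Material Output by default;
    # swap the surface shader for an Emission node
    output = nodes.get('Material Output') or nodes.new('ShaderNodeOutputMaterial')
    emission = nodes.new('ShaderNodeEmission')
    emission.inputs['Color'].default_value = (1.0, 1.0, 1.0, 1.0)
    emission.inputs['Strength'].default_value = 1.0
    links.new(emission.outputs['Emission'], output.inputs['Surface'])

    txt = bpy.context.object          # the text object, still active
    txt.data.materials.append(mat)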

##Part 3: Animating The Text

What we have here is fine, and if you're struggling to keep up feel free to skip this step, but at this point we're going to add in a very simple animation to the text to give it a bit of flair.

We kept the camera zoom in the gif, so now we're going to use that to our advantage. We'll add in a simple animation that makes it seem like the text is zooming in along with the camera. Because we have our text already set up the way we want it, we'll start by keyframing the end position. Move carefully along the timeline until you find the point where the zoom ends. That seems to be frame 22. Go into the Object tab, mouse over Scale and press I on the keyboard. It'll turn yellow to indicate that we're currently on a keyframe.

Now go to the frame where the zoom starts, which here is frame 18. Look at the Scale and you'll see it's turned green, indicating that it's animated but not currently on a keyframe. Set the Scale to 0 and press I with the cursor over it again; it'll turn yellow, indicating that we've added another keyframe.

At this point, if you use the left/right arrows to move between frames, you can see that Blender automatically interpolates the Scale between the two keyframes, giving us a simple but nice animation. Because the zoom is so fast, the end result will be a bit too quick to see (about 1/10th of a second), but understanding the process is what's important here.
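
Keyframes can also be inserted from Python with keyframe_insert. A sketch of the same two keyframes (frame 22 at full size, frame 18 at zero scale; the 0.4 value stands in for whatever final scale you settled on):

    import bpy

    txt = bpy.data.objects['Text']          # the text object we added earlier

    # Keyframe the final size at the end of the zoom...
    txt.scale = (0.4, 0.4, 0.4)
    txt.keyframe_insert(data_path="scale", frame=22)

    # ...and zero scale where the zoom starts, so the text grows with the camera
    txt.scale = (0.0, 0.0, 0.0)
    txt.keyframe_insert(data_path="scale", frame=18)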

##Part 4: Compositing the result

We've done everything we want to do inside the gif; now it's time to put the different parts of it together, add a bit more flair, and export it.

Switch to the Compositing layout at the top, then check the Use Nodes and Backdrop boxes in the middle of the screen. The compositor's node graph will now appear. By default the Compositor takes the Render Layer we just made (our text) and feeds it into the Composite node, which generates the final output. Our video is nowhere to be seen yet, so go to Add > Input > Movie Clip and place it into the scene by left-clicking at a suitable spot. From the drop-down inside that node, select the clip we've been using. These two elements aren't connected yet, so to combine them we're going to add an Alpha Over node, which will overlay the render layer on top of the video. Go to Add > Color > Alpha Over and place that node into the scene. Then drag the yellow Image dot from the Movie Clip into the top yellow Image dot on the Alpha Over node, and the Image from the Render Layers node into the bottom one. The order is important: if you swap it round, the video will be rendered on top of the text, so it will seem like the text has disappeared. Now connect the Image output (right-hand side) of the Alpha Over to the Image input (left-hand side) of the Composite node.

We're also going to add a Viewer node (Add > Output > Viewer) and connect the Alpha Over Image to that Viewer Image, which gives us our backdrop so we can see what we're working on. The Backdrop options on the right hand side let you adjust zoom and offset so you can get a good view of what you're looking at.

There's only one remaining problem here - Blender lets you render a scene at any size, but our video has a fixed size of 1920x1080, and that might not match what we render at. So we're going to add one final node: Add > Distort > Scale, drag it over the line that connects Movie Clip to Alpha Over, and left-click when that line turns yellow. Then change the drop-down to Render Size. Now Blender will scale the video to match our render size whenever we export, so everything lines up as we expect it to. At this point your node graph will look like this.
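
The whole base graph can also be built from Python. This is a sketch against Blender's compositor node API; the default node names ('Render Layers', 'Composite') and the movieclip name are assumptions based on a fresh compositor and the file we imported:

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree
    nodes, links = tree.nodes, tree.links

    rlayers = nodes['Render Layers']          # created by "Use Nodes"
    composite = nodes['Composite']

    # Movie Clip input, scaled to the render size
    clip_node = nodes.new('CompositorNodeMovieClip')
    clip_node.clip = bpy.data.movieclips['mouthfeel.mkv']
    scale_node = nodes.new('CompositorNodeScale')
    scale_node.space = 'RENDER_SIZE'
    links.new(clip_node.outputs['Image'], scale_node.inputs['Image'])

    # Alpha Over: top Image input = video (background), bottom = render layer (text)
    alpha_over = nodes.new('CompositorNodeAlphaOver')
    links.new(scale_node.outputs['Image'], alpha_over.inputs[1])
    links.new(rlayers.outputs['Image'], alpha_over.inputs[2])
    links.new(alpha_over.outputs['Image'], composite.inputs['Image'])

    # Viewer node so the Backdrop shows the composited result
    viewer = nodes.new('CompositorNodeViewer')
    links.new(alpha_over.outputs['Image'], viewer.inputs['Image'])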

In Blender it's possible to automatically have a video be part of the background of a scene without going through the Compositor - while that may have been "easier" than this approach, it's far more important to understand how the Compositor works. It's arguably the most important part of Blender to understand when it comes to gif-making and you won't get very far if you're unfamiliar with it. We're going to sweeten it up a bit and make it worth your while by adding a drop shadow to the text, which will add contrast to the gif and make the text easier to read.

Add a Set Alpha node (Add > Color > Set Alpha) and connect the Alpha output of Render Layers to the Alpha input of Set Alpha. Leave Set Alpha's Image as pure black (you can click on it to adjust the color if you want, but black is fine here). What this does is take the Alpha input and convert it to a color, meaning that any part of the scene that has something visible will now be a solid color, and everything else will be transparent. If you connect the Image output to the Viewer, you can see the effect for yourself (and hopefully understand the utility of the Viewer node better now). This black silhouette of our text will become our drop shadow.

To combine the text with the shadow we're going to add another Alpha Over node. Connect the Image output of Set Alpha to the top Image input of our new Alpha Over, then connect the Image output from Render Layers to the bottom input of the new Alpha Over. Now we have a combined image of our text and its (currently still invisible, don't worry) shadow, which we can feed into the bottom input of the other Alpha Over like so.

Because our drop shadow is a perfect silhouette of our scene, it's rendering directly underneath our text right now, which means we can't see it. What we need to do is offset it so that it becomes visible. We do this with a Transform node (Add > Distort > Transform), which we drop in between Set Alpha and Alpha Over. Blender will make room for the node, but feel free to drag nodes around to make more room and make things easier to see. Now if we adjust the X and Y values on our Transform node, we can see the drop shadow appear: we're offsetting the silhouette generated by the Set Alpha node slightly from its original position, and the Alpha Over node we feed it into takes care of making sure that the text appears above the silhouette. Adjust it until you're happy with it. Feel free to play around with the Set Alpha color and see the result in real time, or change settings to get a better sense of what's happening.
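
Scripted, the drop-shadow branch might look like this. It continues the sketch above, so the node lookups by default name ('Render Layers', 'Alpha Over') are assumptions, and the X/Y offsets are arbitrary example values - pick whatever looks good:

    import bpy

    tree = bpy.context.scene.node_tree
    nodes, links = tree.nodes, tree.links
    rlayers = nodes['Render Layers']
    alpha_over = nodes['Alpha Over']      # the first Alpha Over from the step above

    # Black silhouette of everything the render layer draws
    set_alpha = nodes.new('CompositorNodeSetAlpha')
    set_alpha.inputs['Image'].default_value = (0.0, 0.0, 0.0, 1.0)   # shadow color
    links.new(rlayers.outputs['Alpha'], set_alpha.inputs['Alpha'])

    # Offset the silhouette so it peeks out from behind the text
    transform = nodes.new('CompositorNodeTransform')
    transform.inputs['X'].default_value = 6.0     # example offset
    transform.inputs['Y'].default_value = -6.0
    links.new(set_alpha.outputs['Image'], transform.inputs['Image'])

    # Text over shadow; the result replaces the direct Render Layers link
    # into the bottom input of the first Alpha Over
    shadow_over = nodes.new('CompositorNodeAlphaOver')
    links.new(transform.outputs['Image'], shadow_over.inputs[1])   # shadow underneath
    links.new(rlayers.outputs['Image'], shadow_over.inputs[2])     # text on top
    links.new(shadow_over.outputs['Image'], alpha_over.inputs[2])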


#Step 3: Exporting & Uploading

We're done! Everything is finished gif-wise. Now we have two things left to do: first, adjust the export settings to export our gif in the highest quality possible, then use that high-quality export to generate a webm suitable for gfycat uploads.

##Part 1: Settings & Export

In our Scene Render tab we're going to change a few key settings. Set the render Display to Keep UI (we don't care about seeing the render result on screen), change the Device to GPU Compute to improve render times, and ensure the resolution is 1920x1080 at 100% (our desired video size) at 23.98 fps (our input video's FPS).

Under Output, change the path to //outputframes/ - this will create a directory called outputframes next to where you've been saving your Blend file (you have been saving your work, haven't you?) - and set the format to PNG, RGB, with 90% compression. Under "Film" check Transparent, and under Performance change the Tile Size to something suitable like 256x256. What is suitable for some GPUs or scenes may not be suitable for all; read more about Tile Size and other Cycles settings here. We have such a small and simple scene here that we don't care much about perfect optimisation, but you should read into it if you want longer and more complex animations.

Once your settings look like this, press Animation up at the top, between Render and Audio. Blender will spend a few minutes (times will vary based on your PC's power and other settings) running through the animation frame by frame and exporting one high-quality PNG file for each frame. You might think that exporting dozens of 1920x1080 PNG files is wasteful when you're uploading a small webm, but if you move on to loops or other effects you'll be glad you did it all at high quality, so you only need to redo the final conversion rather than re-export the entire animation all over again.
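
All of those settings map onto scene properties, so here's a sketch of the same export configured (and kicked off) from Python. The film-transparent and tile-size properties moved between Blender versions, hence the hasattr checks, and GPU rendering also needs a compute device enabled in your user preferences first:

    import bpy

    scene = bpy.context.scene
    render = scene.render

    scene.cycles.device = 'GPU'                 # GPU Compute
    render.resolution_x = 1920
    render.resolution_y = 1080
    render.resolution_percentage = 100
    render.fps = 24
    render.fps_base = 1.001                     # 24 / 1.001 is roughly 23.98 fps

    render.filepath = "//outputframes/"         # // means "next to the .blend file"
    render.image_settings.file_format = 'PNG'
    render.image_settings.color_mode = 'RGB'
    render.image_settings.compression = 90

    # Film > Transparent lives on scene.cycles in 2.7x and on scene.render in 2.8+
    if hasattr(render, "film_transparent"):
        render.film_transparent = True
    else:
        scene.cycles.film_transparent = True

    # Tile size (removed in newer Blender versions, hence the guard)
    if hasattr(render, "tile_x"):
        render.tile_x = render.tile_y = 256

    # Equivalent of pressing the Animation button
    bpy.ops.render.render(animation=True)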

##Part 2: Conversion & Upload

From the directory containing our blend file, we're going to tell ffmpeg to combine all of these PNG files into a webm. gfycat lets you bypass re-encoding if your settings are already good, which means VP8 format and even numbers for the width and height. Because people will be streaming this, we'll scale it down to half size and keep the bitrate high enough to maintain quality but low enough to be streamable. 2000 kbps is a good starting bitrate; feel free to adjust as needed:

$ ffmpeg -framerate 23.98 -start_number 0015 -i outputframes/%04d.png -c:v libvpx -b:v 2M -filter:v "scale=iw/2:-1" mouthfeel.webm

The command tells ffmpeg to: create a video at 23.98 fps (matching our original source) from the PNG files in the outputframes/ directory starting from frame number 15, use the libvpx encoder at a bitrate of 2 Mbps, scale the video down to 50% of the input width (iw/2) and automatically adjust the height to maintain the aspect ratio (-1), then save all of that as mouthfeel.webm.
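
Since you'll likely re-run this conversion with different bitrates or sizes (that's the whole point of keeping the high-quality PNGs), here's an optional little Python wrapper around the same command. The function name and defaults are just my own convenience, not part of the tutorial:

    import subprocess

    def encode_webm(bitrate="2M", scale="iw/2:-1", out="mouthfeel.webm"):
        """Re-run the final ffmpeg conversion from the exported PNG frames."""
        subprocess.run([
            "ffmpeg", "-y",
            "-framerate", "23.98",
            "-start_number", "0015",
            "-i", "outputframes/%04d.png",
            "-c:v", "libvpx",
            "-b:v", bitrate,
            "-filter:v", f"scale={scale}",
            out,
        ], check=True)

    encode_webm()                                            # the settings above
    encode_webm(bitrate="1M", out="mouthfeel_small.webm")    # a lower-bitrate variant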

Now on gfycat.com you can go and upload your very own mouthfeel.webm!


#Epilogue and Homework

And you're done! Congratulations! Go back over things, experiment with different settings, and try making your own gif from the same video. Your homework for this tutorial is to make a gif from this same video that has multiple lines of dialogue and post it in this thread. How you do that is up to you, but as a hint, I've already walked you through everything you'd need to achieve it. Good luck!

10 Comments · 2018/08/04 16:36 UTC
