/r/deeplearning


Resources for understanding and implementing "deep learning" (learning data representations through artificial neural networks).

/r/deeplearning

171,346 Subscribers

0

Is General Intelligence (AGI) Computational or Non-Computational?

2 Comments
2024/12/02
15:32 UTC

7

L1 vs L2 Regularization
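For readers following the thread, a minimal sketch contrasting the two penalties in PyTorch (the model and data are toy placeholders): L2 is usually applied via weight_decay and shrinks weights toward zero, while L1 must be added to the loss manually and tends to drive weights to exactly zero (sparsity).

import torch

x, y = torch.randn(100, 10), torch.randn(100, 1)                       # toy data
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)  # weight_decay = L2 penalty

l1_lambda = 1e-4
for _ in range(100):
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())  # add L1 penalty manually
    opt.zero_grad(); loss.backward(); opt.step()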

1 Comment
2024/12/02
10:15 UTC

1

Last month in AI | Nov, 2024

🔍 Inside this Issue:

  • 🤖 Latest Breakthroughs: This month it's all about what's new in AI and what is just a bunch of old rehashed ideas.
  • 🌍 AI Monthly News: Discover how these stories revolutionize industries and impact everyday life: NVIDIA's new voice-modulating AI, the challenges in scaling AI, and AI to identify domestic abuse.
  • 📚 Editor's Special: This covers the interesting talks, lectures, and articles we came across recently.

AIGuys Newsletter: https://medium.com/aiguys/aiguys-digest-nov-2024-be08364047a1

Latest Breakthroughs

Everything is moving at such a rapid pace, with new models and strategies arriving every few weeks, that it is becoming tough to keep track of everything. But if you look closely, you will see that little has changed except the scale of compute and data.

Somehow we are still working with decade-old ideas. One example I like to give of this lack of new ideas is the heavy use of XGBoost and other tree-based models: most financial models are still running on these, not on deep learning-based models.

We Are Just Scaling AI And Not Coming Up With Novel Ideas

Ever since the release of LLMs, we have been trying to reduce the memory footprint of our models. Over the years, we have come across many innovations like different types of Quantization, Dropout, etc. We have even tried to completely change the model architectures to solve the scaling problems of Transformers.

Research like Flash Attention, RetNet, State Space Models, and many others shows great potential, but somehow the Transformer remains king. Today we look at some brand-new research papers to see what's happening in this space. Have we made some real improvements or not?
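As an aside illustrating the memory-reduction theme, here is a minimal sketch of post-training dynamic quantization in PyTorch on a toy model; it is a generic example, not taken from any of the papers discussed here.

import torch
import torch.nn as nn

# Toy model standing in for something larger.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization stores Linear weights as int8 and dequantizes on the fly,
# roughly a 4x reduction in weight memory for those layers.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)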

Are Tiny Transformers The Future Of Scaling?

Recently we have heard a lot of noise about LLMs hitting a wall. Is this true? Is a new AI winter upon us? Or is it just a hiatus? People need to know what is truly happening with scaling laws.

It is not hard to find AI experts with completely opposing views on the future of AI. This reminds me of Kenneth Stanley's book, "Why Greatness Can't Be Planned", which argues that no one knows what it takes to make a breakthrough in a given field, and that is exactly what has been happening lately with AI.

In the last few weeks, we have seen many big labs and researchers voicing their disappointment with diminishing returns in AI, while others hype it even more.

Are We Hitting The Scaling Limits Of AI?

AI Monthly News

Nvidia shows an AI model that can modify voices, generate novel sounds

Nvidia unveiled Fugatto, an AI model capable of modifying voices and generating novel sounds, targeting creators in music, film, and gaming. The company is cautious about public release due to potential misuse.

News Article: Click here

Challenges in AI Advancement

Industry leaders from companies like OpenAI and Nvidia acknowledge potential slowdowns in AI advancement due to limited computing power and data availability. Strategies to overcome these challenges include utilizing multimodal and synthetic data and improving AI systems' reasoning capabilities.

  • The rate of AI-model improvement appears to be slowing, but some tech leaders say there's no wall.
  • It's prompted a debate over how companies can overcome AI bottlenecks.

News Article: Click here

AI can help police predict if someone is at risk of domestic abuse

AI tools are being developed to assist police in predicting the risk of domestic abuse by analyzing responses to specific questions to forecast future incidents with significant accuracy. This technology aims to enhance preventive measures and support for at-risk individuals.

'Lizzy' the AI gives the probability of physical violence within three months with 84 percent accuracy and could be made available to British forces soon.

News Article: Click here

Editorā€™s Special

  • Visualizing transformers and attention | Talk for TNG Big Tech Day '24: Click here
  • Geoff Hinton - Will Digital Intelligence Replace Biological Intelligence? | Vector's Remarkable 2024: Click here
  • Lecture Series in AI: "How Could Machines Reach Human-Level Intelligence?" by Yann LeCun: Click here
  • Unreasonably Effective AI with Demis Hassabis: Click here
0 Comments
2024/12/02
08:29 UTC

2

F5-TTS is highly underrated for Audio Cloning!

0 Comments
2024/12/02
04:41 UTC

40

PyTorch implementation of Levenberg-Marquardt training algorithm

Hi everyone,

In case anyone is interested, here's a PyTorch implementation of the Levenberg-Marquardt (LM) algorithm that I've developed.

GitHub Repo: torch-levenberg-marquardt

A PyTorch implementation of the Levenberg-Marquardt (LM) optimization algorithm, supporting mini-batch training for both regression and classification problems. It leverages GPU acceleration and offers an extensible framework, supporting diverse loss functions and customizable damping strategies.

A TensorFlow implementation is also available: tf-levenberg-marquardt

Installation

pip install torch-levenberg-marquardt
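For readers curious about what the algorithm actually does, here is a minimal generic sketch of one LM step in plain PyTorch. It illustrates the damped Gauss-Newton update delta = (J^T J + lambda I)^(-1) J^T r on a toy curve-fitting problem and is not the API of the package above.

import torch

def lm_step(params, residual_fn, damping=1e-2):
    r = residual_fn(params)                                      # residual vector
    J = torch.autograd.functional.jacobian(residual_fn, params)  # Jacobian of residuals
    JtJ = J.T @ J
    delta = torch.linalg.solve(JtJ + damping * torch.eye(JtJ.shape[0]), J.T @ r)
    return params - delta

# Toy example: fit y = a * exp(b * x) to noisy data (made-up numbers).
x = torch.linspace(0, 1, 50)
y = 2.0 * torch.exp(0.5 * x) + 0.01 * torch.randn(50)
residuals = lambda p: p[0] * torch.exp(p[1] * x) - y

params = torch.tensor([1.0, 0.0])
for _ in range(20):
    params = lm_step(params, residuals)
print(params)  # should approach [2.0, 0.5]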
3 Comments
2024/12/02
03:54 UTC

1

[R] Queries on DeepAR in AWS SageMaker

Hi,

I'm trying to implement DeepAR for various stores to predict future sales (each store has ~10k SKUs of different products). Due to the sheer number of SKUs, I wouldn't be able to do a single training run over all the data at once, so I'm thinking of training by store.

  1. How do I parallelize the training in AWS? Each store's training job would take up to 30 minutes (a rough sketch of one approach is below).
  2. How do I deal with unseen SKUs that are not present in the training data?

Thanks.
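For question 1, a rough sketch of launching per-store training jobs concurrently with the SageMaker Python SDK; the role ARN, bucket paths, store IDs, and hyperparameters are hypothetical placeholders.

import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role
image_uri = sagemaker.image_uris.retrieve("forecasting-deepar", session.boto_region_name)

for store_id in ["store_001", "store_002", "store_003"]:  # placeholder store list
    estimator = Estimator(
        image_uri=image_uri,
        role=role,
        instance_count=1,
        instance_type="ml.c5.2xlarge",
        output_path=f"s3://my-bucket/deepar/{store_id}/output",
        sagemaker_session=session,
    )
    estimator.set_hyperparameters(time_freq="D", prediction_length=28, context_length=28, epochs=50)
    # wait=False returns immediately, so all per-store jobs run in parallel.
    estimator.fit({"train": f"s3://my-bucket/deepar/{store_id}/train/"}, wait=False)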

0 Comments
2024/12/01
23:33 UTC

1

[Discussion] Qwen VL 7B 4bit Model from Unsloth - Poor Results Before and After Fine-Tuning

Hi everyone,

I'm having a perplexing issue with the Qwen VL 7B 4bit model sourced from Unsloth. Before fine-tuning, the model's performance was already questionable: it was making bizarre predictions, like identifying a mobile phone as an Accord car. Despite this, I proceeded to fine-tune it on over 100,000 images, but the fine-tuned model still performs terribly. It struggles to detect even basic elements in images.

For context, my goal with fine-tuning was to train the model to extract structured information from images, specifically:

  • Description
  • Title
  • Brand
  • Model
  • Price
  • Discount price

I chose the 4-bit quantized model from Unsloth because I have an RTX 4070 Ti Super GPU with 16GB VRAM, and I needed a version that would fit within my hardware constraints. However, the results have been disappointing.

To compare, I tested the base Qwen VL 7B model downloaded directly from Hugging Face (8-bit quantization with bitsandbytes) without fine-tuning, and it worked significantly better. The Hugging Face version feels far more robust, while the Unsloth version seems... lobotomized, for lack of a better term.

Here's my setup:

  • Fine-tuned model: Qwen VL 7B (4-bit quantized), sourced from Unsloth
  • Base model: Qwen VL 7B (8-bit quantized), downloaded from Hugging Face
  • Data: 100,000+ images, preprocessed for training
  • Performance issues:
    • Unsloth model (4bit): Poor predictions even before fine-tuning (e.g., misidentifying objects)
    • Hugging Face model (8bit): Performs significantly better without fine-tuning

I'm a beginner in fine-tuning LLMs and vision-language models, so I could be missing something obvious here. Could this issue be related to:

  • The quality of the Unsloth version of the model?
  • The impact of using a 4-bit quantized model for fine-tuning versus an 8-bit model?
  • My fine-tuning setup, hyperparameters, or data preprocessing?

I'd love to understand what's going on here and how I can fix it. If anyone has insights, guidance, or has faced similar issues, your help would be greatly appreciated. Thanks in advance!

Here is the code sample I used for fine-tuning!

# Step 2: Import Libraries and Load Model
from unsloth import FastVisionModel
import torch
from PIL import Image as PILImage
import os

import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,  # Set to DEBUG to see all messages
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("preprocessing.log"),  # Log to a file
        logging.StreamHandler()  # Also log to console
    ]
)

logger = logging.getLogger(__name__)

# Define the model name
model_name = "unsloth/Qwen2-VL-7B-Instruct"

# Initialize the model and tokenizer
model, tokenizer = FastVisionModel.from_pretrained(
    model_name,
    load_in_4bit=True,  # Use 4-bit quantization to reduce memory usage
    use_gradient_checkpointing="unsloth",  # Enable gradient checkpointing for longer contexts

)

# Step 3: Prepare the Dataset
from datasets import load_dataset, Features, Value

# Define the dataset features
features = Features({
    'local_image_path': Value('string'),
    'main_category': Value('string'),
    'sub_category': Value('string'),
    'description': Value('string'),
    'price': Value('string'),
    'was_price': Value('string'),
    'brand': Value('string'),
    'model': Value('string'),
})

# Load the dataset
dataset = load_dataset(
    'csv',
    data_files='/home/nabeel/Documents/go-test/finetune_qwen/output_filtered.csv',
    split='train',
    features=features,
)
# dataset = dataset.select(range(5000))  # Adjust the number as needed

from collections import defaultdict
# Initialize a dictionary to count drop reasons
drop_reasons = defaultdict(int)

import base64
from io import BytesIO

def convert_to_conversation(sample):
    # Define the target text
    target_text = (
        f"Main Category: {sample['main_category']}\n"
        f"Sub Category: {sample['sub_category']}\n"
        f"Description: {sample['description']}\n"
        f"Price: {sample['price']}\n"
        f"Was Price: {sample['was_price']}\n"
        f"Brand: {sample['brand']}\n"
        f"Model: {sample['model']}"
    )

    # Get the image path
    image_path = sample['local_image_path']

    # Convert to absolute path if necessary
    if not os.path.isabs(image_path):
        image_path = os.path.join('/home/nabeel/Documents/go-test/finetune_qwen/', image_path)
        logger.debug(f"Converted to absolute path: {image_path}")

    # Check if the image file exists
    if not os.path.exists(image_path):
        logger.warning(f"Dropping example due to missing image: {image_path}")
        drop_reasons['missing_image'] += 1
        return None  # Skip this example

    # Instead of loading the image, store the image path
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "You are a expert data entry staff that aims to Extract accurate product information from the given image like Main Category, Sub Category, Description, Price, Was Price, Brand and Model."},
                {"type": "image", "image": image_path}  # Store the image path
            ]
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": target_text}
            ]
        },
    ]

    return {"messages": messages}

converted_dataset = [conv for conv in (convert_to_conversation(sample) for sample in dataset) if conv is not None]  # drop examples skipped due to missing images

print(converted_dataset[2])

# Log the drop reasons
for reason, count in drop_reasons.items():
    logger.info(f"Number of examples dropped due to {reason}: {count}")

# Step 4: Prepare for Fine-tuning
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,     # Finetune vision layers
    finetune_language_layers=True,   # Finetune language layers
    finetune_attention_modules=True, # Finetune attention modules
    finetune_mlp_modules=True,       # Finetune MLP modules

    r=32,           # Rank for LoRA
    lora_alpha=32,  # LoRA alpha
    lora_dropout=0.1,
    bias="none",
    random_state=3407,
    use_rslora=False,  # Disable Rank Stabilized LoRA
    loftq_config=None, # No LoftQ configuration
)

# Enable training mode
FastVisionModel.for_training(model)

# Verify the number of trainable parameters
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Number of trainable parameters: {trainable_params}")

# Step 5: Fine-tune the Model
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig

# Initialize the data collator
data_collator = UnslothVisionDataCollator(model, tokenizer)

# Define the training configuration
training_config = SFTConfig(
    per_device_train_batch_size=1,       # Reduced batch size
    gradient_accumulation_steps=8,       # Effective batch size remains the same
    warmup_steps=5,
    num_train_epochs=1,                  # Set to a higher value for full training
    learning_rate=1e-5,
    fp16=not is_bf16_supported(),        # Fall back to FP16 only if BF16 is unavailable
    bf16=is_bf16_supported(),            # Use BF16 on supported GPUs (e.g. RTX 40-series)
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
    report_to="none",                     # Disable reporting to external services
    remove_unused_columns=False,
    dataset_text_field="",
    dataset_kwargs={"skip_prepare_dataset": True},
    dataset_num_proc=1,                   # Match num_proc in mapping
    max_seq_length=2048,
    dataloader_num_workers=0,             # Avoid multiprocessing in DataLoader
    dataloader_pin_memory=True,
)

# Initialize the trainer
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    data_collator=data_collator,
    train_dataset=converted_dataset,  # Use the Dataset object directly
    args=training_config,
)

# Show current GPU memory stats
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")

# Start training
trainer_stats = trainer.train()

save_directory = "fine_tuned_model_28"

# Save the fine-tuned model (after training, so the adapter weights are up to date)
trainer.save_model(save_directory)

# Optionally, save the tokenizer separately (if not already saved by save_model)
tokenizer.save_pretrained(save_directory)

logger.info(f"Model and tokenizer saved to {save_directory}")


# Enable inference mode
FastVisionModel.for_inference(model)

# Example inference
# Define the path to the image for inference
inference_image_path = '/home/nabeel/Documents/go-test/finetune_qwen/test2.jpg'  

# Check if the image exists
if not os.path.exists(inference_image_path):
    logger.error(f"Inference image not found at: {inference_image_path}")
else:
    # Load the image using PIL
    image = PILImage.open(inference_image_path).convert("RGB")
    
    instruction = "You are a expert data entry staff that aims to Extract accurate product information from the given image like Main Category, Sub Category, Description, Price, Was Price, Brand and Model."
    
    messages = [
        {"role": "user", "content": [
            {"type": "image", "image": inference_image_path},  # Provide image path
            {"type": "text", "text": instruction}
        ]}
    ]
    
    # Apply the chat template
    input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    
    # Tokenize the inputs
    inputs = tokenizer(
        image,
        input_text,
        add_special_tokens=False,
        return_tensors="pt",
    ).to("cuda")
    
    from transformers import TextStreamer
    text_streamer = TextStreamer(tokenizer, skip_prompt=True)
    
    # Generate the response
    _ = model.generate(
        **inputs,
        streamer=text_streamer,
        max_new_tokens=128,
        use_cache=True,
        temperature=1.5,
        min_p=0.1
    )
0 Comments
2024/12/01
19:01 UTC

0

{Intelligence is Statistics}!

Intelligence, whether human or artificial, is fundamentally rooted in the principles of mathematics and statistics. It involves recognizing patterns, making predictions, and adapting decisions based on probabilistic reasoning and optimization. By leveraging mathematical frameworks, we can model and understand how intelligent systems learn, represent knowledge, and interact with the world.

1. Intelligence as Prediction:

  • Intelligence involves predicting outcomes based on patterns in data.
  • Mathematically, this boils down to statistical inferenceā€”estimating probabilities of future events based on past data.

2. Learning from Data:

  • Humans and machines learn by identifying statistical regularities in data.
  • Techniques like gradient descent and optimization are mathematically grounded methods to find these patterns.

3. Probability Distributions:

  • The brain (and machine learning systems) often operates by estimating and updating probability distributions.
  • Bayes' theorem is a key mathematical framework here, helping refine beliefs as new information comes in (a tiny numerical sketch follows this list).

4. Representation of Information:

  • Neural networks, inspired by the brain, learn representations of data using layers of abstract mathematical transformations.
  • These representations reduce high-dimensional data into meaningful, compressed formsā€”another statistical task.

5. Decision Making:

  • At its core, decision-making relies on maximizing expected outcomes, often modeled mathematically through utility functions and optimization.

6. Reinforcement Learning:

  • Intelligence involves acting in environments to achieve goals.
  • Reinforcement learning formalizes this through Markov Decision Processes (MDPs) and optimization of cumulative rewards.

7. Uncertainty and Noise:

  • Real-world data is noisy and incomplete. Intelligence must deal with this uncertainty, often modeled with tools like Gaussian distributions or stochastic processes.

8. Emergent Properties:

  • Higher-level cognitive functionsā€”reasoning, abstractionā€”emerge from the interplay of simpler statistical mechanisms.
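As a tiny illustration of the Bayesian updating mentioned in point 3 (the prior and likelihood numbers below are made up):

# Start with a 50/50 prior over a binary hypothesis and refine it after each observation.
prior = 0.5                   # P(hypothesis) before seeing data
likelihood_given_h = 0.8      # P(observation | hypothesis true)
likelihood_given_not_h = 0.3  # P(observation | hypothesis false)

for _ in range(3):            # three consistent observations
    evidence = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    prior = likelihood_given_h * prior / evidence  # posterior becomes the new prior
    print(round(prior, 3))    # belief climbs: 0.727, 0.877, 0.95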


1 Comment
2024/12/01
16:39 UTC

0

Can GOT_OCR2_0 Model Be Used for Gujarati Document Level OCR?

Iā€™ve been working on an OCR project for the Gujarati language and have uploaded my dataset to Hugging Face here.

I am currently training the model to recognize Gujarati words using the GOT_OCR2_0 model here.

My goal is to teach the model a Gujarati word initially, and eventually, I would like to perform document-level OCR for Gujarati text.

  • What are the best practices to ensure it works well with Gujarati text at the document level?

  • Are there any specific challenges I should be aware of when performing OCR for a language like Gujarati, especially for documents that include complex characters or mixed scripts?

0 Comments
2024/12/01
15:27 UTC

0

I asked AI what will be obsolete by 2025 *SHOCKING*

2 Comments
2024/12/01
15:26 UTC

4

Help Me with My Diploma Study on Autonomous Vehicles! 🚗🤖

Hi everyone,

I'm currently working on my diploma study, and I need your help! My research focuses on autonomous vehicles and their impact on society. To gather insights, I've created a short survey that explores people's opinions, expectations, and concerns about self-driving technology.

The survey only takes about 5-10 minutes to complete, and your responses will play a vital role in shaping my research.

Here's the link to the survey: https://forms.gle/PvjPK2brohdwXiC69

I'd greatly appreciate it if you could spare a few minutes to participate. Your input means a lot, and it'll help me complete this important step in my academic journey.

Feel free to share the survey with friends or communities who might be interested!

Thank you so much for your time and support!

0 Comments
2024/11/30
20:14 UTC

0

GPU buying advice

I am looking for help buying a 3090 at a decent price. It's too expensive, and I have to train a model that needs more VRAM. Where can I look for a decently priced 3090?

8 Comments
2024/11/30
17:49 UTC

0

Writing a recommendation algorithm

Hello everyone, I want to write a song recommendation algorithm. I am not sure how to proceed with this project and am really looking forward to some advice.

2 Comments
2024/11/30
15:46 UTC

2

Fine tuning diffusion models vs. APIs

I am trying to generate images of a certain style and theme for my use case. While working on this I realised it is not a straightforward thing to do. Generating an image according to your needs requires a good understanding of prompt engineering, LoRA/DreamBooth fine-tuning, and configuring IP-Adapters or ControlNets. And then there's a huge workload in figuring out deployment (trade-offs between different GPUs and different platforms like Replicate, AWS, GCP, etc.).

Then there are the API offerings from OpenAI, Stability AI, and Midjourney. I was wondering: are these APIs really useful for a custom use case, or does using an API for a specific task (a specific style and theme) require some workarounds?

What's the best way to build your GenAI product: fine-tuning on your own or using APIs from renowned companies?
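For the fine-tune-it-yourself route, here is a minimal sketch of applying a style LoRA on top of SDXL with diffusers; the LoRA repo name, trigger phrase, and file names are hypothetical placeholders.

import torch
from diffusers import StableDiffusionXLPipeline

# Load the base SDXL checkpoint and attach a style LoRA trained on your theme.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("your-username/your-style-lora")  # hypothetical LoRA repo

image = pipe(
    "a living room interior in your-style",  # the trigger phrase depends on your LoRA
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("styled_sample.png")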

0 Comments
2024/11/30
14:38 UTC

0

Is the notion of "an epoch" outdated?

From what I remember, an epoch consists of "seeing all examples one more time". With never-ending data coming in, it feels like a dated notion. Are there any alternatives to it? The main scenario that I have in mind is "streaming data". Thanks!
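One common alternative is to think in optimizer steps rather than epochs: train over the stream and checkpoint or evaluate every N steps. A sketch with a synthetic never-ending stream standing in for real data:

import torch
from torch.utils.data import IterableDataset, DataLoader

class Stream(IterableDataset):
    def __iter__(self):
        while True:                       # never-ending data source
            x = torch.randn(10)
            yield x, (x.sum() > 0).float()

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(Stream(), batch_size=32)

for step, (x, y) in enumerate(loader):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x).squeeze(1), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.3f}")  # evaluate/checkpoint here instead of per epoch
    if step >= 500:                                    # stop on a step budget, not an epoch count
        break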

31 Comments
2024/11/30
02:11 UTC

1

Python Implementation of Softmax that takes integer input
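For reference, a minimal numerically stable version might look like the sketch below (a generic example, not necessarily the posted implementation): cast the integers to float, subtract the max for stability, exponentiate, and normalize.

import numpy as np

def softmax(logits):
    z = np.asarray(logits, dtype=np.float64)  # integer input is cast to float here
    z = z - z.max()                           # subtract the max to avoid overflow in exp
    e = np.exp(z)
    return e / e.sum()

print(softmax([1, 2, 3]))  # [0.09003057 0.24472847 0.66524096]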

0 Comments
2024/11/30
01:20 UTC

2

from interior image to 3D interactive model

Hello guys, hope you are well. Does anyone know, or have an idea, how to convert an image of an interior (panorama) into a 3D model using AI?

4 Comments
2024/11/29
21:16 UTC

0

Best Homeworkify Alternatives for Chegg Answers

Any good ways to unlock Chegg answers for free on Reddit? I'm looking for the easiest way to access Chegg solutions for studying in 2024. After doing some research, there are a lot of options, but I want to find an alternative that's completely safe, easy to use, and doesn't cost anything. I've spent a lot of time comparing different methods to get free access to Chegg answers, but I'm still unsure if I should even bother.

EDIT: Best Homeworkify Alternative: https://discord.gg/xCNQGya76q

Here are a few options Iā€™ve found that seem promising:

Homework Unlocks: This seems to be my top pick after searching. The platform offers a way to earn free unlocks for Chegg without paying anything. It also supports other popular study services like Bartleby, Brainly, and Quizlet. Basically, all major study platforms are included, all for free.

Uploading Documents: A separate way to earn free access is by sharing your own study materials on certain platforms. After uploading helpful resources, you may be rewarded with credits or access to premium content.

Community Contributions: Some websites or communities value user feedback. Through using the platform, rating documents or providing answers, you can sometimes earn free access to premium content.

Now, I'd love to hear your thoughts. Here's what I'm curious about:

  • How can I access Chegg for free using Reddit?
  • What is the best method to unlock Chegg answers in 2024?
  • Best Chegg downloader or Homeworkify alternative?
  • Best way to view Chegg solutions free?

I'd really appreciate your advice and experiences. It will be super helpful for me and other students trying to find good ways to access study resources for free in 2024.

1 Comment
2024/11/29
06:49 UTC

0

Deep Learning Masterclass

Hello All!! Are you curious about how AI and machine learning are transforming the world? Whether you're a beginner or looking to solidify your foundation,

We've got you covered! We are the Biomed Bros, aiming to bring innovation to education. We teach AI in a simplified and conceptual manner.

Introducing the '3 Hour DL Masterclass', a 3-part video series breaking down the fundamentals of deep learning; no prior experience needed!

Video 1- A Masterclass on Fundamentals of Deep Learning

This video covers an introduction to deep learning, the various tasks in DL, the hype behind DL versus its practicality, the fundamental workings of a neuron, and the construction of neural networks and their types.

Link: https://www.youtube.com/watch?v=0FFhMcu9u3o

Video 2- Easy 5-Step Guide to Backpropagation, Heart of Neural Nets

This video is the second part of Sairam Adithya's 'Deep Learning Masterclass.' It covers the five-step working principle of backpropagation, which is considered the heart of DL algorithms. It also covers some of the challenges in implementing deep learning.

Link: https://www.youtube.com/watch?v=EwE2m4rsvik

Video 3- All About CNN- The wizard of Image AI

This video covers the fundamentals of the convolution operation and the convolutional neural network, which is the forefather of image DL. Some potential solutions to the challenges in implementing deep learning are also covered in this video.

Link: https://www.youtube.com/watch?v=ljV_nEq5S7A

Don't miss out! Deep learning is shaping the future of technology, and it all starts with understanding the basics. Ready to dive in?

0 Comments
2024/11/28
21:01 UTC

8

NLP or LLM research ideas

Hey guys, I'm currently exploring research ideas in the field of NLP and LLMs, and I'd love to hear your suggestions for any interesting topics...

2 Comments
2024/11/28
18:10 UTC

3

Multi-TPUs/XLA devices support for ComfyUI! Might even work on GPUs!

A few days ago, I created a repo adding initial ComfyUI support for TPUs/XLA devices, so now you can use all of your devices within ComfyUI, even though ComfyUI doesn't officially support using multiple devices. I haven't tested on GPUs, but PyTorch XLA should support them out of the box! If anyone has time, I would appreciate your help!

🔗 GitHub Repo: ComfyUI-TPU
💬 Join the Discord for help, discussions, and more: Isekai Creation Community

https://github.com/radna0/ComfyUI-TPU

0 Comments
2024/11/28
18:07 UTC


2

Will it work for reverse image search?

I have planned to use CLIP for the search, but how do I localize the image for extracting the feature vector? What steps should I take, considering I'm still in the learning phase of machine learning?
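A minimal sketch of extracting a CLIP image embedding for reverse image search with the transformers library; for rough localization you could crop candidate regions first and embed each crop separately. The image path is a placeholder.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("query.jpg").convert("RGB")          # placeholder query image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    emb = model.get_image_features(**inputs)            # (1, 512) image embedding
emb = emb / emb.norm(dim=-1, keepdim=True)              # normalize for cosine similarity

# Search by dot product against a precomputed index of normalized embeddings:
# scores = index_embeddings @ emb.T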

2 Comments
2024/11/28
14:20 UTC

13

Should i make a data augmentation library for pytorch?

I was training a model using PyTorch, and loading the augmented images was slower than doing backpropagation. The CPU was bottlenecking the training process, and there is no library for doing all the augmentation work on the GPU, so I was thinking of making an image augmentation library that supports CUDA for PyTorch.

What are your thoughts?
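As a rough illustration of what GPU-side augmentation could look like with plain tensor ops (toy batch and made-up transform parameters):

import torch

def augment_on_gpu(batch: torch.Tensor) -> torch.Tensor:
    # batch: (N, C, H, W) float tensor already on the GPU, values in [0, 1]
    flip_mask = torch.rand(batch.size(0), device=batch.device) < 0.5
    batch[flip_mask] = batch[flip_mask].flip(-1)          # random horizontal flip
    brightness = 1.0 + 0.2 * (torch.rand(batch.size(0), 1, 1, 1, device=batch.device) - 0.5)
    return (batch * brightness).clamp(0.0, 1.0)           # random brightness jitter

images = torch.rand(32, 3, 224, 224, device="cuda")       # stand-in for a real batch
augmented = augment_on_gpu(images)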

8 Comments
2024/11/28
11:52 UTC

0

Generate Up to 256 Images per prompt from SDXL for Free!

The other day, I posted about building the cheapest API for SDXL at Isekai • Creation, a platform to make Generative AI accessible to everyone. You can join here: https://discord.com/invite/isekaicreation

What's new:

- Generate up to 256 images with SDXL at 512x512, or up to 64 images at 1024x1024.

- Use any model you like; all models on Hugging Face are supported.

- Stealth mode if you need to generate images privately

Right now, it's completely free for anyone to use while we're growing the platform and adding features.

The goal is simple: empower creators, researchers, and hobbyists to experiment, learn, and create without breaking the bank. Whether you're into AI, animation, or just curious, join the journey. Let's build something amazing together! Whatever you need, I believe there will be something for you!

https://discord.com/invite/isekaicreation

0 Comments
2024/11/28
06:11 UTC

0

Building a Free Data Science Learning Community ā€“ Join the Discord!

Hey Reddit, I'm Ryan! I'm working on DataScienceHive.com, a free platform for anyone who's into data science, analytics, or engineering, or anyone just curious about it. My goal is to create structured learning paths using 100% free content and build a community where people can learn, collaborate, and work on real-world projects together.

The site is still in its early stages (I'm teaching myself web development along the way), so it's not perfect yet. But we've already got an awesome and growing Discord community with 15+ active members who are sharing ideas, brainstorming learning paths, and shaping what this platform will become.

Hereā€™s what Iā€™m trying to build:

-A place to explore free, structured learning paths with curated open resources.

-Opportunities to work on real-world projects to apply what you've learned.

-A welcoming and collaborative community where beginners and pros can grow together.

I'd love your help to bring this vision to life. Whether you want to help test the site, share ideas, curate content for learning paths, or just hang out and chat, there's a place for you here.

Jump into the Discord and join the conversation: https://discord.gg/NTr3jVZj

Whether you're here to learn, teach, or connect, you're invited. Let's build something amazing together and make data science education accessible for everyone!

0 Comments
2024/11/28
04:40 UTC

0

The hottest new programming language is English

1 Comment
2024/11/27
23:49 UTC

7

Any good sites to practice linear algebra, statistics, and probability for machine learning?

Hey everyone!
I just got accepted into a master's program in AI (coursework-based), and I'm also a bit nervous. I'm currently working as an app developer, but I want to prepare myself for the math side of things before I start.

Math has never been my strong suit (I've always been pretty average at it), and looking at the math for linear algebra reminds me of high school math, but I'm sure it's more complex than that. I'm kind of nervous about what's coming, and I really want to prepare so I'm not overwhelmed when my program starts.

I still remember when I tried to join a lab for AI in robotics. They told me I just needed "basic kinematics" to prepare, and then handed me problems on robotic hand kinematics! It was such a shock, and I don't want to go through that again when I start my Master's.

I know they'll cover the foundations in the first semester, but I really want to be prepared ahead of time. Does anyone know of good websites or resources where I can practice linear algebra, statistics, and probability for machine learning? Ideally, something with answer keys or explanations so I can learn effectively without feeling lost.

Does anyone have recommendations for sites, tools, or strategies that could help me prepare? Thanks in advance! 🙏

2 Comments
2024/11/27
17:23 UTC

0

On the tokenization step, I encountered SentencePiece.

In SentencePiece, should I pass the text as it is, or is it okay if I split the text on whitespace and then train the SentencePiece tokenizer?
For example: "i love ml"
-----> ['i', 'love', 'ml']
------> and pass these tokens to train SentencePiece?
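A sketch of the usual workflow with the sentencepiece Python package (file names and vocab size are placeholders): the trainer expects raw sentences, one per line, and handles whitespace itself via the ▁ meta-symbol, so pre-splitting into words is generally unnecessary.

import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",        # raw text, one sentence per line, e.g. "i love ml"
    model_prefix="unigram_8k",
    vocab_size=8000,
    model_type="unigram",      # or "bpe"
)

sp = spm.SentencePieceProcessor(model_file="unigram_8k.model")
print(sp.encode("i love ml", out_type=str))  # e.g. ['▁i', '▁love', '▁ml']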

3 Comments
2024/11/27
03:45 UTC
