/r/CUDA

8,785 Subscribers

17

Playing 2048 with CUDA

This article explores how CUDA C++ is leveraged to accelerate an AI for the game 2048. The techniques discussed can be widely applied.

https://trokebillard.com/blog/2048-ai/

Feel free to share your thoughts.

I'm looking to meet fellow CUDA developers. Please DM me.

10 Comments
2024/11/30
22:11 UTC

5

How many warps run on an SM at a particular instant of time

Hi, I am new to CUDA programming.

I want to know the maximum number of warps that can be issued instructions in a single SM at the same instant, given that an SM supports 2048 resident threads, i.e. 64 warps per SM.
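
For reference, these per-SM limits can be queried at runtime. A minimal sketch (assuming device 0) that prints the maximum resident threads per SM, the warp size, and the resident-warp limit that follows from them (e.g. 2048 / 32 = 64):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Max resident threads per SM: %d\n", prop.maxThreadsPerMultiProcessor);
    printf("Warp size:                   %d\n", prop.warpSize);
    printf("Max resident warps per SM:   %d\n",
           prop.maxThreadsPerMultiProcessor / prop.warpSize);
    return 0;
}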

When warp switching happens, are physically new threads running, or are the same physical execution units running logically new threads?

If it's physically new threads running, does that mean we never utilize all the physical threads (CUDA cores) of an SM?

I am having difficulty understanding these basic questions; it would be really helpful if anyone can help me here.

Thanks

9 Comments
2024/11/30
19:38 UTC

5

Loading a matrix tile from global memory to shared memory

Hi guys, I'm reading this code and am confused about how the process of loading a matrix tile from global memory to shared memory works. As I understand it, the author performs matrix multiplication on two 4096-by-4096 matrices laid out as 1D arrays, and he declares his kernel as:

  • A 2D grid of 32-by-32 thread blocks
  • Each block is a 1D array of 512 threads

https://preview.redd.it/iwqhxvd6v54e1.png?width=766&format=png&auto=webp&s=17769771f25d949036f3da1c01780e361b763c25

Regarding the loading process of matrix A alone (which can be accessed by global_ptr in the code), here's what I'm able to grasp from the code:

Each block in the grid will load (in a vectorized manner) a 128-by-128 tile of matrix A into its shared memory. However, since there are only 512 threads per block, each block can only load 1/4 of the tile (referred to as a sub-tile from now on) at a time. This means that each thread will have access to 8 consecutive elements of the matrix, so 512 threads should be able to cover 128x32 elements. The local position of an element inside this sub-tile is represented by offset_.row and offset_.col in the code.

To assign different sub-tiles (row-wise) to different thread blocks, the author defines a variable called blockOffset=blockIdx.y * Threadblock::kM * K, where Threadblock::kM=128 refers to the number of rows of a tile, and K=4096 is the number of columns of matrix A. So for different blockIdx.y, global_ptr + blockOffset will give us the first elements of the first sub-tiles of each row in matrix A (see the small red square in the figure below).

https://preview.redd.it/z5c5d2gdv54e1.png?width=921&format=png&auto=webp&s=91ac222cafa3b0982e1269676ceeed8addfd616a

Next, the author converts the local positions (offset_.row, offset_.col) within a sub-tile to the linear global positions with respect to the 4096-by-4096 matrix A: global_idx = offset_.row * K + offset_.col. So elements with the same (offset_.row, offset_.col) across different sub-tiles will have the same global_idx in the 4096x4096 1D array.

https://preview.redd.it/qzfmvzffv54e1.png?width=821&format=png&auto=webp&s=52eb7a6bb7135cc349f2e2b94c5583d66b8e54b2

Then, to distinguish these orange positions, the author computes src = global_ptr + row * K + global_idx, which results in the figure below.

https://preview.redd.it/r1hsfn5jv54e1.png?width=1051&format=png&auto=webp&s=149f00edad20a975b891465b3a8724e7e1a09c22

However, as can be seen, elements across sub-tiles on the same row will access the same position (same color) in the 4096x4096 1D array.
Can someone explain how this indexing scheme can cover all 4096x4096 elements of matrix A? I'll be thankful for any help or guidance!! 🙏🙏🙏

Link to the code: https://forums.developer.nvidia.com/t/cuda-kernel-slower-when-using-cuda-pipelines-despite-avoiding-bank-conflicts/280643
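
For concreteness, here is a small host-side sketch of the index arithmetic described above. The numbers (Threadblock::kM = 128, K = 4096, 512 threads, 8 consecutive elements per thread) are taken from the post, while the mapping from thread id to (offset_.row, offset_.col) is my own assumption, purely to print which global elements a few threads of one sub-tile load would touch:

#include <cstdio>

int main() {
    const int kM = 128, K = 4096;            // tile rows, matrix columns
    const int kTileCols = 128;               // tile is kM x kTileCols
    const int kPerThread = 8;                // consecutive elements per thread
    const int blockIdxY = 1;                 // one block row, for illustration

    const int blockOffset = blockIdxY * kM * K;      // first element of this block's tile
    for (int tid = 0; tid < 4; ++tid) {              // just a few threads for brevity
        int row = (tid * kPerThread) / kTileCols;    // offset_.row within the 128-column sub-tile
        int col = (tid * kPerThread) % kTileCols;    // offset_.col within the sub-tile
        int global_idx = row * K + col;              // linearized w.r.t. the 4096x4096 matrix
        printf("thread %d -> elements [%d, %d)\n",
               tid, blockOffset + global_idx, blockOffset + global_idx + kPerThread);
    }
    return 0;
}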

0 Comments
2024/11/30
15:22 UTC

0

cuDF and CuPy

I tried a lot but was unsuccessful in installing these libs. Does anyone know of any solutions or guides for this?

2 Comments
2024/11/29
20:55 UTC

13

Need resources/guidance to learn GPU programming

Hi there, I used to work as an intern making drones autonomous. One problem that stuck with me was running ORB-SLAM3 on a Jetson Nano: most of the CPU's computing power is consumed by SLAM alone, so navigation and motion planning would be really difficult to execute on the embedded device as well. My plan is to parallelize the SLAM as much as possible, since the Nano has a lot of GPU cores that are underutilized.

Can anyone suggest textbooks for learning GPU programming with C++ and CUDA?

6 Comments
2024/11/29
04:24 UTC

12

Confusion about NVIDIA matrix multiplication guide

I am reading the matrix multiplication background user guide by NVIDIA.

I am confused by the following statement:

nvidia tiled matrix mul

A is an M x K matrix, B is a K x N matrix, and C is an M x N matrix.

If I understand tiled matrix multiplication correctly, C is tiled into multiple submatrices, and each submatrix is computed from certain rows and columns of A and B, respectively.

The problem is, since M = 6912 and N = 2048, C will be tiled into (6912 x 2048) / (256 x 128) = 432 submatrices, while an A100-SXM-80GB has only 108 SMs.

That means each SM needs to handle four tiles.

What's more, in the Wave Quantization chapter, it says that:

An NVIDIA A100 GPU has 108 SMs; in the particular case of 256x128 thread block tiles, it can execute one thread block per SM, leading to a wave size of 108 tiles that can execute simultaneously.

But the A100 has a maximum of only 2048 threads per SM, which is far smaller than 256 x 128?

These two questions may be quite dumb, but I hope someone can enlighten me.
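
For the first point, here is the arithmetic spelled out as a tiny sketch, using only the figures quoted above (M = 6912, N = 2048, 256x128 output tiles, 108 SMs):

#include <cstdio>

int main() {
    const int M = 6912, N = 2048;
    const int tileM = 256, tileN = 128;
    const int SMs = 108;

    int tiles = (M / tileM) * (N / tileN);   // 27 * 16 = 432 output tiles
    int waves = tiles / SMs;                 // 432 / 108 = 4 full waves
    printf("tiles = %d, full waves = %d\n", tiles, waves);
    return 0;
}

With one 256x128 tile per SM (as the quoted wave-quantization passage states), the 432 tiles are processed as 4 full waves of 108 thread blocks each.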

Here are my information sources:

nvidia matrix performance guide

A100 gpu architecture

5 Comments
2024/11/28
08:37 UTC

6

Help! Simple shared memory usage.

Hello, I am a student new to CUDA.

I have an assignment to implement flash attention in CUDA with shared memory.

I have read some material but I just don't know how to apply it.

For example, this is a 1D kernel launch:

__global__ void RowMaxKernel(float *out, float *in, int br, int bc) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < br) {
        float max_val = in[i * bc];
        for (int j = 1; j < bc; j++) {
            max_val = fmaxf(max_val, in[i * bc + j]);
        }
        out[i] = max_val;
    }
}

This is a 2D kernel launch:

__global__ void QKDotAndScalarKernel(float *out, float *q, float *k, int br, int bc, int d, float scalar) {

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    if (i < br && j < bc) {
        float sum = 0.0F;
        for (int t = 0; t < d; t++) {
            sum += q[i * d + t] * k[j * d + t];
        }
        out[i * bc + j] = sum * scalar;
    }
}

None of the TAs or students are providing help. Could somebody please be so kind as to demonstrate how to use shared memory with these two example kernels?
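
For reference, here is a minimal sketch (not a provided assignment solution) of one way shared memory could be used with the first kernel: launch one block per row, let each thread scan a strided slice of that row, then reduce the partial maxima in shared memory. It assumes blockDim.x is a power of two and that the dynamic shared-memory size is passed at launch:

#include <cfloat>   // FLT_MAX

__global__ void RowMaxSharedKernel(float *out, const float *in, int br, int bc) {
    extern __shared__ float smem[];              // blockDim.x floats, sized at launch
    int row = blockIdx.x;
    if (row >= br) return;

    // Each thread computes a partial max over a strided portion of the row.
    float max_val = -FLT_MAX;
    for (int j = threadIdx.x; j < bc; j += blockDim.x) {
        max_val = fmaxf(max_val, in[row * bc + j]);
    }
    smem[threadIdx.x] = max_val;
    __syncthreads();

    // Tree reduction of the partial maxima in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            smem[threadIdx.x] = fmaxf(smem[threadIdx.x], smem[threadIdx.x + stride]);
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) out[row] = smem[0];
}

// Example launch: RowMaxSharedKernel<<<br, 256, 256 * sizeof(float)>>>(out, in, br, bc);

The same staging pattern (copy a tile of q and k into shared memory, __syncthreads(), then compute from shared memory) carries over to the second kernel.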

2 Comments
2024/11/28
07:50 UTC

1

Hash tables in CUDA program, bug!

So, I have this program where I count the number of times a string appears in a given text file. I've defined an upper limit on the length of the strings to be compared and analyzed. My code finds all possible substrings of that upper-limit length and shorter, and converts each into a hash value using a hash function. The code runs smoothly in C++, but when I rewrote it for CUDA C++ it just doesn't count anything: it runs and every time reports "Substring not found!". Also, the CUDA program takes the same time in all cases, which suggests it's not doing things properly and is stuck in some particular area.
So, if someone could please look at this excerpt of the program and let me know of any possible flaws, it would be very helpful. Please let me know if more details are needed; I'm happy to discuss. Here is the CUDA kernel for my program:

__global__ void countSubstringsKernel(const char* content, int* substringCount, int contentLength, int maxSubstringLength) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= contentLength) return;
    // printf("Block ID: %d, Block Dim: %d, Thread ID: %d\n", blockIdx.x, blockDim.x, threadIdx.x);
    // std::cout<<blockIdx.x<<"and"<<blockDim.x<<"and"<<threadIdx.x;


    for (int len = 1; len <= maxSubstringLength; ++len) {
        int hashValue = 0;
        int power = 1;
        // compute the hash for the current substring
        for (int j = i; j < i + len && j < contentLength; ++j) {
            hashValue = (hashValue + (content[j] - 'a' + 1) * power) % MOD;
            power = (power * PRIME) % MOD;

        }

        // atomically increment the hash count
        atomicAdd(&substringCount[hashValue], 1);
    }
}
0 Comments
2024/11/27
08:43 UTC

18

Writing generalizable optimized kernels

Newbie to CUDA here (undergrad CS/math background), currently optimizing CUDA kernel(s). I use Nsight Compute and Nsight Systems.

My target device is unfortunately not my current device, and details regarding its architecture/specs are unknown at the moment.

With the current kernel, I'm able to obtain max warp occupancy, but I would like to write code that supports reducing register usage, as the end device most likely does not provide enough registers per thread for max warp occupancy.

I have a couple of questions, any help would be appreciated :)

I'm considering using 16-bit __halfs, but I know CUDA registers are 32 bits. Does the NVCC/PTX compiler know to pack two __halfs into one register? How? Is it better to explicitly use __half2 instead? Does reading/writing a __half become as expensive as (or more expensive than) a 32-bit float?

Warp shuffling is also used across multiple registers, but I believe shuffling is limited to 32 bits. So is shuffling __halfs a no-go? Do we have to shuffle __half2 values and unpack them? What are the potential costs of this?

I currently use shared memory, but with hard-coded sizes. Ideally, if our device can't reach max warp occupancy with 32-bit variables, I'd like to switch over to 16-bit halfs. Also, if the device doesn't have enough shared memory, I'd like to split shared memory usage into smaller "chunks": load a smaller portion from global to shared, use it for a lot of computation, then load the second batch, and so on (i.e., reuse shared memory). Is this potentially a bad idea? If it's bad, is it better to just divide the problem into smaller pieces and load into shared memory once? Or could it be good, because one block then alternates between two states, heavy memory reads/writes and heavy computation, allowing warps waiting on memory operations to be set aside?

For writing highly optimized yet general CUDA kernels targeting different devices, do you have any suggestions? Are launch-bounds parameters necessary? Will I have to write separate kernels for devices that can't reach max occupancy unless I use __halfs? I assume there is no NVCC/PTX compiler flag to automatically convert all 32-bit register variables into 16-bit ones for a specific kernel? I've tried maxrregcount, but it degrades performance a ton since my 32-bit usage is already near the register limit.
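
On the __half questions, here is a hedged sketch (my own illustration, not a recommendation for this specific kernel) of the usual pattern: keep pairs of 16-bit values packed in a __half2 so they share one 32-bit register, do the arithmetic with the packed intrinsics, and shuffle the pair as a single 32-bit word so one warp shuffle moves both halves:

#include <cuda_fp16.h>

__global__ void half2_sketch(const __half2 *in, __half2 *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    __half2 v = __hmul2(in[i], in[i]);       // packed multiply: both halves in one instruction

    // Bit-cast the packed pair to a 32-bit word, shuffle it (one shuffle moves
    // two 16-bit values), then cast back.
    unsigned int bits = *reinterpret_cast<unsigned int *>(&v);
    bits = __shfl_down_sync(0xffffffffu, bits, 1);
    __half2 neighbor = *reinterpret_cast<__half2 *>(&bits);

    out[i] = __hadd2(v, neighbor);           // packed add with the neighbor lane's pair
}

Recent cuda_fp16.h versions also provide __shfl_*_sync overloads for __half2 directly, so the explicit bit-cast may not be necessary.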

2 Comments
2024/11/27
01:17 UTC

6

Help! Odd results when running program in quick succession

UPDATE: It turns out the issue was with RNG seeding; I didn't realise that time(NULL) only gives the time to the nearest second! I'm now using randutils to create separate seeds for each thread, and it's working fine.
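
For anyone hitting the same issue, here is a minimal sketch of the idea behind the fix (not the actual randutils-based code): take one run-level seed from a sub-second clock on the host and give every thread its own cuRAND subsequence, so two runs launched within the same second still produce independent streams per thread:

#include <chrono>
#include <curand_kernel.h>

__global__ void init_rng(curandState *states, unsigned long long seed, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;
    // Same seed for the whole run, distinct subsequence per thread.
    curand_init(seed, tid, 0, &states[tid]);
}

// Host side: seed from a high-resolution clock instead of time(NULL), e.g.
//   unsigned long long seed =
//       std::chrono::high_resolution_clock::now().time_since_epoch().count();
//   init_rng<<<blocks, threads>>>(d_states, seed, n);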

I have CUDA simulations that I am executing in rapid succession (using Python's subprocess to run them). In my simulations I have random processes occurring. If I have a one-second gap between each run, my results are as expected. However, if I do not, then the rate at which random processes occur is incorrect... photos below.

I've checked for memory leaks and fixed them, and I'm not using more VRAM than my device has. I do have the number of threads set to the number of CUDA cores on my device.

So far I know that normal functioning requires a gap of between 0.3 and 0.7 s.

I am running the simulations sequentially for different values of dirTheta (oops, I forgot to label the units as radians).

With a one second wait:

With 1 second wait: What I would expect, some random noise

Without a one second wait:

Without the wait: clearly some correlated behaviour

4 Comments
2024/11/25
21:02 UTC

13

Feasibility of porting a mutable hash map from host memory (DRAM) to GPU memory (HBM)

Hi experts, I am looking for advice to move a mutable hash map from host DRAM to GPU HBM.

Currently, we maintain a large lookup hash map in host memory. The hash map is read during user-request serving time and updated concurrently by a background cron job. The usage of the hash map is as follows: each user request carries a list of IDs of some sort, and the IDs are used to look up tensor data in the hash map. The lookup results are copied to GPU memory for computation for each user request. In this usage pattern, the GPU memory utilization percentage is not very high.

The optimization we are looking into is to increase the HBM utilization rate and hopefully increase overall performance as well. The main motivation is that the hash map is growing over time and the host DRAM size might become a bottleneck. Conceptually, we will need to mirror the operations of the current hash map onto a new counterpart that sits in HBM. Specifically, we need something like the following (in pseudocode; very high-level):

// request serving time
vector<MyTensor> vec;
for (auto id : ids):
  auto tensor_ptr = gpu_lookupMap.get(id)
  vec.push_back(tensor_ptr)
gpu.run(vec)

// background update
// step 1: build the update batch in host memory
Buffer buffer
for (auto record : newUpdates):
  buffer.add(record)
// step 2: apply the batch to the GPU-resident map
gpu_lookupMap.update(buffer)

In this way, host DRAM doesn't need to be big enough to contain the entire hash map but rather big enough to accommodate the temporary buffer during update. We will also increase the ROI on the GPU HBM. So, here are my questions.

  1. Is our intended new flow feasible with CUDA?

  2. What caveats are there for having the hash map (mutated concurrently) in GPU memory?

Thank you in advance for your kind assistance.

3 Comments
2024/11/24
18:52 UTC

5

Can block clusters be made up of more than a single SM?

Link: https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51119/

The information I have in my head is inconsistent. I thought that thread block clusters could only group blocks within a single SM, but in this video (past the 12-minute mark) he implies that they can span up to 16 SMs, which would allow the blocks in a cluster to access up to 3648 KB of shared memory. Never mind that 224 * 16 is 3584.

Could you set me straight on this?
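
For what it's worth, here is a hedged sketch (my own illustration, requiring compute capability 9.0 / Hopper and a recent CUDA toolkit, compiled with -arch=sm_90) of a thread block cluster whose blocks live on different SMs and read each other's shared memory through the distributed shared memory API; the 2x1x1 cluster size is arbitrary:

#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void __cluster_dims__(2, 1, 1) cluster_sketch(int *out) {
    __shared__ int local[32];                       // assumes 32 threads per block at launch
    cg::cluster_group cluster = cg::this_cluster();

    local[threadIdx.x] = (int)cluster.block_rank();
    cluster.sync();                                 // every block in the cluster has written

    // Map the peer block's shared memory into this block's address space and read it.
    unsigned int peer = (cluster.block_rank() + 1) % cluster.num_blocks();
    int *peer_local = cluster.map_shared_rank(local, peer);
    if (threadIdx.x == 0) out[cluster.block_rank()] = peer_local[0];
}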

0 Comments
2024/11/24
15:03 UTC

23

Learning CUDA or any other parallel computing and getting into the field

I am 40 years old and have been working in C, C++, and Go. Recently, I got interested in parallel computing. Is it feasible to learn, and do I stand a chance of getting jobs in the parallel computing field?

6 Comments
2024/11/23
13:27 UTC

9

Why is float not faster than double in terms of kernel execution?

Edited: This may not be a CUDA-related problem. Running the same multiplication on the CPU also results in the same execution time for float and double.

I'm a beginner in CUDA programming, and I'm working on a simple matrix multiplication program. What I found is that when I change the input and output variable type from double to float, the time spent moving data between host and device is halved, but the time spent on kernel execution is almost the same (even for large matrix sizes). I've already tried profiling the program with Nsight Compute; it confirmed that the two kernels execute in float and double respectively, and the execution times are almost the same. Does anyone have an idea about this? Thank you.

11 Comments
2024/11/22
20:32 UTC

42

Seeking Advice: Is it too late to pivot toward GPU programming and parallel computing?

Hi everyone,

I'm currently in the 2nd year of my master's program. Before starting my graduate studies, I worked for 3 years as a backend web developer, mainly focusing on building and maintaining web services. Recently, I got an exciting opportunity to work as a research assistant under a professor on a GPU-related project. The work involves using CUDA and Kokkos, and it has sparked a genuine interest in GPU programming, low-level development, and parallel computing.

I've been thinking about pivoting my career in this direction, as I feel the web development field has become highly saturated, making it tough to stand out in the current job market (especially as an international student). Even though I'm completely new to this field, I find it incredibly interesting and believe I can learn and grow in it.

My question is:

  1. Is it a good idea to pivot into GPU programming and parallel computing at this stage in my career?
  2. If so, what skills or topics should I focus on learning to prepare myself for a career in this field?

I’d appreciate any advice, insights, or resources you can share to help me make an informed decision and succeed in this area.

Thank you in advance!

15 Comments
2024/11/21
20:52 UTC

10

How to recreate the performance result of cuBLAS GEMM?

Hi, I'm new to GPU programming and doing research on GEMM optimization. I came across a few online posts (this and this) that mention the performance of cuBLAS GEMM is roughly 50 TFLOPS. I went on Google Colab to confirm this number using the code below (generated by ChatGPT):

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <iostream>
#include <cstdio>
void checkCudaError(cudaError_t status, const char* msg) {
  if (status != cudaSuccess) {
    std::cerr << msg << " Error: " << cudaGetErrorString(status) << std::endl;
    exit(EXIT_FAILURE);
  }
}
void checkCublasError(cublasStatus_t status, const char* msg) {
  if (status != CUBLAS_STATUS_SUCCESS) {
    std::cerr << msg << " Error: " << status << std::endl;
    exit(EXIT_FAILURE);
  }
}
int main() {
  const int N = 8192; // Matrix size (N x N)
  const float alpha = 1.0f, beta = 0.0f;
  // Allocate host memory
  float *h_A, *h_B, *h_C;
  h_A = new float[N * N];
  h_B = new float[N * N];
  h_C = new float[N * N];
  // Initialize matrices
  for (int i = 0; i < N * N; ++i) {
    h_A[i] = 1.0f;
    h_B[i] = 2.0f;
    h_C[i] = 0.0f;
  }
  // Allocate device memory
  float *d_A, *d_B, *d_C;
  checkCudaError(cudaMalloc(&d_A, N * N * sizeof(float)), "CUDA malloc failed for d_A");
  checkCudaError(cudaMalloc(&d_B, N * N * sizeof(float)), "CUDA malloc failed for d_B");
  checkCudaError(cudaMalloc(&d_C, N * N * sizeof(float)), "CUDA malloc failed for d_C");
  // Copy data to device
  checkCudaError(cudaMemcpy(d_A, h_A, N * N * sizeof(float), cudaMemcpyHostToDevice), "Memcpy to d_A failed");
  checkCudaError(cudaMemcpy(d_B, h_B, N * N * sizeof(float), cudaMemcpyHostToDevice), "Memcpy to d_B failed");
  // Create cuBLAS handle
  cublasHandle_t handle;
  checkCublasError(cublasCreate(&handle), "cuBLAS initialization failed");
  // Warm-up GEMM to stabilize performance
  checkCublasError(cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                      N, N, N, &alpha, d_A, N, d_B, N, &beta, d_C, N),
                      "cuBLAS Sgemm warm-up failed");
  cudaEvent_t start, stop;
  float time;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord( start, 0 );
  // Perform GEMM
  checkCublasError(cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                      N, N, N, &alpha, d_A, N, d_B, N, &beta, d_C, N),
                      "cuBLAS Sgemm failed");
  cudaEventRecord( stop, 0 );
  cudaEventSynchronize( stop );
  cudaEventElapsedTime( &time, start, stop );
  printf("Time taken for GEMM: %f ms\n", time);
  cudaEventDestroy( start );
  cudaEventDestroy( stop );
  // Cleanup
  delete[] h_A;
  delete[] h_B;
  delete[] h_C;
  cudaFree(d_A);
  cudaFree(d_B);
  cudaFree(d_C);
  cublasDestroy(handle);
  return 0;
}

which outputs about 209 ms for the cublasSgemm kernel. I then calculate the throughput as (2 * M * N * K) / (elapsed_time * 1e12) = (2 * 8192^3) / (0.209 * 1e12) ≈ 5.26 TFLOPS.
Can someone please help clarify this phenomenon? Thank you in advance!

8 Comments
2024/11/20
08:23 UTC

7

Booking system for GPU with other people

Hi everyone,

My friends and I are working on a project: we have access to a GPU, and we want to ensure that each of us can use the GPU when needed. Do you know of any app that allows us to book time slots? Essentially, we’re looking for a shared calendar that’s convenient and easy to use.

Thanks, everyone!

5 Comments
2024/11/18
08:34 UTC

8

Books and resources

I am a backend software engineer and a computer science grad. I am interested in learning CUDA, but reviews say the intro books contain obsolete topics. Should that matter? Can I get any suggestions on which book or website to start with for the fundamentals?

12 Comments
2024/11/18
07:42 UTC

11

Is there any way to trace the interaction between Vulkan and CUDA devices?

Hello everyone! I'm a new researcher working on Vulkan Compute Shader issues. I'm trying to reproduce a branch divergence issue on a Vulkan Compute Shader, but confusingly, the versions with and without divergence have the same average runtime. Through my investigation, I found an interface in NVAPI called NvReorderThread, and I'm wondering if this might be the reason why the issue can't be reproduced.

My questions are:

  • Regardless of whether NvReorderThread is the problem, is there a way to trace which interfaces are being called or how the shader files are ultimately converted? I've tried various profilers (the program is quite simple and runs in less than a second), but for some reason, none of them can capture or analyze the program.
  • Is my suspicion reasonable? I'd like to emphasize that this is about compute shaders, not graphics rendering shaders.

I would greatly appreciate any responses!

0 Comments
2024/11/17
08:21 UTC

1

illegal memory access when using fixed size array

I initialized an array as

_FTYPE_ c_arr[64] = {0.0};

When I try to write c_arr[8] to global memory, I get "Cuda error: Error in matrixMul kernel: an illegal memory access was encountered". However, if I just write c_arr[0] to memory, it works. Does anyone know why this might be?

4 Comments
2024/11/15
19:17 UTC

8

Can’t find CUDA Static libraries

I am trying to export my code as an exe with static libraries, so I can use it on any system with a GPU in the office. Unfortunately, I can’t find the static libraries in my install. When I try to reinstall CUDA, there is no option to install static libraries. Have I completely misunderstood static libraries in CUDA? Do I need to get them elsewhere? Can the dynamic libraries be used as static libraries? I’d appreciate any help.

17 Comments
2024/11/15
14:01 UTC

10

Wondering if anyone understands the GEMM structure of this code

I am trying to implement this CUTLASS version of linear algebra matrix multiplication found here: https://developer.nvidia.com/blog/cutlass-linear-algebra-cuda/

I was wondering if anyone understood what BlockItemsK would be in this picture where the tile from A is 128x8 and the tile from B is 8x128:

https://preview.redd.it/db4ntgir6t0e1.png?width=922&format=png&auto=webp&s=6b6401b5df2b37cfce23914c07cd41b60b01a881

This is the incomplete sample code found on the site:
// Device function to compute a thread block’s accumulated matrix product
__device__ void block_matrix_product(int K_dim) {
    
    // Fragments used to store data fetched from SMEM
    value_t frag_a[ThreadItemsY];
    value_t frag_b[ThreadItemsX];

    // Accumulator storage
    accum_t accumulator[ThreadItemsX][ThreadItemsY];

    // GEMM Mainloop - iterates over the entire K dimension - not unrolled
    for (int kblock = 0; kblock < K_dim; kblock += BlockItemsK) {

        // Load A and B tiles from global memory and store to SMEM
        //
        // (not shown for brevity - see the CUTLASS source for more detail)
        ...

        __syncthreads();
        
        // Warp tile structure - iterates over the Thread Block tile
        #pragma unroll
        for (int warp_k = 0; warp_k < BlockItemsK; warp_k += WarpItemsK) {

            // Fetch frag_a and frag_b from SMEM corresponding to k-index 
            //
            // (not shown for brevity - see CUTLASS source for more detail)
            ...

            // Thread tile structure - accumulate an outer product
            #pragma unroll
            for (int thread_x = 0; thread_x < ThreadItemsX; ++thread_x) {
                #pragma unroll
                for (int thread_y=0; thread_y < ThreadItemsY; ++thread_y) {
                    accumulator[thread_x][thread_y] += frag_a[y]*frag_b[x];
                }
            }
        }

        __syncthreads();
    }   
}
4 Comments
2024/11/14
06:12 UTC

7

Laptop options for CUDA

Hello everyone!

I'm a university student and I write FEM code as research. First I wrote Octave code for it, but because of performance I rewrote it in C++. The code itself has a lot of matrix operations, so I started using CUDA for the matrices. I have a PC with an RTX 2060 (12GB); however, I need a laptop, since I have to do some of the coding at the university. There are occasions where I have to run a quick test of my code to show it to my professors, and internet is not always available at the university. That's why I need a CUDA-capable laptop. I would like to ask for some advice: what kind of laptop should I buy? My budget is 1000 USD at most, but preferably less than that. Would a used, not-so-old workstation with a T-series GPU (with about 4GB) be enough, or should I choose a five-year-old workstation with an RTX 4000? Or maybe a new gaming laptop with an RTX 4050 or 4060 would be better? I have some future plans/project ideas for honing my CUDA skills, so I want it to be a long-term investment.

11 Comments
2024/11/13
13:34 UTC

6

Inheritance and polymorphism

Hi! Do you know of any updated resources or examples that use CUDA with inheritance and polymorphism? I've searched, and most sources say that virtual functions are not supported, but the information is quite old. I tried migrating from inheritance to AoS + tagged union, but I have a very large structure that isn't used often, so this approach isn't ideal for my use case.
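
For context, here is a hedged sketch of the pattern that is generally reported to work: virtual dispatch is fine in device code as long as the object is constructed on the device (the documented restriction is about passing objects with virtual functions from the host into a __global__ kernel):

struct Shape {
    __device__ virtual float area() const = 0;
    __device__ virtual ~Shape() {}
};

struct Circle : public Shape {
    float r;
    __device__ explicit Circle(float r_) : r(r_) {}
    __device__ float area() const override { return 3.14159265f * r * r; }
};

__global__ void area_kernel(float *out) {
    Circle c(2.0f);          // constructed on the device, so its vtable is valid there
    Shape *s = &c;
    *out = s->area();        // virtual call in device code
}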

3 Comments
2024/11/11
16:35 UTC

36

Should I learn CUDA programming?

I have a deep interest in high-performance computing and reinforcement learning. Should I learn CUDA programming to kickstart my journey? Currently, I am a Python developer and have worked with C++ before. Please advise.

27 Comments
2024/11/08
20:12 UTC

9

GPU as a service

Hi all, I have a few GPUs left over from mining, and I'm interested in starting a small-scale GPU-as-a-service. My goal is to set up a simple side income that could help pay off my credit cards, as I already have a primary job.

What steps are needed for getting started with a small-scale GPU-as-a-service business focused on machine learning or AI? Any insights would be greatly appreciated!

Thanks in advance for any advice you can share!

19 Comments
2024/11/08
17:47 UTC

0

Is the 4060 CUDA capable?

I just bought a 4060 for my desktop solely to be able to use CUDA for machine learning tasks. The CUDA compatibility website does not list the desktop 4060 as CUDA capable. Does that mean that I will not be able to use CUDA on my 4060?

12 Comments
2024/11/07
19:26 UTC

12

Why doesn't CUDA have built-in math operator overloading / functions for float4?

float4 a,b,c;
// element-wise multiplication
a = b * c; // does not compile
// element-wise square root
a = sqrtf(a); // does not compile

Why? Is it because nobody uses float4 in computations? Is it only meant for vectorized load operations?

It gets a bit repetitive this way:

// duplicated code x4
a.x = b.x * c.x;
a.y = b.y * c.y;
a.z = b.z * c.z;
a.w = b.w * c.w;
a.x = sqrtf(a.x);
a.y = sqrtf(a.y);
a.z = sqrtf(a.z);
a.w = sqrtf(a.w);

// one-liner no problem but still longer than dot(a)
float dot = a.x*a.x + a.y*a.y + a.z*a.z + a.w*a.w;

// have to write this to calculate cross-product
a.x = b.y*c.z - b.z*c.y;
a.y = b.x*c.z - b.x*c.y; // human error in cross product implementation? yes, probably
a.z = b.y*c.x - b.z*c.x;
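
The usual workaround is to define the element-wise helpers once and reuse them; the CUDA samples ship a helper_math.h header in this spirit. A hedged sketch (my own minimal version, not that header):

#include <cuda_runtime.h>
#include <math.h>

__host__ __device__ inline float4 operator*(float4 b, float4 c) {
    return make_float4(b.x * c.x, b.y * c.y, b.z * c.z, b.w * c.w);
}

__host__ __device__ inline float4 sqrtf4(float4 a) {
    return make_float4(sqrtf(a.x), sqrtf(a.y), sqrtf(a.z), sqrtf(a.w));
}

__host__ __device__ inline float dot(float4 a, float4 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

__global__ void demo(float4 *a, const float4 *b, const float4 *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = sqrtf4(b[i] * c[i]);   // element-wise multiply, then element-wise sqrt
}

The float4 loads and stores stay vectorized; the overloads only remove the per-component boilerplate.
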
14 Comments
2024/11/05
08:54 UTC
