/r/LLVM

Photograph via snooOG

A place to discuss or ask questions relating to the LLVM Compiler Infrastructure.

To learn more about LLVM visit http://llvm.org/

/r/LLVM

3,754 Subscribers

0

Can someone PLEASE just to where and how to begin???

Hello everyone i hope you're doing well

Am a university student and i've been given a project and that's to use LLVM to make a compiler the thing is i don't know what software to use to do such thing i've been trying to use VS studio to code but was unable to compile IR code and now am truly stuck and don't know what to do

2 Comments
2025/01/28
18:27 UTC

2

Building backend for 16-bit minicomputer

Hi,

I have for a while been contemplating building an LLVM backend for a 16-bit minicomputer from the 1980's. The closest computer I could compare this to is the PDP/11. I don't have any experience with building anything in the LLVM ecosystem, so it's a bit overwhelming for me.

  • Where do I start
  • Are there samples I could look at
  • Is this even achievable (I have lot of experience with C, but I mainly write in C#)
  • I might also need to write an assembler

I know the architecure and opcodes for the cpu pretty well as I have implemented a macrocode emulator, a microcode emulator, and implemented the cpu in Verilog from the original design documents.

Thanks for all hints

0 Comments
2025/01/28
13:14 UTC

2

how to download debug binaries?

Hi, I'm interested in debugging LLVM / MLIR itself. However, I can't build it in debug mode as my computer does not have enough memory (It uses > 30GB ram and always sigkills even if I run it with 1 thread).

How do I find and download pre-built binaries in debug mode?

Update: Thanks for comments, it was indeed running out of memory during the linking stage by linker ld.
After everyone's help, I searched and followed these stackoverflow links and it is able to compile debug binaries now (although took more than 8 hours on a single thread with gold linker). Thanks everyone!
Below were the links I referred to:
- https://stackoverflow.com/questions/75741547/how-to-build-llvm-clang-lld-mlir-release-16-x
- https://stackoverflow.com/questions/40536508/is-it-possible-to-compile-link-clang-llvm-using-the-gold-linker
- https://stackoverflow.com/questions/65633304/not-able-to-build-llvm-from-its-source-code

6 Comments
2025/01/27
20:07 UTC

1

lldb configuration ?

https://preview.redd.it/7qvwqs4s9rde1.png?width=2004&format=png&auto=webp&s=009ff42f9a751fcb315fa816b985a89cb784a7b2

Hello,

I have compiled a very simple file with the command

clang++ main.cpp -g -o test

then

lldb ./test
b main.cpp:17 , r, gui

and as you can see in the upper part of the screenshot, the variable named “unordered” wich is an unordered set , has a size of zero , though it doesn’t …the code prints out “1” for unordered.size(), and GDB do not have this problem with the same binary

IDK how to do this in cmd lldb but through vscodium debuging panel the variable inspector allows me to see the “raw” content of “unordered” wich has something like “m_element_count” so there is the information …

any sugestion ? thx

5 Comments
2025/01/18
13:39 UTC

3

Run LLVM Custom Register Allocator as Out of Tree shared library

How do I run a custom register allocator in LLVM using the llc compiler out of tree?

I've written a custom register allocator that calls

...

RegisterRegAlloc MinimalRegAllocator ("minimal", "Minimal Register Allocator", [] () -> FunctionPass * {
        return new RAMinimal ();
});

I successfully compiled this with this Makefile

LLVM_FLAGS := $(shell llvm-config --cxxflags --ldflags --libs --system-libs)
ZSTD_FLAGS := -I/opt/homebrew/Cellar/zstd/1.5.6/include -L/opt/homebrew/Cellar/zstd/1.5.6/lib

all:
	g++ -g -dynamiclib $(ZSTD_FLAGS) $(LLVM_FLAGS) RegAllocMinimal.cpp -o libRegAlloc.dylib

Trying to run this register allocator with llc as follows:

llc -load libRegAlloc.dylib -regalloc minimal test/Foo.c

gives the following error:

llc: for the --regalloc option: Cannot find option named 'minimal'!

Many examples online seem to only show how to do it within the llvm source tree, however, I don't want to have the entire codebase crowding my editor just to write a register allocator.

5 Comments
2025/01/16
03:28 UTC

3

Tips on getting into clang source code

I have experience building a few compilers with LLVM and have a decent understanding of LLVM IR. However, I’m struggling to fully grasp how Clang works.

My goal is to understand the Clang pipeline and eventually contribute to the LLVM codebase (particularly Clang subproject) by submitting PRs. I’ve watched several LLVM talks about Clang, but I still find it challenging to navigate and understand the codebase due to its complexity. Maybe this is the most complex code base that I'm trying to understand.

I’d greatly appreciate any advice or guidance from someone familiar with Clang (or the LLVM codebase in general).

3 Comments
2025/01/11
17:30 UTC

2

Why is the LLVM optimizer breaking my code?

Here is the source code I'm compiling (the syntax is basically the same as Rust) - my compiler uses LLVM for codegen.

struct Thing {
    val: int
}

fn main() {
    let t = Thing{val: 2}
    take(t)
}

fn take(t: Thing) {
    assert(t.val == 2, "expected 2")
}

When I make my compiler attach the byval attribute to function arguments that are passed by value, it generates this IR (with optimization turned off - i.e. -O0).

define void @"ignore/dyn.bl::main"() #1 {
entry:
  %t_ptr = alloca %"ignore/dyn.bl::Thing", align 8
  store %"ignore/dyn.bl::Thing" { i64 2 }, ptr %t_ptr, align 8
  call void @"ignore/dyn.bl::take"(ptr %t_ptr)
  ret void
}

define void @"ignore/dyn.bl::take"(ptr byval(%"ignore/dyn.bl::Thing") %t) #1 {
entry:
  %val_ptr = getelementptr inbounds %"ignore/dyn.bl::Thing", ptr %t, i32 0, i32 0
  %val = load i64, ptr %val_ptr, align 8
  %eq = icmp eq i64 %val, 2
  call void @"std/backtrace/panic.bl::assert"(i1 %eq, %str { ptr @"expected 2", i64 10 })
  ret void
}

Notice how I'm telling LLVM that the pointer argument to take is pass-by-value. This IR looks perfectly fine to me, and when I compile it to an executable and run it, it works fine! No assertion failures.

However, as soon as I enable optimization (-O2), LLVM generates this code.

define void @"ignore/dyn.bl::main"() local_unnamed_addr #1 {
entry:
  %t_ptr = alloca %"ignore/dyn.bl::Thing", align 8
  tail call void @"ignore/dyn.bl::take"(ptr nonnull %t_ptr)
  ret void
}

define void @"ignore/dyn.bl::take"(ptr nocapture readonly byval(%"ignore/dyn.bl::Thing") %t) local_unnamed_addr #1 {
entry:
  %val = load i64, ptr %t, align 8
  %eq = icmp eq i64 %val, 2
  tail call void @"std/backtrace/panic.bl::assert"(i1 %eq, %str { ptr @"expected 2", i64 10 })
  ret void
}

Notice how all the data on the stack are gone! Now the assertion fails. I haven't changed any code in my compiler, just the optimization level I'm passing to LLVM.

If I keep -O2 and comment out the line of code inside my compiler that attaches the byval attribute, it generates this code.

define void @"ignore/dyn.bl::main"() local_unnamed_addr #1 {
entry:
  %t_ptr = alloca %"ignore/dyn.bl::Thing", align 8
  store i64 2, ptr %t_ptr, align 8
  call void @"ignore/dyn.bl::take"(ptr nonnull %t_ptr)
  ret void
}

define void @"ignore/dyn.bl::take"(ptr nocapture readonly %t) local_unnamed_addr #1 {
entry:
  %val = load i64, ptr %t, align 8
  %eq = icmp eq i64 %val, 2
  tail call void @"std/backtrace/panic.bl::assert"(i1 %eq, %str { ptr @"expected 2", i64 10 })
  ret void
}

This code works fine too.

Why does the LLVM optimizer decide that, when I'm passing something byval, it can just erase the data and pass a pointer to uninitialized memory instead? That seems totally broken, so I must be misunderstanding something about that attribute, or I'm using LLVM wrong somehow.

7 Comments
2025/01/01
06:13 UTC

5

MSVC vs LLVM for Windows C++ Development: Which One’s Better?

I’m working on a Windows-only C++17 project and trying to decide between MSVC and LLVM/Clang. I know MSVC is the go-to for Windows dev, but I’ve heard LLVM is getting more popular for C++. Has anyone here used both for Windows development? What’s your experience? Is MSVC still the best for performance, or does LLVM have any advantages on Windows? Would love to hear your thoughts!

6 Comments
2024/12/27
09:09 UTC

2

Confused about `byval` attribute

I'm using LLVM for codegen in my compiler. I'm using pointers for function arguments with aggregate types. In other words, if a function argument in my high-level language is an aggregate type (struct, array, etc), then I pass it by reference in my generated LLVM code. So far, this works perfectly all the time, and I don't need to generate copies of these arguments because my compiler enforces move semantics (i.e. it's safe to pass references, even when passing by value, because the value is considered "moved").

In other words, this high level code

struct Thing {}

fn take(thing: Thing) {}

fn main() {
    take(Thing{})
}

would compile to this LLVM IR

%"Thing" = type {}

define void @"main"() #0 {
entry:
  %arg_0_literal_ptr = alloca %"Thing", align 8
  call void @"take"(ptr nonnull %arg_0_literal_ptr)
  ret void
}

define void @"take"(ptr readonly %thing) #0 {
entry:
  ret void
}

define void @main() {
entry:
  call void @"main"()
  ret void
}

Notice how the generated LLVM IR has never copies the argument to `take`.

Recently, I decided to disable move semantics, so I needed to automatically copy function arguments when passing by value. I figured I could keep aggregate arguments types as pointer types, and just add the `byval` attribute to them to make LLVM automatically make copies of them for me. The docs for this attribute state:

The attribute implies that a hidden copy of the pointee is made between the caller and the callee, so the callee is unable to modify the value in the caller.

To me, this means "LLVM will make sure to generate a safe copy of the data reference by a `byval` pointer argument for the callee so the callee can't mess with the caller's data".

So, all I did was add the `byval` attribute to aggregate function arguments, and all of a sudden my code segfaults! What?? How?? To be clear, the generated LLVM code works perfectly until I simply add `byval` to function arguments that are pointers to aggregate types, and now it's all broken. I can't fathom how that's possible, so I figure I must be totally misunderstanding what that attribute does.

2 Comments
2024/12/22
06:42 UTC

1

Unable to cross compile libunwind for 32-bit (Arch linux)

So I'm trying to compile libc++, libc++abi, libunwind and compiler-rt for 32-bit on a 64-bit install of Arch Linux, but for some reason LLVM is adding "-m64" to compile arguments which is resulting in x86_64 code trying to be linked into i386 binary.

1 Comment
2024/12/18
01:29 UTC

4

There are some good sources to learn to generate LLVM IR from scratch?

I've already learn how LLVM IR works, writing IR that works is pretty trivial now, but I'm struggling on how to generate IR from an AST without LLVM C++ Codegen library, could you give me some sources on how to learn it? I think that maybe some non-llvm content would help too. Thx.

1 Comment
2024/12/13
03:05 UTC

1

LLVM build failure on Solaris

Hiya, so we're doing an LLVM 16.0 build, and it all seems to work, right up until it goes to link llvm-tblgen, about 3% into the build. llvm-tblgen, apparently, needs arc4random. Or more specifically, ../../lib/libLLVMSupport.a(Process.cpp.o): in function `llvm::sys::Process::GetRandomNumber()' - Process.cpp:(.text+0xb9c). Alright, that's fine. Arc4random() and friends are in libbsd.so on this system, due to Solaris 10 not actually having those functions. We made damn sure -lbsd and both -L and -R directories pointing to /opt/FSYS/packages/lib (where libbsd is) are included in our linker flags, and they are; you can see them in the link.txt linker script. Despite that, and for no reason we can accurately determine, the linker sees we asked for libbsd, sees the file, opens it... and utterly and completely ignores the very clearly obvious set of arc4random functions in said libbsd.so. Trust us, we checked. They're there. We captured a full run of the link attempt, using GCC 9.5.0 and GNU Binutils 2.43, and it is here. If anyone knows wt actual f is going on here, please, let us know, cause this is super weird. https://pastebin.com/rzYM670B

0 Comments
2024/12/06
03:35 UTC

1

Getting “Failed to set breakpoint site at ….. Unable to write breakpoint trap to memory”

Hi. I'm compiling a project in Swift and debugging it using the lldb. A couple of weeks ago, it was working just fine. But now I'm getting this message, and my breakpoints aren't working anymore.

Could you give me some tips on where I should start to investigate the problem?

0 Comments
2024/12/03
12:59 UTC

3

Advice on migrating from LLVM legacy FunctionPassManager to new PassManager

I currently have a compiler where I use the legacy FunctionPassManager. My code for this is essentially identical to the Kaleidoscope implementation here: https://llvm.org/docs/tutorial/BuildingAJIT2.html.

Here is the relevant snippet from the tutorial:

class KaleidoscopeJIT {
private:
  ExecutionSession ES;
  RTDyldObjectLinkingLayer ObjectLayer;
  IRCompileLayer CompileLayer;
  IRTransformLayer TransformLayer;

  DataLayout DL;
  MangleAndInterner Mangle;
  ThreadSafeContext Ctx;

public:

  KaleidoscopeJIT(JITTargetMachineBuilder JTMB, DataLayout DL)
      : ObjectLayer(ES,
                    []() { return std::make_unique<SectionMemoryManager>(); }),
        CompileLayer(ES, ObjectLayer, ConcurrentIRCompiler(std::move(JTMB))),
        TransformLayer(ES, CompileLayer, optimizeModule),
        DL(std::move(DL)), Mangle(ES, this->DL),
        Ctx(std::make_unique<LLVMContext>()) {
    ES.getMainJITDylib().addGenerator(
        cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess(DL.getGlobalPrefix())));
  }

static Expected<ThreadSafeModule>
optimizeModule(ThreadSafeModule M, const MaterializationResponsibility &R) {
  // Create a function pass manager.
  auto FPM = std::make_unique<legacy::FunctionPassManager>(M.get());

  // Add some optimizations.
  FPM->add(createInstructionCombiningPass());
  FPM->add(createReassociatePass());
  FPM->add(createGVNPass());
  FPM->add(createCFGSimplificationPass());
  FPM->doInitialization();

  // Run the optimizations over all functions in the module being added to
  // the JIT.
  for (auto &F : *M)
    FPM->run(F);

  return M;
}

I'm struggling to understand how to adapt this to use the new PassManager, since I will also have to change how the TransformLayer is used `TransformLayer(ES, CompileLayer, optimizeModule)`, since optimizeModule must return a ThreadSafeModule and I'm not sure how to do that with the new PassManager.

I have read the docs on using the new pass manager and I have been looking at how other people have done the migration on their Githubs but I can't find an example that is similar to mine.

I would really appreciate any pointers, or if someone has resources to share. Thanks in advance!

0 Comments
2024/11/28
00:56 UTC

3

LLVM-IR/MLIR bindings for Rust

I have a compiler project which I have been working on for close to three months. The first iteration of development, I was spawning actual assembly code and then one month ago my friend and I transferred the code to LLVM. We are developing the entire compiler infrastructure in C++.

Since LLVM-IR and MLIR are natively in C++, is there any way to bring the core to Rust? Because we could frankly use a lot of type safety, traits, memory safety, etc. Rust provides over C++.

Any ideas or suggestions?

3 Comments
2024/11/26
15:06 UTC

1

How do I run opt on a specific loop in Input IR?

I want to run a loop pass, which in my case is IndVars, but I want to run the pass only on a specific loop. How do I use opt tool to achieve this? I'm hoping for answers using new pass manager.

0 Comments
2024/11/20
17:03 UTC

1

How do I get llvm to return an array of values using calc function.

Hey guys I am starting to learn llvm. I have successfully implemented basic DMAS math operations, now I am doing vector operations. However I always get a double as output of calc, I believe I have identified the issue, but I do not know how to solve it, please help.

I believe this to be the issue:

    llvm::FunctionType *funcType = llvm::FunctionType::
get
(builder.
getDoubleTy
(), false);
    llvm::Function *calcFunction = llvm::Function::
Create
(funcType, llvm::Function::ExternalLinkage, "calc", module.
get
());
    llvm::BasicBlock *entry = llvm::BasicBlock::
Create
(context, "entry", calcFunction);    llvm::FunctionType *funcType = llvm::FunctionType::get(builder.getDoubleTy(), false);
    llvm::Function *calcFunction = llvm::Function::Create(funcType, llvm::Function::ExternalLinkage, "calc", module.get());
    llvm::BasicBlock *entry = llvm::BasicBlock::Create(context, "entry", calcFunction);

The return function type is set to DoubleTy. So when I add my arrays, I get:

Enter an expression to evaluate (e.g., 1+2-4*4): [1,2]+[3,4]
; ModuleID = 'calc_module'
source_filename = "calc_module"

define double u/calc() {
entry:
  ret <2 x double> <double 4.000000e+00, double 6.000000e+00>
}
Result (double): 4

I can see in the IR that it is successfully computing it, but it is returning only the first value, I would like to print the whole vector instead.

I have attached the main function below. If you would like rest of the code please let me know.

Main function:

void 
printResult
(llvm::GenericValue 
gv
, llvm::Type *
returnType
) {
    
//
 std::cout << "Result: "<<returnType<<std::endl;
    
if
 (
returnType
->
isDoubleTy
()) {
        
//
 If the return type is a scalar double
        double resultValue = 
gv
.DoubleVal;
        std::cout 
<<
 "Result (double): " 
<<
 resultValue 
<<
 std::
endl
;
    } 
else
 
if
 (
returnType
->
isVectorTy
()) {
        
//
 If the return type is a vector
        llvm::VectorType *vectorType = llvm::
cast
<llvm::VectorType>(
returnType
);
        llvm::ElementCount elementCount = vectorType->
getElementCount
();
        unsigned numElements = elementCount.
getKnownMinValue
();

        std::cout 
<<
 "Result (vector): [";
        
for
 (unsigned i = 0; i < numElements; ++i) {
            double elementValue = 
gv
.AggregateVal
[
i
]
.DoubleVal;
            std::cout 
<<
 elementValue;
            
if
 (i < numElements - 1) {
                std::cout 
<<
 ", ";
            }
        }
        std::cout 
<<
 "]" 
<<
 std::
endl
;

    } 
else
 {
        std::cerr 
<<
 "Unsupported return type!" 
<<
 std::
endl
;
    }
}

//
 Main function to test the AST creation and execution
int 
main
() {
    
//
 Initialize LLVM components for native code execution.
    llvm::
InitializeNativeTarget
();
    llvm::
InitializeNativeTargetAsmPrinter
();
    llvm::
InitializeNativeTargetAsmParser
();
    llvm::LLVMContext context;
    llvm::IRBuilder<> 
builder
(context);
    auto module = std::
make_unique
<llvm::Module>("calc_module", context);

    
//
 Prompt user for an expression and parse it into an AST.
    std::string expression;
    std::cout 
<<
 "Enter an expression to evaluate (e.g., 1+2-4*4): ";
    std::
getline
(std::cin, expression);

    
//
 Assuming Parser class exists and parses the expression into an AST
    Parser parser;
    auto astRoot = parser.
parse
(expression);
    
if
 (!astRoot) {
        std::cerr 
<<
 "Error parsing expression." 
<<
 std::
endl
;
        
return
 1;
    }

    
//
 Create function definition for LLVM IR and compile the AST.
    llvm::FunctionType *funcType = llvm::FunctionType::
get
(builder.
getDoubleTy
(), false);
    llvm::Function *calcFunction = llvm::Function::
Create
(funcType, llvm::Function::ExternalLinkage, "calc", module.
get
());
    llvm::BasicBlock *entry = llvm::BasicBlock::
Create
(context, "entry", calcFunction);
    builder.
SetInsertPoint
(entry);
    llvm::Value *result = astRoot
->codegen
(context, builder);
    
if
 (!result) {
        std::cerr 
<<
 "Error generating code." 
<<
 std::
endl
;
        
return
 1;
    }
    builder.
CreateRet
(result);
    module
->print
(llvm::
outs
(), nullptr);

    
//
 Prepare and run the generated function.
    std::string error;
    llvm::ExecutionEngine *execEngine = llvm::
EngineBuilder
(std::
move
(module)).
setErrorStr
(&error).
create
();
    
    
if
 (!execEngine) {
        std::cerr 
<<
 "Failed to create execution engine: " 
<<
 error 
<<
 std::
endl
;
        
return
 1;
    }

        std::vector<llvm::GenericValue> args;
    llvm::GenericValue gv = execEngine->
runFunction
(calcFunction, args);

    
//
 Run the compiled function and display the result.
    llvm::Type *returnType = calcFunction->
getReturnType
();

    
printResult
(gv, returnType);

    delete execEngine;
    
return
 0;
}void printResult(llvm::GenericValue gv, llvm::Type *returnType) {
    // std::cout << "Result: "<<returnType<<std::endl;
    if (returnType->isDoubleTy()) {
        // If the return type is a scalar double
        double resultValue = gv.DoubleVal;
        std::cout << "Result (double): " << resultValue << std::endl;
    } else if (returnType->isVectorTy()) {
        // If the return type is a vector
        llvm::VectorType *vectorType = llvm::cast<llvm::VectorType>(returnType);
        llvm::ElementCount elementCount = vectorType->getElementCount();
        unsigned numElements = elementCount.getKnownMinValue();


        std::cout << "Result (vector): [";
        for (unsigned i = 0; i < numElements; ++i) {
            double elementValue = gv.AggregateVal[i].DoubleVal;
            std::cout << elementValue;
            if (i < numElements - 1) {
                std::cout << ", ";
            }
        }
        std::cout << "]" << std::endl;


    } else {
        std::cerr << "Unsupported return type!" << std::endl;
    }
}


// Main function to test the AST creation and execution
int main() {
    // Initialize LLVM components for native code execution.
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();
    llvm::InitializeNativeTargetAsmParser();
    llvm::LLVMContext context;
    llvm::IRBuilder<> builder(context);
    auto module = std::make_unique<llvm::Module>("calc_module", context);


    // Prompt user for an expression and parse it into an AST.
    std::string expression;
    std::cout << "Enter an expression to evaluate (e.g., 1+2-4*4): ";
    std::getline(std::cin, expression);


    // Assuming Parser class exists and parses the expression into an AST
    Parser parser;
    auto astRoot = parser.parse(expression);
    if (!astRoot) {
        std::cerr << "Error parsing expression." << std::endl;
        return 1;
    }


    // Create function definition for LLVM IR and compile the AST.
    llvm::FunctionType *funcType = llvm::FunctionType::get(builder.getDoubleTy(), false);
    llvm::Function *calcFunction = llvm::Function::Create(funcType, llvm::Function::ExternalLinkage, "calc", module.get());
    llvm::BasicBlock *entry = llvm::BasicBlock::Create(context, "entry", calcFunction);
    builder.SetInsertPoint(entry);
    llvm::Value *result = astRoot->codegen(context, builder);
    if (!result) {
        std::cerr << "Error generating code." << std::endl;
        return 1;
    }
    builder.CreateRet(result);
    module->print(llvm::outs(), nullptr);


    // Prepare and run the generated function.
    std::string error;
    llvm::ExecutionEngine *execEngine = llvm::EngineBuilder(std::move(module)).setErrorStr(&error).create();
    
    if (!execEngine) {
        std::cerr << "Failed to create execution engine: " << error << std::endl;
        return 1;
    }


        std::vector<llvm::GenericValue> args;
    llvm::GenericValue gv = execEngine->runFunction(calcFunction, args);


    // Run the compiled function and display the result.
    llvm::Type *returnType = calcFunction->getReturnType();


    printResult(gv, returnType);


    delete execEngine;
    return 0;
}

Thank you guys

0 Comments
2024/11/20
07:47 UTC

1

Segmentation fault encountered at `ret void` in llvm-ir instructions

I'm currently making a compiler that outputs bare LLVM-IR instructions and implementing variadic function calls. I have defined a println function that accepts a (format) string and variable amount of arguments for the printf call. I have included printf calls to see where my program faults and it is as the return of the function, which would make me think that there is something wrong with cleaning up the \@llvm.va_end calls, since it does what i wanted it to do before the fault.

Here is the llvm instrucitons:

declare void u/llvm.va_start(i8*)
declare void @llvm.va_end(i8*)
declare void @vprintf(i8*, i8*)
@.str_3 = private unnamed_addr constant [2 x i8] c"\0A\00"
declare void @printf(i8*, ...)
@.str_5 = private unnamed_addr constant [4 x i8] c"%i\0A\00"
@.str_6 = private unnamed_addr constant [16 x i8] c"number is %i %i\00"

define void @println(i8* %a, ...) {
entry:
    call void @printf(i8* @.str_5, i32 1) ; debug, added prior
    %.va_list = alloca i8*
    call void @printf(i8* @.str_5, i32 2) ; debug, added prior
    call void @llvm.va_start(i8* %.va_list)
    call void @printf(i8* @.str_5, i32 3) ; debug, added prior
    call void @vprintf(i8* %a, i8* %.va_list)
    call void @printf(i8* @.str_3)
    call void @printf(i8* @.str_5, i32 4) ; debug, added prior
    call void @llvm.va_end(i8* %.va_list)
    call void @printf(i8* @.str_5, i32 5) ; debug, added prior
    ret void
}

define void @main() {
entry:
    call void @printf(i8* @.str_5, i32 0) ; debug, added prior
    call void @println(i8* @.str_6, i32 5, i32 2)
    call void @printf(i8* @.str_5, i32 6) ; debug, added prior
    ret void
}

Output of running the built program:

0
1
2
3
number is 5 2
4
5

As you can see here i get the segmentation fault between printf(5) and printf(6) which would entail that there is something going on at the return/deallocating of memory or something in the println function.

SOLUTION:
Put this ast the va_list definition

%.va_list = alloca i8, i32 128
3 Comments
2024/11/16
17:19 UTC

0

Implement a side-channel attack using LLVM on branch predictor

Hi guys! Any idea on how can I implement a side-channel attack using LLVM?

It can be any known attack, I just want to do it using LLVM to be able to log the information.

P.S.: I just started LLVM and I'm an absolute beginner.

4 Comments
2024/11/11
01:10 UTC

3

How to compile IR that uses x86 intrinsics?

I have the following IR that uses the @ llvm.x86.rdrand.16 intrinsic:

%1 = alloca i32, align 4
%2 = call { i16, i32 } @llvm.x86.rdrand.16.sl_s()
...
ret i32 0

I then try to generate an executable using clang -target $(gcc -dumpmachine) -mrdrnd foo.bc -o foo.o. This however gives the error:

/usr/bin/x86_64-linux-gnu-ld: /tmp/foo-714550.o: in function `main':
foo.c:(.text+0x9): undefined reference to `llvm.x86.rdrand.16.sl_s'

I believe I need to link some libraries for this to work but I'm not sure what or how, and couldn't find any documentation on the subject of using intrinsics. Any help would be appreciated! TIA.

11 Comments
2024/11/09
17:39 UTC

2

LLVM 17 prebuilt binaries for Windows

Looking at the [LLVM 17.0.6 releases] I cannot find a Windows build other than LLVM-17.0.6-win64.exe and LLVM-17.0.6-win32.exe. These installers do not install the full LLVM toolchain, only the core tools like clang and lld. Do I need to build LLVM myself?

3 Comments
2024/11/03
23:04 UTC

1

Do I need to build libcxx too to develop clang?

I have built llvm and clang but when I want to use the built clang++ version it cannot find the headers. My system clang implementation is able to find them and it works fine. Using the same headers as my local (v.15) version with -I also doesn't work.

So is it normal to also have to build libc/libcxx for clang development or what else do I need?

2 Comments
2024/10/31
21:03 UTC

2

How can I display icu_xx::UnicodeString types in Visual Studio Code debugger variables menu

0 Comments
2024/10/28
11:37 UTC

2

weird behaviour in Libfuzzer

When I run the fuzze by default (The default memory should be 2048MB) , I get out-of-memory at rss:119MB.

But when I run it with -rss_limit_mb=10000. it works forever and the rss stops at 481MB.

I know there may be memory leaks but It's still a weird behaviour.

0 Comments
2024/10/24
17:01 UTC

2

Changing the calling convention of a function during clang frontend codegen

I want to change the calling convention of a function during clang frontend codegen (when LLVM IR is generated from AST). The files of interest are clang/lib/CodeGen/CodeGenModule.cpp. I see that EmitGlobal() is working with the Decls passed on, where I can change the calling convention in the FunctionType associated with the FunctionDecl, this change reflects in the function declaration and definition but not at the call site where this function is called.

The callsite calling convention is picked form the QualType obtained from CallExpr, and not the FunctionType of the callee. This can be seen in the function CodeGenFunction::EmitCallExpr() in clang/lib/CodeGen/CGExpr.cpp.

I wish to change the calling convention of a function at one place, and this should reflect at all callsites where given function is called.

What should be the best approach to do this?

3 Comments
2024/10/17
06:45 UTC

2

How to optimize coremark on RISC-V target?

Hi all, AFAIK, GCC performs better on coremark based on RISC-V than LLVM.

My question is: are there any options we can use to achieve same even better score on RISC-V coremark? If not, I would like to achieve this goal with optimizing LLVM compiler, can anyone guide how to proceed on it?

2 Comments
2024/10/15
02:10 UTC

0

No wasm as target in llvm windows

I am really sorry if this is the wrong place to as this question but I do not know where to ask.

The compilation targets available in my llvm binary for windows ( 18.1.8) does not have wasm as a target. Neither does any older versions or higher versions (19.1.0) of llvm binaries for windows.

this is the output received when I type clang --version :

clang version 18.1.8

Target: x86_64-pc-windows-msvc

Thread model: posix

Emscripten? - I need to do it in hard way to learn more stuff. I am not willing to use Emscripten to compile my c code to wasm but only use llvm

Is the only solution is to build from source all by myself? for which I need to get that huge visual studio stuff?

I am sorry if this question was already answered . But I dd not find a solution when searched through google.

Thank you for helping me

Have a good day :)

0 Comments
2024/10/14
19:01 UTC

1

How Do We Make LLVM Quantum? - Josh Izaac @ Quantum Village, DEF CON 32

0 Comments
2024/10/03
12:25 UTC

Back To Top