/r/osdev
I am following the RISC-V bare bones on osdev wiki (https://wiki.osdev.org/RISC-V_Bare_Bones)
When I try to run the kernel.elf I get:
[kittycat@kittycat purros]$ qemu-system-riscv64 -machine virt -bios none -kernel kernel.elf -serial mon:stdio
Hello world!
hhhhhhhhhhhhhhhhhhhhhhhh...eeeeeeeeeeeeeeeeeeeeeeee... (each character repeated hundreds of times; the output continues indefinitely)
Is this normal, and how do I prevent it?
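For what it's worth, that kind of endlessly repeated-character output often comes from a print loop that writes the current byte over and over without advancing the string pointer (or without checking for the NUL terminator). A minimal sketch of a correct loop, with the actual MMIO store stubbed out behind a callback so the logic can be checked on a host (on QEMU's virt machine the real store would go to the NS16550A data register at 0x10000000):

```c
#include <assert.h>
#include <stddef.h>

/* The MMIO store is abstracted into a callback so the loop can run on a
 * host; in the kernel the callback would poll the UART status register
 * and then store the byte to the data register (0x10000000 on QEMU virt). */
typedef void (*putc_fn)(char c);

static void uart_puts(const char *s, putc_fn put) {
    while (*s) {   /* stop at the NUL terminator */
        put(*s);
        s++;       /* advance, or the same byte repeats forever */
    }
}

/* host-side capture harness, purely for testing the loop logic */
static char captured[64];
static size_t captured_len;
static void capture(char c) { captured[captured_len++] = c; }
```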
Hi there guys, I'm looking for help making my own mobile operating system based on Bada OS (Samsung), with these specs:
Features:
Sensors: accelerometer
Messaging: SMS, MMS, email
Browser: HTML/WAP
Games: some simple games
Java: no
Facebook and Twitter (X) applications
AntennaPod app
Google Sky Map and Google Earth apps
Organizer (clock, calculator, and to-do list)
Periodic Table (cherkykh.tech)
Music player (MP3, WAV, WMA, AAC+)
Video player (MP4)
Document viewer (Word, Excel, PowerPoint, and PDF)
Predictive text input
So I am working with multiboot2 and I have been trying to set up the GDT for hours. I don't know where the fault is; the emulator looks like it keeps rebooting.
Here is my code:
gdt_flush:
mov 16(%esp), %eax // this part works: using the early boot log I print the address and then check eax, and it looks fine.
hlt // debugging: placed here because everything after this point reboots.
lgdt (%eax)
mov $0x10, %ax
mov %ax, %ds
mov %ax, %es
mov %ax, %fs
mov %ax, %gs
mov %ax, %ss // note: the original had "mov %ax, %ax", a no-op; %ss was presumably intended
ljmp $0x08, $.flush
.flush:
ret
/// you can find the definitions of the types used here on the osdev wiki: https://wiki.osdev.org/GDT_Tutorial
#define GDT_SIZE 3
struct gdt_ptr_s gdt_ptr = {0};
struct gdt_desc_s gdt_desc[GDT_SIZE];
extern void gdt_flush(u32);
struct gdt_desc_s init_gdt_desc(u32 base, u32 limit, u16 flag) {
struct gdt_desc_s desc = {0};
// tried filling these desc.upper and desc.lower values by hand too, but it didn't work.
// Create the high 32 bit segment
desc.upper = limit & 0x000F0000; // set limit bits 19:16
desc.upper |= (flag << 8) & 0x00F0FF00; // set type, p, dpl, s, g, d/b, l and avl fields
desc.upper |= (base >> 16) & 0x000000FF; // set base bits 23:16
desc.upper |= base & 0xFF000000; // set base bits 31:24
// // Shift by 32 to allow for low part of segment
// desc <<= 32;
// Create the low 32 bit segment
desc.lower |= base << 16; // set base bits 15:0
desc.lower |= limit & 0x0000FFFF; // set limit bits 15:0
return desc;
}
void gdt_setup() {
gdt_desc[0] = init_gdt_desc(0, 0, 0);
gdt_desc[1] = init_gdt_desc(0, 0x000FFFFF, (GDT_CODE_PL0));
gdt_desc[2] = init_gdt_desc(0, 0x000FFFFF, (GDT_DATA_PL0));
// i also tried this: gdt_ptr.len = (GDT_SIZE - 1) * sizeof(struct gdt_desc_s);
gdt_ptr.len = (GDT_SIZE * sizeof(struct gdt_desc_s)) - 1;
gdt_ptr.ptr = (u32)&gdt_desc;
early_logf("struct addr: 0x%x\n", (u32)&gdt_desc);
gdt_flush((u32)&gdt_ptr);
}
QEMU monitor information with the current code:
(qemu) info registers
EAX=00100b78 EBX=00100b48 ECX=00102b58 EDX=001007bb
ESI=00100b48 EDI=00103750 EBP=001030a8 ESP=00102b44
(qemu) x/16x 0x100b78
0000000000100b78: 0x00000000 0x00000000 0x0000ffff 0x00cf9a00
0000000000100b88: 0x0000ffff 0x00cf9200 0x0b780010 0x00000010
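One classic cause of a triple fault right after lgdt, worth ruling out, is padding in the GDTR structure: lgdt expects a 16-bit limit immediately followed by a 32-bit base (6 bytes), but without a packed attribute the compiler inserts 2 bytes of padding after the u16, so the base is read from the wrong offset. A sketch, reusing the post's type names as an assumption about gdt_ptr_s:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t u16;
typedef uint32_t u32;

/* lgdt reads 6 bytes: a 16-bit limit immediately followed by a 32-bit
 * base. Without the packed attribute the compiler pads `len` out to 4
 * bytes, the base ends up at offset 4 instead of 2, and lgdt loads a
 * garbage base, so the next segment-register load triple-faults.
 * (gdt_ptr_s and the typedefs mirror the names used in the post.) */
struct gdt_ptr_s {
    u16 len; /* size of the table in bytes, minus 1 */
    u32 ptr; /* linear address of the first descriptor */
} __attribute__((packed));
```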
I finally set up my scheduler such that my thread gets executed after the iretq
from the contextSwitch.asm file. The only problem is that after my thread finishes executing, it jumps to a random memory address and crashes.
After debugging, I've found that due to some mistake on my end, when the thread is over, the top of the stack holds a pointer to an invalid place with no "useful" code, instead of the address of the next instruction after the context switch.
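A common fix, sketched below with hypothetical names, is to seed every new thread's initial stack with the address of an exit trampoline in the slot where the thread function's return address will be, so "falling off the end" of the thread lands somewhere controlled instead of at a garbage address (the exact slot depends on your contextSwitch.asm frame layout):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: thread_exit is a routine that marks the thread
 * dead and yields forever; planting its address as the thread entry's
 * return address means a thread that simply returns ends up here. */
static void thread_exit(void) {
    for (;;) { } /* in a real kernel: mark thread dead, invoke the scheduler */
}

/* Build the initial stack for a thread starting at `entry`; returns the
 * stack pointer to hand to the context switch. Layout is illustrative;
 * a real iretq frame needs cs/rflags/rsp/ss slots as well. */
static uintptr_t *init_thread_stack(uintptr_t *stack_top, void (*entry)(void)) {
    *--stack_top = (uintptr_t)thread_exit; /* entry's return address */
    *--stack_top = (uintptr_t)entry;       /* first instruction to resume at */
    return stack_top;
}
```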
Hi guys,
Is there any guideline on how to design an executable format? How does one decide on what type of executable format is needed for their OS? What are different questions that come up when deciding this?
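One way to make the question concrete is to list what a loader must know: how to recognize the file, where to load it, where to start executing, and how much zero-filled memory (BSS) to append. Everything beyond that (relocations, dynamic linking, per-segment permissions) is what separates toy formats from ELF/PE/Mach-O. A deliberately minimal, hypothetical header:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical minimal format: the magic value and field set are made
 * up for illustration. Deciding what goes in the header is most of the
 * design work; real formats add relocations, dynamic linking info, and
 * per-segment permissions on top of these basics. */
struct exe_header {
    uint32_t magic;      /* identifies the format, e.g. 0x534F594D */
    uint32_t load_vaddr; /* virtual address to place the image at */
    uint32_t entry;      /* virtual address of the first instruction */
    uint32_t file_size;  /* bytes to copy from the file after the header */
    uint32_t bss_size;   /* extra zero-filled bytes appended to the image */
};
```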
Hello,
I've been trying to read the CPU temperature. I remember doing it once in my old kernel, whose code is now lost; this is how I remember doing it:
uint32_t ax, cx = 0x1A2, dx;
asm volatile("RDMSR;" : "=a"(ax), "=d"(dx) : "c"(cx));
This gives me a general protection fault with error code 0.
If I put, for example, 0x277 there instead, it works just fine, and the AX and DX values aren't zero.
So I don't really get why it's giving me a GP fault; maybe I need to set something up first?
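Two details may be relevant here. RDMSR raises #GP(0) when the MSR index isn't implemented on the running CPU, and 0x1A2 (MSR_TEMPERATURE_TARGET) is an Intel model-specific register, so on other CPUs (or under some hypervisors) it simply doesn't exist; checking CPUID leaf 06h for the digital-thermal-sensor bit first avoids the fault. Also, per the Intel SDM, the sensor in IA32_THERM_STATUS (0x19C) reports degrees *below* TjMax rather than an absolute temperature. The (host-testable) arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Per the Intel SDM:
 *   MSR 0x1A2 (MSR_TEMPERATURE_TARGET) bits 23:16 = TjMax in Celsius
 *   MSR 0x19C (IA32_THERM_STATUS)      bits 22:16 = degrees below TjMax
 * The absolute core temperature is the difference of the two fields. */
static int core_temp_celsius(uint64_t temp_target, uint64_t therm_status) {
    int tjmax   = (int)((temp_target >> 16) & 0xFF);
    int readout = (int)((therm_status >> 16) & 0x7F);
    return tjmax - readout;
}
```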
Hello
I have been writing my own AML interpreter for around a week. It's mostly working: I am able to initialize devices, evaluate _PTS, and shut down both QEMU and my own laptop.
I just have some confusion about loading SSDTs. The ACPI spec says that you are supposed to load SSDTs with a unique OEM table ID. My machines report all ACPI tables with the same OEM table ID, so I thought I only needed to load the first one. Maybe I am interpreting "unique ID" wrong, since _PTS calls a method from the second SSDT.
I changed the code to load all SSDTs, but one SSDT is defined twice in the RSDT/XSDT (at different physical addresses, but with identical data). I currently load both and just abort the duplicate SSDT when interpretation fails on inserting an already-existing named object. Is this how it's supposed to be done?
Also, acpidump on my machine reports the same tables as my OS finds in the RSDT/XSDT, just without the duplicate.
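Not authoritative, but a common approach is to de-duplicate the tables before handing them to the interpreter rather than loading both and aborting on the name collision: two RSDT/XSDT entries pointing at byte-identical SSDTs (even at different physical addresses) can be detected with a straight comparison. A sketch using the standard 36-byte ACPI header:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Standard 36-byte ACPI system description table header. */
struct acpi_header {
    char     signature[4];
    uint32_t length; /* header + payload, in bytes */
    uint8_t  revision, checksum;
    char     oem_id[6], oem_table_id[8];
    uint32_t oem_revision, creator_id, creator_revision;
};

/* Treat two SSDT pointers as duplicates if they are the same mapping or
 * their full contents (header plus AML payload) compare equal. */
static int tables_identical(const struct acpi_header *a, const struct acpi_header *b) {
    if (a == b) return 1;                /* same physical table mapped twice */
    if (a->length != b->length) return 0;
    return memcmp(a, b, a->length) == 0;
}
```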
So, I am trying to make my own OS purely in assembly, but it crashes every single time I try to enable interrupts. What could be causing this issue?
EDIT: SOLVED (for anyone experiencing a similar issue, here is where I found the correct way: https://github.com/MaaSTaaR/539kernel)
;; Protected mode is correctly entered, some strings print, and then this function is called.
setup_idt:
mov ebx, idt
mov eax, isr0
mov ecx, 0
fill_idt_loop:
mov word[ebx], ax
add ebx, 6
rol eax, 16
mov word[ebx], ax
add ebx, 2
inc ecx
cmp ecx, 46
jne fill_idt_loop
mov eax, idt
mov [eax], bx
lidt [idt_descriptor]
sti ; CRASHES HERE!
ret
idt_descriptor:
dw 255
dd idt
idt:
times 47 dq 0x00008e0008000000
isr0:
jmp $ ; Example isr
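For reference, here is the 8-byte 32-bit interrupt-gate encoding that fill loop is trying to produce, written out in C (0x8E = present, DPL 0, 32-bit interrupt gate; the selector is the 16-bit code-segment selector, e.g. 0x08):

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit interrupt gate, as one little-endian qword:
 *   bits 15:0  handler offset 15:0     bits 31:16 code segment selector
 *   bits 39:32 zero                    bits 47:40 attributes (0x8E)
 *   bits 63:48 handler offset 31:16 */
static uint64_t make_idt_gate(uint32_t handler, uint16_t selector) {
    uint64_t lo = handler & 0xFFFFu;
    uint64_t hi = (uint64_t)(handler >> 16) << 48;
    return hi | (0x8EULL << 40) | ((uint64_t)selector << 16) | lo;
}
```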
I am trying to learn about operating systems and have found some books to start with. Among the books I found, 'Operating System Design: The Xinu Approach' seems to be the most suitable for me. However, the issue is that I don't know how to compile the sample code snippets in the book. I hope someone who has experience with Xinu can help me.
I come from an embedded (microcontroller) background and am used to interfacing with external peripherals over I2C or SPI buses. Both of these buses need some configuration and have a few memory-mapped registers to perform it. There are also registers on the other side of the bus (that is, on the device that communicates with the microcontroller over I2C) that require configuration.
My problem is understanding how that translates to, say, the PCIe bus on an x86 platform. Let's say I want to send a packet of data through an ethernet card connected to the PCIe bus. How would I know what memory locations to write to? How does the ethernet card get configured? Do you always need the datasheet of the ethernet card (or chipset) to write a driver for it?
Are there any links that you can recommend to help me understand memory mapped io better?
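As a concrete starting point: with the legacy PCI configuration mechanism you write an address dword to I/O port 0xCF8 and read/write the data through port 0xCFC; config offset 0x00 gives you vendor/device IDs, and the BARs (offset 0x10 onward) tell you where the card's own registers live. What those registers *mean* is device-specific, which is why a driver still needs the controller's datasheet. The address encoding:

```c
#include <assert.h>
#include <stdint.h>

/* PCI configuration mechanism #1: write this dword to port 0xCF8
 * (CONFIG_ADDRESS), then read or write port 0xCFC (CONFIG_DATA). */
static uint32_t pci_config_addr(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset) {
    return 0x80000000u                    /* enable bit */
         | ((uint32_t)bus << 16)
         | ((uint32_t)(dev & 0x1F) << 11)
         | ((uint32_t)(func & 0x07) << 8)
         | (offset & 0xFC);               /* dword-aligned register offset */
}
```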
Hi everyone, I am new to kernels, OSes, and all this stuff. I want instructions for building a new shell and kernel for an OS. Any suggestions?
Hello, I have a question about combining a kernel with a userland and other components. Suppose that we have separate source code repositories for the kernel, the userland, the package manager, and the installer. If we compile the kernel independently of the other components, how do we later compile the other components and link them with the kernel?
I ask this question specifically in regards to the illumos kernel. I have found materials covering the compiling of the kernel, but nothing that specifically concerns linking everything together to make one cohesive unit.
Thanks in advance
Hi! I wanted to learn more about cryptography, and Secure Boot and the TPM seem really interesting. I could find some great resources on how the TPM works and how it's used to keep keys secure, but not as much on how the Platform Configuration Registers are set and why they can't be spoofed. As for Secure Boot, I only know that the firmware checks the signature of the EFI executable and boots it if it's valid, and that it somehow involves Windows drivers and kernel modules on Linux. Can you point me to any resources to learn more about this? Thanks in advance!
Hi, I am a CS student thinking about writing a custom OS as a hobby project. While thinking about it, I had the idea that it would be cool if I could somehow make it easy to run existing software on it. Would doing these two things basically allow me to do that? Would it maybe even be enough to just implement the syscalls and make sure the OS can handle the ELF file format? Another idea would be to write custom syscalls with a custom standard library and then make a custom target for Rust, C, C++, etc. Thanks!
Hello, I am an art student who dabbles in tech, and I'm looking to build a very simple USB-stored operating system that can be booted through the BIOS. I want it to be a story game disguised as an OS, so it wouldn't have to be very flexible in terms of apps or graphics. Just a simple terminal-type setup that guides you through a mostly text-based story that is affected by commands you learn as you go. I'm wondering what recommendations people have for how to get started, what language to use, and any online tutorials that might be helpful.
Thank you so much!
How do I add a shutdown function to my OS?
I tried reading the OSDev page, but it wasn't clear.
BTW, I'm working in C.
Hello reddit, I am new to OS development and a complete beginner, currently reading up on operating-system structures and how processes work. I have just one question about process context switching: how exactly does the kernel determine the size of the stack allocated to a process, from the new-process state up until the termination state? Assume the system utilizes a single core, hence the time-sharing principle.
I have a very basic OS (https://www.github.com/Game-dev2233/ThorinOS), if you can even call it that, and a basic ATA driver. How do I implement a filesystem and a userspace? Also, why do I need things like a GDT, IDT, etc.?
I made a keyboard input system, but when I started working on the backspace feature (deleting a character), I decided to use '\b'. It displayed a weird character: a rectangular box with a small rotated square in the middle.
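That box glyph is expected: the VGA text buffer draws every byte as a code-page-437 glyph, and 0x08 ('\b') has a printable glyph there rather than a control meaning. Backspace has to be handled in your terminal code as cursor movement. A sketch over a plain array standing in for the text buffer:

```c
#include <assert.h>
#include <stddef.h>

/* `screen` stands in for the VGA text buffer and `cursor` for the
 * hardware cursor position; attribute bytes are omitted for brevity. */
static void term_putc(char c, char *screen, size_t *cursor) {
    if (c == '\b') {
        if (*cursor > 0) {
            (*cursor)--;
            screen[*cursor] = ' '; /* erase the previous character */
        }
    } else {
        screen[(*cursor)++] = c;
    }
}
```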
Hello people!
Recently I wrote an AHCI driver (read & write). I've been trying to find a way to implement a filesystem (like FAT32 or ext), but I just can't get a grasp of where and how I should start.
I did try searching on Google, and I've also read the filesystem category on osdev.org, but again I just can't grasp where and how I should start implementing a filesystem.
A pointer in the right direction would be very helpful!
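One concrete way in, assuming FAT32: read sector 0 of the partition, pull a handful of BPB fields out of it, and from those compute where the FATs and the data region start; after that, listing the root directory is just reading the root cluster. The key arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Only the BPB fields needed to locate data; in the real BPB these are
 * little-endian values at fixed byte offsets within sector 0. */
struct fat32_bpb {
    uint16_t reserved_sectors;
    uint8_t  num_fats;
    uint8_t  sectors_per_cluster;
    uint32_t fat_size_sectors;
    uint32_t root_cluster;
};

/* The data region starts after the reserved area and all FAT copies;
 * data cluster numbering starts at 2, which is also the usual root cluster. */
static uint32_t first_sector_of_cluster(const struct fat32_bpb *b, uint32_t cluster) {
    uint32_t first_data = b->reserved_sectors + (uint32_t)b->num_fats * b->fat_size_sectors;
    return first_data + (cluster - 2) * b->sectors_per_cluster;
}
```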
Guys, can anyone tell me if I can use the Nim programming language for OS development? I want to give myself a bit of a challenge.
like the title, there's not much source about this
Hi there,
Been recently playing around with writing a kernel that is called from UEFI. I am using QEMU with TianoCore to emulate the UEFI environment. This is all going well: I am able to get the graphics output buffer and draw random stuff on the screen from my kernel code. However, when trying this out on actual hardware, my computer simply refuses to call the kernel code.
I am a bit unsure how to debug this, as each attempt takes a while (copy to USB -> reboot -> etc.), so I was wondering if any of you have an idea as to what may be the cause.
My repo is at https://github.com/florianmarkusse/homegrown
the code that calls the kernel starts at code/uefi/hello.c:344
and the kernel code is located at code/kernel/kernel.c
(For build instructions, you can just run ./install-dependencies.sh && ./build-create-run.sh )
If you have any questions, also happy to answer!
If you have any clue what may be going wrong, please share :).
Thanks for your time!
I had made a UEFI bootloader to load an ELF file (my kernel), but when testing on real hardware I realized that not all memory addresses work, so now I memory-map everything. I wanted to test on some older devices (no UEFI), so I am now using GRUB. Here is my plan: GRUB will load some 32-bit code as well as a module (my 64-bit kernel); the 32-bit code will then map the kernel into the upper half. But when creating the page tables, how can I avoid overwriting the kernel? Multiboot does give me LOAD_BASE_ADDR, but what if there are multiple load sections, like this?
Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x001000 0x00200000 0x00200000 0x025e4 0x025e4 R E 0x1000
LOAD 0x004000 0x00203000 0x00203000 0x00058 0x05050 RW 0x1000
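With several PT_LOAD segments, one approach is to scan all the program headers and track the lowest p_paddr and the highest p_paddr + p_memsz (memsz rather than filesz, so the zero-filled BSS is covered), then keep that whole physical range off-limits when allocating frames for the page tables. A sketch with a minimal program-header struct:

```c
#include <assert.h>
#include <stdint.h>

#define PT_LOAD 1

/* Minimal subset of an ELF program header; only the fields used here. */
struct phdr {
    uint32_t p_type;
    uint64_t p_paddr;
    uint64_t p_filesz;
    uint64_t p_memsz;
};

/* Compute [*lo, *hi), the physical range covered by every PT_LOAD
 * segment; frames in this range must not be handed out for page tables. */
static void load_range(const struct phdr *ph, int n, uint64_t *lo, uint64_t *hi) {
    *lo = UINT64_MAX;
    *hi = 0;
    for (int i = 0; i < n; i++) {
        if (ph[i].p_type != PT_LOAD) continue;
        if (ph[i].p_paddr < *lo) *lo = ph[i].p_paddr;
        if (ph[i].p_paddr + ph[i].p_memsz > *hi)
            *hi = ph[i].p_paddr + ph[i].p_memsz; /* memsz covers the BSS */
    }
}
```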
I am creating a function to store registers in a C struct,
but I get a fault without volatile, and a segmentation fault in QEMU if I use volatile.
asm volatile("cli");
regs registers;
/* AT&T syntax is "mov source, destination": the hardware register must
 * come first and the output operand (%0) second. The original operands
 * were reversed, writing an uninitialized value INTO each register
 * (including esp), which explains the fault. */
asm volatile("mov %%eax, %0" : "=g"(registers.eax));
asm volatile("mov %%ebx, %0" : "=g"(registers.ebx));
asm volatile("mov %%ebp, %0" : "=g"(registers.ebp));
asm volatile("mov %%edx, %0" : "=g"(registers.edx));
asm volatile("mov %%esp, %0" : "=g"(registers.esp));
// asm volatile("mov %0, $." : "=g"(registers.eip));
asm volatile("mov %%ecx, %0" : "=g"(registers.ecx));
asm volatile("sti");
I implemented kprintf but ran into a problem:
digits (numbers) display fine, but other characters either don't display or display as random characters.
VGAPrintString and VGAPutEntry work fine.
static char *
vsphlp(char *dest, long num)
{
    /* num is always negative here; C's % truncates toward zero, so
     * num % 10 is in [-9, 0] and '0' - num % 10 yields the digit. */
    if (num <= -10)
        dest = vsphlp(dest, num / 10);
    *dest++ = '0' - num % 10;
    return dest;
}

char *
vasprintf(long num)
{
    static char buff[24]; /* buff was uninitialized before, so *buff++ wrote through a wild pointer */
    char *p = buff;
    if (num < 0)
        *p++ = '-';
    else
        num = -num; /* work with negatives so the most negative value is safe */
    *vsphlp(p, num) = '\0';
    return buff;
}

void
kprintf(char *fmt, ...)
{
    __builtin_va_list varg;
    va_start(varg, fmt);
    while (*fmt != 0)
    {
        char c = *fmt++; /* consume exactly one character per iteration;
                          * the extra *fmt++ calls skipped characters and
                          * could run past the terminating NUL */
        if (c == '%' && *fmt != 0)
        {
            switch (*fmt++)
            {
            case 'd':
                VGAPrintString(vasprintf(va_arg(varg, int)), 0x17);
                break; /* the missing breaks made every case fall through */
            default:
                VGAPutEntry('%', 0x17);
                break;
            }
        }
        else
            VGAPutEntry(c, 0x17);
    }
    va_end(varg); /* va_end was being called right after va_start, before any va_arg */
}
I am working solo, with knowledge of JS (learned for web dev) and basic knowledge of Python and C. So I need to learn more before I can even start, right? And there's also processor compatibility: which processor architecture is the best to make an OS for?
My motive behind this is to make my devices inaccessible to others and to have complete control over everything. If anyone gets my devices, they can't access anything of mine, because the operating system is new to them. Even for the FBI (maybe a dream), if it works like that. I am new; where do I start?
We often find in Unix-like operating systems (like Linux, for example) this one user, root, who has access to everything in the system and can do anything to it. Of course, this naturally means that most exploits targeting the system will aim at getting root privileges, because that will allow the attacker to do anything they want.
My question is: why should we give all power to a "root" user at all, making them to a certain extent "overpowered", with the ability to do anything? Isn't that just opening the system up to exploits aimed at getting root access? Even the smallest bugs in kernel-side code can compromise entire systems and give a prospective attacker full control.
Instead of giving all the system privileges to just one user, what other alternatives do you think are possible that would enhance system security?
Thanks in advance.
So, I am making a simple memory allocate/deallocate module for heap allocation. I just need to make sure the code doesn't have any fatal flaws (other than using goto and casting ints to pointers). (The system is currently only able to allocate 4 KB at a time; I will upgrade it over time.)
EDIT: I edited the code to a new version that does work, but I'm still interested in your suggestions!
#include "types.hh"
#pragma GCC diagnostic ignored "-Wint-to-pointer-cast"
#pragma GCC diagnostic ignored "-Wtype-limits"
u8 freeCounter = 0;
u32 allocCount = 0;
u8 *membuffer = (u8 *)0x100000; // Memory starting at 1meg
u32 *allocations = (u32 *)0x80000;
void *malloc() {
    u32 alloc = 0x100000;
begin:
    for (u32 i = 0; i < allocCount; i++) {
        if (alloc == allocations[i]) {
            alloc += 4096;
            goto begin; // candidate page taken, retry with the next one
        }
    }
    allocations[allocCount++] = alloc;
    return (void *)alloc;
}

void *malloc(u32 size) {
    if (size <= 4096)
        return malloc(); // `times` was read uninitialized on this path before
    u16 times = size / 4096;
    if (size > (u32)times * 4096)
        times++; // round up to a whole number of pages
    u32 alloc = 0x100000;
begin:
    for (u32 i = 0; i < allocCount; i++) {
        // widen before subtracting: u32 - u32 wraps around instead of going negative
        i64 diff = (i64)allocations[i] - (i64)alloc;
        if (diff >= 0 && diff < (i64)times * 4096) { // existing page inside the candidate range
            alloc += 4096;
            goto begin;
        }
    }
    allocations[allocCount++] = times; // page-count marker stored before the addresses
    for (int i = 0; i < times; i++)
        allocations[allocCount++] = alloc + i * 4096;
    return (void *)alloc;
}

void compress() {
    // count down with `i-- > 0`: a u32 can never be < 0, so the old
    // `i >= 0` condition was always true and the index underflowed
    for (u32 i = allocCount; i-- > 0;) {
        if (allocations[i] == 0) allocCount--;
        else break;
    }
    for (u32 i = 0; i < allocCount; i++) {
        if (allocations[i] == 0)
            allocations[i] = allocations[--allocCount];
    }
}

u8 free(void *ptr) {
    freeCounter++;
    if (freeCounter >= 128) compress();
    for (u32 i = 0; i < allocCount; i++) {
        if (ptr == (u8 *)allocations[i]) {
            // For a big allocation, the entry before the first address is its
            // page count; counts are < u16_max while addresses are >= 0x100000
            if (i > 0) { // guard: i == 0 underflowed the index before
                u32 last = allocations[i - 1];
                if (last > 0 && last < u16_max) {
                    for (u32 n = 0; n < last; n++)
                        allocations[i + n] = 0;
                    i--;
                }
            }
            allocations[i] = 0;
            return 0;
        }
    }
    return 1;
}