/r/arm
Just wondering why this hasn't happened yet? Is it because of the different architecture? It would make a lot of sense to fully unify the two. I'm having a difficult time understanding how customers can continue to use DGX platforms knowing how inefficient they are (i.e. power consumption is awful) when it would only take Arm a few development life cycles to make this happen.
Hello,
I am looking for a network-oriented ARM workstation. It will be connected to a bunch of devices (potentially 30+) and needs to act as a WiFi access point. It also needs fast (ideally 10Gbps+) Ethernet ports and a multithreaded architecture.
I will operate it as local server to monitor the status of the devices and at the same time offer a LAN-only web frontend. Think of it as a powerful router/switch.
I looked around and found a bunch of mini PCs oriented towards this application, but they all mount either an Intel or an AMD chip. As I prefer working with ARM tools, I am struggling to find an off-the-shelf equivalent. I am also open to evaluating a traditional, non-network-oriented workstation and adding a powerful NIC via PCIe.
Budget is around 1000$. Any suggestions?
Since ARM uses MMIO, assume a GIC on the system being discussed.
When a packet arrives, does the network card place the packet in memory and then signal the GIC?
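Broadly, yes: on a typical system the NIC first DMAs the packet into a descriptor ring in main memory, then asserts its interrupt line; the GIC prioritizes the interrupt and forwards it to a CPU interface, and the driver's handler walks the ring. A toy C++ model of that ownership handshake (all names here are hypothetical illustrations, not a real driver API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of a NIC receive ring: the card DMAs a packet into a
// descriptor it owns, flips the ownership bit, then raises its IRQ.
// All names are hypothetical; this is an illustration, not a driver.
struct RxDescriptor {
    std::vector<uint8_t> buffer;   // stands in for the DMA buffer in RAM
    bool owned_by_hw = true;       // true: card may write; false: CPU may read
};

struct ToyNic {
    RxDescriptor ring[4];
    bool irq_pending = false;

    // "DMA": the device writes the packet to memory BEFORE signalling.
    void receive_packet(const std::vector<uint8_t>& pkt, int slot) {
        ring[slot].buffer = pkt;        // memory is up to date first...
        ring[slot].owned_by_hw = false;
        irq_pending = true;             // ...then the interrupt line is raised
    }
};

// The driver's IRQ handler, invoked after the (modelled) GIC routes the
// interrupt to a CPU: it drains every descriptor the hardware released.
int handle_irq(ToyNic& nic) {
    int packets = 0;
    for (auto& d : nic.ring)
        if (!d.owned_by_hw) { ++packets; d.owned_by_hw = true; }
    nic.irq_pending = false;  // ack, analogous to an end-of-interrupt write
    return packets;
}
```

The ordering matters on real hardware too: the packet must be visible in memory before the interrupt fires, which is why real drivers pair descriptor reads with memory barriers.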
Hello, I had a lab assignment implementing a debayering algorithm design in my digital VLSI class, with a final step of comparing the runtime against a scalar C implementation running on the FPGA SoC's ARM CPU core. With that, I saw the opportunity to play around with NEON and create a 3rd implementation.
I have created the algorithm listed in the gist below. I would like some general feedback on the implementation and whether something better could be done. My main concern is the access pattern: I process the data in 16-element chunks in column-major order, and this doesn't seem to play very well with the cache. Specifically, if the width of the image is <=64 there is a >5x speed improvement over my scalar implementation, but bumping it to 1024 the NEON implementation may even be slower. An alternative would be calculating each row from left to right first, but that would require loading at least 2 rows below/above the row I'm calculating, and going sideways instead of down would mean I have to "drop" them from the registers when I wrap back to the left edge of the row/image.
Feel free to comment any suggestions or ideas (be kind, I learned NEON and implemented this in just 1 morning :P - arguably the naming of some variables could be better xD)
https://gist.github.com/purpl3F0x/3fa7250b11e4e6ed20665b1ee8df9aee
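On the cache question: walking the image in column-major 16-element chunks means consecutive loads are `width` bytes apart, so once `width` outgrows a few cache lines nearly every access misses, which matches the slowdown at 1024. A scalar C++ sketch of the row-major alternative, keeping the rows above and below hot (the 3x3 sum is a hypothetical stand-in for the actual Bayer taps, and this is plain scalar code standing in for the NEON kernel):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Scalar stand-in for a NEON kernel: process the image row-major so the
// inner loop makes contiguous stride-1 loads, reading the row above and
// below as the interpolation needs. The 3x3 neighborhood sum is a
// hypothetical placeholder for real debayer taps.
std::vector<uint32_t> filter_row_major(const std::vector<uint8_t>& img,
                                       int width, int height) {
    std::vector<uint32_t> out(img.size(), 0);
    for (int y = 1; y < height - 1; ++y) {
        const uint8_t* above = &img[(y - 1) * width];  // stays in cache
        const uint8_t* row   = &img[y * width];
        const uint8_t* below = &img[(y + 1) * width];
        for (int x = 1; x < width - 1; ++x) {          // contiguous accesses
            uint32_t s = 0;
            for (int dx = -1; dx <= 1; ++dx)
                s += above[x + dx] + row[x + dx] + below[x + dx];
            out[y * width + x] = s;
        }
    }
    return out;
}
```

In a NEON version the inner loop would become one `vld1q_u8` of 16 consecutive pixels from each of the three rows, so you only carry three registers of context per step instead of dropping a whole column's worth at each row boundary.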
I was reading the GICv3 spec and noticed it supports system registers (ICC_*_EL1, etc.) and also memory-mapped registers for the distributor and CPU interface (GICD_*, GICC_*).
Why is this and which registers should one use while writing software?
It has been a while since I've posted about the "Gentle Introduction to ARM 64-Bit Assembly Language". The free book is written for people who know C and C++, bridging your existing knowledge backwards into assembly language.
Many improvements have been made including a more detailed discussion of variadic functions on Apple M series.
As a reminder, this book includes a macro package that lets the same assembly language source build on both Apple and Linux machines.
Here is the link to the book on Github.
We are getting more readers making suggestions for improvement and correction. We are grateful to them.
Thank you
Does anyone know if there's likely to be a successor to the Honeycomb LX2 anytime soon?
It's nearly there, especially with reasonable UEFI support, but ideally it'd have two M.2 slots and a PCIe slot for a GPU.
Performance-wise it also leaves a bit to be desired: as I understand it, it was based on an older ARM architecture even when it was released, and that was five years ago, which isn't great given the price.
Or, alternatively, any other mid-range ARM systems that fulfill the above and have 10GbE. But from my searching it seems like you can get small Pi-style boards, or large power-hungry servers, with nothing in between.
I have built the Linux kernel for the arm64 defconfig and it runs very well on QEMU.
Now I am trying to boot it with arm trusted firmware. When I build the trusted firmware with BL33=kernel-image and ARM_LINUX_KERNEL_AS_BL33=1, it generates qemu_fw.bios binary.
So, according to the TF-A documentation, I am supposed to pass the -bios qemu_fw.bios option to QEMU. But when I do, the boot fails with: [ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]
So it's not able to read the initrd image.
This does not happen without the -bios option.
What might have gone wrong?
A memory popped into my mind of a fresh-out-of-college coworker hyping ARM's IPO/stock in the late '90s or early 2000s, can't remember exactly. Out of curiosity, I couldn't find out how well you would have made out if you had held until SoftBank acquired them.
I was working on a painting at my desk today and realized at the end of the day I had been leaning on the edge of it with my forearm. I now have a red stinging spot on my arm where the pressure was, and my forearm is killing me all the way into my thumb and fingers. How do I relieve this pain, and what could it be? I know the leaning was the issue, but did I anger a nerve? Help me
I don't really know if this is the sub to ask this, if it isn't, i'll remove the post (sorry in advance :) )
I have to do an assignment for class: creating a routine in ARMv5 assembly that multiplies two numbers and checks if there is an overflow (the format of the numbers is signed Q12). It should return (in r0) 0 if there isn't overflow and 1 if there is.
This code is from last year's solution by a fellow student, and I was just reviewing it because, not gonna lie, I'm pretty lost. I do not understand anything. Why the lsr and lsl on the low part of the result of the multiplication? Why compare it then against 0?
Thanks in advance.
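On the lsr/lsl question: after multiplying two Q12 values the exact product is Q24, so it has to be shifted right by 12 to get back to Q12, and those shifts are recombining the high and low words of the product. Comparing the remnant against 0 (or its sign extension) checks that every discarded bit matched the sign bit, i.e. that the shifted result still fits. A C++ sketch of the same contract (0 = no overflow, 1 = overflow), assuming the Q12 values live in 16-bit containers (Q3.12):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the assignment's contract, assuming signed Q12 values stored
// in int16_t (Q3.12): return 0 if a*b fits back into Q12, 1 on overflow.
int q12_mul_overflows(int16_t a, int16_t b, int16_t* out) {
    int32_t prod = int32_t{a} * int32_t{b};  // exact Q24 product, no wraparound
    int32_t res  = prod >> 12;               // rescale to Q12 (arithmetic shift)
    if (res > INT16_MAX || res < INT16_MIN)  // discarded bits weren't all sign bits
        return 1;                            // -> overflow
    if (out) *out = static_cast<int16_t>(res);
    return 0;
}
```

In assembly the range check is typically done the way you saw: shift the result and compare what falls off the top against 0 (positive case) or against all-ones (negative case).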
I am interested in learning about ARM embedded development, in pure C.
I have a Raspberry Pi Zero W and an Arduino R4 WiFi, which are both ARM based. Would these be good starting points for studying this topic? Which one of the two do you think would be more useful, difficult, easier, better documented, etc.? Please tell me your opinions, thank you :)
Hi, I have a Tiva-C MCU, the TM4C123GXL to be precise. I tried flashing it today and it gave me a strong static shock and wouldn't burn the code. What might be the reason?
This is a cool project I stumbled across that compiles a tiny subset of python into x86 assembly language: https://github.com/benhoyt/pyast64
Enjoy!
-Jeff
What is the fastest arm64 processor found on a laptop and available on the market right now?
EDIT: Apple silicon excluded.
Thanks
Hi there,
looking for a hardware recommendation for board / case etc for a home-built NVMe only NAS
Requirements:
ARM-based, power efficient, no hardware-intensive tasks needed, something at or above the level of a Raspberry Pi 5
must run a standard linux distribution and some docker containers
4x NVMe (more is fine) + eMMC or internal NVMe for OS
decent build quality, good looking case, quiet
optional: PoE powered, 60W can be delivered by my switch
price is not an issue
Looking forward to your recommendations. I looked at the new UGREEN NASync line, which has a nice x86-based model in the DXP480T.
Str1atum
Is the most powerful ARM ITX (or micro-ATX) board with PCIe and UEFI support still based on the Cortex-A72? Why can't other companies make ARM chips like Apple's? Should I wait for a board with a Qualcomm chip?
Hi! I am searching for an ARM laptop and can't find the one for me. Why do I want ARM? I am interested in the architecture and absolutely want the long battery life. It should be 14 inch and have 4-8 GiB of RAM, preferably 8. It needs to be able to handle running stuff like Pulsar-edit or VSCode, Firefox, and Spotify at the same time without lag. I want to use it with either Armbian or EndeavourOS ARM. It should also have functioning Bluetooth and WiFi with these Linux distros.
Do you have a recommendation?
I've got a MacBook Pro M2, and I'm facing a bit of a pickle. At work, my boss sent over an ancient Excel spreadsheet that relies on ODBC and a MySQL connection. I've tried everything I can think of, but it seems that none of the Excel versions on my MacBook can handle it due to its age.
I attempted to tackle this by setting up a Windows 11 ARM virtual machine and installing Excel, ODBC, and the MySQL connector. Unfortunately, no dice. Every time I try to open the spreadsheet, I get hit with a message about an incompatible DSN architecture.
Has anyone else dealt with something like this before? Any suggestions on what I could try next?
(I'm not very proficient in English, so I did my best to explain the problem.)
As the title suggests, I’m looking for a decently powerful (around 3 GHz) ARM (or RISC) based SBC that has a PCIe or Mini PCIe expansion slot, so that I could put a GPU in it and make my Linux dream machine.
Hi, I would love to develop (or find) a fairly cheap but powerful, easy-to-use board computer that is open source (as far as that is possible). Parameters that would be nice:
Speed: around 1 GHz per core, maybe dual core
Graphics: it would be nice to have an easy-to-use GPU; 2D rendering would do, 3D would be a bonus
Memory: RAM > 8 MB
Interfaces: I2C, analog, digital, SPI, I2S; at least 45 pins
WiFi would be a HUGE plus
The main part: easy (enough) to program, Visual Studio IDE compatible, RTOS capable, bare-metal C++
I'm coming from the ESP32-S3 but it won't cut it anymore, and the P4 isn't fast enough.
For this:https://www.reddit.com/r/esp32/comments/1blz53a/new_huge_thread_about_esp_nottebook/
Hey y'all, so I know you can't virtualize x86 on ARM since they have different instruction sets, but from what I have learned, modern architectures use a hybrid of complex and reduced instruction sets.
For example, x86 now has a RISC-like core with CISC compatibility layers (from my limited understanding, kind of like a hardware translation component). Even ARM-based chips have some more complex instructions for specific operations.
Now, with software like QEMU, I can emulate an x86 system on my phone, but it's still pretty slow. So I was wondering about something like "hardware-accelerated emulation", wherein the x86 instruction set being emulated gets, through compiler optimization, some dedicated ARM instructions that significantly improve performance.
I'm curious what a processor design specialist might think about this.
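What this describes is close to what dynamic binary translators like QEMU's TCG already do in software: decode a guest code block once, emit an equivalent host block, and cache it so later executions skip the decode entirely. Existing hardware assists tend to target the hard-to-translate parts rather than adding x86 instructions; Apple's Rosetta 2, for instance, leans on an optional hardware TSO memory-ordering mode. A toy C++ sketch of the translation-cache idea (the guest ops and cache shape are invented for illustration; nothing here is QEMU's actual machinery):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

// Toy dynamic binary translation: guest "instructions" are translated once
// into host closures and cached, so re-executing a block costs only a
// lookup instead of a decode. Entirely hypothetical, not QEMU's real API.
enum class GuestOp { Add5, Double };
using HostBlock = std::function<int64_t(int64_t)>;

struct TranslationCache {
    std::unordered_map<size_t, HostBlock> cache;  // guest PC -> host code
    int translations = 0;                         // counts "decode" work done

    const HostBlock& translate(size_t pc, const std::vector<GuestOp>& block) {
        auto it = cache.find(pc);
        if (it != cache.end()) return it->second;  // hot path: already compiled
        ++translations;
        HostBlock fn = [block](int64_t x) {        // "emit" host code for block
            for (GuestOp op : block)
                x = (op == GuestOp::Add5) ? x + 5 : x * 2;
            return x;
        };
        return cache.emplace(pc, std::move(fn)).first->second;
    }
};
```

The remaining overhead in real translators is mostly in what can't be cached away, e.g. differences in memory-ordering and flag semantics between guest and host, which is exactly where the existing hardware assists aim.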
Hello all, I'm curious to know if there's any way to add threading support to an ARM C++ toolchain. I am trying to cross-compile the rlottie graphics library for an ARM9 MCU (IMX1050) using the MCUXpresso IDE, but I'm getting errors like "mutex is not a member of std". I believe the current toolchain doesn't support threading. Is there any way to add threading support, or to compile it another way? I tried compiling with different C++ standards (C++11 and C++14).
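The usual cause: bare-metal GCC toolchains build libstdc++ without gthreads, so <mutex> simply never defines std::mutex (you can check whether the libstdc++ macro _GLIBCXX_HAS_GTHREADS is set). Rather than rebuilding the toolchain, a common workaround is a minimal BasicLockable wrapper around the RTOS mutex, substituted wherever the library wants std::mutex. A hedged sketch: the FreeRTOS calls in the comments are real names, but wiring this into rlottie's build is an assumption about your setup, and the bool flag only exists to make the sketch testable off-target:

```cpp
#include <cassert>
#include <mutex>  // std::lock_guard is available even when std::mutex is not

// Minimal BasicLockable stand-in for std::mutex on a toolchain built
// without gthreads. On the real MCU the commented-out FreeRTOS calls
// would replace the bool flag, which is only here for demonstration.
class RtosMutex {
public:
    // RtosMutex()  { handle_ = xSemaphoreCreateMutex(); }
    void lock()   { /* xSemaphoreTake(handle_, portMAX_DELAY); */ locked_ = true; }
    void unlock() { /* xSemaphoreGive(handle_); */ locked_ = false; }
    bool is_locked() const { return locked_; }
private:
    // SemaphoreHandle_t handle_;  // the real RTOS primitive
    bool locked_ = false;
};

int shared_counter = 0;
RtosMutex counter_mutex;

void bump_counter() {
    // lock_guard works with any BasicLockable type, not just std::mutex
    std::lock_guard<RtosMutex> guard(counter_mutex);
    ++shared_counter;
}
```

Since the compile errors are the only obstacle, a typedef or small patch in the library mapping its mutex type to a wrapper like this is often enough to get it building.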
What is the main difference between the two architectures in winning the favour of chip makers?