/r/arm
I'm debating grabbing an ARM Snapdragon laptop versus an AMD Ryzen AI 9 laptop. My concern is that I really enjoy older and more obscure software (not too obscure), and I don't know how often I would run into compatibility issues.
These two are really similar in build, but the Snapdragon has longer battery life and noticeably less weight.
Any advice?
https://nanoreview.net/en/laptop-compare/hp-omnibook-x-vs-hp-omnibook-ultra-14?m=r.1-and-c.2_r.1
I invested in Arm before the earnings call, but I don't know the reason for its fall even though it reported positive earnings.
Today, Google released their Axion instances powered by custom ARM chips, and we got early access to test them out for our internal workloads. The results were impressive!
I am sharing our blog post with the results and observations, and I'm happy to discuss if anyone got similar results.
https://cloudfleet.ai/blog/partner-news/2024-10-google-cloud-new-arm-instance-axion/
Just got an interview at ARM next week. Does anyone know what type of questions they ask?
Hi, I've tested this code on the MSP430G2553 LaunchPad and it worked well, but in Proteus it doesn't, as shown in the image. What could be the problem here?
#include <msp430.h>

/**
 * main.c
 */
volatile unsigned int i = 0;    // volatile so the delay loop isn't optimized away;
                                // unsigned because 50000 overflows a signed 16-bit int

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;         // stop watchdog timer
    P1DIR |= 0x01;                    // configure P1.0 (red LED) as output
    while (1) {
        P1OUT ^= 0x01;                // toggle P1.0
        for (i = 0; i < 50000; i++);  // crude software delay
    }
}
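If the simulator problem persists, one thing worth ruling out is the compiler handling the software delay loop differently between builds. A minimal sketch using the __delay_cycles() intrinsic (supported by both CCS and msp430-gcc) sidesteps the loop entirely; the 100000-cycle count is my assumption based on the roughly 1 MHz default DCO clock:

#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;    // stop watchdog timer
    P1DIR |= 0x01;               // P1.0 as output

    while (1) {
        P1OUT ^= 0x01;           // toggle P1.0
        __delay_cycles(100000);  // ~100 ms at the ~1 MHz default DCO
    }
}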
I was looking at how to run some Steam games on Linux and they were using Proton... that got me thinking: will Proton work on Linux on an ARM machine?
Hi everyone,
I have a device with the Snapdragon 8 Gen 2 chipset, and I'm planning to install Windows on it using QEMU for virtualization. I know that the Adreno 740 GPU in this chipset supports DirectX 12, but I'm having trouble finding the appropriate drivers for it under Windows.
Has anyone successfully managed to run Windows on ARM with Snapdragon 8 Gen 2 and get DirectX 12 working? Are there any drivers available or workarounds to get hardware-accelerated graphics (DirectX, OpenGL, or Vulkan) on this platform? Any help with driver sources, tweaks, or tips on optimizing performance in a virtualized environment would be greatly appreciated.
Thanks in advance!
Windows on Arm #ReleaseTheNvidiaDriver
Hello, everyone.
I'm trying to find a way to use a kernel newer than 4.9 on the Jetson Nano.
To achieve this, I've emulated Raspberry Pi OS (based on Debian Bookworm, running kernel 6.x) and enabled KVM nested virtualization inside of it.
The Jetson Nano has 4 CPUs, so 2 can be assigned to Ubuntu and 2 to Raspberry Pi OS, or maybe 1 and 3.
Now the question is: can I do passthrough of the Jetson Nano GPU from the host OS (Ubuntu 22.04) to the guest OS (Raspberry Pi OS / Debian Bookworm)?
If it can be done, what will happen? Will the GPU be usable within Debian?
Can VMware ESXi for ARM be useful in this scenario?
Was lucky enough to get invited to do a HireVue asynchronous video interview as the first round for hardware engineering. Wondering whether this is going to be technical or just behavioral, and if anyone has any tips. Thanks!
This is my New Architecture on RISC
Hey everyone,
I’m seriously considering moving my homelab to an Ampere-based server setup, but I have a few questions and concerns I’m hoping the community can help with.
Lastly, I had a Mac Studio for a while and wasn’t satisfied with the server options available. So now I’m keen on exploring the Ampere route but would appreciate any insights or advice from those who have already made the leap!
Thanks in advance for any help!
I suck at titling things, so let me explain.
I want to build a dedicated AI server to run LocalAI and adjacent tools with a Radeon Instinct (because they're cheaper...), and I was looking at the performance of the ARM CPUs I have used so far: the Rockchip RK3588, the Ampere Altra (of which my VPS has 4 cores), and the ones built into the Raspberry Pi.
But going from an RK3588 to Ampere is such an insane price jump that I wondered: is there really nothing in between?
The RK3588 has amazing performance and has been a "rock" solid solution for me and my homelab. But it caps out at 8 cores, and its PCIe interface would be an insane bottleneck when plugging in a Radeon Instinct... so I am looking for something above the RK3588 but below the 32-core Ampere (Q32-17).
Does that exist?
Arm has device/IO memory types in the CHI specs: RE, nRE, RnE and nRnE. As far as I understand it, nR means no reordering of loads and stores. That is why all loads and stores to this device type should be ordered: one LD/ST operation completes before the next one starts. My question: most systems are based on a weak memory model, so why do we need this kind of ordering for the ARM Device type nR?
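The usual motivation is that device registers have side effects, so the weak model that works fine for Normal memory breaks for MMIO. A hypothetical sketch (register names and addresses invented purely for illustration) of why store order matters to a peripheral:

#include <stdint.h>

// Hypothetical DMA engine register layout (addresses invented):
#define DMA_SRC   (*(volatile uint32_t *)0x50000000)  // source address
#define DMA_LEN   (*(volatile uint32_t *)0x50000004)  // transfer length
#define DMA_START (*(volatile uint32_t *)0x50000008)  // writing 1 starts DMA

void start_dma(uint32_t src, uint32_t len)
{
    DMA_SRC   = src;  // must arrive at the device before START
    DMA_LEN   = len;  // must arrive at the device before START
    DMA_START = 1;    // side effect: kicks off the transfer
    // If the interconnect were free to reorder these stores, the engine
    // could start with a stale source or length. Mapping the region with
    // an nR attribute is what forbids that, independently of the weakly
    // ordered rules that apply to Normal memory.
}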
Hello, everyone.
While I was looking for a way to enable nested virtualization on my Jetson Nano (after having enabled KVM by applying these patches):
https://github.com/OE4T/linux-tegra-4.9/blob/oe4t-patches-l4t-r37.4/
When I googled for more information, I found these interesting threads.
On the first site, he says:
can be enabled by "-M virt,accel=kvm,virtualization=on" when starting a VM
Good, I could try, but I'm not using QEMU directly (I've installed QEMU version 9); I'm using virt-manager version 4.0. Maybe I should upgrade it?
In virt-manager I don't see how I can specify those parameters. Anyway, I'm not sure that it will work, because on the second thread he said to use:
-append "kvm-arm.mode=nested" \
Where is the truth?
Many thanks.
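For what it's worth, virt-manager's UI has no field for raw QEMU flags, but libvirt's domain XML (edited with virsh edit) accepts a qemu:commandline passthrough block. A sketch of the mechanism only; note that libvirt already emits its own -machine option, so whether this duplication is accepted is something to verify on your setup:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... existing domain definition ... -->
  <qemu:commandline>
    <!-- equivalent of: -M virt,accel=kvm,virtualization=on -->
    <qemu:arg value='-M'/>
    <qemu:arg value='virt,accel=kvm,virtualization=on'/>
  </qemu:commandline>
</domain>

If I'm reading the two threads right, they aren't mutually exclusive: virtualization=on exposes EL2 to the guest CPU, while kvm-arm.mode=nested is a kernel command-line parameter for the kernel whose KVM will host the nested guests.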
I have used the RNDR register as follows:
mrs x0, RNDR
But while compiling, the assembler throws: Error: selected processor does not support system register name 'rndr'
I have tried passing -march=armv8.5-a and -mcpu=cortex-a72. But no luck.
Any help would be appreciated.
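A sketch of what usually resolves this (hedged, since I can't see the full build setup): RNDR is gated behind the rng extension, which -march=armv8.5-a alone doesn't enable, and Cortex-A72 (Armv8.0) doesn't implement FEAT_RNG at all. With GCC or Clang, -march=armv8.5-a+rng satisfies the assembler; alternatively, the generic encoded name s3_3_c2_c4_0 assembles even without any -march flag:

#include <stdint.h>

// Note: assembling is not enough. The CPU you run on must implement
// FEAT_RNG (check ID_AA64ISAR0_EL1.RNDR), or the read traps at runtime.
static inline int read_rndr(uint64_t *out)
{
    uint64_t val, ok;
    __asm__ volatile(
        "mrs %0, s3_3_c2_c4_0\n"  // RNDR by its generic encoded name
        "cset %1, ne\n"           // RNDR sets NZCV; NE => valid random value
        : "=r"(val), "=r"(ok) : : "cc");
    *out = val;
    return ok ? 0 : -1;           // -1: entropy temporarily unavailable
}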
I'm writing a compiler project for fun: a minimalistic-but-pragmatic ML dialect that is compiled to AArch64 asm. I'm currently compiling Int and Float types to x and d registers, respectively. Tuples are compiled to bunches of registers, i.e. completely unboxed.
I think I'm leaving some performance on the table by not using SIMD, partly because I could cram more into registers and spill less, i.e. 64 floats instead of 32. Specifically, why not treat a (Float, Float) pair as a datum that is loaded into a single q register? But I don't know how to write the SIMD asm by hand, much less automate it.
What are the best resources to learn AArch64 SIMD? I've read Arm's docs but they can be impenetrable. For example, what would be an efficient style for my compiler to adopt?
Presumably it is a case of packing pairs of f64s into q registers and then performing operations on them using SIMD instructions when possible, but falling back to unpacking, conventional operations and repacking otherwise?
Here are some examples of the kinds of functions I might compile using SIMD:
let add((x0, y0), (x1, y1)) = x0+x1, y0+y1
Could this be add v0.2d, v0.2d, v1.2d?
let dot((x0, y0), (x1, y1)) = x0*x1 + y0*y1
let rec intersect((o, d, hit), ((c, r, _) as scene)) =
let ∞ = 1.0/0.0 in
let v = sub(c, o) in
let b = dot(v, d) in
let vv = dot(v, v) in
let disc = r*r + b*b - vv in
if disc < 0.0 then intersect2((o, d, hit), scene, ∞) else
let disc = sqrt(disc) in
let t2 = b+disc in
if t2 < 0.0 then intersect2((o, d, hit), scene, ∞) else
let t1 = b-disc in
if t1 > 0.0 then intersect2((o, d, hit), scene, t1)
else intersect2((o, d, hit), scene, t2)
Assuming the float pairs are passed and returned in q registers, what does the SIMD asm even look like? How do I pack and unpack from d registers?
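A minimal sketch of one possible style, written in C with NEON intrinsics so the corresponding asm can sit alongside. The function names are mine, and the asm comments show what a compiler typically emits; note the lane-wise f64 add is fadd, not add:

#include <arm_neon.h>

// A (Float, Float) pair kept packed in one q register.
typedef float64x2_t pair;

// let add((x0, y0), (x1, y1)) = x0+x1, y0+y1
pair add_pair(pair a, pair b)
{
    return vaddq_f64(a, b);              // fadd v0.2d, v0.2d, v1.2d
}

// let dot((x0, y0), (x1, y1)) = x0*x1 + y0*y1
double dot_pair(pair a, pair b)
{
    float64x2_t prod = vmulq_f64(a, b);  // fmul v0.2d, v0.2d, v1.2d
    return vaddvq_f64(prod);             // faddp d0, v0.2d  (across-lanes add)
}

// Unpacking a lane into a plain d register, and repacking:
double high_lane(pair a)
{
    return vgetq_lane_f64(a, 1);         // mov d0, v0.d[1]
}

pair set_high_lane(pair a, double y)
{
    return vsetq_lane_f64(y, a, 1);      // mov v0.d[1], v1.d[0]  (ins)
}

So the add above is a single fadd v0.2d (not add, which is the integer form), and the dot is an fmul plus faddp. A reasonable compiler strategy is exactly the pack/operate/unpack fallback described above, with faddp and the lane mov/ins forms doing the horizontal reductions and repacking.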
Hello, I am trying to use Steam with Box86 through Pi-Apps, and whenever I try to launch Steam, it opens in the background and does not display any GUI. Any help would be appreciated!!
I wonder why we can't get privileged access on an Android phone the way we can when booting a computer. On a computer, we can change the operating system and get root permission.
I'm planning to make an ARM-based device. Is placing the device tree file in the Das U-Boot configuration enough, or do I have to place it in the Linux configuration too?
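In the common flow, U-Boot has its own device tree for its drivers, and the kernel gets a DTB that U-Boot loads and passes at boot, so the kernel doesn't have to embed one. A minimal sketch using U-Boot's distro-boot extlinux.conf (file names and paths invented for illustration):

# /boot/extlinux/extlinux.conf -- paths and names are illustrative
LABEL linux
    KERNEL /boot/Image
    FDT /boot/myboard.dtb
    APPEND console=ttyS0,115200 root=/dev/mmcblk0p2 rw

The kernel tree still needs your board's .dts compiled to a .dtb (typically by adding it to the dts Makefile for your architecture), but that's a build artifact the bootloader hands over, not kernel configuration per se.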
Microchip's frontline technical support help desk is of no use here. What else is new?
So, I'm trying to get a deeper understanding of the inner workings of my Cortex-M0+ and friends microcontrollers.
I understand the difference between an exception and an interrupt. I understand how the individual peripherals have individual IRQ lines that go to the NVIC. I understand that the core fielding an interrupt/exception will switch to Handler mode, set the Exception Number in the IPSR, reach into the IVT based on the exception number, save state, and jump to the exception handler.
What I don't have down is the coupling between the NVIC and the core. When the NVIC decides that it's an opportune moment to apprise the core of the fact that IRQ[x] needs to be serviced, it's the HOW of that process that yet eludes me. When the NVIC decides on the value of x there, how does it communicate that value to the core to get the ball rolling toward an eventual ISR dispatch? Is there a dedicated, hidden register where zero means no ISR needs to be dispatched, and otherwise it holds the exception number of the ISR that does? Is it a dedicated bus that the NVIC alone writes to and the core(s) alone read, such that new traffic on it starts the process?
At some point, some part of the core has to do:
if (condition)
{
core_isr_dispatch(x);
}
What is that condition? How does it obtain the value of x?
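For what it's worth, the NVIC is part of the core itself, and the handshake is a private hardware interface with no software-visible register carrying x. A toy C model of the usual description (the NVIC continuously presents its resolved winner, and the core's "condition" is a priority comparison), entirely conceptual and with invented names:

#include <stdint.h>
#include <stdio.h>

#define NUM_IRQ 32
static uint32_t pending, enabled;   // NVIC state, one bit per IRQ
static uint8_t  prio[NUM_IRQ];      // configured priorities (lower = higher)
static int      current_prio = 256; // 256 = thread mode, nothing active

// NVIC side: combinational logic continuously resolving the
// highest-priority pending+enabled IRQ and presenting it to the core.
static int nvic_highest(void)
{
    int best = -1;
    for (int i = 0; i < NUM_IRQ; i++)
        if ((pending & enabled & (1u << i)) &&
            (best < 0 || prio[i] < prio[best]))
            best = i;
    return best;
}

// Core side: the "condition" is a priority comparison, and "x" arrives
// over the same internal interface as the request itself.
static void core_step(void)
{
    int x = nvic_highest();
    if (x >= 0 && prio[x] < current_prio) {  // the condition
        pending &= ~(1u << x);               // IRQ becomes active
        current_prio = prio[x];
        printf("dispatch ISR: IRQ %d, exception number %d\n", x, 16 + x);
    }
}

int main(void)
{
    enabled = (1u << 1) | (1u << 2);
    prio[1] = 2; prio[2] = 1;   // IRQ2 outranks IRQ1
    pending = enabled;
    core_step();                // dispatches IRQ2 first
    return 0;
}

On real silicon this "loop" is just wiring: the NVIC's winner and its priority are compared against the current execution priority every cycle, and when the comparison fires, the core performs the stacking/IPSR/vector-fetch sequence already described above.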
As we continue to evolve, Armbian is proud to introduce our latest release, packed with enhancements, new hardware support, and important upgrades that will further solidify the stability and performance of your systems.
Key Highlights
Platinum Support and Community Contributions
Our focus remains on boards with platinum support, where vendors assist us in mitigating costs, ensuring top-tier support and contributing to open-source efforts. If you’re looking for the best-supported boards, we highly recommend selecting from this category.
Armbian remains a community-driven project. We cannot maintain this large and complex ecosystem without your support. Whether it’s rewriting manuals, BASH scripting, or reviewing contributions, there’s a place for everyone. Notably, your valuable contributions could even earn you a chance to win a powerful Intel-based mini PC from Khadas.
Production Use Recommendations
For production environments, we recommend:
Recognizing Our Contributors
We extend our deepest gratitude to the remarkable contributors who have played a pivotal role in this release. Special thanks to: ColorfulRhino, igorpecovnik, rpardini, alexl83, amazingfate, The-going, efectn, adeepn, paolosabatino, SteeManMI, JohnTheCoolingFan, EvilOlaf, chainsx, viraniac, monkaBlyat, alex3d, belegdol, kernelzru, tq-schmiedel, ginkage, Tonymac32, schwar3kat, pyavitz, Kreyren, hqnicolas, prahal, h-s-c, RadxaYuntian and many others.
Our dedicated support staff: Igor, Didier, Lanefu, Adam, Werner, Metka, Aaron, and more, deserve special recognition for their continuous efforts and support.
Join the Armbian Community
Armbian thrives on community involvement. Your contributions are crucial to sustaining this vibrant ecosystem. Whether you’re an experienced developer or just getting started, there’s always a way to contribute.
Thank you for your continued support.
The Armbian Team
Hi all,
I'm trying to do some pretty high-speed stuff (60 MHz) on a Teensy 4.0 dev board running at 600 MHz.
Basically I want to read an 8-bit port on the rising edge of the 60 MHz clock.
Does anyone know how many clock cycles the below pseudo-code would take? I'm trying to get an idea of whether this is even doable with the Teensy 4.0.
The below would be inside an ISR that is tied to the 60 MHz clock.
bool found = false;

if (PORTA == 0x45) {   // PORTA: placeholder for one 8-bit GPIO port read
    found = true;
    __disable_irq();   // i.e. "disable interrupt"
}
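For a rough budget check (my arithmetic, not a measurement): 600 MHz / 60 MHz = 10 CPU cycles per clock edge, and Cortex-M7 exception entry alone typically costs on the order of 10 to 12 cycles before the first ISR instruction runs, so a per-edge ISR has no headroom even before the compare executes.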
Hello, I have been contemplating buying a new Qualcomm-based laptop for the start of my Computer Science course at university. I imagined the chip's efficiency and battery life would be ideal and it would be plenty powerful enough. I am thinking of the Microsoft Surface 7: the 13" X Plus or the 15" X Elite, depending on which screen size I prefer when I look at them in person, as well as their cooling solutions. I was wondering what ARM-based compatibility is like for development tools and other essential computer science software. Would it be worth going with ARM, or would there be too many issues? Many thanks!
Hi.
My first post. Sorry if I make any mistakes in writing.
My question is: can we remove the ARM processor from an Android device, place it on a USB stick, an ESP32, or a similar circuit, and use it with a PC?
Thanks