I'm a happy owner of an original VF2. However, people buying this board should be aware that the StarFive JH7110 is not compatible with the RVA23 spec, which Ubuntu says is the bare minimum from now on; that might create some software incompatibilities in the future.
There's nothing that's compatible with RVA23 except QEMU, and very few boards are even partially compatible. Ubuntu's decision is a dumb one. This board should run Fedora or Debian just fine.
> Ubuntu's decision is a dumb one. This should run Fedora or Debian just fine.
You either pull the future forward, or drag the past along. Because of the small market, they decided to avoid creating legacy concerns before RISC-V even starts seeing mainstream adoption.
I like the decision (they are choosing a better foundation) but I can see the merits either way.
It's fine to be forward looking, but if you're so forward looking that you literally support zero hardware on the market, you might have gone too far.
They should have two streams. On the Fedora side we've got Fedora vs. CentOS Stream, with the latter (eventually) tuned more for RVA23 and server boards.
I don't think it's dumb; hasty and premature, perhaps. Manufacturers have been shipping boards with flaky RVV support, years-old kernels, and undocumented blobs on an in-house-baked OS, and calling it a day.
Feels like a step towards strong-arming them into shipping products that can be supported more easily, rather than being left to rot in a drawer.
A lot of people would call a hasty, premature decision "dumb". I generally would, at least.
For release 26.04 next year, it makes more sense because that is a Long Term Support release and a lot of new hardware from then on will be RVA23-compliant.
They recommend 24.04.3 LTS for current hardware. Maybe they just don't want (then) old hardware to be stuck on a non-LTS release.
I don't think it's dumb. There are very strong hints that we can expect SpacemiT K3 RVA23 boards from possibly multiple vendors by perhaps the end of the year and certainly not long after that.
Leaving RVA23 support until 28.04 LTS would be FAR too long.
It would be nice to see both RVA20 and RVA23 supported in the same OS, but the problem is that it's not actually practical to do runtime selection of alternative functions or libraries for all the extensions in RVA23. It is possible and sensible for something such as V, perhaps, but extensions such as Zba and Zbb (not in RVA20, but supported by the VisionFive 2) and Zicond, Zimop, Zcmop, and Zcb have instructions that want to be sprinkled all through just about every function.
You'd have to either deny your main program code the use of all those extensions, or else completely duplicate every program binary.
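To make the dispatch problem concrete, here's a minimal sketch (my own illustration, not anything the distros actually ship) of function-level runtime selection using a GNU IFUNC resolver in C. The feature probe is a stub; on Linux the real query would go through the riscv_hwprobe(2) syscall. This scales to coarse choices like a vectorized memcpy, but not to Zba/Zbb-style instructions the compiler wants to emit inline everywhere:

    #include <stdint.h>

    static uint64_t popcount_scalar(uint64_t x) {
        uint64_t c = 0;
        while (x) { x &= x - 1; c++; }      /* portable RVA20 fallback */
        return c;
    }

    static uint64_t popcount_zbb(uint64_t x) {
        /* built with -march=..._zbb this becomes a single cpop */
        return (uint64_t)__builtin_popcountll(x);
    }

    static int cpu_has_zbb(void) {
        /* stub for illustration; a real probe would check
           RISCV_HWPROBE_EXT_ZBB via the riscv_hwprobe(2) syscall */
        return 0;
    }

    /* the resolver runs once at load time and picks an implementation */
    static uint64_t (*resolve_popcount(void))(uint64_t) {
        return cpu_has_zbb() ? popcount_zbb : popcount_scalar;
    }

    uint64_t popcount(uint64_t x) __attribute__((ifunc("resolve_popcount")));

Every call to popcount() pays an indirect-call cost, which is acceptable for one hot function but not for the address arithmetic in every basic block.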
Performance aside, is it feasible to trap and emulate those extensions in the kernel? Like old ARM softfloat, or linux/arch/x86/math-emu?
Sure, of course, but that's going to cost something like 500 cycles [1] to emulate something that only exists to save 2 or 3 cycles.
Also, not in the kernel but in SBI, i.e. in Machine mode, not Supervisor mode.
[1] estimate based on how long it takes to trap and emulate misaligned accesses on VisionFive 2.
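For the curious, here's a userspace sketch of the same idea: catch the illegal-instruction trap and emulate one Zbb instruction (cpop). A kernel or SBI emulator does the same decode/fix-up dance, which is where those hundreds of cycles per trap go. This is my own illustration and assumes glibc's riscv64 mcontext layout, so it only compiles for RISC-V Linux:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static void sigill_handler(int sig, siginfo_t *info, void *ucv) {
        (void)sig; (void)info;
        ucontext_t *uc = ucv;
        /* riscv64 layout: __gregs[0] = pc, __gregs[1..31] = x1..x31 */
        unsigned long *regs = (unsigned long *)uc->uc_mcontext.__gregs;
        uint32_t insn = *(uint32_t *)regs[0];   /* the faulting instruction */

        /* cpop rd, rs1 : imm12=0x602, funct3=001, opcode=OP-IMM */
        if ((insn & 0xfff0707fu) == 0x60201013u) {
            unsigned rd = (insn >> 7) & 31, rs1 = (insn >> 15) & 31;
            uint64_t x = rs1 ? regs[rs1] : 0, c = 0;
            while (x) { x &= x - 1; c++; }      /* emulate the popcount */
            if (rd) regs[rd] = c;
            regs[0] += 4;                       /* step past the instruction */
            return;
        }
        abort();                                /* genuinely illegal */
    }

    int main(void) {
        struct sigaction sa = { .sa_sigaction = sigill_handler,
                                .sa_flags = SA_SIGINFO };
        sigaction(SIGILL, &sa, 0);
        /* ...now run code compiled with Zbb on a non-Zbb core... */
    }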
Ubuntu's decision was the right one; RVA23 is the first 'full' desktop-class ISA for RISC-V.
Ubuntu is specifically a noob friendly desktop OS. No reason for them to bother supporting slow buggy embedded CPUs.
A person might suggest that it would be more user friendly to support hardware that people own. There's a parallel to years ago, when Ubuntu was a lot like Debian but it was willing to ship non-free firmware by default because that was the hardware that people actually had, which is part of why it was praised as noob friendly.
Is RISC-V the IPv6 of processors?
> Welcome to heise online. We and our up to 185 partners use cookies and tracking technologies.
Consent to profiling is required to read this page. 185 advertising companies.
WTF.
Just open the link in private browser mode. Any cookie you accept will be cleared when you exit the private window.
If you're using Safari (and possibly others) they also (attempt to) anonymize your browser fingerprint, making it harder to track you.
Add cmp.* to your ad blocker.
How do you expect ad auctions to work? Every bidder needs to know what they are bidding on.
They can bid on the page content. If you're advertising embedded systems or the like, you can advertise on a page about embedded boards; you don't need to stalk users in order to bid on them personally.
I expect them to fuck off
This. Also, I cannot imagine this is legal under the GDPR, since your options are: pay, or accept surveillance.
I mean reading some tech news article isn't like rent or groceries or healthcare or something. No one is forcing you to use the website, so just closing the tab is always an option.
If your business model requires privacy infringing tracking to be viable, it's the business model that is the problem.
The VF2 (original) is a stolid workhorse, as long as you only use it for simple CLI-based stuff. The hardware is very well supported by Linux. The performance isn't great, but I used it to fix hundreds of Fedora packages to add RISC-V support, so you can't really beat its effectiveness. You do want to get the version with the maximum amount of RAM, and you'll also need to add an M.2 SSD for storage (one PCIe lane!).
For CLI-based programming work an SSD is unnecessary, an SD card is perfectly fine as long as it's large enough to hold all your stuff and you have enough RAM to cache the frequently-used bits. The difference will be measured in a couple of dozen seconds on an hour-long build.
SD cards have the very great advantage that you can have multiple ones with different OSes or versions and swap them in seconds.
How long does the average SD card last in a build machine? I thought these cards only have a write endurance of a few thousand cycles?
You wouldn't want to use an SD card for a full time build machine at Fedora or Canonical.
It's fine for a normal user doing a few builds a day. Most files never get written to the physical device, living in tmpfs or the disk cache and being deleted before they're flushed to disk.
Because of wear-levelling it's not how many times the busiest file is rewritten, but how many GB are written in total.
512GB cards currently cost $32, $35, or $40 depending on whether you want 100, 150, or 200 MB/s read speeds.
A GCC src tree is 2.5GB, the build tree is 11GB. It's going to take ~40 clean builds to put one write cycle on the whole card, so you'll probably get 100,000 clean builds off a card before you wear it out.
That's around 30 builds per cent.
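The arithmetic roughly checks out. A quick sanity check in C (the ~2,500 P/E-cycle endurance is my assumption, back-derived from the 100,000-build estimate; real cards vary widely):

    #include <stdio.h>

    int main(void) {
        const double card_gb      = 512.0;
        const double gb_per_build = 2.5 + 11.0;  /* src tree + build tree */
        const double pe_cycles    = 2500.0;      /* assumed flash endurance */
        const double price_cents  = 3500.0;      /* the $35 card */

        double builds_per_cycle = card_gb / gb_per_build;       /* ~38 */
        double total_builds     = builds_per_cycle * pe_cycles; /* ~95,000 */

        printf("builds per full-card write cycle: %.0f\n", builds_per_cycle);
        printf("clean builds before wear-out:     %.0f\n", total_builds);
        printf("builds per cent:                  %.0f\n",
               total_builds / price_cents);
        return 0;
    }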
> Most files never get written to the physical device, living in tmpfs or the disk cache and being deleted before they're flushed to disk.
It's fine if you're using tmpfs, but I would expect written files on anything else to make it to disk even if they also spend some time in the write buffer. Especially on a build box where memory pressure is higher.
Can you install it without a serial connection? Mine is still collecting dust...
I've had a couple of original (Kickstarter) VisionFive 2s for 2 1/2 years, use them constantly, and have never used a UART connection with them.
Is PCI like really expensive? I always wonder why these types of boards are so reluctant to add a PCI-Express slot.
I think there are two reasons:
First, SBC processors are just a step up from microcontrollers; they're designed to talk to GPIO devices and servos and UARTs and sensors, not GPUs and network cards. The JH7110 only has two lanes of PCIe 2.0, one of which is used for USB 3.0 and the other mostly (I assume) to provide an M.2 interface. However, it also has 6 UARTs (Universal Asynchronous Receiver-Transmitter, think RS-232 serial), 7 channels of SPI (not counting the directly integrated QSPI flash controller), 8 channels of PWM, 7 channels of I2C, 64 channels of general-purpose IO, I2S, SPDIF, two channels of CAN bus, a directly integrated Ethernet MAC, a USB 2.0 host, a MIPI-CSI camera interface, and a MIPI-DSI video display block that can either output HDMI or directly drive a 24-channel parallel LCD.
Embedded system designers don't want to add a PCIe-to-RS232 card to their industrial robot, NAS, or video camera; heck, they don't want to add an external GPU. They don't even want to add a separate northbridge/southbridge or PCH; they want a single-chip SOM. Going up and down those layers between PCIe and SATA or USB or Ethernet is expensive in terms of chip count and power.
Second, I don't think they want to deal with the drivers. If you want to plug in your choice of PCIe device - be it a GPU, RAID card, sound card, who knows what - that's a level of compatibility and modularity that SBCs are bad at.
Yes. Basic PCI requires at least 64 different data signals (plus Vcc and Gnd) across 124 connector pins. This is expensive in terms of board surface, routing placement and I/O pins.
PCI Express x1 requires only 11 data signals plus Vcc and Gnd across 18 pins, but even the v1 spec from 2003 requires a data rate of 2.5GT/s (as opposed to PCI's data rate of only 33 MT/s). This is a much higher rate than most other data signals usually found on these boards, and rates this high have their own challenges in terms of signal routing.
This is speculation on my part, but:
1. It is non-trivial additional work to add since these are high frequency signals.
2. It is non-trivial additional work to validate.
3. The hardware PCI-E support is likely buggy because it is not well tested and few want to volunteer to spend time working with the SoC supplier on the bugs.
> It is non-trivial additional work to add since these are high frequency signals.
And there it is. Yes, PCI-E 3.0 from 2010, 15 years ago, involves 4 GHz wire-level signals. A 4x PCI-E connector has four such differential pairs, which must not cross-talk or violate EMC limits, etc. This requires excellent layout and high-quality PCBs with enough layers.
Never mind 4.0, 5.0...
People just do not appreciate what their expectations entail. A recent discussion about "soldered" RAM in the Framework Desktop thread illustrates this, where someone just can't accept that there are reasons for the things board designers do. After you get done routing the display connector, multiple ethernet, USB, DRAM and all the other high frequency stuff on a couple square inches of low cost PCB, there isn't much room for the stretch goal of PCI-E good enough to get through EMI testing.
It is possible. Raspberry Pi did it. But it's a question of cost, talent and time-to-market.
Friendly FYI: Some M.2 slots offer up to 4 PCIe lanes.
Though, it looks like on the Lite version mentioned here, there is but one PCIe lane available on the slot.
Edit: Also, size is likely the reason a full regular PCIe slot is not there. The PCIe card would likely be as big as the board itself. :)
Definitely don't assume M.2 == PCIe lanes tho.
The number of PCIe lanes available is typically defined by the CPU in an SoC context, or the lowest common denominator between the CPU and chipset in a traditional motherboard architecture. M.2 defines a physical connector that may connect to different things depending on its intended use; an example is the difference between slots intended for SATA and those for NVMe. Additionally, it is common for lower-bandwidth peripherals like wifi cards to use an M.2 connector while only being wired to a subset of the board's possible PCIe lanes.
https://www.crucial.com/articles/about-ssd/m2-with-pcie-or-s...
> Additionally it is common for lower bandwidth peripherals like wifi cards to use an M.2 connector...
And some of those don't use PCIe at all - the connector can also carry USB signals.
That I hadn't seen, but it is unsurprising. My interest has been in adding an eGPU to relatively low-end boards with the current crop of M.2-to-OCuLink boards, which are an inexpensive way to get better performance than Thunderbolt if you can find an unoccupied M.2 slot with sufficient connected lanes (and can work within tradeoffs like no hot-swap).
https://pcisig.com/pci-express%C2%AE-oculink-specification-r...
Isn't it crazy that we already have RISC-V boards like the RPi? What a time to be alive :D Now we only need a Graphene phone with RISC-V. :) It doesn't matter if it's low-spec or something, as long as it just works :D Like the Nokia where Sailfish was born, or something :D
The very limited software support would probably stop me from buying one.
Even if there are builds or container images for riscv64, they are probably often not tested at all. Sometimes different architectures have weird quirks (unexpected performance issues for some code, unexpected crashes). I guess only very few maintainers will investigate and fix those.
It took quite some time until all software worked perfectly on arm/arm64. When the first Raspberry Pi was released, this was quite a different experience.
Basically everything works, Linux-wise. Is there particular software?
I wonder if the future positioning for RISC-V will be better support from SBC manufacturers than their ARM counterparts manage. Right now they are barely usable outside of Raspberry Pi, which unfortunately has had supply issues.
I know ARM chip makers can just rely on the smartphone, tablet, and Roku markets, but since there is no such market for RISC-V, they sort of have to be good at SBCs.
Espressif seems to have mainstreamed their support for RISC-V alongside Xtensa. Maybe that doesn't count as SBC? But given that Raspberry Pi only has RISC-V cores in the RP2350, it is germane in response to the notion of "barely usable outside of Raspberry Pi."
Sorry, my comment was not about whether RISC-V is in SBCs, but about software and documentation support for SBCs outside of Raspberry Pi being very, very poor.
My hope is that the situation for RISC-V SBCs would be an improvement over ARM SBCs given that chipmakers wouldn't be able to rely on the smartphone market for customers.
Yeah, Raspberry Pi has done a great job with documentation and SDKs for their SBCs and their line of MCUs.
I don't think Raspberry Pi would have been started outside the margins of the smartphone market's economies of scale. Sure, RPis are pretty big now, but the smartphone market created a world where low-power CPUs and a lot of other components are available at all. My recollection is that as RPi got further away from standard chips, they struggled to balance retail availability with servicing their commercial contracts.
RISC-V, to me, seems more of an IP hedge for chipmakers who may find themselves constrained in designs or distribution in the future because the IP is controlled by potentially unfriendly companies or jurisdictions. Sure, there are some licensing fees/certifications that are friction, but the goal is independence even at the cost of redundant effort in chip and compiler design.
Well, I don't know what market these companies putting out RISC-V processors and SBCs are going for, but I'm hoping they will take support at least as seriously as Raspberry Pi. Maybe none of them will, but I'd hope that, given that they can't get those economies of scale, they'd make the most of the hobbyist SBC market.
I would love it if this had a more open graphics processor; the IMG BXE requires closed-source firmware, so it's not really great for hobby OS development.
Can't beat that price, just be warned that it's slower than a Pi 3, probably similar to a Pi 2.
It is not! That is super misleading.
For normal program code it is closer to a Pi 4 than to a Pi 3, similar to all the very popular A55 boards that have come out more recently than the Pi 4.
The only way it is slower than a Pi 3 is if the Pi 3 program is using SIMD (Neon), which the VisionFive 2 lacks.
The worst part of this Pi 3 comparison is that the Pi 3 has only 512MB or 1GB of RAM, which is extremely limiting in the modern world. This RISC-V board comes with a *minimum* of 2GB and is available with 8GB for $37.
The RAM difference alone makes many things possible that are impossible on a Pi 3, and many other things much faster, regardless of the raw CPU speed.
And then you have the M.2 NVMe SSD, something that neither the Pi 3 nor Pi 4 supports, which again makes a whole raft of things much faster, even if the single lane means it can "only" do something near 400 MB/s (vs SD cards at 40 or 80 MB/s).
I still don't understand why there are no really competitive RISC-V cores in any segment, other than possibly the very race-to-the-bottom one.
I don't think designing a fast CPU gets significantly easier with RISC-V. Yes, you don't have to design an instruction set, but you still have to pick a good set of RISC-V extensions, find the right mix of cache size, branch predictor memory size, number of integer and float ALUs, number of rename registers, vector size, etc., glue the parts together so that it all works, and build something without hitting patents that others hold.
'Cheaper' only comes into view if you're selling millions of devices, and even then there have been other designs that are similarly open for which you can't buy really competitive cores.
Reply to self: one thing that RISC-V will give you that your own custom ISA won’t give you is compilers. That can be a big advantage, as it makes porting OSes and applications much easier.
> I don’t think designing a fast CPU gets significantly easier with RISC-V.
Waterman, and probably his advisor Patterson, might disagree. The focus of the RISC-V design is avoiding aspects of legacy ISAs that make them harder to implement.
RISC-V has certainly managed to avoid obvious footguns like delay slots or register windows. OTOH there seems to be a lot of people who think RISC-V went too far down the "RISC purity" rabbit hole, and that relying on the C extension is not a good substitute e.g. for lack of more complex addressing modes. Those same people might instead suggest something like aarch64 as an example of a good general purpose ISA.
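A tiny illustration of the addressing-mode point: loading a[i] with 32-bit elements and a 64-bit index. The instruction sequences in the comments are typical compiler output, quoted from memory, so treat them as approximate:

    /* aarch64:      ldr    w0, [x0, x1, lsl #2]   -- one instruction
     * RV64I base:   slli   t0, a1, 2
     *               add    t0, a0, t0
     *               lw     a0, 0(t0)               -- three instructions
     * RV64 + Zba:   sh2add t0, a1, a0
     *               lw     a0, 0(t0)               -- two; hence Zba
     */
    int load(const int *a, long i) { return a[i]; }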
Secondly, for a high performance core, the consensus seems to be that the ISA mostly doesn't matter. The things that make a high performance core are mostly things that happen downstream of the instruction decoders. Heck, even the x86 ISA allows producing some pretty amazingly good cores. Conversely, for a simple in-order cheap core, the ISA matters much more.
x86 doesn't have any of the stuff that's hostile to high performance, it was perfectly positioned (to be clear, this was pure luck) to exploit the evolution of superscalar, out-of-order processors.
It has a couple of them. Flag registers, global rounding modes, relatively small page size, strong memory ordering guarantees, complicated decoding (and I'm sure there are a few more). It's more that it was good enough and managed to sufficiently mitigate most of the problems pretty well.
OTHER than the ridiculous instruction decoding, which is still today very slow for cold code on even the newest x86 cores.
Once you've decoded the crazy x86 instructions into µops in the µop cache then, yes, it avoided the worst CISC mistakes of multiple memory accesses (and potential page faults) in one instruction via having only one memory operand per instruction and not having indirect addressing.
I think x86 would have been long dead if it weren’t chosen for the IBM PC or if it weren’t married to Intel. The former guaranteed it customers, the latter superior manufacturing technology.
It's "batteries not included": you've got to do your own integration work rather than just license from ARM. And chip companies are pretty risk averse.
People generally don't buy instruction sets, they buy solutions.
Because the very bottom is the only segment where it's worth not just paying Arm for a core, with its much better ecosystem support and ISA.
I agree with you in a temporal & non-proprietary sense.
Temporally, because (knock on wood) RISC-V is going to take over the rad-hard space market, between Microchip/NASA's High Performance Spaceflight Computing processor [1] and the Gaisler chips [2].
In a non-proprietary sense, because much of Nvidia's (NVDA's) silicon is alleged to be RISC-V.
[1] https://www.microchip.com/en-us/products/microprocessors/64-...
[2] https://www.gaisler.com/secondary-product-category/rad-hard-... see GR765 & GR801
I am curious: from a security point of view, what are the security tradeoffs and potential advantages of using this type of board today?
I will bet there are binary blobs you must load in the kernel to make this run.
Or maybe not. Who knows? It would be nice if that were front and center in any review, but it never is. Which leads me to believe it's chock-full of binary garbage.
What's with that cookie dialog on some German sites? I thought I didn't understand it because it was in German, but this one seems to be translated and I still can't figure it out.
Had to close the site without reading the article; does anyone have alternate links?
Yeah, that was the most annoying cookie banner I've seen in a while.
Better alternative posts that don't coerce you into agreeing to tracking/etc.:
https://liliputing.com/starfive-visionfive-2-lite-is-a-cheap...
https://www.cnx-software.com/2025/08/07/visionfive-2-lite-lo...
It's a relatively annoying example of obviously bad faith 'consent' UI.
I've decided to treat these as signs that the organization running them is either dishonest or incompetent.
It's just 3 buttons, the first one ("Zustimmen" / Accept) closes it right away.
I'm not clear on what I'm agreeing to, though. Tracking, I think.
https://archive.is/bTEse
Can something like this be powered by PoE? Apparently PoE HATs for the RPi are compatible with the VF2, but results vary.
The easiest way would be to buy a PoE adapter that takes in PoE and splits to Ethernet + USB-C power plug. I use those with many boards (even Pis from time to time, if a HAT won't fit), and they work as long as you only need a few watts.
Does it have any onboard flash? Embedded SBCs that have no wireless and no onboard state (like the RPi3 without wifi) are quite useful for security applications like offline cold signing/CAs.
The RPi 4/5 have a flashable boot ROM now, so they don't qualify any longer. The 1/2/3 load their second-stage bootloader from the micro-SD; their first stage is burned in at the factory and cannot be modified. If you remove the SD and physically destroy it, they cannot persist state or exfiltrate data.
You probably want to consider the OrangePi RV2 board instead (I wrote about it here: https://boilingsteam.com/orange-pi-rv2-new-risc-v-board-revi...). I own the original VF2 as well, and the RV2 from OrangePi is much faster, and software support is miles better too.
Yep. I looked at that too: https://taoofmac.com/space/reviews/2025/05/12/2230
My only gripe is that the OpenWRT image (still) doesn't have Wi-Fi support for some reason.
You can get a Pico 2 with a RISC-V core (a much less capable platform, though) for under 5 euros.
The Pico 2 is a microcontroller board that may run something like CircuitPython. The VisionFive 2 is a 64-bit SBC capable of running Linux.
The Milk-V Duo is also under 5 euro, but is a full 1.0 GHz 64-bit Linux machine with an MMU, FPU, and 128-bit vector unit. Only 64MB RAM, but the only slightly more expensive Duo 256M and Duo S (512MB) increase that to Pi Zero levels for still under $10.
Unrelated to the contents of the article itself, but this page is a great example of the UI ramifications of GDPR. On mobile, I get a full screen popup, and there appears to be an "accept all" button, but no "reject all" button. I'm grateful to have tools like uBlock Origin's element zapper for pages like this.
For those who don't, here's a version of the page with no full-screen banner: https://archive.is/bTEse
> this page is a great example of the UI ramifications of GDPR.
No, this is an example of malicious compliance. There are so many bad GDPR banners because the people creating them want you to be annoyed by them. They want to have the easy path being the one that lets them collect as much data as they can and the most private path is as annoying as they believe they can get away with under the law. They want people complaining that the GDPR did nothing but cause all these annoying banners.
It'd be possible for many if not most web sites to not have such banners at all by simply defaulting to privacy-friendly behaviors, but there's too much money to be had in the behaviors the GDPR seeks to reduce.
There is a court ruling that German web sites have to make "Reject All" as simple as "Accept All".
This site is not using a loophole. It is clearly in violation.
I honestly can't say I have ever in my life seen a GDPR banner where "Reject All" wasn't in some way more effort than "Accept All". The best I can think of at least keep it the same size and inline, but usually mess with the colors to make it less contrasting and thus harder to identify.
> this page is a great example of the UI ramifications of GDPR.
Not having an option to reject that is as convenient as the one to accept is not compliant with GDPR.
You have the option of closing the window/tab, seems easy enough to me.
I am not a lawyer, but I am given to understand that the GDPR does not consider that "option" sufficient.
The linked site provides the same choice that other publications do, my guess is that they have all checked this with lawyers.
Guidance from the regulators has been abundantly clear on this point. You'll notice all the big players have a "reject all" button because they would get fined otherwise. We're well past the point where anyone can make a reasonable excuse of ignorance, making it onerous to opt out is simply banking on lax enforcement.
I, for one, think it's time to start busting some proverbial kneecaps if we ever want publishers to take the matter seriously. The other alternative is to outlaw the collection of personal information without a legitimate purpose (consent or no) _and then_ come down hard on violators. The industry has had ample time to regulate itself and has chosen profit over ethics at every opportunity.
The linked site states that it does not collect personal information until you click on the "agree" button, in what way is that non-compliant with the GDPR?
Stop blaming GDPR, blame the website.
uBlock took care of the cookie law nonsense for me automatically. The internet really is unusable without it.
GDPR is not that bad actually. The Internet is bad and making itself unusable.