sylware's comments | Hacker News

But with hardware IP locks like x86_64.

Better to favor RISC-V implementations as much as possible.

But I don't know if there are already good modern-desktop-grade RISC-V implementations (in the US, SiFive is moving fast as far as I know)... and the hard part: accessing the latest and greatest silicon processes from TSMC, i.e. the ones that reach ~5GHz.

Those markets are completely saturated, so at best adoption will be very slow unless something big happens: for instance, AMD adapting its best micro-architecture to RISC-V (mostly a matter of ISA decoding), etc.

And if Valve starts to distribute a client with a strong RISC-V game compilation framework...


This is kind of a solution in search of a problem. RISC-V will grow only if people find some value in it, i.e. if it solves their actual problems in ways that other architectures can't.

Yeah, the primary reason RISC-V exists is political (the desire to have an "open source" CPU architecture). As noble as that may be, it's not enough to get people or companies to use (or even manufacture!) it. It'll be either economics (cost) or performance (including efficiency) that drives people.

It took ARM decades to get to where it is, and that involved a long stint in low-margin niche applications like embedded or appliances, where x86 was poorly suited due to heat and power consumption.


I don't think that's the primary reason there's momentum there. The reason is to avoid ARM licensing fees and IP usage restrictions.

I think you'll see ever more accelerating RISC-V adoption in China if the United States continues on its "cold war" style mentality about relations with them.

That said we're a long long way from Actually Existing RISC-V being at performance parity with ARM64, let alone x86.


Yep, licensing fees and IP usage restrictions are a massive decision point in some silicon markets.

The other massive point: RISC-V integrates a lot of accumulated CPU design knowledge into a very elegant "sweet spot".

And it is not China only: the best implementations are from the US, and RISC-V is a US/Berkeley initiative, re-centered in Switzerland for "neutrality" reasons.

If good, large RISC-V implementations do reach TSMC's leading silicon processes (~5GHz), some markets won't even look at ARM or x86 anymore.

And there is the ultimate "standard ISA" point: assembly-written code then becomes very appropriate, hence a strong de-coupling from those very few, backdoor-injecting compilers.

On many of my personal projects I don't bother anymore: I write RISC-V assembly which I run with a small x86_64 interpreter, plus a very simple pre-processor and assembler, i.e. an SDK toolchain with complexity close to zero compared to the other abominations.
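The commenter's toolchain isn't public, but the core idea — a tiny interpreter stepping through RISC-V instruction words — is small enough to sketch. Below is a minimal, illustrative RV32I interpreter handling just `addi`, `add`, and `ebreak`; everything here (register file as a list, pre-assembled words) is an assumption for the sake of the example, not the commenter's actual code.

```python
# Minimal sketch of a RISC-V interpreter: executes a few RV32I
# instructions (addi, add, ebreak) from pre-assembled 32-bit words.

def sign_extend(value, bits):
    """Sign-extend `value` from `bits` width to a Python int."""
    mask = 1 << (bits - 1)
    return (value & (mask - 1)) - (value & mask)

def run(program):
    """Execute a list of 32-bit RV32I instruction words; return registers."""
    regs = [0] * 32
    pc = 0
    while pc < len(program) * 4:
        insn = program[pc // 4]
        opcode = insn & 0x7F
        rd = (insn >> 7) & 0x1F
        rs1 = (insn >> 15) & 0x1F
        rs2 = (insn >> 20) & 0x1F
        if opcode == 0x13:            # OP-IMM: addi rd, rs1, imm
            imm = sign_extend(insn >> 20, 12)
            if rd:                    # x0 is hardwired to zero
                regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
        elif opcode == 0x33:          # OP: add rd, rs1, rs2
            if rd:
                regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
        elif insn == 0x00100073:      # ebreak: stop execution
            break
        pc += 4
    return regs

# addi x1, x0, 5 ; addi x2, x0, 7 ; add x3, x1, x2 ; ebreak
prog = [0x00500093, 0x00700113, 0x002081B3, 0x00100073]
```

Running `run(prog)` leaves 12 in x3, showing how little machinery a base-ISA interpreter actually needs.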

And I think the main drawback is: big mistakes will be made, and you must account for them.


Standard ISA being rv64gc? Isn't MIPS II easier to emulate? It has a less funky encoding.
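The "funky" part of rv64gc versus MIPS's uniform 32-bit words is the C (compressed) extension: instruction length is determined by the low bits of the first halfword. A sketch of the length rule (the example halfwords are illustrative):

```python
# RISC-V length determination: the low two bits of the first halfword
# of an instruction tell the decoder whether it is a 16-bit compressed
# (C extension) instruction or a standard 32-bit one.

def rv_insn_length(first_halfword):
    """Return the instruction length in bytes per the RISC-V length rule."""
    if first_halfword & 0b11 != 0b11:
        return 2   # compressed (C extension) encoding
    return 4       # standard 32-bit encoding (longer formats are reserved)

# 0x0093: low bits 0b11 -> 32-bit insn; 0x0095: low bits 0b01 -> 16-bit.
```

MIPS II has no such variable-length wrinkle, which is the point being made.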

There are tons of RISC-V SoCs and mini-boards. Easy and inexpensive native ports...

That might be true for the desktop, but RISC-V is wonderful from a pedagogical and research standpoint for academic use, and in the embedded world its license and "only pay for what you need" model are also quite nice.

> SiFive is moving fast as far as I know

Worked with their cores in $pastJob. I'd say their main products are flowery promises and long errata sheets.


Which models? Which nasty issues did you encounter?

Is anybody aware of a public (severely limited) token I can use to test Claude's coding ability? You know, using curl.

I am itching to test Claude on assembly coding and on C++-to-plain-and-simple-C ports.


Anybody: can I test Claude's coding without a WHATWG cartel web engine? A web API using curl with a "public" token? Anything else?

I am itching to test its ability to code assembly.


Wait, I can get such a key and perform Gemini API requests with curl? (Probably limited in some ways.)
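Yes — the Gemini REST endpoint takes plain JSON over HTTPS, no browser engine needed. A stdlib-only sketch is below; the model name and the availability/limits of free keys are assumptions, so check the current Google docs before relying on them.

```python
# Sketch of a raw Gemini REST call with no browser involved, standard
# library only. Endpoint shape per Google's generative-language REST
# API; model name ("gemini-1.5-flash") is an assumption.
import json
import urllib.request

API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-flash:generateContent")

def build_payload(prompt):
    """Build the JSON body the generateContent endpoint expects."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

def generate(prompt, api_key):
    """POST the prompt to the API; return the parsed JSON response."""
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The equivalent curl one-liner just POSTs the same JSON body to the same URL with the key as a query parameter.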

Wallet codes bought at local payment terminals to top up an account's balance seem to be the least "insecure" way to do things and to stay away from a hard dependency on WHATWG cartel web engines.

And of course, this is not a web app but a web site with basic HTML forms, not requiring a WHATWG cartel web engine?

From an applications point of view:

They want web apps running only in WHATWG cartel web engines?

LibreOffice? A massive piece of software you can build only with US C++ compilers (MIT-licensed and mostly Apple-backed)? (The mistake was to use C++ in the first place — well, computer languages of an insane level of complexity.)

To put it together: it won't be perfect, lines of compromise will have to be drawn, and it will feel like getting out of "the matrix" for a while (normal "users" won't understand), if you see where I am going. Digital freedom has a "price", a hefty "price", in a digital world dominated by Big Tech.

Going for strong independence will have to hurt, or it will be dismissed as "posture" rather than a real long-term/strategic will.

It is not "against" the US, but "in the interest" of the Danish people (well, it should be the EU though...)


Who cares if a piece of open source has American maintainers? The point is not to avoid touching anything American. It is control and sovereignty.

This is what I implied: this is not against the US, which actually has the most control and sovereignty over critical software.

It is much cheaper and easier to have control and sovereignty over less complex software, including the SDK.

Usually you get developer lock-in via non-pertinent complexity, often including the SDK, namely the computer language.


What the article is missing: this is directly related to the complexity of file formats and protocols.

There are 2 webs:

- the web site: serving noscript/basic (X)HTML, namely basic HTML forms, which can be augmented with <video> and <audio> nowadays — it serves web _pages_. It was made super modular: you have browsers not handling CSS at all, and that is fine for _pages_ with a semantic 2D layout (implicit navigation, even for braille browsers). Web engines there are more than reasonable to write an alternative of, whether with a plain CSS renderer (look at the netsurf browser), text-only (lynx/edbrowse/etc.), or graphical (links2/elinks/etc.). In the end, HTML is not perfect (like CSS), a bit of a mess actually; that's why they tried an XML representation — a failure, because it was literally sabotaged by... "Big Co", or in the web realm, the "WHATWG cartel": I remember their web engines were a pain to develop even a simple page for in XHTML... but not in HTML... curiously. That said, mistakes were also made on the W3C side: the "semantic web", a real abomination of delirious complexity, which I think is what actually made people jump on the WHATWG train. What a disaster. Now HTML is back, with its weird (shabby?) parsing, but this has been kind of cleaned up and is much more rigorously defined.

- the web app: the abomination. Basically, only gigantic and insanely complex software can make a web app work (including its SDK), i.e. only the web engines from Big Co — here, "the WHATWG cartel". It is getting worse: it is said that more and more web apps require one specific web engine to "properly work" (often Google's Blink), and suspicions are very strong that this is done _on purpose_ (I remember the day gmail.com disabled its noscript/basic (X)HTML web interface... then POP3 not long ago... I guess you all see where this is going). In this realm, there is near ZERO possibility to create a _real-life_ alternative without a bunch of developers laser-focused on that for one billion years. I have been wishing for an alternative web engine I could build from source with a simple SDK; it does not exist, and even the few attempts here and there are _not_ doing that: they lean towards computer languages with super complex syntax (C++ and similar), hence a failure right from the start.

The "web3"? A lean JavaScript engine (for instance QuickJS, but there are others), with a small set of basic OS abstraction APIs and a few "accelerated" specialized APIs (vector drawing, a pixel-drawing blitter, video decoding, glyph drawing, etc.). First problem: nobody will agree on those interfaces (they would have to be as simple as possible), and the "WHATWG cartel" will make sure they are useless...

Or an even simpler "HTML" (probably the same for CSS)? "Markdown", like the article suggests? Would it have enough expressive power? Again, nobody will agree on the format and everybody will want to make their own.

A good middle ground is to work with a "subset" of HTML: rough around the edges, but it would do a good enough job for nearly all online services out there, whatever the platform. Nearly 100% of online services were running on that a few years back, and with <video> and <audio> it could be even closer to 100% nowadays.
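To make the "web site, not web app" idea concrete, here is a minimal sketch: one stdlib handler serving a plain HTML form that works in noscript browsers such as lynx or netsurf — no JavaScript, no CSS required. The page content and port are illustrative assumptions.

```python
# Minimal "basic HTML forms" web site: a single handler serving a
# plain form page, usable by any noscript/text browser.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!DOCTYPE html>
<html><head><title>Search</title></head>
<body>
<form action="/search" method="get">
  <label>Query: <input name="q" type="text"></label>
  <input type="submit" value="Go">
</form>
</body></html>"""

class BasicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same static form page for every GET request.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), BasicHandler).serve_forever()
```

The whole "app" is a form submission and a server-side handler — exactly the subset that any alternative engine can render.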

And there is the danger of the "mobile app only": there, the only way out is to regulate and enforce the availability of a small, stable-over-time set of as-simple-as-possible protocols and file formats, to allow reasonable efforts at developing an "app" for an alternative platform (ELF/Linux, *BSD, fooOS, etc.).


Try to talk about noscript/basic (X)HTML browser interoperability on HN, namely the web without the engines from the WHATWG cartel.

You'll see...


This is part of the awakening: IPv6 is the only way to make the real, symmetric internet work at scale.

The next step is to assess the damage to DNS and online payment systems, basically now gated by WHATWG cartel web engines.

What a sad story.

