Well, I've experienced both to some degree in the past. The previous long stretch of near-identical hardware performance was when PCs were exorbitantly expensive and the Commodore 64 was the main "home computer" (at least in my country), through the late 80s and early 90s.
That period of time had some benefits. Programmers learned to squeeze absolutely everything out of that hardware.
Perhaps writing software that actually fits today's hardware is again becoming the norm, rather than being horribly inefficient and simply waiting for CPU/GPU power to double every 18 months.
I was lucky. I built my AM5 Ryzen 7950X PC with a 2x48GB DDR5 kit two years ago. A month ago I bought a 4x48GB kit, with the idea of building another home server around the old 2x48GB kit.
Today my old G.Skill 2x48GB kit costs double what I paid for the 4x48GB kit.
Furthermore, I bought two used RTX 3090s (for AI) back then. A week ago I bought a third one for the same price (for VRAM in my server).
> It's kinda sad when you grow up in a period of rapid hardware development and now see 10 years going by with RAM $/GB prices staying roughly the same.
But you’re cherry picking prices from a notable period of high prices (right now).
If you had run this comparison a few months ago or if you looked at averages, the same RAM would be much cheaper now.
I think that goes to show that official inflation benchmarks are not very practical or useful in terms of the basket of things that people actually buy or desire. If the basket that measured inflation included computer parts (GPUs?), food and housing - i.e. all the things that a geek really needs - inflation would be way higher...
> If the basket that measured inflation included computer parts (GPUs?), food and housing - i.e. all the things that a geek really needs - inflation would be way higher...
A house is $500,000
A GPU is $500
You could put GPUs into the inflation basket and it wouldn't change anything. Inflation trackers count cost of living and things you pay monthly, not one-time luxury purchases that geeks make every four years for entertainment.
Also, you need to account for the dollar's decline against other currencies (which, yes, is possibly somewhat factored into dollar inflation, so you'd have to do the inflation calculation in euros and then convert to dollars, accounting for the decline in value).
The WebGL game was built with my 2D game engine "Impact", which I previously ported to C[1]. The game has a 3D view, but the logic still mostly works in two dimensions on a flat ground. The N64 version "just" needed a different rendering and sound backend.
The page is unusable for me; if I try to opt out through the settings, it just crashes :/
Nice way to force users to accept - not that I think this cookie terror banner practice makes any sense.
I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
I've been meaning to write a proper post-mortem about all that, now that the dust has settled. But in the meantime, just quickly:
- I did not make billions. You're off by quite a few orders of magnitude. After taxes it was well below $500k.
- Nothing I did was illegal; that's how I got away with it.
- Coinhive was not ransomware. It did not encode/hide/steal data. In fact, it did not collect any data. Coinhive was a JavaScript library that you could put on your website to mine Monero.
- I did not operate it for "years". I was responsible for Coinhive for a total of six months.
- I did not organize a doxing campaign. There was no doxing of Brian Krebs. I had nothing to do with the response on the image board. They were angry because Brian Krebs doxed all the wrong people, and their response was kindness: donating to cancer research. (In German, Krebs = cancer, hence the slogan "Krebs ist scheiße" - "cancer is shit".)
- Troy Hunt did not "snatch away" the coinhive domain. I offered it to him.
In conclusion: I was naive. I had the best intentions with Coinhive. I saw it as a privacy preserving alternative for ads.
People in the beta phase (on that image board) loved the idea of leaving their browser window open for a few hours to gain access to premium features that you would otherwise have to buy. The miner was implemented on a separate page that clearly explained what was happening. The Coinhive API was written expressly for that purpose: attributing mined hashes to user IDs on your site. HN was very positive about it, too[1]
The whole thing fell apart when website owners put the miner on their pages without telling users - and further, when script kiddies installed it on websites they did not own. I utterly failed at preventing embedding on hacked websites and at educating legitimate website owners on "the right way" to use it.
I only have access to the trade volume of Coinhive's wallet addresses that were publicly known at the time, and to what the blockchain provides as information about them. How much money RF or SK or MM made compared to you is debatable. But as you were a shareholder of the company/companies behind it, it's reasonable to assume you got at least a fair share of their revenue.
If you want me to pull out a copy of the financial statements, I can do so. But it's against HN's guidelines so I'm asking for your permission first to disprove your statement.
> Nothing I did was illegal (...) Coinhive was not ransomware
At the time, it quickly became the 6th most common miner on the planet, primarily (> 99% of the transaction volume) being used in malware.
This was well known before you created Coinhive, and it was known during and after. The Malpedia entries should get you started [1] [2], but I've added lots of news sources, including German media from that time frame, just for the sake of argument [3] [4] [5] [6] [7] [8]
----------
I've posted Troy Hunt's analysis because it demonstrates how easily this could've been prevented. A simple correlation between Referer/domain headers or URLs and the site tokens would've been enough to figure out that a threat actor from China distributing malware very likely does not own an .edu or .gov website in the US, nor SCADA systems.
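As a rough illustration of the correlation described above - names, data shapes and thresholds here are entirely made up for the sketch, not Coinhive's actual API:

```javascript
// Hypothetical sketch: each site token records the domain its owner
// registered; incoming mining sessions report the page URL (e.g. via the
// Referer header). A mismatch flags the session as likely abuse.

function isSuspiciousSession(registeredDomains, token, refererUrl) {
  const claimed = registeredDomains[token];
  if (!claimed) return true; // unknown token: reject outright
  let host;
  try {
    host = new URL(refererUrl).hostname;
  } catch (e) {
    return true; // missing or garbled Referer: treat as suspicious
  }
  // Allow the registered domain and its subdomains, nothing else.
  return !(host === claimed || host.endsWith("." + claimed));
}

// A token registered for example.com showing up on a .gov site gets flagged:
const domains = { "site-token-1": "example.com" };
console.log(isSuspiciousSession(domains, "site-token-1", "https://example.com/game")); // false
console.log(isSuspiciousSession(domains, "site-token-1", "https://water-plant.gov/")); // true
```

Real-world checks would be fuzzier (Referer can be stripped or spoofed client-side), but even this coarse filter would separate legitimate embeds from mass-deployed ones.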
As there was a financial benefit on your side, no damage payments to any of the affected parties, and no revoked transactions from malicious actors, I'd be right to assume an unethical motivation behind it.
> I did not organize a doxing campaign. There was no doxing of Brian Krebs.
As I know that you're still an admin on pr0gramm as the cha0s user, that's pretty much a useless archive link.
Nevertheless, I don't think you can say "There was no doxing of Brian Krebs" when you can still search for "brian krebs hurensohn" on pr0gramm today, with posts that have not been deleted and that still show his face with a big fat "Hurensohn" ("son of a whore") stamp on it. [9]
As I wrote in another comment, I also said that there are nice admins on the image board, like Gamb, and that they successfully turned that doxing attempt into something meaningful.
> I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
This is not personal for me, at all. But I've observed what was going on and I could not be silent about the unethical things that you built in the past.
For me, that destroyed all trust and good faith in you. The damage that you caused on a global scale with your product Coinhive far exceeds whatever one person can make up for in a lifetime. And I think people should know about that before they execute your code and become victims of a fraudulent coin-mining scheme.
Calling this hyperbole and misinformation is kind of ridiculous, given that antivirus signatures and everything else are easily discoverable with the term "coinhive". It's not like it's a secret or made up.
Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission. You've made up too many blatantly obvious lies and are now stooping down to provocating a reaction, because you having nothing better to say. I don't think anyone should trust you.
> Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission.
Care to elaborate what is "disrespectful" about my own personal website? How did I offend you, specifically?
> You've made up too many blatantly obvious lies and are now stooping to provoking a reaction, because you have nothing better to say. I don't think anyone should trust you.
I've cited a lot of news articles, blog posts, insights, even malware databases from multiple globally known and trusted security vendors.
The original JavaScript engine “Impact” from 2010 is at the end of its life; the C rewrite “high_impact” is new and will (potentially) be around for as long as we have C compilers and some graphics API.
The JavaScript engine had a lot of workarounds for things that are no longer necessary, and some things just don't work that well with modern browsers. Off the top of my head:
- nearest-neighbor scaling for pixel graphics wasn't possible, so images are scaled at load time, pixel by pixel[1]. Resizing the canvas after the initial load wasn't possible with this. Reading pixels from an image was a total shit show too, when Apple decided to internally double the Canvas2D resolution for their "retina" devices, yet still report the un-doubled resolution[2].
- vendor prefixes EVERYWHERE. Remember those? Fun times. Impact had its own mechanism to automatically resolve the canonical name[3]
- JS had no classes, so classes are implemented using some trickery[4]
- JS had no modules, so modules are implemented using some trickery[5]
- WebAudio wasn't a thing, so Impact used <Audio>, which was never meant for low-latency playback or multiple channels[6] and was generally extremely buggy[7]. WebAudio was supported in later Impact versions, but it's hacked in there. AudioContext unlocking, however, is not implemented correctly, because back then most browsers didn't need unlocking and there was no "official" mechanism for it (the canonical way now is ctx.resume() in a click handler). Also, browser vendors couldn't get their shit together, so Impact needed to handle loading sounds in different formats. Oh wait, Apple _still_ does not fully support Vorbis or Opus 14 years later.
- WebGL wasn't a thing, so Impact used the Canvas2d API for rendering, which is _still_ magnitudes slower than WebGL.
- Touch input wasn't standardized and mobile support in general was an afterthought.
- I made some (in hindsight) weird choices, like extending Number, Array and Object. Fun fact: Function.bind and Array.indexOf weren't supported by all browsers, so Impact has polyfills for these.
- Weltmeister (the editor) is a big piece of spaghetti, because I didn't know what I was doing.
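For readers who never wrote pre-ES6 JavaScript, the class "trickery" mentioned above generally boiled down to prototype chains. A minimal sketch of the technique (not Impact's actual ig.Class code; the names here are invented):

```javascript
// Pre-ES6 "class" emulation via prototypes. A helper builds a constructor
// whose prototype inherits from the parent's, without running the parent
// constructor body during setup.

function extend(Parent, props) {
  function Child() {
    if (this.init) this.init.apply(this, arguments);
  }
  Child.prototype = Object.create(Parent.prototype); // inherit
  Child.prototype.constructor = Child;
  for (const key in props) Child.prototype[key] = props[key]; // mix in/override
  return Child;
}

const Entity = extend(Object, {
  init: function (x, y) { this.x = x; this.y = y; },
  describe: function () { return "entity at " + this.x + "," + this.y; }
});

const Player = extend(Entity, {
  describe: function () { return "player at " + this.x + "," + this.y; }
});

const p = new Player(3, 4);
console.log(p.describe());        // "player at 3,4"
console.log(p instanceof Entity); // true
```

Real implementations of this pattern (e.g. John Resig's "simple inheritance", which many engines of that era borrowed) also wired up a `this.parent()` call for overridden methods; that part is omitted here.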
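The Function.bind / Array.indexOf polyfills mentioned in the list looked roughly like this - a simplified sketch of the general pattern, not Impact's exact code, and without the edge-case handling a production polyfill needs:

```javascript
// Feature-test first, then patch the prototype only if the method is missing.
if (!Function.prototype.bind) {
  Function.prototype.bind = function (thisArg) {
    const fn = this;
    const preset = Array.prototype.slice.call(arguments, 1);
    return function () {
      const args = preset.concat(Array.prototype.slice.call(arguments));
      return fn.apply(thisArg, args);
    };
  };
}

if (!Array.prototype.indexOf) {
  Array.prototype.indexOf = function (needle, fromIndex) {
    for (let i = fromIndex || 0; i < this.length; i++) {
      if (this[i] === needle) return i;
    }
    return -1;
  };
}

// Modern engines ship both natively, so the guards above are no-ops today.
const greet = function (name) { return this.prefix + name; }.bind({ prefix: "hi " });
console.log(greet("impact"));      // "hi impact"
console.log([4, 5, 6].indexOf(5)); // 1
```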
Of course, all of these shortcomings are fixable. I actually have the source for "Impact2" doing all that, with a completely new editor and bells and whistles. It was very close to release, but I just couldn't push it over the finish line. I felt bad about this for a long time. I guess high_impact is my attempt at redemption :]
I loved Impact and paid for it back in the day, though I never ended up finishing the project I was working on. Did you ever put the source for "Impact2" up anywhere? What's missing for it to be releasable?
It does, but the main speedup comes from using WebGL instead of Canvas2D. Sadly, Canvas2D is still as slow as it ever was and I really wonder why.
Years back I wrote a standalone Canvas2D implementation[1] that outperforms browsers by a lot. Sure, it's missing some features (e.g. text shadows), but I can't think of any reason for browser implementations to be _that_ slow.
For things that are missing or hard in WebGL, is it performant enough to rely on the browser compositor to add a layer of text, or a layer of SVG, over the WebGL?
I have some Zig/wasm stuff working on Canvas2D, but rendering things to bitmaps and adding them to Canvas2D seems awfully slow, especially for animated SVG.
Ah man, I'm still looking for a general drop-in canvas replacement that would render using WebGL or WebGPU if supported. The closest I've found so far is pixi.js, but its rendering API is vastly different and the documentation spotty, so it would take some doing to port things over. Plus, no antialiasing, it seems.
To be fair, they modified Impact _a lot_. In some of their development streams[1] you can see a heavily extended Weltmeister (Impact's level editor).
Imho, that's fantastic! I love to see devs being able to adapt the engine for their particular game. Likewise, high_impact shouldn't be seen as a “feature-complete” game engine, but rather as a convenient starting point.
In game development this isn't true, for better or worse. There is a lot of sunk-cost mindset in games: we just go with what we have, because we've already invested the time in it, and we'll make it work by any means.
Of course you can. Not really sure why this still is tossed about. You just get a shiny turd with a lot less stink. I've made a career of taking people's turds and turning them into things they can now monetize (think old shitty video tape content prepped to be monetized with streaming).
I like this take, but you're saying something different I think, which is more along the lines of "Customers don't care about how the sausage is made".
You didn't polish a turd, you found something that could be valuable to someone and found a way to make it into a polished product, which is great.
But "You can't polish a turd" implies it's actually a turd and there's nothing valuable to be found or the necessary quality thresholds can't be met.
That's just crazy BS. Starting from open-source code and adding the specific features a project needs is a very common strategy; it doesn't mean at all that the tool wasn't good to begin with.
phoboslab was downplaying their own efforts by saying that the Cross Code team customised the Impact engine a bunch. My point was that no amount of customisation can turn a bad engine into a good one (you can't polish a turd), so phoboslab definitely deserves credit for building the base engine for Cross Code.
> no amount of customisation can turn a bad engine into a good one (you can't polish a turd)
At a risk of being off-topic and not contributing much to this particular conversation (as I doubt it's relevant to the point you're making), I'd like to note that I often actually find it preferable to "polish a turd" than to start from scratch. It's just much easier for my mind to start putting something that already exists into shape than to stop staring at a blank screen, and in turn it can save me a lot of time even if what I end up with is basically a ship of Theseus. Something something ADHD, I guess.
However, I'm perfectly aware this approach makes little sense anywhere you have to fight to justify the need to get rid of stuff that already "works" to your higher-ups ;)
I assume you're talking about the pack_map.c used during the build? That was written in C just because I already had the .map parser lying around from another side project. If I had done it from scratch for this project, it would likely have been JS or PHP, too.
It's kinda sad when you grow up in a period of rapid hardware development and now see 10 years going by with RAM $/GB prices staying roughly the same.