This argument always feels like a motte and bailey to me. Users don't literally care what tech is used to build a product. Of course not, why would they?
But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
> a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
This isn't true. It took me two seconds to create a new project, run `cargo build` followed by `ls -hl ./target/debug/helloworld`. That tells me it's 438K, not 3.7MB.
Also, this is a debug build, one that contains debug symbols to help with debugging. Release builds would be configured to strip them, and a release binary of hello world clocks in at 343K. And for people who want even smaller binaries, they can follow the instructions at https://github.com/johnthagen/min-sized-rust.
Older Rust versions used to include more debug symbols in the build, but they're now stripped out by default.
$ rustc --version && rustc hello.rs && ls -alh hello
rustc 1.84.1 e71f9a9a9 2025-01-27
-rwxr-xr-x 1 user user 9.1M hello
So 9.1 MB on my machine. And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor of 2000 or so.
Windows 95 came on 13x 3.5" floppies, so 22MB. The rust compiler package takes up 240mb on my machine. That means rust is about 10x larger than a fully functional desktop OS from 30 years ago.
Fwiw, in something like hello world, most of the size is just the rust standard library that's statically linked in. Unused parts don't get removed as it is precompiled (unless there's some linker magic I am unaware of). A C program dynamically links to the system's libc so it doesn't pay the same cost.
Before a few days ago I would have told you that the smallest binary rustc has ever produced is 137 bytes, but I told that to someone recently and they tried to reproduce and got it down to 135.
The default settings don’t optimize for size, because for most people, this doesn’t matter. But if you want to, you can.
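For the curious, the usual knobs (they're all in the min-sized-rust guide linked upthread) look roughly like this in Cargo.toml. This is a sketch; how much each setting buys you varies by project and toolchain:

    # Cargo.toml: a size-focused release profile (settings from the
    # min-sized-rust guide; exact savings vary by project and toolchain)
    [profile.release]
    strip = true          # strip symbols from the final binary
    opt-level = "z"       # optimize for size rather than speed
    lto = true            # link-time optimization across crates
    codegen-units = 1     # slower compiles, better optimization
    panic = "abort"       # drop the unwinding machinery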
I have never seen it used like that. I have always seen it used like parent said: to justify awful technical choices which hurt the user.
I have written performant high quality products in weird tech stacks where performance can be a bit tricky to get: Ruby, PL/PgSQL, Perl, etc. But it was done by a team who cared a lot about technology and their tech stack. Otherwise it would not have been possible to do.
This is a genuinely fascinating difference in perception to me. I don't remember ever hearing it used in the way you have. I've always heard it used to point out that devs often focus more on what tools they use than on what actually matters to their customers.
I have often heard it used to create a (false) impression that the choice of tools does not affect things that matter to customers - effectively silencing valid concerns about the consequences of a particular technical choice. It is often framed in the way you suggest, but the actual context and the intended effect of the phrase are very different from that framing.
Would like to echo this. I've seen this used to justify extracting more value from the user rather than spending time doing things that you can't ship next week with a marketing announcement.
I've also seen it used when discussing solutions that aren't stack pure (for instance, whether to stick with the ORM or write a more performant pure SQL version that uses database-engine specific features)
Then you need to read more, because that's what it means. The tech stack doesn't matter. Only the quality of the product. That quality is defined by the user. Not you. Not your opinion. Not your belief. But the user of the product.
> which hurt the user.
This will self correct.
Horrible tech choices have led to world class products that people love and cherish. The perfect tech choices have led to things people laugh at and serve as a reminder that the tech stack doesn't matter, and in fact, may be a red flag.
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
Using 500MB of memory while not doing anything isn’t really a problem. If RAM is scarce then it will get paged out and used by another app that is doing something.
And now more seriously ;) Swapping ain't fun. Sure, if it needs to happen, it's better than an out-of-memory error or a crash, but still. It's no excuse to balloon everything.
And here are my memory stats while writing this reply:
Swapping is perfectly fine for the case where an app that has remained unused for a significant period of time starts being used again. It may even be faster than having the app manually release all of the memory it’s allocated and then manually reallocating it all again on user input (which is presumably what the alternative would be, if the complaint is only about the app’s memory usage when in an inactive state).
Free RAM is just RAM that’s doing nothing useful. Better to fill it with cached application states until you run out.
An application that genuinely uses less RAM at any point of its execution, whether that's measured by maximum RSS, average RSS, whatever, is still better. Then you can have more apps running at the same level of swapping. It's true that if you have a lot of free RAM, there's no need to split hairs. But what about when you don't have a lot? I was under the impression that computers should handle a billion Chrome tabs easily. Or like, y'know, workloads that people actually use that aren't always outlandish stress tests.
Sure, the ideal app uses no resources. But it really doesn’t matter how much allocated memory an app retains when in an unused state (if you’re running it on a modern multitasking OS). If there’s any memory pressure then those pages will get swapped out or compressed, and you won’t really notice because you’re not using the app.
It seems like you're saying that if the user acts in certain patterns, the OS will just take care of it. Isn't the point of computers and technology to make it so that users are less restricted and more empowered? If many people do actually notice performance issues particularly around usage of certain apps, then your recommended patterns are too narrow for practicality. You are talking about scenarios where there is no real pressure, the happy path, but that's not realistic.
I’m just saying that it would not make much difference if Electron reduced its idle memory usage. If you use an app infrequently, then it will get paged out and the memory it retains in its idle state will be available for other processes. If you use the app frequently then you care about active memory usage more than idle memory usage. Either way, there is not much to be gained by reducing idle memory usage. Any attempt to reduce it might just increase the wake up time of the app, which might then have to rebuild various data structures that could otherwise have been efficiently compressed or cached to disk by the OS.
You're still going off of a "happy path" mentality. The user decides when an app should be idle or active, not the OS. The OS therefore pages in and out with no grand scheme and may be out of sync with the user's next action. What of the time taken to page in an idle app's working set? Or, as I addressed from the start, what if the user wants many applications open and there is not much free memory? That means swapping becomes a necessity and not a luxury, performance drops, and user experience declines. I think there is plenty of, perhaps low-hanging is uncharitable, but not unreasonably high fruit to pick so that users can be more comfortable with whatever workload they desire. I don't think we're remotely pushing the limits of the technology here.
You seem to be thinking about memory usage in general and not specifically about memory usage in an idle state, which is what I’ve been talking about in this thread.
Otherwise, what exactly is the alternative you have in mind? Should Electron release allocations manually after a certain period of inactivity and then rebuild the relevant data structures when woken up again? For the reasons I gave above, I suspect that the result of that would be a reduction in overall performance, even if notional memory usage were reduced.
If you just say “Electron should use less RAM overall!” then you are not talking about the specific issue of memory usage in an idle state; you are talking about memory usage in general.
I am engaging with the arguments you are making; I don't think you are engaging with mine. Reread my previous comment. Application designers do not get the luxury of somehow having high memory usage in an idle state but being able to cut down memory usage in an active state. Your idea of the idle state seems to be that the entire system is idle. You are not considering what happens where there is load. The app designer does not decide when the application has the leeway to have high idle memory usage ("oh, Electron idle memory usage of 500MB is fine because swap so it's basically free").
> Free RAM is just RAM that’s doing nothing useful. Better to fill it with cached application states until you run out.
So you recognize that you are only cruising when there is plenty of free RAM. You're setting aside this magical scenario of the idle state, but really you are just being optimistic that users won't suddenly act and overload the system. You are excusing significant waste by saying it won't matter most of the time. Perhaps that's true with certain definitions (i.e. many people on most days don't make their computer thrash), but in practice people notice when they have a problem, and I want to minimize such problems. I am not saying we can do any of this perfectly, but I think there is legitimate room for improvement that you are ignoring. I think it's reasonable to say that, with modest engineering effort, we could make every (say) 800MB Electron app into a 600MB Electron app. If you were running three at once before, now you can run four with a comparable level of performance. I think that's a genuine improvement that isn't some gotcha to you or an unrealistic fantasy.
>Application designers do not get the luxury of somehow having high memory usage in an idle state but being able to cut down memory usage in an active state.
That may be so. However, in that case, it does not make sense to complain about an application using x amount of RAM while not in use, which is the complaint that I was responding to ("it should not use 500MB of memory when it's [sitting in my tray] and not doing anything"). It might make sense to complain about Electron's overall RAM usage. As far as I can tell, that's what you're doing. I don't really have an opinion on that as I'm not familiar with Electron's internals. So on that point there's really nothing for us to argue about.
>Your idea of the idle state seems to be that the entire system is idle.
I'm talking about the state where the application is not being used, not where the entire system is idle. In that state it is easy for the OS to reclaim any RAM the app is using for use by other apps.
Businesses need to learn that, like it or not, code quality and architecture quality are part of product quality
You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
This is why startups can outcompete incumbents sometimes
Suddenly there's a market shift and a startup can actually build your entire product and the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture have atrophied to the point that it takes longer to update them than it would to rebuild from scratch
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen
>You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
While it's true that that can be partially due to tech debt, there are generally other factors as well. The more years you've had to accrue customers in various domains, the more years of decisions you have to maintain backwards compatibility with, the more regulatory regimes you conduct business under and build process around, the slower you're going to move compared to someone trying to move fast and break things.
> No, it means that product quality is all that matters
But it says that in such a roundabout way that non-technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.
If it's making billions of dollars, somebody somewhere is getting a lot of what they want out of it. But it's possible that those people are actually the purchasing managers or advertisers rather than the users of the software. "Customers" probably would've been the more correct term. Or sometimes "shareholders".
As far as I can tell the article has been misinterpreted, causing many lost hours for HN commenters. By saying users don't care about your tech stack, it is saying you should care about your tech stack, i.e. it matters, and it presents some bullet points on what to keep in mind when caring about it. Or to summarize: be methodical, not hype-following.
Agree the article is not clearly presented but it's crazy to see the gigantic threads here that seem to be based on a misunderstanding.
I feel like that's what it should mean, that quality is all that matters. But it's often used to excuse poor quality as well. Basically if you skinner box your app hard enough, you can get away with lower quality.
Well, the alternative and more charitable interpretation would be that you are more likely to build a better product in the stack you know well and enjoy.
I think when you get more concrete about what the statement is talking about, it becomes very hard to assert that they mean something else.
Like if you are skilled with, say, Ruby on Rails, you probably should just use that for your v1.0. The hypothetical better stack is often just a myth we tell ourselves as software engineers because we like to think that tech is everything when it's the product + launching that's everything.
> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration and written in Rust. Its binary size was over 100MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm could. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary size itself is just about 850kB of these 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.
> If you want to make something that starts instantly you can't use electron or java.
Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
> In practice this argument is used to justify bloated apps
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend; with the mandatory 2MB~3MB WASM module download.
He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.
Although, look around here - the world's greatest tech stack would be shredded here because the images weren't resized to a pixel-perfect fit for their frames, forcing the browser to resize the image, which is slower and wastes CPU cycles every time, when it could have been done only once server-side, oh the humanity, think about how much ice you've melted on the polar ice caps with your carelessness.
And your point is completely wrong. It makes no sense for a language to by default optimize for the lowest possible binary size of a "hello world"-sized program. Nobody's in the business of shipping "hello world" to binary-size-sensitive customers.
Non-toy programs tend to be big and the size of their code will dwarf whatever static overhead there is, so your argument does not scale.
Even then, binary size is a low priority item for almost all use cases.
But then even if you do care about it, guess what, every low level language, Rust, C, whatever, will let you get close to the lowest size possible if you put in the effort.
So no, on no level does your argument make sense with any of the examples you've given.
Did you notice how nobody in this thread has argued that rust should optimize for binary size or that rust is somehow wrong for having made the trade-offs that it did?
People who have only worked with languages that produce large executables are likely to believe that 300k or so is about as small as an executable can possibly be. And if they believe that, then of course they'll also believe that a serious program must therefore be many megabytes large. If you don't have a good grasp of how capable modern computers are then your estimates will be wrong by orders of magnitude.
There are countless examples -- you don't have to go that far back in history to find them -- where dozens of engineers worked on a complicated program for years that compiled down to maybe 500kb.
And again, the point still isn't that we should write software today like we had to in the 80s or 90s.
However, programmers should be aware how many orders of magnitude slower/bigger their program is relative to a thoroughly optimized version. A program can be small compared to the size of your disk and at the same time 1000x larger than it needs to be.
> My point was that many programmers have no conception of how much functionality you can fit in a program of a few megabytes.
Many of my real-world Rust backend services are in the 1-2MB range.
> Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
> How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
> The executable is 175 bytes.
You can also disable the standard library and a lot of Rust features and manually write the syscall assembly into a Rust program. With enough tweaking of compiler arguments you'd probably get it to be a very small binary too.
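Roughly this kind of thing, as a sketch for x86-64 Linux; the build flags and the final size you'd end up with are guesses on my part:

    // Sketch: a no_std "hello world" that makes raw Linux syscalls (x86-64).
    // Built with something like:
    //   rustc --edition 2021 -O -C panic=abort -C strip=symbols \
    //         -C link-arg=-nostartfiles hello.rs
    #![no_std]
    #![no_main]

    use core::arch::asm;

    #[panic_handler]
    fn panic(_info: &core::panic::PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        let msg = b"Hello, world!\n";
        unsafe {
            // write(fd = 1, buf = msg, count = msg.len())
            asm!(
                "syscall",
                inout("rax") 1usize => _,   // SYS_write; rax also receives the return value
                in("rdi") 1usize,           // stdout
                in("rsi") msg.as_ptr(),
                in("rdx") msg.len(),
                out("rcx") _, out("r11") _, // clobbered by the syscall instruction
            );
            // exit(status = 0)
            asm!(
                "syscall",
                in("rax") 60usize,          // SYS_exit
                in("rdi") 0usize,
                options(noreturn),
            )
        }
    }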
But who cares? I can transfer a 10MB file in a trivial amount of time. Storage is cheap. Bandwidth is cheap. Playing code golf for programs that don't do anything is fun as a hobby, but using it as a debate about modern software engineering is nonsensical.
In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms, but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds, and it becomes such a tangled mess it can't be optimized down the line.
> If you want to make something that starts instantly you can't use electron or java.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
> Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
Semi-dependent.
Default Java libraries being a piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast" if you ignore all the stuff the standard library is set up for and think in low-level, C-like manipulation of int arrays (not Integer the class, int the primitive type) rather than AbstractFactoryBean etc. And you keep going from there, with that attitude, because there's no "silver bullet" to software quality, no "one weird trick is all you need": you can make software in (almost!) any language fast if you focus on doing that and refuse to accept solutions that are merely "ok", given that we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
> Like, just look at the amount of results for "Minecraft" "slow"
My niece was playing it just fine on a decade-old Mac mini. I've played it a bit on a Raspberry Pi.
The sales figures suggest my niece's experience is fairly typical, and things such as you quote are more like the typical noise that accompanies all things done in public — people complaining about performance is something which every video game gets. Sometimes the performance even becomes the butt of a comic strip's joke.
If Java automatically caused low performance, neither my niece's old desktop nor my Pi would have been able to run it.
If you get me ranting on the topic: Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
I actually tried it on a Pi literally a few days ago (came across an older Pi which had it preinstalled) and it's pretty much unplayable
> Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
I believe there should be the possibility for legal customer complaints when this happens, just like I can file a complaint if I buy a microwave and it takes 2 minutes to start
> one of the most notoriously slow and bloated video game ever?
I like how you conveniently focus on that aspect and not how that didn't prevent it from being one of the biggest video game hits of all time.
Those two can be true at the same time. And that's one thing that a lot of technical people don't get. Slow is generally bad. But you cannot take it out of context. Slow can mean different things to different people, and slow can be circumvented by strategies that do not involve "making the code run faster".
The fact that it was later bought by Microsoft (and rewritten in C++ so it can run on consoles[1]) is not relevant to the argument that you can, in fact, write a successful and performant game in Java, if you know how.
I’m sure the C++ version is more performant, but that doesn’t strictly mean it was ported for performance. Is there even a JVM on those consoles? I suspect the answer is no and it was easier to remake in C++ than to port OpenJDK to the Xbox 360.
Are we talking modded? Cause Vanilla minecraft runs on a potato, especially if the device is connecting to a server (aka doesn't have to do the server-side updates itself).
> All else equal users will absolutely choose the zippiest products.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
Yes, hypothetically if all else is equal, and the difference isn't noticeable, then the users experience is equal. But it's a hypothetical that doesn't actually exist in the real world.
Competing software equal in every way but speed doesn't exist except for some very few contrived examples. Different pieces of software typically have different user interfaces, different features, different marketing, different functionality, etc.
> If you want to make something that starts instantly you can't use electron
Hmmm, VScode starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS YouTube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the app store, 600meg)
"All else equal users will absolutely choose the zippiest products."
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
That's a poor example, as users genuinely don't care about download file size or installed size, within reason. Nobody in the West is sweating a 200MB download.
Users will generally balk at 2000MB though. ie, there's a cutoff point somewhere between 200MB and 2000MB, and every engineering decision that adds to the package size gets you closer to it.
For an installed desktop app... the vast majority of folks aren't going to bat an eye at 2G.
Hell - the most exposure the average person gets to installing software is game downloads, sadly (100G+). After that it's the stuff like MSOffice (~5-10G).
---
I want to be clear, I definitely agree there are cases where "performance is the feature". That said, package size is a bad example.
Disk is SO incredibly cheap that users are being conditioned to not even consider it on mobile systems. And networks are good enough I can pull a multi-gig file down with just my phone's tethering bandwidth in minutes basically across the country.
When I want performance as a user, it's for an action I have to do multiple times repeatedly. I want the app itself to be fast, I want buttons to respond quickly, I want pages to show up without loaders, I want search to keep up with my keystrokes.
Use as much disk and ram as you can to get that level of performance. Don't optimize for computer nerd stats like package size (or the ram usage harpies...) when even semi-technical folks can't tell you the difference between kb/mb/gb, and have no idea what ram does.
Users care about performance in the same way that users buy cars. Most don't give a fuck about the numbers, they want to like the way it drives.
Your tech stack can definitely influence that, but you still have to make the right value decisions. Unless your audience is literally "software developers" like that file explorer, lay off the "software developer stats".
Everyone gets these preloaded in their laptops, and for people that need it it's a non-negotiable
> Games
Continuing the same point, people aren't going to be willing to put the same effort into trying some free app as they would into a game they just paid a bunch of money for.
> Disk so cheap
Cost per GB is irrelevant if the iPhone comes with a fixed amount of un-upgradable space the majority of which is taken by media. People routinely "free up" space on their phones by deleting apps that show up at the top of the settings -> storage page. Do you want that to be your app?
> 2GB
I think this is too high for a desktop app. Few hundred MB is probably the limit I'd guess.
I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you gotta install the JVM) and lots of desktop apps made with it were horrible.
My father for example, as it was the tech chosen by his previous bank.
Electron is way sneakier, so people just complain about Teams or something like that.
Nowadays users aren’t expected to install the vm. Desktop apps now jlink a vm and it’s like any other application. I’ve even seen the trimmed vm size get down to 30-50mb.
Unfortunately you still get developers who don’t set the correct memory settings and then it ends up eating 25% of a users available ram.
I strongly agree with this sentiment. And I realize that we might not be the representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.
It’s probably best to compare download sizes on a logarithmic scale.
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
It’s a little off topic but I used to think Go binaries were too big. I did a little evaluation at some point and changed my mind. Sounds like they are still bigger than Rust.
> But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use electron or java.
You picked the two technologies that are the worst examples for this.
Electron: Electron has essentially breathed new life into GUI development, which practically nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entrypoint (main) in less than a second. Whatever slows it down is a codebase problem, it's not the JVM.
Users don't care if your binary is 3 or 4 mb. They might care if the binary was 3 or 400 mb. But then I also look at our company that uses Jira and Confluence and it takes 10+ seconds to load a damn page. Sometimes the users don't have a say.
> If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice.
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
This was on the cover of a book and it prints a maze using block characters? Copilot said the block characters part but not the maze; I remembered that after seeing the "one of two block characters."
Unfortunately that requires a BASIC to run, which will probably be larger than 256B and possibly even 1800000B - the "A Mind Is Born" .prg can just be "run" on the computer it was written for with no interpreter.
I just haven't really seen drawing like that from binaries that small. Though I've seen plenty of demoscene stuff, it's not really the same.
I just checked one of the project pages (behind some links - here: https://filepilot.handmade.network) and the first post says C with OpenGL. Boy is it rare to see one of those nowadays.
There's been a fair bit of research on this. People don't like slow interfaces. They may not necessarily _recognise_ that that's why they don't like the interface, but even slowdowns in the 10s of ms range can make a measurable difference to user sentiment.
SPAs, in practice, tend to be slower than well-done conventional/old-fashioned webapps, and, particularly for Wikipedia-type applications, have all sorts of other usability concerns. Like, which feels more responsive, wikipedia or some random SPA?
Most regular people buy a new phone when their old one has "gotten slow". And why do phones get slow? "What Andy giveth, Bill taketh away."
In tech circles regular people are believed to be stupid and blind. They are neither. People notice when apps get slower and less usable over time. It's impossible not to.
And then they spend their own money making them faster, not linking the slowness to the software, but to some belief their hardware is getting worse over time.
> They won’t notice those extra 10 milliseconds you save
They won't notice if this decision happens once, no. But if you make a dozen such decisions over the course of developing a product, then the user will notice. And if the user has e.g. old hardware or slow Internet, they will notice before a dozen such decisions are made.
In my career of writing software, I've found most developers are fully incapable of measuring things. They instead make completely unfounded assertions that are wrong more than 80% of the time, and when they are wrong they tend to be wrong by several orders of magnitude.
And yes, contrary to many comments here, users will notice that 10ms saved if it’s on every key stroke and mouse action. Closer to reality though is sub-millisecond savings that occurs tens of thousands of times on each user interaction that developers disregard as insignificant and users always notice. The only way to tell is to measure things.
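The bar for measuring is lower than people think, too. Even something as dumb as this sketch (handle_keystroke here is a made-up stand-in for whatever your app actually does per input event) tells you more than a guess ever will:

    use std::time::{Duration, Instant};

    // Made-up stand-in for the work an app does on each keystroke.
    fn handle_keystroke(input: &str) -> usize {
        input.chars().filter(|c| c.is_alphanumeric()).count()
    }

    fn main() {
        let mut worst = Duration::ZERO;
        let start = Instant::now();
        for _ in 0..10_000 {
            let t = Instant::now();
            // black_box keeps the optimizer from deleting the unused result
            std::hint::black_box(handle_keystroke("the quick brown fox"));
            worst = worst.max(t.elapsed());
        }
        println!("avg {:?} per event, worst {:?}", start.elapsed() / 10_000, worst);
    }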
When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.
Outside contributors to our team absolutely hated us for it (and honestly some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional coding as a series of `for` loops because the function overhead is measurably 30% slower across browsers[0], they get so mad.
[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.
Have you had the chance to interact with any of the web interfaces for their cloud products like GCP Console, Looker Studio, BigQuery etc? It's painful, like when clicking a button or link you can feel a Cloud Run instance initializing itself in the background before processing your request.
Boy, do I wish more teams worked this way. Too many product leaders are tasked with improving a single KPI (for example, reducing service calls) but without requiring other KPIs such as user satisfaction to remain constant. The end result is a worse experience for the customer, but hey, at least the leader’s goal was met.
> They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.
I completely agree. It blows my mind how fifteen minutes of testing something gets replaced with a guess. The most common situation I see this in (over and over again) is with DB indexes.
The query is slow? Add a bunch of random indexes. Let's not look at the EXPLAIN and make sure the index improves the situation.
I just recently worked with a really strong engineer who kept saying we were going to need to shard our DB soon, but we're way too small a company for that to be justified. Our DB shouldn't be working that hard (it was all CPU load); there had to be a bad query in there. He even started drafting plans for sharding because he was adamant that it was needed. Then we checked RDS Performance Insights and saw it was one rogue query (as one should expect). It was about a 45-minute fix, and after downsizing one notch on RDS, we're sitting at about 4% most of the time on the DB.
But this is a common thing. Some engineers will _think_ there's going to be an issue, or when there is one, completely guess what it is without getting any data.
Another anecdote from a past company was them upsizing their RDS instance way more than they should need for their load because they dealt with really high connection counts. There was no way this number of connections should be going on based on request frequency. After a very small amount of digging, I found that they would open a new DB connection per object they created (this was PHP). Sometimes they'd create 20 objects in a loop. All the code was synchronous. You ended up with some very simple HTTP requests that would cause 30 DB connections to be established and then dropped.
The amount of premature optimizations and premature abstractions I’ve seen (and stopped when possible) is staggering. Some people just really want to show how “clever” they are solving a problem that no one asked them to solve and wasn’t even an issue. Often it’s “solved” in a way that cannot be added to or modified without bringing the whole house of cards down (aka, requiring a rewrite).
I can handle premature optimization better than premature abstraction. The second is more insidious IMHO. I’m not throwing too many stones here because I did this too when I was a junior dev but the desire to try and abstract everything, make everything “reusable” kills me. 99.99999% of the time the developer (myself included) will guess /wrong/ on what future development will need.
They will make it take 2s to add a new column (just look at that beautiful abstraction!) but if you ask them to add another row then it’s a multi-day process to do because their little abstraction didn’t account for that.
I think we as developers think we can predict the future way better than we can which is why I try very hard to copy and paste code /first/ and then only once I’ve done that 3-4 times do I even consider an abstraction.
Most abstractions I’ve seen end up as what I’ve taken to calling “hourglass abstractions” (maybe there’s a real name for this I don’t know it). An hourglass abstraction is where you try to abstract some common piece of code, but your use cases are so different that the abstraction becomes a jumbled mess of if/else or switch/case statements to handle all the different cases. Just because 2 code blocks “rhyme” doesn’t mean you should abstract them.
The 2 ends of the hourglass represent your inputs/outputs and they all squeeze down to go though the center of the hourglass (your abstraction). Abstractions should look like funnels, not hourglasses.
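A contrived sketch of what I mean (all the names are made up for illustration). The hourglass squeezes two callers that only superficially rhyme through one function, so the body ends up being mostly case analysis:

    // "Hourglass": one reusable-looking function that two very different
    // callers were forced through; every new difference grows the branching.
    #[derive(Clone, Copy)]
    enum Caller { Invoice, AuditLog }

    fn export(caller: Caller, lines: &[String]) -> String {
        let mut out = String::new();
        match caller {                          // header differs per caller
            Caller::Invoice => out.push_str("invoice_id,amount\n"),
            Caller::AuditLog => out.push_str("# audit trail\n"),
        }
        for line in lines {
            match caller {                      // escaping differs per caller
                Caller::Invoice => out.push_str(&line.replace(',', ";")),
                Caller::AuditLog => out.push_str(line),
            }
            out.push('\n');
        }
        if matches!(caller, Caller::AuditLog) { // footer differs per caller
            out.push_str("# end\n");
        }
        out
    }

    // The funnel-shaped alternative is just export_invoice() and
    // export_audit_log(), each a handful of lines, sharing only what is
    // genuinely common (if anything).

    fn main() {
        let rows = vec!["1,99".to_string(), "2,15".to_string()];
        println!("{}", export(Caller::Invoice, &rows));
    }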
My plex server was down and my lazy solution was to connect directly to the NAS. I was surprised just how much I noticed the responsiveness after getting used to web players. A week ago I wouldn't have said web player bothered me at all. Now I can't not notice.
Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.
There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it’s indirectly implying that latency is incredibly important which isn’t universally true - each product’s metrics that are predictive of success are much more nuanced and may even have something akin to the set of LLM neurons called “polysemantic” - it may be a combination of several metrics expressed via some nontrivial function that are the best predictor.
For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.
Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.
Human factors has a long history of studying this. I'm 30 years out of school and wouldn't know where to find my notes (and thus references), but there are places where users will notice 5ms. There are other places where seconds are not noticed.
The web forced people to get used to very long latency, and so people no longer comment on 10+ seconds, but the old studies prove they notice it, and shorter waits would drive better "feelings". Back in the old days (of 25MHz CPUs!) we had numbers for how long your application could take to do various things before users would become dissatisfied. Most of the time the dissatisfaction is not something they would blame on the latency, even though the lab test proved that was the issue; instead it was a general 'feeling' they would be unable to explain.
There are many, many different factors that UI studies used to measure. Lag in the mouse was a big problem, and not just the pointer movement either: if the user clicks, you have only so long before it must be obvious that the application saw the click (my laptop fails at this when I click on a link), but you didn't have to bring up the response nearly as fast, so long as users could tell it was processing.
TLDR; full state restoration of an OS GUI in the browser in under 80ms from page request. I was eventually able to get that exact scenario down to 67ms. Not only is the state restoration complete but it covers all interactions and states of the application in a far more durable and complete way than big JavaScript frameworks can provide.
Extreme performance showed me two things:
1. Have good test automation. With a combination of good test automation and types/interfaces on everything you can refactor absolutely massive applications in about 2 hours with almost no risk of breaking anything.
2. Tiny performance improvements mean massive performance gains overall. The difference in behavior is extreme. Imagine pressing a button and what you want is just there before your brain can process screen flicker. This results in a wildly different set of user behaviors than slow software that causes users to wait between interactions.
Then there are downstream consequences to massive performance improvements, the second order consequences. If your software is extremely fast across the board then your test automation can be extremely fast across the board. Again, there is a wildly different set of expectations around quality when you can run end-to-end testing across 300 scenarios in under 8 seconds as compared to waiting 30 minutes to fully validate software quality. In the latter case nobody runs the tests until they are forced to as some sort of CI step and even then people will debate if a given change is worth the effort. When testing takes less than 8 seconds everybody and their dog, including the completely non-technical people, runs the tests dozens of times a day.
I wrote my study of performance just a few months before being laid off from JavaScript land. Now, I will never go back for less than half a million in salary per year. I got tired of people repeating the same mistakes over and over. God forbid you know what the answer is to cure world hunger and bring in world peace, because any suggestion to make things better is ALWAYS met with hostility if it challenges a developer's comfort bubble. So, now I do something else where I can make just as much money without all the stupidity.
Soooooo, my "totally in the works" post about how direct connection to your RDMS is the next API may not be so tongue in cheek. No rest, no graphQL, no http overhead. Just plain SQL over the wire.
Authentication? Already baked-in. Discoverability? Already in. Authorization? You get it almost for free. User throttling? Some offer it.
I find it fascinating that HN comments always assert that 10ms matters in the context of user interactions.
60Hz screens don't even update every 10ms.
What's even more amazing is that the average non-tech person probably won't even notice the difference between a 60Hz and a 120Hz screen. I've held 120Hz and 60Hz phones side by side in front of many people, scrolled on both of them, and had the other person shrug because they don't really see a difference.
The average user does not care about trivial things. As long as the app does what they want in a reasonable amount of time, it's fine. 10ms is nothing.
Multiply that by the number of users and total hours your software is used, and suddenly it's a lot of wasted Watts of energy people rarely talk about.
But they still don't care about your stack. They care that you made something slow.
Fix that however you like but don't pretend your non-technical users directly care that you used go vs java or whatever. The only time that's relevant to them is if you can use it for marketing.
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I see this quoted but Amazon has become 5x slower (guestimate) and it doesn't seem like they are working on it as much. Sure the home page loads "fast" ~800ms over fiber, but clicking on a product routinely takes 2-3 seconds to load.
Amazon nowadays has a near monopoly powered by ad money, due to the low margin on selling products versus ad spend. So unless you happen to be in the same position, using them as an example nowadays isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad spend then they'd probably be at a net loss as a result.
So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.
Users don't care about the specifics of your tech stack (except when they do) but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. So, in the example you provided, the user cares about performance (I assume Rippling know their customer). In other examples, if your tech stack is stopping you from easily shipping new features, then your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in 6 months but your competitor has.
I recall an internal project where a team discussed switching a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested from outside that they should just see if the Python service was hitting the required performance goals. If so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.
I think there's a balance to be struck. While users don't directly care about the specific tech, they do care about the results – speed, reliability, features. So, the stack is indirectly important. Picking the right tools (even if there are several "good enough" options) can make a difference in delivering a better user experience. It's about optimizing for the things users do notice.
They absolutely do not. In fact, relatively few do. Every single Electron app (which is a depressing number of apps) is a bloated mess. Most web pages are a bloated mess where you load the page and it isn't actually loaded, visibly loading more elements as the "loaded" page sits there.
Software sucks to use in 2025 because developers have stopped giving a shit about performance.
This is so true, and yet. . . Bad, sluggish performance is everywhere. I sometimes use my phone for online shopping, and I'm always amazed how slow ecommerce companies can make something as simple as opening a menu.
Yeah, those languages do a really good job of demonstrating the original point! Java would lead to a lot better performance in a lot of cases (like building a native application, say), but Python, despite being slow, has great FFI (which Java doesn't), so it's a good shout for use cases like data science where you really just want a high-level controller for some C or Rust code.
Point being, Python despite being slow as a snail or a cruise ship will lead to faster performance in some specific contexts, so context really is everything.
This is a mixed bag of advice. While it seems wise at the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.
For example, what you know might not provide the cost-benefit ratio your client needs. Or the performance. What if you only know Cloud Spanner but now there is a need for a small relational table? These maxims have obvious limitations.
I do agree that the client doesn't care about the tech stack. Or that seeking a golden standard is a McGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
A good engineer balances tradeoffs and solves problems in a satisfying way sufficing all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because it has a reputation is just as silly as using it for a hype.
> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
Your customer does care about how quickly you can iterate new features over time, and product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.
That's also an aphorism that may or may not correspond to reality.
Not only are there companies with highly capable teams that are able to move fast using a complex mix of technologies, there are also customers who have very little interest in new features.
This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.
A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.
Sure universal truths are rare - though I think there are many more people using such an argument to justify an overly complex stack, than there are cases where it truly is the best solution long term.
Remember even if you have an unchanging product, change can be forced on you in terms of regulatory compliance, security bugs, hardware and OS changes etc.
I think the point of the original post is that most important part of the context is the people ( developers ) and what they know how to use well and I'd agree.
I'd just say that one thing I've learnt is that even if the developer in the future that has to add some feature or fix some bug, is the developer who originally wrote it, life is so much easier if the original is as simple as possible - but hey maybe that's just me.
One person writing a stack in 6 languages is different from a team of 100 using 6 languages.
The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.
Who else is going to be able to pick up their tasks, are you going to be able to quickly on board someone else. Or are you going to have to hire someone new with a specialty in this specific language.
This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.
Try to get a front end dev to learn Spring in a week...
Excellent comment. What you raised are two important aspects of the analysis that the article didn't bother thinking about:
- how to best leverage the team you currently have
- what is the most likely shape your team will have in the future
Jane Street has enough resources and experts to be able to train developers on OCaml, Nubank and Clojure also comes to mind. If one leaves, the impact is not devastating. Hiring is not straightforward, but they are able to hire engineers willing to learn and train them.
This is not true for a lot of places, that have tighter teams and budgets, whose product is less specialized, and so on.
But this is where the article fails and your comment succeeds: actually setting out parameters to establish a strategy.
Most of the engineering advice tries to be as general as possible.
The issue is nothing is general in software. What a solo dev, a 10 person startup and a FAANG should do are 3 different things.
For example, a hobbyist dev spending all day writing unit tests and calculating their own points in Jira doesn't make sense.
But I'd be horrified to see the same dev not use any task tracking at all at work.
For myself I take long breaks on my side projects, particularly when it comes to boring stuff like authentication.
Of course I can't do that at my full-time job. I can't just stop and do something else.
Jane Street is making billions in revenue per year and all their software needs to be as optimized as possible. If making the software 1% better nets you another 100 million dollars, you can hire a dozen people to specifically write internal tooling to make it happen.
Whereas a smaller team probably just has to make do with off-the-shelf solutions.
The other thing this article misses is this is going to drastically depend on your industry. It doesn't really matter if you're selling special cat food and need an e-commerce website for that.
It's a completely different story if you're developing a low-latency trading system.
> This is a part of why NodeJS quickly ate the world
And the other part is you can share, say, data validation code between client and server easily - or move logic either side of the network without having to rewrite it.
i.e. even if you are an expert in both Java and JavaScript, there are still benefits to running the same code on both ends.
Very much this. The concerns of running a six-person team are quite a bit different from the concerns of directing hundreds to thousands of developers across multiple projects. No matter how good the team is and how well they are paid and treated, there will be churn. Hiring and supporting folks until they are productive is very expensive, and gets more expensive the more complicated your stacks are and the more of them you have to maintain.
If you want to have efficient portability of developers between teams you've got to consolidate and simplify your stacks as much as possible. Yeah, your superstar devs already know most of the languages and can pick up one more in stride, no problem. But that's not your average developer. The average dev in very large organizations has worked in one language in one capacity for the last 5-15 years and knows almost nothing else. They aren't reading HN, or really anything technology-related that isn't directly assigned via certification requirements. It's just a job. They aren't curious about the craft. How do you get those folks as productive as possible within your environment while still building institutional resiliency and, when possible, improving things?
That's why the transition from a small startup with a couple of pizza teams to a large organization with hundreds of developers is so difficult. Startups are able to hire full teams of amazing developers who are curious about the craft. The CTO has likely personally interviewed every single developer. At some point that stops being feasible and HR processes become involved. So inevitably the hiring bar will drop, and you'll start getting more developers who are better at talking their way through an interview process than at jumping between tech stacks fluidly. At some point, you have to transition to a "serious business" with processes and standards and paperwork and all that junk that startup devs hate. Maybe you can afford a skunkworks team that gets to play like a startup. But it's just not feasible for the rest of Very Large Organizations. They have to be boring and predictable.
I've seen a grown man nearly start crying when asked to switch programming languages. It would have been at most a few weeks of helping with some legacy code.
He started mouthing off about how much more he could make somewhere else. Management relented and he got his way.
>If you want to have efficient portability of developers between teams you've got to consolidate and simplify your stacks as much as possible.
Amen!
I'm often tasked with creating new frameworks when I come to a new company. I always try to use whatever people are already comfortable with. If we're a NodeJS shop, I'll do it in NodeJS.
Unless you have very unique challenges to solve, you can probably more or less use the same stack across the company.
On a personal level I think it's a good idea to be comfortable with at least 3 languages. I've worked professionally in NodeJS, Python, and C#. While building hobbyist stuff with Dart/Flutter.
I don't have strong opinions either way, but if my boss expected me to learn Rust I'd probably start looking elsewhere. Not that Rust is bad, I just don't think I can do it.
> Your customer does care about how quickly you can iterate new features over time
How true this is depends on your particular target market. There is a very large population of customers that are displeased by frequent iterations and feature additions/changes.
The author didn't say to listen to the opinions of others, hype or not. The author said "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful.".
It means we should make our own independent, educated judgement based on the needs of the product/project we are working on.
> Finding the right balance is key to creating something truly impactful
This doesn't mean anything at all. These platitudes are pure vapor, and seem just solid enough that they make sense at first glance, but once you try to grasp it, there is nothing there. What is impactful? What is untruly impactful as opposed to truly impactful? Why is that important? Why is the right balance key for it? Balance of what? How do you measure if the balance is right?
My expectation for engineering (including its management) is that we deal in requirements, execution, delivery, not vibes. We need measurable outcomes, not vapor clouds.
I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.
C++ is often the best answer for users, but that is about how bad the other options are, not about C++ being good. Options like Rust don't have the mature frameworks that C++ does (rust-qt is often used as a hack instead of a pure Rust framework). There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++, the less C++'s footguns will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.
Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast your program starts - places where C++ does really well. (C, Rust, etc. also do very well.) Remember, this is about real-world benchmarks: you can find microbenchmarks where Python is just as fast as well-written C, but if you write a large application in Python it will be 30-60 times slower than the same thing written in C++.
Note however that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C style code in C++ you need a lot more care than you would want.
If for your application Rust or Ada does have mature enough frameworks to work with, then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup, worse battery life) make them a bad choice. Many things are scripts you won't run very often, so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because it is bad for you.
It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).
It would be funny if it weren’t tragic. So many of the comments here echo the nonsense of my software career: developers twisting themselves in knots to justify writing slow software.
I've not seen a compelling reason to start the performance fight in ordinary companies doing CRUD apps. Even if I was good at performance, I wouldn't give that away for free, and I'd prefer to go to companies where it's a requirement (HFT or games), which only furthers the issue of slowness being ubiquitous.
For example, I dropped a 5s paginated query doing a weird cross join to ~30ms and all I got for that is a pat on the back. It wasn't skill, but just recognizing we didn't need the cross join part.
We'd need to start firing people who write slow queries, forcing them to become good, or paying more for developers who know how to measure and deliver performance, which I also don't think is happening.
For 99% of apps slow software is compensated by fast hardware. In almost all cases, the speed of your software does not matter anymore.
Unless speed is critical, you can absolutely justify writing slow software if it's more maintainable that way.
And thus, when I clicked on a link to an NPR story just now, it was 10 seconds before the page was readable on my computer.
Now my computer (Pinebook Pro) was never known as fast, but it still runs circles around the first computer I ran a browser on. (I'm not sure which computer that was, but the CPU was likely running at 25MHz; it could have been an 80486 or a SPARC CPU though - now get off my lawn, you kids.)
We are long past knitting RAM. You should go with the times. Development speed is key nowadays. People literally don't care about speed. If they did, Amazon & Facebook would be long gone.
I find I develop faster when I am not waiting on a bunch of unnecessary nonsense that is only present to supplement untrained developers. For example I can refactor an absolutely massive application in around 2 hours if I have end to end tests that execute in seconds and everything is defined as well organized interfaces.
You cannot develop fast if everything you touch is slow.
> People literally don't care
I am really not interested in developers inventing wild guesses.
No. Only tech-savvy users care. And these are a minority of the relevant users. Why do people still buy shit on Amazon despite Amazon being horribly slow?
This is a fairly classic rant. Many have gone before, and many will come after.
I have found that it's best to focus on specific tools and become good with them, but always be ready to change. ADHD-style "buzzword Bingo" means that you can impress a lot of folks at tech conferences, but may have difficulty reliably shipping.
I have found that I can learn new languages and "paradigms" fairly quickly, but becoming really good at them takes years.
That said, it's a fast-changing world, and we need to make sure that we keep up. Clutching onto old tech, like My Precioussss, is not likely to end well.
What do you think of Elixir in that regard? It seems to be evolving in parallel to current trends, but it still seems a bit too niche for my taste. I'm asking because I'm on the fence on whether I should/want to base my further server-side career on it. My main income will likely come from iOS development for at least a few more years, but some things feel off in the Apple ecosystem, and I feel the urge to divest.
I've been working in Elixir since 2015. I love the ecosystem and think it's the best choice for building a web app from a pure tech/stability/scalability/productivity perspective (I also have a decade+ of experience in Ruby on Rails, Node.js, and PHP Laravel, plus Rust to a lesser extent).
I am however having trouble on the human side of it. I've got a strong resume, but I was laid off in Nov 2024 and I'm having trouble even getting Elixir interviews (with 9+ years of production Elixir experience!). Hiring people with experience was also hard when I was the hiring manager. It is becoming less niche these days. I love it too much to leave for other ecosystems in the web sphere.
I like PHP. It has a raw power that is really nice to have in web development.
Elixir is similar but concurrency is a 'first class citizen', processes instead of objects, kind of. It's worth a look. I've never used it but there's a project for building iOS applications with the dominant Elixir web framework, https://github.com/liveview-native/live_view_native .
Elixir can be used for scripting tasks; config and test rigs are usually scripts. In theory you can use the platform for a desktop GUI too; one of the bespoke monitoring tools is built that way. For a few years now there have also been libraries for numeric and ML computing.
> Look at what 37Signals is able to do with [1] 5 Product Software Engineers. their output is literally 100x of their competitors.
The linked Tweet thread says they have 16 Software Engineers and a separate ops team that he's not counting for some reason.
There are also comments further down that thread about how their "designers" also code, so there is definitely some creative wordplay happening to make the number of programmers sound as small as possible.
Basecamp (37Signals) also made headlines for losing a lot of employees in recent years. They had more engineers in the past when they were building their products.
Basecamp is also over 20 years old and, to be honest, not very feature filled. It's okay-ish if your needs fit within their features, but there's a reason it's not used by a lot of people.
DHH revealed their requests per second rate in a Twitter argument a while ago and it was a surprisingly low number. This was in the context of him claiming that he could host it all on one or two very powerful servers, if I recall correctly.
When discussing all things Basecamp (37Signals) it's really important to remember that their loud internet presence makes them seem like they have a lot more users than they really do. They've also been refining basically the same product for two decades and had larger teams working in the past.
Just joining all the other comments to say there's a split between:
- users don't care about your tech stack
- you shouldn't care about your tech stack
I don't on-paper care what metal my car is going to be made of, I don't know enough information to have an opinion. But I reeaaally hope the person designing it has a lot of thoughts on the subject.
I find it funny that this message has resurfaced on the front page once or twice a year for at least 10 years now.
Product quality is often not the main argument advanced when deciding on a tech stack, only indirectly. Barring any special technical requirements, in the beginning what matters is:
- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?
Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question about what is hot or shiny and new.
There's an obvious solution to "language doesn't matter". Let the opinionated people pick the stack. Then you satisfy the needs of the people who care and those who don't care.
How long are we gonna be getting those patronizing pieces of advice from people who figured out something based on their experience, made wrong conclusions, and are now eager to "teach" others?
This discussion is not about technology. It's about technical people learning that business, product and users are actually important. The best advice I can give technical people about working at startups is that you should learn everything you can about business. You can do that at a startup much easier than at a big tech company. Spend as much time as you can with your actual users, watching them use your product. It will help you communicate with the rest of the team, prioritize your technical tasks, and help you elevate your impact.
Problem is your hiring manager at a startup will still care whether you're an expert in the stack-du-jour. So technical people aren't incentivised to care about the business.
I don't agree with the article's central premise. It assumes that tech stack choices should be driven solely by what the end user cares about.
In reality, selecting a tech stack is about more than the user side of things, it’s a strategic decision where factors like cost efficiency, the ease of hiring new developers, long-term maintainability, and the likelihood that the technology will still be relevant in five years are all critical.
These considerations directly impact how quickly and effectively a team can build and scale a product, even if the end user never sees the tech stack at work.
I worked on a small team with 2 or 3 backend Elixir devs as the sole developer for the JavaScript / TypeScript front end, the React Native app, microservices running in Node, and browser automation. It was easiest for me to write a backend service in JavaScript and expose an interface for them to integrate with rather than wait for them to get around to building it. The services were usually small, under a couple thousand lines of code, and if they wanted to, they could translate a service to Elixir, since the business logic was usually hardened and solved. One service might scrape data, store it in S3 buckets, and then process it when requested, storing / caching the results in a Postgres database.
Here is the important part: the automated browser agents I built were core to the company's business model. Even today nobody can accomplish in any other language than JavaScript what I was doing because it requires injecting JavaScript into third party websites with the headless browser. Even if the surface area was small, the company was 100% dependent on JavaScript.
The issue is that they are huge Elixir fanboys. Monday morning meeting, G. would start talking about how much JavaScript sucks and how they should start to move all the front end code to LiveView. Every couple weeks .... "we should migrate to LiveView." Dude, I'm sitting right here, I can hear you. Moreover, your job depends on at least one person writing JavaScript, as shitty a language as it might be. I don't think he understood that he was threatening my job. The fanboy Elixir conversations between the 3 of them always made me feel like a second-class citizen.
I'm one of those fanboys. I've done the React, Angular etc. front end thing. LiveView just absolutely smokes SPAs and other JS rat's nests in terms of productivity, performance and deployment simplicity (for certain types of apps). The fact that you don't have to write 6 layers of data-layer abstraction alone is worth it.
And don't get me wrong, I even like things like Nuxt and have a few products on Astro (great framework btw). Agree regarding browser automation, not many options there so your gig is safe for now. But do play with LiveView, it's pretty special.
I'm going to agree with you that Elixir / Erlang is the most productive and will back it up with some data; elixir developers are the highest paid because they generate the most value. [0] Nonetheless, LiveView isn't a viable solution for a lot of what I do. Because of that, it is important to have developers who know and understand how to use JavaScript.
A mixture of contempt and resentment towards JavaScript makes developers worse engineers.
What will happen now you're gone, is, one of them will encounter a trivial problem working in a language they don't understand. They could solve this by consulting some documentation, but they won't do that. Instead they will make a huge fuss and embark on a six month project to rewrite everything in LiveView.
Like all rewrites it will be much harder than they think, come out worse than they hoped and fail to solve any customer problems.
> Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that. They won’t notice those extra 10 milliseconds you saved, nor will their experience magically improve just because you’re using the latest JavaScript framework.
Users care about performance. The per-user-action latency is >10 ms (unless you're in the bootstrap phase).
> What truly makes a difference for users is your attention to the product and their needs.
False dichotomy.
> Every programming language shines in specific contexts. Every framework is born to solve certain problems. But none of these technical decisions, in isolation, will define the success of your product from a user’s perspective.
Yes, so evaluate your product's context and choose your tools, frameworks, and languages accordingly.
Many comments here argue that the right stack and good "clean" code will then lead to user-appreciated performance improvements.
More often I've seen this used by developers as an excuse to yak-shave "optimizations" that deliver no (or negative) performance improvements. e.g. "Solving imaginary scaling problems ... at scale!"
Maybe not, but I do, and I hope anyone else who works in the space does as well. I'm not a big fan of this argument of cutting all "inefficient attention" out of what should be our craft. I want to take pride in my work, even if my users don't share that feeling
Indeed, hence why I always ask back, when someone proposes a rewrite, what the business value is.
Meaning: the amount of money spent on developer salaries over the rewrite timeframe, how that reflects on what the business is doing, and how the business is going to get that investment back.
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems, and your job is to pick the ones that fit your use case
What if multiple technologies fit our use case? There will be one that fits the use case “best”.
The user has never been the primary factor in my choice of tech stack. Just like my employer doesn't care what car I drive to get to work. It's mostly about the developers and the best tools available to them at that point in time.
No! This is great advice if you are working on a personal project, but terrible advice in all other scenarios. Use the stack that solves your problem, not the stack you are simply comfortable with.
I think the folks who lost access to funds due to an implementation choice of Synapse would disagree with this statement.
They don't care about most implementation details, but choice of 3rd parties to manage functionality and data does matter to your users when it breaks. And it will.
Sure, the language and framework you choose are less important to them, per parent post. But it's a slippery slope; don't assume that they are agnostic to _all_ your technical choices.
I am working as a contractor on a product where my principal incentives revolve around meeting customer-facing milestones. I am not being paid a dime for time spent on fancy technology experiments.
It has been quite revealing working with others on the same project who are regular salaried employees. The degree to which we ensure technology is well-aligned with actual customer expectations seems to depend largely on how we are compensated.
More importantly, for open source projects with a lot of users, don't advertise a bunch of techno mumbo-jumbo on your website. That either means nothing to users, or may even put them off. Sure, have a link for developers that goes into all that stuff so they can decide to contribute or build from source or whatever. Just keep it off the main page - it's meaningless to the general public.
This feels like one of those comments you hear in a tech stack meeting—something that doesn’t change the discussion but fills space, like a “nothing from my end.”
“So, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
'Hey guys, users don’t care about the tech stack. We need to focus on their needs.'
“Uh… right. So, users like speed, yeah? Anyway, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
They do not care about your stack, but do care that the stack works. Use what you're familiar with, sure, but if that does not produce a reliable system, your user will not use it.
Nobody is making the argument that users care about your tech stack. I've literally never heard a dev justify using a library because "users care about it". Nobody.
There is. But what the OP is doing is not that, it's "scaling". Which probably makes sense for whatever they're working on*. For the other 99% of projects, it doesn't.
* ... if they're at ClosedAI or Facebook or something. If they're at some startup selling "AI" solutions that has 10 customers, it may be wishful thinking that they'll reach ClosedAI levels of usage.
It's not really clear to me that the OP is talking about hardware costs. If so, yeah, once you have enough scale and with a read-only service like an LLM, those are perfectly linear.
If it's about saving the users time, it's very non-linear. And if it's not a scalable read-only service, the costs will be very non-linear too.
This is dumb. Of course they don't care about the tech stack directly but they obviously care about the things that it affects - performance, reliability, features etc.
To top it off, it also includes a classic "none of them are perfect so they are all the same and it doesn't matter which you choose". I recently learnt the name for this: the fallacy of grey.
Sometimes I have to remind myself of this. Take for example Bring a Trailer. It is a wordpress site. I know you're rolling your eyes or groaning at the mention of wordpress. It works. It is hugely successful in the niche it is in.
People are using machines dozens of times more powerful than machines from 15 years ago, but do they do things that are materially different to what they did before? Not really.
They absolutely do care, even if they cannot articulate what's wrong.
> I still often find myself in discussions where the main focus is the choice of technologies. Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
When developers are having those discussions, are they ever doing so in relation to some hypothetical user caring? This feels like a giant misdirection strawman.
When I discuss technology and framework choices with other developers, the context is the experience for developers. And of course business considerations come into play as well: Can we find lots of talent experienced in this set of technologies? Is it going to efficiently scale in a cost effective manner? Are members of the team going to feel rewarded mastering this stack, gaining marketable skills? And so on.
There are a lot of vacancies for technical co-founders with a preference for a particular stack, which usually comes from some advisor or investor. A pretty dumb filter, given that it's often a Node/React combo. It is understandable where it comes from, but still… dumb.
Lately, I've been thinking that LLMs will lift programming to another level anyway: the level of specification in natural language with some formal descriptions mixed in. LLMs will take care of transforming this into actual code. So not only do users not care about programming, but neither do the developers. Switching the tech stack might become a matter of minutes.
If it simply generates code from natural language, then I am still fundamentally working with code. Aider, as an example, is useful for this, but for anything that isn't a common function/component/class it falls apart, even with flagship models.
If I actually put my "natural language code" under git, then it'll lack specificity at compile time, likely leading to large inconsistencies between versions. This is a horrible user experience - like the random changes Excel makes every few years, but every week instead.
And everyone that has migrated a somewhat large database knows it isn't doable within minutes.
I don't think one would put only the specification in Git. LLMs are not a reliable compiler.
Actual code is still the important part of a business. However, how this code is developed will drastically change (some people actually work already with Cursor etc.). Imagine: If you want a new feature, you update the spec., ask an LLM for the code and some tests, test the code personally and ship it.
I guess no one would hand over the control of committing and deployment to an AI. But for coding yes.
You can also statically link a C program, and only the parts of libc that are actually used end up in the binary.
> And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor 2000x or so.
This is such a silly argument because nobody is optimizing compilers and standard libraries for Hello World utilities.
It's also ridiculous to compare debug builds in rust against release builds for something else.
If you want a minimum-sized Hello World app in Rust, then you'd use no_std, a no-op panic handler, and make the syscalls manually.
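Roughly along these lines - an untested sketch for x86_64 Linux, where the exact asm and build flags are my assumption (adapted from the usual min-sized-rust recipe), not something I've benchmarked:

    // Sketch only: no_std hello world via raw Linux syscalls (x86_64).
    #![no_std]
    #![no_main]

    use core::arch::asm;

    // Required because no_std drops the default panic machinery.
    #[panic_handler]
    fn panic(_: &core::panic::PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        let msg = b"hello world!\n";
        unsafe {
            // write(1, msg, len) -- syscall 1 on x86_64 Linux
            asm!(
                "syscall",
                inlateout("rax") 1usize => _,
                in("rdi") 1usize,
                in("rsi") msg.as_ptr(),
                in("rdx") msg.len(),
                out("rcx") _,
                out("r11") _,
            );
            // exit(0) -- syscall 60
            asm!(
                "syscall",
                in("rax") 60usize,
                in("rdi") 0usize,
                options(noreturn),
            );
        }
    }

Built with something like `rustc -O -C panic=abort -C link-arg=-nostartfiles hello.rs`, I'd expect that to land in the low kilobytes rather than hundreds of them, though I haven't measured this exact snippet.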
> The rust compiler package takes up 240mb on my machine. That means rust is about 10x larger than a fully functional desktop OS from 30 years ago.
Fortunately for all of us, storage costs and bandwidth prices have improved by multiple orders of magnitude since then.
Which is why we don't care. The added benefits of modern software are great.
You're welcome to go back and use a 30 year old desktop OS if you'd like, though.
rustc --version && rustc main.rs && ls -alh main
rustc 1.85.0 (4d91de4e4 2025-02-17)
-rwxr-xr-x 1 user user 436K 21 Feb 17:17 main
What's your output for `rustup default`?
Also, what's your output when you follow min-sized-rust?
For me,
3.8M Feb 21 11:56 target/debug/helloworld
Why is --release not being passed to cargo? It's not like the File Pilot mentioned by GP is released with debug symbols.
The original post/blog said a debug hello world was 3.7MB. I confirmed the post.
A release build is:
426K Feb 21 15:28 target/release/helloworld
If you use lto, codegen-units = 1 and strip then you get:
323K Feb 21 15:30 target/release/helloworld
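If anyone wants to reproduce that: the settings named above map to a release profile section in Cargo.toml along these lines (my sketch; exact sizes will vary by toolchain and platform):

    [profile.release]
    lto = true          # cross-crate link-time optimization
    codegen-units = 1   # trade compile time for better optimization
    strip = true        # drop symbols from the final binary

and then a plain `cargo build --release`.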
>When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
No, it means that product quality is all that matters. The users don't care how you make it work, only that it works how they want it to.
There are developers who care about tech and not about product. They build toys.
There are developers who care about product and not about tech. They build things that just barely work.
There are developers who care about both. They build the stuff people remember.
TFA uses the phrase that way.
> What truly makes a difference for users is your attention to the product and their needs.
> Learn to distinguish between tech choices that are interesting to you and those that are genuinely valuable for your product and your users.
Would like to echo this. I've seen this used to justify extracting more value from the user rather than spending time doing things that you can't ship next week with a marketing announcement.
I've also seen it used when discussing solutions that aren't stack pure (for instance, whether to stick with the ORM or write a more performant pure SQL version that uses database-engine specific features)
> I have never seen it used like that.
Then you need to read more, because that's what it means. The tech stack doesn't matter. Only the quality of the product. That quality is defined by the user. Not you. Not your opinion. Not your belief. But the user of the product.
> which hurt the user.
This will self correct.
Horrible tech choices have led to world-class products that people love and cherish. Perfect tech choices have led to things people laugh at, and serve as a reminder that the tech stack doesn't matter and, in fact, may be a red flag.
Look at every single discussion about Electron ;)
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
Only 500MB? Now you're being charitable.
Electron adds about 500MB to your memory requirements. If your software uses 2GB like Teams, the rest is on you.
Poorly-chosen environments also apply a multiplier to actual application memory.
Pointer-oriented languages as a rule give a 2x memory-size multiplier to everything, though higher is possible depending on how dynamic they are.
I suspect that the DOM and JS runtimes add a significant multiplier to the UI memory cost.
Sounds like schooling your even drunker mate 8 pints deep into a weekend bender.
Using 500MB of memory while not doing anything isn’t really a problem. If RAM is scarce then it will get paged out and used by another app that is doing something.
Borg3 slaps foldr with a spinning rust.
And now more seriously ;) Swapping ain't fun. Sure, if it needs to happen, it's better than an out-of-memory error or a crash, but still. It's no excuse to balloon everything.
And here, is my memory stats, while writing reply:
Commit: 288.13M (1%); Free Memory: 15.43G (96%)
Swapping is perfectly fine for the case where an app that has remained unused for a significant period of time starts being used again. It may even be faster than having the app manually release all of the memory it’s allocated and then manually reallocating it all again on user input (which is presumably what the alternative would be, if the complaint is only about the app’s memory usage when in an inactive state).
Free RAM is just RAM that’s doing nothing useful. Better to fill it with cached application states until you run out.
An application that genuinely uses less RAM at any point of its execution, whether that's measured by maximum RSS, average RSS, whatever, is still better. Then you can have more apps running at the same level of swapping. It's true that if you have a lot of free RAM, there's no need to split hairs. But what about when you don't have a lot? I was under the impression that computers should handle a billion Chrome tabs easily. Or like, y'know, workloads that people actually use that aren't always outlandish stress tests.
Sure, the ideal app uses no resources. But it really doesn’t matter how much allocated memory an app retains when in an unused state (if you’re running it on a modern multitasking OS). If there’s any memory pressure then those pages will get swapped out or compressed, and you won’t really notice because you’re not using the app.
It seems like you're saying that if the user acts in certain patterns, the OS will just take care of it. Isn't the point of computers and technology to make it so that users are less restricted and more empowered? If many people do actually notice performance issues particularly around usage of certain apps, then your recommended patterns are too narrow for practicality. You are talking about scenarios where there is no real pressure, the happy path, but that's not realistic.
I’m just saying that it would not make much difference if Electron reduced its idle memory usage. If you use an app infrequently, then it will get paged out and the memory it retains in its idle state will be available for other processes. If you use the app frequently then you care about active memory usage more than idle memory usage. Either way, there is not much to be gained by reducing idle memory usage. Any attempt to reduce it might just increase the wake up time of the app, which might then have to rebuild various data structures that could otherwise have been efficiently compressed or cached to disk by the OS.
You're still going off of a "happy path" mentality. The user decides when an app should be idle or active, not the OS. The OS therefore pages in and out with no grand scheme and may be out of sync with the user's next action. What of the time taken to page in an idle app's working set? Or, as I addressed from the start, what if the user wants many applications open and there is not much free memory? That means swapping becomes a necessity and not a luxury, performance drops, and user experience declines. I think there is plenty of, perhaps low-hanging is uncharitable, but not unreasonably high fruit to pick so that users can be more comfortable with whatever workload they desire. I don't think we're remotely pushing the limits of the technology here.
You seem to be thinking about memory usage in general and not specifically about memory usage in an idle state, which is what I’ve been talking about in this thread.
Otherwise, what exactly is the alternative you have in mind? Should Electron release allocations manually after a certain period of inactivity and then rebuild the relevant data structures when woken up again? For the reasons I gave above, I suspect that the result of that would be a reduction in overall performance, even if notional memory usage were reduced.
If you just say “Electron should use less RAM overall!” then you are not talking about the specific issue of memory usage in an idle state; you are talking about memory usage in general.
I am engaging with the arguments you are making; I don't think you are engaging with mine. Reread my previous comment. Application designers do not get the luxury of somehow having high memory usage in an idle state but being able to cut down memory usage in an active state. Your idea of the idle state seems to be that the entire system is idle. You are not considering what happens where there is load. The app designer does not decide when the application has the leeway to have high idle memory usage ("oh, Electron idle memory usage of 500MB is fine because swap so it's basically free").
> Free RAM is just RAM that’s doing nothing useful. Better to fill it with cached application states until you run out.
So you recognize that you are only cruising when there is plenty of free RAM. You're setting aside this magical scenario of the idle state, but really you are just being optimistic that users won't suddenly act and overload the system. You are excusing significant waste by saying it won't matter most of the time. Perhaps that's true with certain definitions (i.e. many people on most days don't make their computer thrash), but in practice people notice when they have a problem, and I want to minimize such problems. I am not saying we can do any of this perfectly, but I think there is legitimate room for improvement that you are ignoring. I think it's reasonable to say that, with modest engineering effort, we could make every (say) 800MB Electron app into a 600MB Electron app. If you were running three at once before, now you can run four with a comparable level of performance. I think that's a genuine improvement that isn't some gotcha to you or an unrealistic fantasy.
>Application designers do not get the luxury of somehow having high memory usage in an idle state but being able to cut down memory usage in an active state.
That may be so. However, in that case, it does not make sense to complain about an application using x amount of RAM while not in use, which is the complaint that I was responding to ("it should not use 500MB of memory when it's [sitting in my tray] and not doing anything"). It might make sense to complain about Electron's overall RAM usage. As far as I can tell, that's what you're doing. I don't really have an opinion on that as I'm not familiar with Electron's internals. So on that point there's really nothing for us to argue about.
>Your idea of the idle state seems to be that the entire system is idle.
I'm talking about the state where the application is not being used, not where the entire system is idle. In that state it is easy for the OS to reclaim any RAM the app is using for use by other apps.
I consider it a problem on my system and I will refuse to use your app if you disrespect my resources that way.
Businesses need to learn that, like it or not, code quality and architecture quality are part of product quality
You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
This is why startups can outcompete incumbents sometimes
Suddenly there's a market shift and a startup can actually build your entire product and the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture has atrophied to the point it takes longer to update it than it would to rebuild from scratch
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen
>You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
While it's true that that can be partially due to tech debt, there are generally other factors as well. The more years you've had to accrue customers in various domains, the more years of decisions you have to maintain backwards compatibility with, the more regulatory regimes you conduct business under and build process around, the slower you're going to move compared to someone trying to move fast and break things.
> No, it means that product quality is all that matters
But it says that in such a roundabout way that non technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
> product quality is all that matters
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.
If it's making billions of dollars, somebody somewhere is getting a lot of what they want out of it. But it's possible that those people are actually the purchasing managers or advertisers rather than the users of the software. "Customers" probably would've been the more correct term. Or sometimes "shareholders".
If users care so much about product quality why is everyone using the most shitty software ever produced — such as Teams?
For 99% of users, what you describe really isn't something they know or care about.
I might agree that 99% of users don't know what they want, but not that they don't care.
As far as I can tell the article has been misinterpreted, causing many lost hours for HN commenters. By saying users don't care about your tech stack, it is saying that you should care about your tech stack, i.e. it matters, and it presents some bullet points on what to keep in mind when caring about your tech stack. Or to summarize: be methodical, not hype-following.
Agree the article is not clearly presented but it's crazy to see the gigantic threads here that seem to be based on a misunderstanding.
I feel like that's what it should mean, that quality is all that matters. But it's often used to excuse poor quality as well. Basically, if you Skinner-box your app hard enough, you can get away with lower quality.
Yes, this is exactly what the article means.
"Use whatever technologies you know well and enjoy using" != "Use the tech stack that produces the highest quality product".
Well, the alternative and more charitable interpretation would be that you are more likely to build a better product in the stack you know well and enjoy.
I think when you get more concrete about what the statement is talking about, it becomes very hard to assert that they mean something else.
Like if you are skilled with, say, Ruby on Rails, you probably should just use that for your v1.0. The hypothetical better stack is often just a myth we tell ourselves as software engineers because we like to think that tech is everything when it's the product + launching that's everything.
I think the idea is that the use of a particular tech stack isn't a determining factor in terms of product quality.
Tech stack and product quality are highly correlated in the real world.
Sometimes it is, but generally speaking, I don't think so. The correlation appears to be pretty loose in the real world.
!= "Use the tech stack that cheapest developers can work with".
(Yes, for real. I've once witnessed this being said out loud and used to justify specific tech stack choice.)
> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration, written in Rust. Its binary size was over 100MB ... for a terminal emulator that didn't pass vttest and couldn't even do half of the things xterm can. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary itself is just about 850kB of those 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.
> If you want to make something that starts instantly you can't use electron or java.
Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
alacritty (in the arch repo) is 8MB decompressed
alacritty is also written in Rust and GPU-accelerated, so the other VTE must just be plain bad.
Edit: Just tried turning on a couple of binary-size optimizations, which yielded a 3.3M binary.
> In practice this argument is used to justify bloated apps
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend; with the mandatory 2MB~3MB WASM module download.
He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.
Although, look around here - the world's greatest tech stack would be shredded here because the images weren't perfectly resized to pixel-perfect fit their frames, forcing the browser to resize the image, which is slower and wastes CPU cycles every time, when it could have been done only once server-side - oh the humanity, think about how much ice you've melted on the polar ice caps with your carelessness.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Debug symbols aren't cheap. A release build with a minimal configuration (linked below) gets that down to 263kb.
https://stackoverflow.com/questions/29008127/why-are-rust-ex...
My point was that many programmers have no conception of how much functionality you can fit in a program of a few megabytes.
Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
The executable is 175 bytes.
[1] https://pastebin.com/p7VzLYxS
(Again, the point is not that Rust is bad or bloated but that people forget that 1 megabyte is actually a lot of data.)
And your point is completely wrong. It makes no sense for a language to by default optimize for the lowest possible binary size of a "hello world"-sized program. Nobody's in the business of shipping "hello world" to binary-size-sensitive customers.
Non-toy programs tend to be big and the size of their code will dwarf whatever static overhead there is, so your argument does not scale.
Even then, binary size is a low priority item for almost all use cases.
But then even if you do care about it, guess what, every low level language, Rust, C, whatever, will let you get close to the lowest size possible if you put in the effort.
So no, on no level does your argument make sense with any of the examples you've given.
Did you notice how nobody in this thread has argued that rust should optimize for binary size or that rust is somehow wrong for having made the trade-offs that it did?
People who have only worked with languages that produce large executables are likely to believe that 300k or so is about as small as an executable can possibly be. And if they believe that, then of course they'll also believe that a serious program must therefore be many megabytes large. If you don't have a good grasp of how capable modern computers are then your estimates will be wrong by orders of magnitude.
There are countless examples -- you don't have to go that far back in history to find them -- where dozens of engineers worked on a complicated program for years and it compiled down to maybe 500kb.
And again, the point still isn't that we should write software today like we had to in the 80s or 90s.
However, programmers should be aware how many orders of magnitude slower/bigger their program is relative to a thoroughly optimized version. A program can be small compared to the size of your disk and at the same time 1000x larger than it needs to be.
> My point was that many programmers have no conception of much functionality you can fit in a program of a few megabytes.
Many of my real-world Rust backend services are in the 1-2MB range.
> Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
> How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
> The executable is 175 bytes.
You can also disable the standard library and a lot of Rust features and manually write the syscall assembly into a Rust program. With enough tweaking of compiler arguments you'd probably get it to be a very small binary too.
But who cares? I can transfer a 10MB file in a trivial amount of time. Storage is cheap. Bandwidth is cheap. Playing code golf for programs that don't do anything is fun as a hobby, but using it as a debate about modern software engineering is nonsensical.
No disagreement here! Just curious how big the impact of debug symbols was and wanted to share my findings.
Thanks for pointing this out.
It does seem weird to complain about the file size of a debug build not a release build.
In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms, but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds, and it becomes such a tangled mess it can't be optimized down the line.
If you want to make something that starts instantly you can't use electron or java.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
> Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
Semi-dependent.
Default Java libraries being piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast" if you ignore all the stuff the standard library is set up for and think in low-level, C-like manipulation of int arrays (not Integer the class, int the primitive type) rather than AbstractFactoryBean and so on. And you keep going from there, with that attitude, because there's no "silver bullet" for software quality, no "one weird trick is all you need": software in (almost!) any language can be fast if you focus on that and refuse to accept solutions that are merely "ok". After all, we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
[0] http://www.righto.com/2015/11/macbook-charger-teardown-surpr...
[1] https://www.tomshardware.com/video-games/doom-runs-on-an-app...
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
> But that didn't stop Mojang, amongst others
Stop them from... Making one of the most notoriously slow and bloated video games ever? Like, just look at the amount of results for "Minecraft" "slow"
> Like, just look at the amount of results for "Minecraft" "slow"
My niece was playing it just fine on a decade-old Mac mini. I've played it a bit on a Raspberry Pi.
The sales figures suggest my niece's experience is fairly typical, and things such as you quote are more like the typical noise that accompanies all things done in public — people complaining about performance is something which every video game gets. Sometimes the performance even becomes the butt of a comic strip's joke.
If Java automatically caused low performance, neither my niece's old desktop nor my Pi would have been able to run it.
If you get me ranting on the topic: Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
> I've played it a bit on a Raspberry Pi.
I actually tried it on a Pi literally a few days ago (came across an older Pi which had it preinstalled) and it's pretty much unplayable
> Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
I believe there should be the possibility for legal customer complaints when this happens, just like I can file a complaint if I buy a microwave and it takes 2 minutes to start.
> I actually tried it on a Pi literally a few days ago (came across an older Pi which had it preinstalled) and it's pretty much unplayable
Huh. Weird. Fine for me. Glad I'm not the QA/dev that has to sort out why.
> one of the most notoriously slow and bloated video games ever?
I like how you conveniently focus on that aspect and not on how that didn't prevent it from being one of the biggest video game hits of all time.
Those two can be true at the same time. And that's one thing that a lot of technical people don't get. Slow is generally bad. But you cannot take it out of context. Slow can mean different things to different people, and slow can be circumvented by strategies that do not involve "making the code run faster".
The fact that it was later bought by Microsoft (and rewritten in C++ so it can run on consoles[1]) is not relevant to the argument that you can, in fact, write a successful and performant game in Java, if you know how.
[1] My source is just comments on HN.
I’m sure the C++ version is more performant, but that doesn’t strictly mean it was ported for performance. Is there even a JVM on those consoles? I suspect the answer is no and it was easier to remake in C++ than to port OpenJDK to the Xbox 360.
Are we talking modded? Cause Vanilla minecraft runs on a potato, especially if the device is connecting to a server (aka doesn't have to do the server-side updates itself).
Runs fine on an essentially single threaded 7350k with a 1050ti I built 8-9 years ago.
The server runs on a single thread on a NUC, too.
Microsoft rewrote Minecraft in C++, so maybe not the best example?
> All else equal users will absolutely choose the zippiest products.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
Yes, there are other reasons people choose software. That's why GP said all else equal. You just ignored the most important part of his post.
Yes, hypothetically if all else is equal, and the difference isn't noticeable, then the users experience is equal. But it's a hypothetical that doesn't actually exist in the real world.
Competing software equal in every way but speed doesn't exist except for some very few contrived examples. Different pieces of software typically have different user interfaces, different features, different marketing, different functionality, etc.
> If you want to make something that starts instantly you can't use electron
Hmmm, VScode starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS YouTube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the app store, 600meg).
Users don't care about bloat.
"All else equal users will absolutely choose the zippiest products."
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
Nor Python.
1. I do this every day with C.
That's a poor example, as users genuinely don't care about download file size or installed size, within reason. Nobody in the West is sweating a 200MB download.
Users will generally balk at 2000MB though. ie, there's a cutoff point somewhere between 200MB and 2000MB, and every engineering decision that adds to the package size gets you closer to it.
For an installed desktop app... vast majority of folks aren't going to bat an eye at 2G.
Hell - the most exposure the average person gets to installing software is game downloads, sadly (100G+). After that it's the stuff like MSOffice (~5-10G).
---
I want to be clear, I definitely agree there are cases where "performance is the feature". That said, package size is a bad example.
Disk is SO incredibly cheap that users are being conditioned to not even consider it on mobile systems. And networks are good enough I can pull a multi-gig file down with just my phone's tethering bandwidth in minutes basically across the country.
When I want performance as a user, it's for an action I have to do multiple times repeatedly. I want the app itself to be fast, I want buttons to respond quickly, I want pages to show up without loaders, I want search to keep up with my keystrokes.
Use as much disk and ram as you can to get that level of performance. Don't optimize for computer nerd stats like package size (or the ram usage harpies...) when even semi-technical folks can't tell you the difference between kb/mb/gb, and have no idea what ram does.
Users care about performance in the same way that users buy cars. Most don't give a fuck about the numbers, they want to like the way it drives.
Your tech stack can definitely influence that, but you still have to make the right value decisions. Unless your audience is literally "software developers" like that file explorer, lay off the "software developer stats".
> Ms office
Everyone gets these preloaded in their laptops, and for people that need it it's a non-negotiable
> Games
Continuing the same point, people aren't going to be willing to put the same effort into trying some free app as they would into a game they just paid a bunch of money for.
> Disk so cheap
Cost per GB is irrelevant if the iPhone comes with a fixed amount of un-upgradable space the majority of which is taken by media. People routinely "free up" space on their phones by deleting apps that show up at the top of the settings -> storage page. Do you want that to be your app?
> 2GB
I think this is too high for a desktop app. Few hundred MB is probably the limit I'd guess.
This comment makes no sense.
The reason why people aren't sweating 200mb is because everything has gotten to be that big. Change that number to 2 terabytes.
And guess what? In 5 years' time, someone will say "Nobody in the West is sweating a 2 TB download" because it keeps increasing.
Yes, that's because you're all measuring the wrong factor for user satisfaction.
Users don't care about download size, they care about:
* will it fit on my storage device
* can I download it in a convenient amount of time
* does it run with acceptable performance
It really doesn't matter if it's a kilobyte or a petabyte.
Someone in the 80s, probably:
This comment makes no sense.
The reason why people aren't sweating 1mb is because everything has gotten to be that big. Change that number to 20mb.
And guess what? In 5 years' time, someone will say "Nobody in the West is sweating a 200mb download" because it keeps increasing.
The point obviously went over your head
I like this take, though deadlines do force you to make some tradeoffs. That's the conclusion I've come to.
I do think people nowadays over-index on iteration/shipping speed over quality. It's an escape. And it shows, when you "ship".
Yes, as a user I definitely shudder at Electron-based apps or anything built on the JVM.
I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you gotta install the JVM) and lots of desktop apps made with it were horrible.
My father for example, as it was the tech chosen by his previous bank.
Electron is way sneakier, so people just complain about Teams or something like that.
Nowadays users aren’t expected to install the vm. Desktop apps now jlink a vm and it’s like any other application. I’ve even seen the trimmed vm size get down to 30-50mb.
Unfortunately you still get developers who don’t set the correct memory settings and then it ends up eating 25% of a users available ram.
I strongly agree with this sentiment. And I realize that we might not be the representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.
Said it better than I could. It's always a deflection from the fact that whoever is saying doesn't know anything about their industry.
Had it been 18mb or 180mb, would it change anything? It takes seconds to download, seconds to install. Modern computers are fast.
It’s probably best to compare download sizes on a logarithmic scale.
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
But 18mb vs 9mb is not significant in most cases.
It’s a little off topic but I used to think Go binaries were too big. I did a little evaluation at some point and changed my mind. Sounds like they are still bigger than Rust.
https://joeldare.com/small-go-binaries
Love this reply and learned a new term from it.
> But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use electron or java.
You picked the two technologies that are the worst examples for this.
Electron: Electron has breathed new life into GUI development, which essentially nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entrypoint (main) in less than a second. Whatever slows it down is a codebase problem, it's not the JVM.
Users don't care if your binary is 3 or 4 mb. They might care if the binary was 3 or 400 mb. But then I also look at our company that uses Jira and Confluence and it takes 10+ seconds to load a damn page. Sometimes the users don't have a say.
> If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice.
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
> what they really mean is that product quality doesn’t matter.
But does it matter? I think the only metric worth optimising for is latency. The other stuff is just something we do.
> In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
Yes, yes it is. But they were going to do it anyway. Even if people were to stop accepting this argument, they'll just start using another one.
Startup culture is never going to stop being startup culture and complacent corporations are never going to stop being complacent.
As the famous adage goes: If you want it done right, you gotta do it yourself.
> File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download.
File Pilot is... seemingly a fully-featured GUI file explorer in 1.8mb, complete with animations?
Dude. What.
Yes, there is a generation of programmers that doesn't believe something like File Pilot is even possible.
They'd be very confused at something like "A mind is born" then?
https://linusakesson.net/scene/a-mind-is-born/
7030 times smaller than File Pilot.
This was on the cover of a book and it prints a maze using block characters? Copilot said the block characters part but not the maze; I remembered that after seeing the "one of two block characters."
Unfortunately that requires a BASIC interpreter to run, which will probably be larger than 256B and possibly even 1800000B - the "A Mind Is Born" .prg can just be "run" on the computer it was written for with no interpreter.
yes, I am aware of the Commodore 64 :) and I also know what machine code is.
Generation? Are you for real?
I just don't really see drawing like that done by binaries that small. Though I've seen plenty of demoscene stuff, it's not really the same.
I just checked one of the project pages (behind some links - here: https://filepilot.handmade.network) and the first post says C with OpenGL. Boy is it rare to see one of those nowadays.
Pretty sure only techies care about that; an average user on their 10 year old device couldn't care less whether it took 0.1s or 5s to start.
Nice to have, not a must.
There's been a fair bit of research on this. People don't like slow interfaces. They may not necessarily _recognise_ that that's why they don't like the interface, but even slowdowns in the 10s of ms range can make a measurable difference to user sentiment.
And yet even Amazon, eBay, and Wikipedia don’t see value in building an SPA. Chew on that.
SPAs, in practice, tend to be slower than well-done conventional/old-fashioned webapps, and, particularly for Wikipedia-type applications, have all sorts of other usability concerns. Like, which feels more responsive, wikipedia or some random SPA?
Presumably they would if there were significant performance benefits of SPAs.
Most regular people buy a new phone when their old one has "gotten slow". And why do phones get slow? "What Andy giveth, Bill taketh away."
In tech circles regular people are believed to be stupid and blind. They are neither. People notice when apps get slower and less usable over time. It's impossible not to.
And then they spend their own money making them faster, not linking the slowness to the software, but to some belief their hardware is getting worse over time.
> Most regular people buy a new phone when their old one has "gotten slow".
Uh...
If this statement were true (IF), it would be even a better argument for software developers to NOT optimize their apps.
The big problem is that most of the time users do not have options. Very often there are no better-performing alternatives.
Apart from when it is optional consumption, say games.
> They won’t notice those extra 10 milliseconds you save
They won't notice if this decision happens once, no. But if you make a dozen such decisions over the course of developing a product, then the user will notice. And if the user has e.g. old hardware or slow Internet, they will notice before a dozen such decisions are made.
In my career of writing software, I've found most developers are fully incapable of measuring things. They instead make completely unfounded assertions that are wrong more than 80% of the time, and when they are wrong they tend to be wrong by several orders of magnitude.
And yes, contrary to many comments here, users will notice that 10ms saved if it’s on every key stroke and mouse action. Closer to reality though is sub-millisecond savings that occurs tens of thousands of times on each user interaction that developers disregard as insignificant and users always notice. The only way to tell is to measure things.
When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.
Outside contributors to our team absolutely hated us for it (and honestly some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional code as a series of `for` loops because the function overhead is measurably 30% slower across browsers[0], they get so mad.
[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.
Have you had the chance to interact with any of the web interfaces for their cloud products like GCP Console, Looker Studio, Big Query etc? It's painful, like when clicking a button or link you can feel a cloud run initializing itself in the background before processing your request.
Yes, I have the misfortune of needing to use Looker on a daily basis.
Boy, do I wish more teams worked this way. Too many product leaders are tasked with improving a single KPI (for example, reducing service calls) but without requiring other KPIs such as user satisfaction to remain constant. The end result is a worse experience for the customer, but hey, at least the leader’s goal was met.
> They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.
I completely agree. It blows my mind how fifteen minutes of testing something gets replaced with a guess. The most common situation I see this in (over and over again) is with DB indexes.
The query is slow? Add a bunch of random indexes. Let's not look at the EXPLAIN and make sure the index improves the situation.
I just recently worked with a really strong engineer that kept saying we were going to need to shard our DB soon, but we're way too small a company for that to be justified. Our DB shouldn't be working that hard (it was all CPU load); there had to be a bad query in there. He even started drafting plans for sharding because he was adamant that it was needed. Then we checked RDS Performance Insights and saw it was one rogue query (as one should expect). It was about a 45 minute fix, and after downsizing one notch on RDS, we're sitting at about 4% most of the time on the DB.
But this is a common thing. Some engineers will _think_ there's going to be an issue, or when there is one, completely guess what it is without getting any data.
Another anecdote from a past company was them upsizing their RDS instance way more than they should need for their load because they dealt with really high connection counts. There was no way this number of connections should be going on based on request frequency. After a very small amount of digging, I found that they would open a new DB connection per object they created (this was PHP). Sometimes they'd create 20 objects in a loop. All the code was synchronous. You ended up with some very simple HTTP requests that would cause 30 DB connections to be established and then dropped.
The amount of premature optimizations and premature abstractions I’ve seen (and stopped when possible) is staggering. Some people just really want to show how “clever” they are solving a problem that no one asked them to solve and wasn’t even an issue. Often it’s “solved” in a way that cannot be added to or modified without bringing the whole house of cards down (aka, requiring a rewrite).
I can handle premature optimization better than premature abstraction. The second is more insidious IMHO. I’m not throwing too many stones here because I did this too when I was a junior dev but the desire to try and abstract everything, make everything “reusable” kills me. 99.99999% of the time the developer (myself included) will guess /wrong/ on what future development will need.
They will make it take 2s to add a new column (just look at that beautiful abstraction!) but if you ask them to add another row then it’s a multi-day process to do because their little abstraction didn’t account for that.
I think we as developers think we can predict the future way better than we can which is why I try very hard to copy and paste code /first/ and then only once I’ve done that 3-4 times do I even consider an abstraction.
Most abstractions I’ve seen end up as what I’ve taken to calling “hourglass abstractions” (maybe there’s a real name for this I don’t know it). An hourglass abstraction is where you try to abstract some common piece of code, but your use cases are so different that the abstraction becomes a jumbled mess of if/else or switch/case statements to handle all the different cases. Just because 2 code blocks “rhyme” doesn’t mean you should abstract them.
The 2 ends of the hourglass represent your inputs/outputs and they all squeeze down to go though the center of the hourglass (your abstraction). Abstractions should look like funnels, not hourglasses.
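A contrived sketch of the two shapes, in Rust only because it's handy (hypothetical report-export code, not from any real project):

    // The "hourglass": every use case squeezes through one shared helper,
    // which grows a new flag or branch each time requirements diverge.
    enum ReportKind { Csv, Json, Audit }

    fn export_report(rows: &[String], kind: ReportKind, header: bool, redact: bool) -> String {
        match kind {
            ReportKind::Csv => {
                let mut out = if header { String::from("id,value\n") } else { String::new() };
                for r in rows { out.push_str(r); out.push('\n'); }
                out
            }
            ReportKind::Json => format!("[{}]", rows.join(",")),
            ReportKind::Audit => rows.iter()
                .map(|r| if redact { "[redacted]" } else { r.as_str() })
                .collect::<Vec<_>>()
                .join("\n"),
        }
    }

    // The "funnel": small separate functions that share only what is genuinely common.
    fn csv_report(rows: &[String]) -> String {
        let mut out = String::from("id,value\n");
        for r in rows { out.push_str(r); out.push('\n'); }
        out
    }

    fn json_report(rows: &[String]) -> String {
        format!("[{}]", rows.join(","))
    }

    fn main() {
        let rows = vec!["1,alpha".to_string(), "2,beta".to_string()];
        assert_eq!(export_report(&rows, ReportKind::Csv, true, false), csv_report(&rows));
        assert_eq!(export_report(&rows, ReportKind::Json, false, false), json_report(&rows));
        println!("{}", export_report(&rows, ReportKind::Audit, false, true));
    }

The hourglass version accumulates another boolean every time a new caller shows up; the funnel versions stay boring and each one can change without worrying about the others.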
My Plex server was down and my lazy solution was to connect directly to the NAS. I was surprised just how much I noticed the responsiveness after getting used to web players. A week ago I wouldn't have said the web player bothered me at all. Now I can't not notice.
Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.
There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it indirectly implies that latency is incredibly important, which isn't universally true - each product's metrics that are predictive of success are much more nuanced, and may even be something akin to what LLM research calls "polysemantic" neurons - the best predictor may be a combination of several metrics expressed via some nontrivial function.
For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.
Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.
Human factors has a long history of studying this. I'm 30 years out of school and wouldn't know where to find my notes (and thus references), but there are places where users will notice 5ms. There are other places where seconds are not noticed.
The web forced people to get used to very long latency, so folks no longer comment on 10+ seconds, but the old studies prove they notice them and that shorter waits would drive better "feelings". Back in the old days (of 25mhz CPUs!) we had numbers for how long your application could take to do various things before users would become dissatisfied. Most of the time the dissatisfaction was not something they would blame on the latency even though the lab test proved that was the issue; instead it was a general 'feeling' they would be unable to explain.
There are many, many different factors that UI studies used to measure. Lag in the mouse was a big problem, and not just the pointer movement either: if the user clicks you have only so long before it must be obvious that the application saw the click (my laptop fails at this when I click on a link), but you didn't have to bring up the response nearly as fast, so long as users could tell it was processing.
https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-...
Here is a study on performance that I did for JavaScript in the browser: https://github.com/prettydiff/wisdom/blob/master/performance...
TL;DR: full state restoration of an OS GUI in the browser in under 80ms from page request. I was eventually able to get that exact scenario down to 67ms. Not only is the state restoration complete, but it covers all interactions and states of the application in a far more durable and complete way than big JavaScript frameworks can provide.
Extreme performance showed me two things:
1. Have good test automation. With a combination of good test automation and types/interfaces on everything you can refactor absolutely massive applications in about 2 hours with almost no risk of breaking anything.
2. Tiny performance improvements mean massive performance gains overall. The difference in behavior is extreme. Imagine pressing a button and what you want is just there before your brain can process screen flicker. This results in a wildly different set of user behaviors than slow software that causes users to wait between interactions.
Then there are downstream consequences to massive performance improvements, the second order consequences. If your software is extremely fast across the board then your test automation can be extremely fast across the board. Again, there is a wildly different set of expectations around quality when you can run end-to-end testing across 300 scenarios in under 8 seconds as compared to waiting 30 minutes to fully validate software quality. In the latter case nobody runs the tests until they are forced to as some sort of CI step, and even then people will debate if a given change is worth the effort. When testing takes less than 8 seconds everybody and their dog, including the completely non-technical people, runs the tests dozens of times a day.
I wrote my study of performance just a few months before being laid off from JavaScript land. Now, I will never go back for less than half a million in salary per year. I got tired of people repeating the same mistakes over and over. God forbid you know what the answer is to cure world hunger and bring in world peace, because any suggestion to make things better is ALWAYS met with hostility if it challenges a developer's comfort bubble. So, now I do something else where I can make just as much money without all the stupidity.
Soooooo, my "totally in the works" post about how direct connection to your RDMS is the next API may not be so tongue in cheek. No rest, no graphQL, no http overhead. Just plain SQL over the wire.
Authentication? Already baked-in. Discoverability? Already in. Authorization? You get it almost for free. User throttling? Some offer it.
Caching is for weak apps.
I find it fascinating that HN comments always assert that 10ms matters in the context of user interactions.
60Hz screens don't even update every 10ms.
What's even more amazing is that the average non-tech person probably won't even notice the difference between a 60Hz and a 120Hz screen. I've held 120Hz and 60Hz phones side by side in front of many people, scrolled on both of them, and had the other person shrug because they don't really see a difference.
The average user does not care about trivial things. As long as the app does what they want in a reasonable amount of time, it's fine. 10ms is nothing.
60hz screens update every 16.7 ms. So if you add 10 ms to your frame time you will probably miss that 16.7 ms window.
Almost everyone can see the difference between 60 and 120 fps. Most probably don't care though.
Multiply that by the number of users and total hours your software is used, and suddenly it's a lot of wasted Watts of energy people rarely talk about.
But they still don't care about your stack. They care that you made something slow.
Fix that however you like but don't pretend your non-technical users directly care that you used go vs java or whatever. The only time that's relevant to them is if you can use it for marketing.
That's fine, but I am responding directly to the article.
> [Questions like] “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
> extra 10 milliseconds you saved
Strawman arguments are no fun, so let's look at an actual example:
https://www.rippling.com/blog/the-garbage-collector-fights-b...
P99 of 3 SECONDS. App stalls for 2-4 SECONDS. All due to Python.
Their improved p99 is 1.5 seconds. Tons of effort and still could only get 1.5 seconds.
https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-...
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I've seen e-commerce companies with 1 second p50 latencies due to language choices. Not good for sales.
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I see this quoted, but Amazon has become 5x slower (guesstimate) and it doesn't seem like they are working on it as much. Sure, the home page loads "fast" at ~800ms over fiber, but clicking on a product routinely takes 2-3 seconds to load.
Amazon nowadays has a near monopoly powered by ad money due to the low margin on selling products versus ad spend. So unless you happen to be in the same position using them nowadays as an example isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad spend then they'd probably be at a net loss as a result.
So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.
Users don't care about the specifics of your tech stack (except when they do) but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. So, in the example you provided, the user cares about performance (I assume Rippling know their customer). In other examples, if your tech stack is stopping you from easily shipping new features, then your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in 6 months but your competitor has.
I recall an internal project where a team discussed switching a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested from outside that they should just see if the Python service was hitting the required performance goals. If so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.
I think there's a balance to be struck. While users don't directly care about the specific tech, they do care about the results – speed, reliability, features. So, the stack is indirectly important. Picking the right tools (even if there are several "good enough" options) can make a difference in delivering a better user experience. It's about optimizing for the things users do notice.
All modern tech stacks have those properties in 2025.
They absolutely do not. In fact, relatively few do. Every single Electron app (which is a depressing number of apps) is a bloated mess. Most web pages are a bloated mess where you load the page and it isn't actually loaded, visibly loading more elements as the "loaded" page sits there.
Software sucks to use in 2025 because developers have stopped giving a shit about performance.
This is so true, and yet... Bad, sluggish performance is everywhere. I sometimes use my phone for online shopping, and I'm always amazed how slow ecommerce companies can make something as simple as opening a menu.
Happens when you use a React library with 30,000 lines to show a simple select menu.
Having worked on similar solutions that use Java and Python, I can't say I agree (the former obviously being much faster).
Yeah, those languages do a really good job of demonstrating the original point! Java would lead to a lot better performance in a lot of cases (like building a native application say) but Python, despite being slow, has great FFI (which Java doesn't), so it is a good shout for use cases like data science where you really just want a high level controller for some C or Rust code.
Point being, Python despite being slow as a snail or a cruise ship will lead to faster performance in some specific contexts, so context really is everything.
Have you tried Java’s new ffi api? It’s a lot better than in the past.
https://openjdk.org/jeps/454
This is a mixed bag of advice. While it seems wise at the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.
For example, what you know might not provide the cost benefit ratio your client would. Or the performance. If you only know Cloud Spanner but now there is a need for a small relational table? These maxims have obvious limitations.
I do agree that the client doesn't care about the tech stack, or that seeking a gold standard is a MacGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
A good engineer balances tradeoffs and solves problems in a satisfying way sufficing all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because it has a reputation is just as silly as using it for a hype.
> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
Your customer does care about how quickly you can iterate new features over time, and product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.
That's also an aphorism that may or may not correspond to reality.
Not only are there companies with highly capable teams that are able to move fast using a complex mix of technologies, but there are also customers who have very little interest in new features.
This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.
A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.
Sure universal truths are rare - though I think there are many more people using such an argument to justify an overly complex stack, than there are cases where it truly is the best solution long term.
Remember even if you have an unchanging product, change can be forced on you in terms of regulatory compliance, security bugs, hardware and OS changes etc.
I think the point of the original post is that the most important part of the context is the people (developers) and what they know how to use well, and I'd agree.
I'd just say that one thing I've learnt is that even if the developer who has to add some feature or fix some bug in the future is the developer who originally wrote it, life is so much easier if the original is as simple as possible - but hey, maybe that's just me.
> these maxims are not universal truths, and taking them as such is a mistake.
Amen.
How big is your team?
One person writing a stack in 6 languages is different from a team of 100 using 6 languages.
The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.
Who else is going to be able to pick up their tasks, are you going to be able to quickly on board someone else. Or are you going to have to hire someone new with a specialty in this specific language.
This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.
Try to get a front end dev to learn Spring in a week...
Excellent comment. What you raised are two important aspects of the analysis that the article didn't bother thinking about:
- how to best leverage the team you currently have
- what is the most likely shape your team will have in the future
Jane Street has enough resources and experts to be able to train developers on OCaml, Nubank and Clojure also comes to mind. If one leaves, the impact is not devastating. Hiring is not straightforward, but they are able to hire engineers willing to learn and train them.
This is not true for a lot of places, that have tighter teams and budgets, whose product is less specialized, and so on.
But this is where the article fails and your comment succeeds: actually setting out parameters to establish a strategy.
Most of the engineering advice tries to be as general as possible.
The issue is nothing is general in software. What a solo dev, a 10 person startup and a FAANG should do are 3 different things.
For example, a hobbyist dev spending all day writing unit tests and calculating their own points in Jira doesn't make sense.
But I'd be horrified to see the same dev not use any task tracking at all at work.
For myself I take long breaks on my side projects, particularly when it comes to boring stuff like authentication.
Of course I can't do that at my full time job. I can't just stop and do something else.
Jane Street is making billions in revenue per year and all their software needs to be as optimized as possible. If making the software 1% better nets you another 100 million dollars, you can hire a dozen people to specifically write internal tooling to make it happen.
Whereas a smaller team probably just has to make do with off-the-shelf solutions.
The other thing this article misses is this is going to drastically depend on your industry. It doesn't really matter if you're selling special cat food and need an e-commerce website for that.
It's a completely different story if you're developing a low latency trading system
> This is a part of why NodeJS quickly ate the world
And the other part is you can share, say, data validation code between client and server easily - or move logic either side of the network without having to rewrite it.
ie Even if you are an expert in Java and Javascript - there are still benefits to running the same both ends.
Very much this. The concerns with running a six person team are quite a bit different from the concerns of directing hundreds to thousands of developers across multiple projects. No matter how good the team is and how well they are paid and treated, there will be churn. Hiring and supporting folks until they are productive is very expensive and gets more expensive the more complicated and number of different stacks you have to maintain.
If you want to have efficient portability of developers between teams you've got to consolidate and simplify your stacks as much as possible. Yeah your super star devs already know most of the languages and can pick up one more in stride no problem. But that's not your average developer. That average dev in very large organizations has worked on one language in one capacity for the last 5-15 years and knows almost nothing else. They aren't reading HN or really anything technology related not directly assigned via certification requirements. It's just a job. They aren't curious about the craft. How are you able to get those folks as productive as possible within your environment while still building institutional resiliency and, when possible, improving things?
That's why the transition from small startup with a couple pizza teams to large organizations with hundreds of developers is so difficult. They are able to actually hire full teams of amazing developers who are curious about the craft. The CTO has likely personally interviewed every single developer. At some point that doesn't become feasible and HR processes become involved. So inevitably the hiring bar will drop. And you'll start getting in more developers who are better about talking through an interview process than jumping between tech stacks fluidly. At some point, you have to transition to a "serious business" with processes and standards and paperwork and all that junk that startup devs hate. Maybe you can afford to have a skunkworks team that can play like startups. But it's just not feasible for the rest of Very Large Organizations. They have to be boring and predictable.
I've seen a grown man nearly start crying when asked to switch programming languages. It would have been at most a few weeks of helping with some legacy code.
He starts mouthing off about how much more he can make somewhere else. Management relents and he got his way.
>If you want to have efficient portability of developers between teams you've got to consolidate and simplify your stacks as much as possible.
Amen!
I'm often tasked with creating new frameworks when I come to a new company. I always try to use whatever people are already comfortable with. If we're a NodeJS shop, I'll do it in NodeJS.
Unless you have very unique challenges to solve, you can probably more or less use the same stack across the company.
On a personal level I think it's a good idea to be comfortable with at least 3 languages. I've worked professionally in NodeJS, Python, and C#. While building hobbyist stuff with Dart/Flutter.
I don't have strong opinions either way, but if my boss expected me to learn Rust I'd probably start looking elsewhere. Not that Rust is bad, I just don't think I can do it.
> Your customer does care about how quickly you can iterate new features over time
How true this is depends on your particular target market. There is a very large population of customers that are displeased by frequent iterations and feature additions/changes.
The author didn't say to listen to the opinions of others, hype or not. The author said "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful.".
It means we should make our own independent, educated judgement based on the need of the product/project we are working on.
> Finding the right balance is key to creating something truly impactful
This doesn't mean anything at all. These platitudes are pure vapor, and seem just solid enough that they make sense at first glance, but once you try to grasp it, there is nothing there. What is impactful? What is untruly impactful as opposed to truly impactful? Why is that important? Why is the right balance key for it? Balance of what? How do you measure if the balance is right?
My expectation for engineering (including its management) is that we deal in requirements, execution, delivery, not vibes. We need measurable outcomes, not vapor clouds.
> the reality is a bit more complex than aphorisms.
This is the entire tech blog, social media influencer, devx schtick though. Nuance doesn't sell. Saying "It depends" doesn't get clicks.
> Shying away from a tool just because it has a reputation is just as silly as using it for a hype.
Trying to explain this to a team is one of the most frustrating things ever. Most of the time people pick / reject tools because of "feels".
On a related note, I never understood the hype around GraphQL for example.
I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.
I don't buy it :) but I can see the reasoning.
I'd say nowadays C++ is rarely the best answer, especially for the users.
C++ is often the best answer for users, but this is about how bad the other options are, not that C++ is good. Options like Rust don't have the mature frameworks that C++ does (rust-qt is often used as a hack instead of a pure Rust framework). There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++ the less the footguns in C++ will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.
Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast your program starts - places where C++ does really well. (C, Rust... also do very well.) Remember this is about real world benchmarks: you can find micro benchmarks where Python is just as fast as well written C, but if you write a large application in Python it will be 30-60 times slower than the same written in C++.
Note however that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C style code in C++ you need a lot more care than you would want.
If for your application Rust or Ada does have mature enough frameworks to work with then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup, worse battery life) make them a bad choice. Many things are scripts you won't run very much and so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because for you it is bad.
For real time audio synthesis or video game engines then C++ is the industry standard.
It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).
...unless you do HFT...
It would be funny if it weren’t tragic. So many of the comments here echo the nonsense of my software career: developers twisting themselves in knots to justify writing slow software.
I've not seen a compelling reason to start the performance fight in ordinary companies doing CRUD apps. Even if I were good at performance, I wouldn't give that away for free, and I'd prefer to go to companies where it's a requirement (HFT or games), which only furthers the issue of slowness being ubiquitous.
For example, I dropped a 5s paginated query doing a weird cross join to ~30ms and all I got for that is a pat on the back. It wasn't skill, but just recognizing we didn't need the cross join part.
We'd need to start firing people who write slow queries, forcing them to become good, or paying more for developers who know how to measure and deliver performance, which I also don't think is happening.
For 99% of apps, slow software is compensated for by fast hardware. In almost all cases, the speed of your software does not matter anymore. Unless speed is critical, you can absolutely justify writing slow software if it's more maintainable that way.
And thus when I clicked on the link to an NPR story just now it was 10 seconds before the page was readable on my computer.
Now my computer (pinebook pro) was never known as fast, but still it runs circles around the first computer I ran a browser on. (I'm not sure which computer that was, but likely the CPU was running at 25mhz; could have been an 80486 or a Sparc CPU though - now get off my lawn you kids)
Those are things developers say to keep themselves employable.
We are long past knitting RAM. You should go with the times. Development speed is key nowadays. People literally don't care about speed. If they did, Amazon & Facebook would be long gone.
> Development speed is key nowadays.
What does that mean?
I find I develop faster when I am not waiting on a bunch of unnecessary nonsense that is only present to supplement untrained developers. For example I can refactor an absolutely massive application in around 2 hours if I have end to end tests that execute in seconds and everything is defined as well organized interfaces.
You cannot develop fast if everything you touch is slow.
> People literally don't care
I am really not interested in developers inventing wild guesses.
Your users feel otherwise. If you actually care at all about the quality of the software you produce, stop rationalizing slow software.
No. Only tech-savvy users care. And these are the minority of relevant users. Why do people still buy shit on Amazon despite Amazon being horribly slow?
This is a fairly classic rant. Many have gone before, and many will come after.
I have found that it's best to focus on specific tools, and become good with them, but always be ready to change. ADHD-style "buzzword Bingo" means that you can impress a lot of folks at tech conferences, but you may have difficulty reliably shipping.
I have found that I can learn new languages and "paradigms," fairly quickly, but becoming really good at it, takes years.
That said, it's a fast-changing world, and we need to make sure that we keep up. Clutching onto old tech, like My Precioussss, is not likely to end well.
What do you think of Elixir in that regard? It seems to be evolving in parallel to current trends, but it still seems a bit too niche for my taste. I'm asking because I'm on the fence on whether I should/want to base my further server side career on it. My main income will likely come from iOS development for at least a few more years, but some things feel off in the Apple ecosystem, and I feel the urge to divest.
I've been working in Elixir since 2015. I love the ecosystem and think it's the best choice for building a web app from a pure tech/stability/scalability/productivity perspective (I also have a decade+ of experience in Ruby on Rails, Node.js, and PHP Laravel, plus Rust to a lesser extent).
I am however having trouble on the human side of it. I've got a strong resume, but I was laid off in Nov 2024 and I'm having trouble even getting Elixir interviews (with 9+ years of production Elixir experience!). Hiring people with experience was also hard when I was the hiring manager. It is becoming less niche these days. I love it too much to leave for other ecosystems in the web sphere.
Thanks for sharing your perspective. FWIW, I hope you'll find a nice position soon!
Dawww thank you!
I couldn't even begin to speak to Elixir. Never used it.
Most of my work is client-side (native Apple app development, in Swift).
For server-side stuff, I tend to use PHP (not a popular language, hereabouts). Works great.
I like PHP. It has a raw power that is really nice to have in web development.
Elixir is similar but concurrency is a 'first class citizen', processes instead of objects, kind of. It's worth a look. I've never used it but there's a project for building iOS applications with the dominant Elixir web framework, https://github.com/liveview-native/live_view_native .
Elixir can be used for scripting tasks, config and test rig are usually scripts. In theory you can use the platform for desktop GUI too, one of the bespoke monitoring tools is built that way. Since a few years back there are libraries for numeric and ML computing too.
Users don't care - but users care how reliable your software is, users care about how quickly you can ship the features they request.
Tech stack determines software quality depending on the authors of the software of course.
But certain stacks allow devs to ship faster, fix bugs faster and accommodate user needs.
Look at what 37Signals is able to do with [1] 5 Product Software Engineers. Their output is literally 100x that of their competitors.
[1]: https://x.com/jorgemanru/status/1889989498986958958
> Look at what 37Signals is able to do with [1] 5 Product Software Engineers. Their output is literally 100x that of their competitors.
The linked Tweet thread says they have 16 Software Engineers and a separate ops team that he's not counting for some reason.
There are also comments further down that thread about how their "designers" also code, so there is definitely some creative wordplay happening to make the number of programmers sound as small as possible.
Basecamp (37Signals) also made headlines for losing a lot of employees in recent years. They had more engineers in the past when they were building their products.
Basecamp is also over 20 years old and, to be honest, not very feature filled. It's okay-ish if your needs fit within their features, but there's a reason it's not used by a lot of people.
DHH revealed their requests per second rate in a Twitter argument a while ago and it was a surprisingly low number. This was in the context of him claiming that he could host it all on one or two very powerful servers, if I recall correctly.
When discussing all things Basecamp (37Signals) it's really important to remember that their loud internet presence makes them seem like they have a lot more users than they really do. They've also been refining basically the same product for two decades and had larger teams working in the past.
Just joining all the other comments to say there's a split between:
- users don't care about your tech stack
- you shouldn't care about your tech stack
I don't on-paper care what metal my car is going to be made of, I don't know enough information to have an opinion. But I reeaaally hope the person designing it has a lot of thoughts on the subject.
I find it funny that this message resurfaces on the front page once or twice a year, and has for at least 10 years now. Product quality is often not the main argument advanced when deciding on a tech stack, only indirectly. Barring any special technical requirements, in the beginning what matters is:
- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?
Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question of what is hot or shiny and new.
There's an obvious solution to "language doesn't matter". Let the opinionated people pick the stack. Then you satisfy the needs of the people who care and those who don't care.
The opinionated people disagree.
The opinionated people I disagree with sure like saying "language doesn't matter", as long as it preserves their status quo.
How long are we gonna keep getting these patronizing pieces of advice from people who figured something out based on their experience, drew the wrong conclusions, and are now eager to "teach" others?
This discussion is not about technology. It's about technical people learning that business, product and users are actually important. The best advice I can give technical people about working at startups is that you should learn everything you can about business. You can do that at a startup much easier than at a big tech company. Spend as much time as you can with your actual users, watching them use your product. It will help you communicate with the rest of the team, prioritize your technical tasks, and help you elevate your impact.
Problem is your hiring manager at a startup will still care whether you're an expert in the stack-du-jour. So technical people aren't incentivised to care about the business.
I don't agree with the article's central premise. It assumes that tech stack choices should be driven solely by what the end user cares about.
In reality, selecting a tech stack is about more than the user side of things, it’s a strategic decision where factors like cost efficiency, the ease of hiring new developers, long-term maintainability, and the likelihood that the technology will still be relevant in five years are all critical.
These considerations directly impact how quickly and effectively a team can build and scale a product, even if the end user never sees the tech stack at work.
I worked on a small team with 2 or 3 backend Elixir devs, as the sole developer for the JavaScript / TypeScript front end, the React Native app, the microservices running in Node, and the browser automation. It was easier for me to write a backend service in JavaScript and expose an interface for them to integrate with than to wait for them to get around to building it. The services were usually small, under a couple thousand lines of code, and if they wanted to, they could translate a service to Elixir once the business logic had hardened and been solved. One service might scrape data, store it in S3 buckets, and then process it when requested, storing / caching the results in a Postgres database.
Here is the important part: the automated browser agents I built were core to the company's business model. Even today, nobody can accomplish what I was doing in any language other than JavaScript, because it requires injecting JavaScript into third-party websites through a headless browser. Even though the surface area was small, the company was 100% dependent on JavaScript.
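For illustration, here's a minimal sketch of that kind of injection (assuming Puppeteer; the site and selector are made up, and this is not the actual production code):

    // Drive a third-party page with a headless browser and run JavaScript inside it.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('https://example.com/listings', { waitUntil: 'networkidle2' });

      // This function is serialized and executed inside the page itself,
      // which is why it has to be JavaScript: it reads the live DOM.
      const rows = await page.evaluate(() =>
        Array.from(document.querySelectorAll('.listing'))
          .map(el => el.textContent.trim())
      );

      console.log(rows);
      await browser.close();
    })();

Everything around it (storage in S3, caching in Postgres) could be written in any language, but the code that runs inside the page is necessarily JavaScript.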
The issue is that they are huge Elixir fanboys. In the Monday morning meeting, G. would start talking about how much JavaScript sucks and how they should start moving all the front-end code to LiveView. Every couple of weeks: "we should migrate to LiveView." Dude, I'm sitting right here, I can hear you. Moreover, your job depends on at least one person writing JavaScript, as shitty a language as it might be. I don't think he understood that he was threatening my job. The fanboy Elixir conversations between the three of them always made me feel like a second-class citizen.
I'm one of those fanboys. I've done the React, Angular, etc. front-end thing. LiveView just absolutely smokes SPAs and other JS rat's nests in terms of productivity, performance, and deployment simplicity (for certain types of apps). The fact that you don't have to write six layers of data-layer abstraction alone is worth it.
And don't get me wrong, I even like things like Nuxt and have a few products on Astro (great framework btw). Agree regarding browser automation, not many options there so your gig is safe for now. But do play with LiveView, it's pretty special.
I'm going to agree with you that Elixir / Erlang is the most productive, and I'll back it up with some data: Elixir developers are the highest paid because they generate the most value. [0] Nonetheless, LiveView isn't a viable solution for a lot of what I do. Because of that, it is important to have developers who know and understand how to use JavaScript.
[0] https://survey.stackoverflow.co/2024/technology#4-top-paying...
Nevermind, folks - I retract my comments. Elixir is terrible, don't learn it, the market is saturated ;)
A mixture of contempt and resentment towards JavaScript makes developers worse engineers.
What will happen now that you're gone is this: one of them will encounter a trivial problem while working in a language they don't understand. They could solve it by consulting some documentation, but they won't. Instead they will make a huge fuss and embark on a six-month project to rewrite everything in LiveView.
Like all rewrites, it will be much harder than they think, come out worse than they hoped, and fail to solve any customer problems.
TFA isn't coherent.
> Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that. They won’t notice those extra 10 milliseconds you saved, nor will their experience magically improve just because you’re using the latest JavaScript framework.
Users care about performance. The latency per user action is well over 10 ms (unless you're in the bootstrap phase).
> What truly makes a difference for users is your attention to the product and their needs.
False dichotomy.
> Every programming language shines in specific contexts. Every framework is born to solve certain problems. But none of these technical decisions, in isolation, will define the success of your product from a user’s perspective.
Yes, so evaluate your product's context and choose your tools, frameworks, and languages accordingly.
Many comments here argue that the right stack and good "clean" code will then lead to user-appreciated performance improvements.
More often I've seen this used by developers as an excuse to yak-shave "optimizations" that deliver no (or negative) performance improvements. e.g. "Solving imaginary scaling problems ... at scale!"
As a user I actively look for apps in a specific tech stack because I know they will be much leaner and more enjoyable to use
Maybe not, but I do, and I hope anyone else who works in the space does as well. I'm not a big fan of this argument of cutting all "inefficient attention" out of what should be our craft. I want to take pride in my work, even if my users don't share that feeling
Indeed, which is why, whenever someone proposes a rewrite, I ask what the business value is.
Meaning: developer salaries will be spent for the duration of the rewrite, so how does that reflect in what the business is doing, and how is the business going to get that investment back?
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems, and your job is to pick the ones that fit your use case
What if multiple technologies fit our use case? There will be one that fits the use case “best”.
> There will be one that fits the use case “best”.
You answered your own question: best that fits the use case, still not THE BEST
> There will be one that fits the use case “best”.
And how much time do you want to spend evaluating those technologies? Do a MVP with each?
Or maybe sell to customers / venture capitalists based on the first working MVP and move on?
The user has never been the primary factor in my choice of tech stack. Just like my employer doesn't care what car I drive to get to work. It's mostly about the developers and the best tools available to them at that point in time.
> Use what you enjoy working with.
No! This is great advice if you are working on a personal project, but terrible advice in all other scenarios. Use the stack that solves your problem, not the stack you are simply comfortable with.
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems
Let's not forget that for virtually any random problem, there are plenty of technologies that solve it.
> Users Don’t Care About Your Tech Stack
Oh, but they really should: the less proprietary the stack, and the more platforms it supports, the better.
Just ask the Russians porting systems from the likes of MS SQL Server or Oracle to Postgres.
I think the folks who lost access to funds due to an implementation choice of Synapse would disagree with this statement.
They don't care about most implementation details, but choice of 3rd parties to manage functionality and data does matter to your users when it breaks. And it will.
Sure, the language and framework you choose are less important to them, per parent post. But it's a slippery slope; don't assume that they are agnostic to _all_ your technical choices.
I am working as a contractor on a product where my principal incentives revolve around meeting customer-facing milestones. I am not being paid a dime for time spent on fancy technology experiments.
It has been quite revealing working with others on the same project who are regular salaried employees. The degree to which we ensure technology is well-aligned with actual customer expectations seems to depend largely on how we are compensated.
More importantly for open source projects with a lot of users: don't advertise a bunch of techno mumbo-jumbo on your website. It either means nothing to users or may even put them off. Sure, have a link for developers that goes into all that stuff so they can decide to contribute, build from source, or whatever. Just keep it off the main page - it's meaningless to the general public.
This feels like one of those comments you hear in a tech stack meeting—something that doesn’t change the discussion but fills space, like a “nothing from my end.”
“So, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
“Hey guys, users don’t care about the tech stack. We need to focus on their needs.”
“Uh… right. So, users like speed, yeah? Anyway, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
Yes, it's right that they do not care. This is the reason Ruby on Rails taught us how to go fast in web development.
The problem is that there is no good rule to choose the RIGHT stack, and that is the challenge!
For instance, Joel Spolsky's team chose ASP.NET for Stack Overflow simply because they knew it better than, say, PHP or Java Struts 1.x.
> Users don't care about your tech stack
Just give them the fucking .exe
https://github.com/sherlock-project/sherlock/issues/2019
They do not care about your stack, but do care that the stack works. Use what you're familiar with, sure, but if that does not produce a reliable system, your user will not use it.
Throwaway article used to get ads.
Nobody is making the argument that users care about your tech stack. I've literally never heard a dev justify using a library because "users care about it". Nobody.
> They won’t notice those extra 10 milliseconds you saved
Depends what you're doing. In my case I'm saving microseconds on the step time of an LLM used by hundreds of millions of people.
Beauty of scale. Saving ten milliseconds a hundred times is just a second. But do it a billion times and you've shaved off ~4 months.
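To make the arithmetic concrete (same numbers as above, just worked out):

    10 ms x 100           =          1 s
    10 ms x 1,000,000,000 = 10,000,000 s  ≈ 116 days ≈ 3.8 months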
If you work at Google or whatever else is popular or monopolistic this week.
In most real jobs those ten milliseconds will add up to what, 5 seconds to a minute?
There is probably a non-linear relationship between how slow your software is and how many users will put up with it.
Those 10 ms may quite well mean the difference between success and failure... or they may be completely irrelevant. I don't know if this is knowable.
There is. But what the OP is doing is not that, it's "scaling". Which probably makes sense for whatever they're working on*. For the other 99% of projects, it doesn't.
* ... if they're at ClosedAI or Facebook or something. If they're at some startup selling "AI" solutions that has 10 customers, it may be wishful thinking that they'll reach ClosedAI levels of usage.
It's not really clear to me that the OP is talking about hardware costs. If so, yeah, once you have enough scale and with a read-only service like an LLM, those are perfectly linear.
If it's about saving the users time, it's very non-linear. And if it's not a scalable read-only service, the costs will be very non-linear too.
This makes me think about an interview Lex did with Pieter Levels. It is amazing how fast he gets something to market with just PHP.
Spoiler alert: the real answer is, like almost everything else in this industry (and most others), "it depends."
Users don't care about tech stacks. But they care about their experience and tech stacks can definitely influence that.
How could they? They don't have the required information.
Why should they? That's pretty much our responsibility.
If you make a game, users definitely care if you are 10 ms slower per frame; at 60 fps the entire frame budget is only about 16.7 ms.
Users don't care about a semi-ugly UI either, as long as it's fast and legible.
At least the article has the length which the topic deserves.
TLDR: Dev: "It's written in Rust", User: "What?"
Unless, of course, the user of your code is a financial institution, a carmaker, or someone building planes, medical devices, ...
This is dumb. Of course they don't care about the tech stack directly but they obviously care about the things that it affects - performance, reliability, features etc.
To top it off, it also includes a classic "none of them are perfect so they are all the same and it doesn't matter which you choose". I recently learnt the name for this: the fallacy of grey.
https://www.readthesequences.com/The-Fallacy-Of-Gray
Don't fall for it like this guy has.
Sometimes I have to remind myself of this. Take Bring a Trailer, for example. It is a WordPress site. I know you're rolling your eyes or groaning at the mention of WordPress. It works. It is hugely successful in its niche.
but but but what about "made in rust(tm)"
You can always oxidize later.
But your next interviewer will drill you for hours about it
Then users can write their own software.
If I even have to think about installing node.js or dealing with some Python rat's nest, I absolutely care.
People are using machines dozens of times more powerful than machines from 15 years ago, but do they do things that are materially different to what they did before? Not really.
They absolutely do care, even if they cannot articulate what's wrong.
> I still often find myself in discussions where the main focus is the choice of technologies. Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
When developers are having those discussions, are they ever doing so in relation to some hypothetical user caring? This feels like a giant misdirection strawman.
When I discuss technology and framework choices with other developers, the context is the experience for developers. And of course business considerations come into play as well: Can we find lots of talent experienced in this set of technologies? Is it going to efficiently scale in a cost effective manner? Are members of the team going to feel rewarded mastering this stack, gaining marketable skills? And so on.
Stacking tech against users you don’t care about is not nice.
But investors do, and most startup founders are in greater need of investors than of users.
Sure, if you want to use Fortran to build your app, that's probably a no-go. But does it really matter whether it's Go, JS, Java, Ruby, Python, or PHP?
There are a lot of vacancies for technical co-founders with a preference for a particular stack, which usually comes from some advisor or investor. A pretty dumb filter, given that it’s often a Node/React combo. It is understandable where it comes from, but still… dumb.
Sometimes they do. Examples:
- Electron vs native (yes, electron/chromium bloat is a popular discussion point even amongst non-engineers)
- Mobile vs web app
- When users comment on how “clunky” the UI feels, they probably mean that a 5 year old Bootstrap implementation should be replaced with Tailwind (/s)
- When users fall in love with linear or figma, they are falling in love with the sync engine and multiplayer tech stack
Even if users don’t have the words to describe the stack, they do care when their needs overlap with the characteristics of the stack
Lately, I've been thinking that LLMs will lift programming to another level anyway: the level of specification in natural language, with some formal descriptions mixed in. LLMs will take care of transforming this into actual code. So not only will users not care about programming, developers won't either. Switching the tech stack might become a matter of minutes.
How will that work out?
If it simply generates code from natural language, then I am still fundamentally working with code. Aider, for example, is useful for this, but for anything that isn't a common function/component/class it falls apart, even with flagship models.
If I actually put my "natural language code" under git, then it'll lack specificity at compile time, likely leading to large inconsistencies between versions. That is a horrible user experience - like the random changes Excel makes every few years, but every week instead.
And everyone that has migrated a somewhat large database knows it isn't doable within minutes.
I don't think one would put only the specification in Git. LLMs are not a reliable compiler.
Actual code is still the important part of a business. However, how this code is developed will drastically change (some people already work this way with Cursor etc.). Imagine: if you want a new feature, you update the spec, ask an LLM for the code and some tests, test the code yourself, and ship it.
I guess no one would hand over control of committing and deployment to an AI. But for coding, yes.