There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
This is also the exact reason why all the bright-eyed pieces that some technology would increase worker's productivity and therefore allow more leisure time for the worker (20 hour workweek etc) are either hopelessly naive or pure propaganda.
Increased productivity means that the company has a new option to either reduce costs or increase output at no additional cost, one of which it has to do to stay ahead in the rat-race of competitors. Investing the added productivity into employee leisure time would be in the best case foolish and in the worst case suicidal.
Which is why government regulations that set the boundaries for what companies can and can't get away with (such as but not limited to labor laws) are so important. In absence of guardrails, companies will do anything to get ahead of the competition. And once one company breaks a norm or does something underhanded, all their competitors must do the same thing or they risk ceding a competitive advantage. It becomes a race to the bottom.
Of course we learned this all before a century ago, it's why we have things like the FDA in the first place. But this new generation of techno-libertarians and DOGE folks who grew up in a "move fast and break things" era, who grew up in the cleanest and safest times the world has ever seen, have no understanding or care of the dangers here and are willing to throw it all away because of imagined inefficiencies. Regulations are written in blood, and those that remove them will have new blood on their hands.
Some regulations are written in blood, a huge chunk are not. Shower head flow rate regulations were not written in blood.
Your post started out talking about labor laws but then switched to the FDA, which is very different. This is one of the reasons that people like the DOGE employees are tearing things apart. There are so many false equivalences on the importance of literally everything the government does that they look at things that are clearly useless and start to pull apart things they think might be useless.
The good will has been burned on the “trust me, the government knows best”, so now we’re in an era of cuts that will absolutely go too far and cause damage.
Your post mentioning “imagined inefficiencies” is a shining example of the issue of why they are there. Thinking the government doesn’t have inefficiencies is as dumb as thinking it’s pointless. Politicians are about as corrupt of a group as you can get and budget bills are filled with so much excess waste it’s literally called “pork”.
Efficiency related regulation like the energy star is THE reason why companies started caring.
Same with low flush toilets. I vaguely remember the initial ones had issues, but tbh less than the older use a ton of water toilets my family had before that were also super clog prone. Nowadays I can’t even remember the last time a low flush toilet clogged. Massive water saving that took regulation.
Efficiency regulations may not be directly written in blood, instead they are built on costly mountains of unaddressed waste.
I literally had a new toilet put in a couple of years ago. It clogs pretty easily. So you just end up flushing it more, so you don't actually save any water.
BTW the same thing happened with vacuum cleaners, you need to hover more to get the same amount of dust out because they capped the power in the EU. My old Vacuum Cleaner I managed to find, literally sticks to the carpet when hoovering.
My Philips Silentio vacuum cleaner is both quiet and powerful and is also within the EU limits on input power. It will stick to the floor if I turn up the power too high.
And the Norwegian made and designed low flow toilets in my house flush perfectly every time. Have the flush volumes reduced further in the last fifteen years?
And so we see the real outcome, on this axis, of these kinds of regulations, is to increase the quality gradient. A crappy old barebones water-hungry dishwasher with a phosphate-containing detergent worked just fine for me in an old apartment. Its comparably priced brand-new lower-water equivalent in a new house with phosphate-free detergent works awfully. Now you need a Bosch washer and premium detergent and so on. These exist and by all accounts are great. So we can say that the regulations didn't cause the quality problem, they just shifted the market.
Compliance with the regulations can be done both by the capable and the incapable, but caveat emptor rears its ugly head, and that assumes the end user is the buyer (right now, I'm renting). There's often quite a price gap between good enough and terrible too. A lot of people end up stuck with the crap and little recourse.
The government cares that your dishwasher uses less water and the detergent doesn't put phosphate into the water. It doesn't care that your dishwasher actually works well. We can layer more regulations to fix that problem too, but they will make things cost more, and they will require more expensive and competent civil servants to enforce, and so on. And I don't see any offer in that arrangement to replace my existing dishwasher, which is now just a sunk cost piece of future e-waste that neither the government nor the manufacturer have been made responsible for.
> My Philips Silentio vacuum cleaner is both quiet and powerful and is also within the EU limits on input power. It will stick to the floor if I turn up the power too high.
I don't believe you and it besides the point because I suspect that it is an expensive vacuum cleaner. I don't want to put any thought into a vacuum cleaner. I just want to buy the most powerful (bonus points if it is really loud), I don't care about it being quiet or efficient. I want the choice to buy something that makes a dent in my electricity bill if I so choose to.
> And the Norwegian made and designed low flow toilets in my house flush perfectly every time. Have the flush volumes reduced further in the last fifteen years?
This reads as "I have some fancy bathroom that costs a lot, if you had this fancy bathroom you wouldn't have issues". I don't want to have to care whether my low flush toilet is some fancy Norwegian brand or not. I just want something to flush the shit down the hole. The old toilets never had the problems the newer ones have. I would rather buy the old design, but I can't. I am denied the choice because someone else I have never met thinks they know better than I.
I was being hyperbolic throughout the entire post.
Every-time you have a conversation around older stuff being better than newer stuff (some of this is due to regulation), you will have someone say their boutique item that costs hundreds of pounds (or maybe 1000s) works perfectly well. Ignoring the fact that most people don't wish to buy these boutique items (the dude literally talked about some Norwegian toilet design). I buy whatever is typically on offer than is from a brand that I recognise. I don't care about the power consumption of my vacuum cleaner. I am not using it for the entire day. It is maybe 30 minutes to an hour twice a week. I just want to do this task (which I find tedious) as quickly as possible.
BTW Dysons count in this regard as boutique, they are expensive and kinda rubbish. They are rendered useless by cat fur (my mother had three cats and it constantly got clogged with it). Bagless vacuum cleaners are generally garbage anyway (this is a separate complaint) because when you try to empty them, you have to empty it into a bag typically.
Sorry to hear you got a bum toilet, luckily for you, there’s the other huge benefit of low flush toilets that I didn’t mention.
Even with a total clog, there’s a 1-2 flush bowl capacity before it over flows.
Who remembers the abject terror of watching the water rise in a clogged high flush toilet and just praying it didn’t overflow.
Also unless every usage is a big poop requiring extra flushes, it’s far fetched that more flushes occasionally are adding up to the same water usage. If the toilet clogs for #1, something is very wrong - likely installed wrong, plumbing issues, or user error. Your toilet might not have been seated right so the wax seal ring is partially blocking the sewer line.
I don't think regulations are enough. They're just a band-aid on the gaping wound that is a capitalist, market based economy. No matter what regulations you make, some companies and individuals become winners and over time will grow rich enough to influence the government and the regulations. We need a better economic system, one that does not have these problems built in.
We haven't really been trying to find such a system. The technological progress that we've had since the last attempts at a different kind of a system has been huge, so what was once impossible might now be possible if we put some effort into it.
There is no system that fulfills your requirements.
It is even easy to explain why: Humans are part of all the moving pieces in such a system and they will always subvert it to their own agenda, no matter what rules you put into place. The more complex your rule set, the easier it is to break.
Look at games, can be a card game, a board game, some computer game. There is a fixed set of rules, and still humans try to cheat. We are not even talking adults here, you see this with kids already. Now with games there is either other players calling that out or you have a computer not allowing to cheat (maybe). Now imagine everyone could call someone else a cheater and stop them from doing something. This in itself is going to be misused. Humans will subvert systems.
So the only working system will be one with a non-human incorruptible game master, so to speak. Not going to happen.
With that out of the way, we certainly can ask the question: What is the next best thing to that? I have no answer to that, though.
Cheating happens in competition based systems. No one cheats in games where the point is to co-operate to achieve some common goal. We should aim to have a system based on recognizing those common goals and enabling large scale co-operation to achieve them.
> What is the next best thing to that? I have no answer to that, though.
i argue that what we have today is the so called next best thing - free market capitalism, with a good dose of democracy and strong gov't regulations (but not overbearing).
I assume they are saying that in practice, if wealth gives one influence (if one lives in capitalism), one will use that influence to make one's market less free to one's benefit.
Sure, but you can't ignore the negative sides like environmental destruction and wealth and power concentration. Just because we haven't yet invented a system that produces a good standard of living without these negative side effects doesn't mean it can't be done. But we aren't even trying, because the ones benefiting from this system the most, and have the most power, have no incentive to do so.
Capitalism is a good economic engine. Now put that engine in a car without steering wheel nor brakes and feed the engine with the thickest and ever-thickening pipe from the gas tank you can imagine, and you get something like USA.
But most of the world doesn't work like that. Countries like China and Russia have dictators that steer the car. Mexico have gangs and mafia. European countries have parliamentary democracies and "commie journalists" that do their job and reign political and corporate corruption--sometimes over-eagerly--and unions. In many of those places, wealth equals material well-being but not overt political power. In fact, wealth often employs stealth to avoid becoming a target.
USA is not trying to change things because people are numbed down[^1]. Legally speaking, there is nothing preventing that country from having a socialist party win control of the government with popular support and enact sweeping legislation to overcome economic inequality somewhat. Not socialist, but that degree of unthinkable was done by Roosevelt before and with the bare minimum of popular support.
[^1]: And, I'm not saying that's a small problem. It is not, and the capitalism of instant gratification entertainment is entirely responsible for this outcome. But the culprit is not capitalism at large. IMO, the peculiarities of American culture are, to a large extent, a historic accident.
You can't really separate wealth and power, they're pretty much the same thing. The process that is going on in the US is also happening in Europe, just at a slower pace. Media is consolidating in the hands of the wealthy, unions are being attacked and are slowly losing their power, etc. You can temporarily reverse the process by having someone steer the car into some other direction for a while, but wealth/power concentration is an unavoidable part of free market capitalism, so the problem will never go away completely. Eventually capital accumulates again, and will corrupt the institutions meant to control it.
A smart dictator is probably harder to corrupt, but they die and then if you get unlucky with the next dictator the car will crash and burn.
Political corruption is a consequence of capitalism. Taking over the political system provides a huge competitive advantage, so any entity rich enough to influence it has an incentive to do so in an competition based economy that incentivizes growth.
When did Political corruption not exist? In what system in history did the people in power have so few rotten apples that corruption was an anomally?
Blaming corruption on capitalism is silly. As long has worldhas resources, people want control of reasources, and bad actors will do bad actors thingies.
You're right, political corruption is a problem in other systems as well, not just capitalism. I guess it would be more accurate to say that power concentration causes political corruption. We should try to figure out if it's possible to manage the economy in a way that limits the amount of power any individual can have to such an extent that corruption would be impossible.
I don't think there is exists a magical political system that we set up and it magically protects us from corruption. Forever. Just like any system (like surviving in an otherwise hostile nature) it needs maintenance. Maintenance in a political or any social structure is getting off your bottom and imposing some "reward" signal on the system.
Corruption mainly exists because people have low standards for enforcing eradication of it. This is observable in the smallest levels. In countries where corruption is deeply engraved, even university student groups will be corrupted. Elected officials of societies of any size will be prone to put their personal interests in front of the groups' and will appoint or employ friends instead of randomers based on some quality metrics. The question is what are the other people willing to do? Is anyone willing to call them out? Is anyone willing to instead put on the job themselves and do it right (which can be demanding)?
The real question is how far are the individuals willing to go and how much discomfort are they willing to embrace to impose their requirements, needs, moral expectations on the political leader? The outcomes of many situations you face in society (should that be a salary negotiation or someone trying to rip you off in a shop) depend on how much sacrifice (e.g. discomfort) you are willing to take on to get out as a "winner" (or at least non-loser) of the situation? Are you willing to quit your job if you cannot get what you want? Are you going to argue with the person trying to rip you off? Are you willing to go to a lawyer and sue them and take a long legal battle?
If people keep choosing the easier way, there will always be people taking advantage of that. Sure, we have laws but laws also need maintenance and anyone wielding power needs active check! It doesn't just magically happen but the force that can keep it in check is every individual in the system. Technological advances and societal changes always lead to new ideas how to rip others off. What we would need is to truly punish the people trying to take advantage of such situations: no longer do business with them, ask others to boycott such behaviour (and don't vote for dickheads!, etc.) -- even in the smallest friends group such an issue could arise.
The question is: how much are people willing to sacrifice on a daily basis to put pressure on corrupt people? There is no magic here, just the same bare evolutionary forces in place for the past 100,000 years of humankind.
(Just think about it: even in rule of law, the ultimate way of enforcing someone to obey the rules is by pure physical force. If someone doesn't listen, ever, he will be picked up by other people and forced into a physical box and won't be allowed to leave. And I don't expect that to ever change, regardless of the political system. Similarly, we need to keep up an army at all times. If you simply go hard pacifist, someone will take advantage of that... Evolution. )
Democracy is an active game to be played and not just every 4 years. In society, people's everyday choices and standards are the "natural forces of evolution".
Actually, the system that produced the greatest standard of living increase in human history is whatever Communist China's been doing for the last century.
Mao and communism brought famine and death to millions.
The move from that to "capitilism with Chinese characteristics" is what has brought about the greatest standard of living increase in human history.
What they're doing now is a mix of socialism, capitilism and CPP dominance. I'm not an American, but I understand FDR wielded socialism too, and that really catapulted the US towards its golden era.
Chinese do capitalism better than anyone else. Chinese companies ruthlessly compete within China to destroy their competition. Their firms barely have profits because everyone is competing so hard against others. Whereas US/EU is full of rent seeking monopolies that used regulatory capture to destroy competition.
We have that already. It's called part-time jobs. Usually they don't pay as much as full-time jobs, provide no health insurance or other benefits, etc.
As someone who straddles two fields (CS and Healthcare) and has careers/degrees in both -- the grass isn't always greener on the other side.
This could be said about most jobs in the 21st century these days in any career field given. That's a culture shift and business management/organization practice change that isn't likely to happen anytime soon.
Oh I'm not saying we have it worse. But there are jobs where time spent is more proportional to productive output, so working half the time for half the money is a fair deal.
Indeed, and I don't know why people keep saying that we ever thought the 20 hour workweek was feasible, because there is always more work to be done. Work expands to fill the constraints available, similar to Parkinson's Law.
You're on the right track, but missing an important aspect.
In most cases the company making the inferior product didn't spend less. But they did spend differently. As in, they spent a lot on marketing.
You were focused on quality, and hoped for viral word of mouth marketing. Your competitors spent the same as you, but half their budget went to marketing. Since people buy what they know, they won.
Back in the day MS made Windows 95. IBM made OS/2. MS spend a billion $ on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. If does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
Quality can lead to sales - this was the premise behind the original Google (they never spent a dime on advertising their own product until the Parisian Love commercial [1] came out in 2009, a decade after founding), and a few other tech-heavy startups like Netscape or Stripe. Microsoft certainly didn't spend a billion $ marketing Altair Basic.
The key point to understand is the only effort that matters is that which makes the sale. Business is a series of transactions, and each individual transaction is binary: it either happens or it doesn't. Sometimes, you can make the sale by having a product which is so much better than alternatives that it's a complete no-brainer to use it, and then makes people so excited that they tell all their friends. Sometimes you make the sale by reaching out seven times to a prospect that's initially cold but warms up in the face of your persistence. Sometimes, you make the sale by associating your product with other experiences that your customers want to have, like showing a pretty woman drinking your beer on a beach. Sometimes, you make the sale by offering your product 80% off to people who will switch from competitors and then jacking up the price once they've become dependent on it.
You should know which category your product fits into, and how and why customers will buy it, because that's the only way you can make smart decisions about how to allocate your resources. Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that." But if you are sitting on one of those gold mines, capitalizing on it effectively is orders of magnitude more efficient than trying to market a product that doesn't really work.
> Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that."
This. Per your example, this is exactly what it was like when most of us first used Google after having used AltaVista for a few years. Or Google Maps after having used MapQuest for a few years. Google invested their resources correctly in building a product that was head and shoulders above the competition.
And yes, if you are planning to sell beer, you are going to need the help of scantily clad women on the beach much more than anything else.
>> Or Google Maps after having used MapQuest for a few years. Google invested their resources correctly in building a product that was head and shoulders above the competition.
Except that they didn't: they bought a company that had been building a product that was head and shoulders above the competition (Where 2 Technologies), then they also bought Keyhole which became Google Earth.
Incidentally they also bought, not built, Youtube .. and Android.
So, yes, they had a good nose for "experiences that will make a customer say "Wow, I need to have that.""
They arguably did do a good job investing their resources but it was mostly in buying, not building.
Google Maps as it launched was the integration of 3 pre-existing products: KeyHole (John Hanke, provided the satellite imagery), Where 2 (Lars & Jens Rasmussen, was a desktop-based mapping system), and Google Local (internal, PM was Bret Taylor, provided the local business data). Note that both KeyHole and Where 2 were C++ desktop apps; it was rewritten as browser-based Javascript internally. Soon after launch they integrated functionality from ZipDash (traffic data) and Waze (roadside events).
People read that YouTube or Android were acquisitions and don't realize just how much development happened internally, though. Android was a 6-person startup; basically all the code was written post-acquisition. YouTube was a pure-Python application at time of acquisition; they rewrote everything on the Google stack soon afterwards, and that was necessary for it to scale. They were also facing a company-ending lawsuit from Viacom that they needed Google's legal team to fight; the settlement to it hinged on ContentID, which was developed in-house at Google.
> They arguably did do a good job investing their resources but it was mostly in buying, not building.
They did build a large part of those products, Keyhole is just a part of Google earth google maps in general has many more features than that.
For example driving around cars in every country that allowed it to take street photos is really awesome and nobody else does that even today. Google did that, not some company they aquired, they built it.
Android was nothing like the Android today when it was bought. The real purchase was the talent that came with Android and not the product at the time.
YouTube now, well only someone with deep pockets could have made it what it is today(unlimited video uploads and the engineering to support it). It was nothing special.
It's not just software -- My wife owns a restaurant. Operating a restaurant you quickly learn the sad fact that quality is just not that important to your success.
We're still trying to figure out the marketing. I'm convinced the high failure rate of restaurants is due largely to founders who know how to make good food and think their culinary skills plus word-of-mouth will get them sales.
My wife ran a restaurant that was relatively successful due to the quality of its food and service. She was able to establish it as an upper-tier experience, by both some word of mouth, but also by catering to right events, taking part in shows, and otherwise influencing the influencers of the town, without any massive ad campaigns. As a result, there were many praises in the restaurant's visitor book, left by people from many countries visiting the city.
It was not a huge commercial success though, even though it wasn't a failure either; it generated just enough money to stay afloat.
If it paid for people's lives and sustained itself, that sounds like a huge success to me. There's a part of me that thinks, maybe we'd all be better off if we set the bar for success of a business at "sustains the lives of the people who work there and itself is sustainable."
> you quickly learn the sad fact that quality is just not that important to your success.
Doesn't that depend on your audience? Also, what do you mean by quality?
Where I live, the best food can lead to big success. New tiny restaurants open, they have great food, eventually they open their big successor (or their second restaurant, third restaurant, etc.).
I believe this is called something like the 'Michelin Curse' but my google is not returning hits for that phrase, though the sentiment seems roughly correct [0]
In the restaurant business, the keys are value and market fit.
There is a market for quality, but it's a niche. Several niches actually.
But you need to attract that customer. And the food needs to be interesting. And the drinks need to match. Because foodies care about quality but also want a certain experience.
Average Joe Blow who dines at McDonald's doesn't give a flying fuck about quality, that's true. Market quality to him and he'll probably think it tastes worse.
If you want to make quality food, everything else needs to match. And if you want to do it profitably, your business model needs to be very focused.
It can't just be the same as a chain restaurant but 20% more expensive...
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the superior relevant software previously available, even if they create demand for it. Therefore, we would say a market fault has developed as the market could not organize resources to meet this demand. Then, the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
The Wikipedia page for Market for Lemons more or less summarizes it as a condition of defective products caused by information asymmetry, which can lead to adverse selection, which can lead to market collapse.
The Market for Lemons idea seems like it has merit in general but is too strong and too binary to apply broadly, that’s where I was headed with the suggestion for another name. It’s not that people want low quality. Nobody actually wants defective products. People are just price sensitive, and often don’t know what high quality is or how to find it (or how to price it), so obviously market forces will find a balance somewhere. And that balance is extremely likely to be lower on the quality scale than what people who care about high quality prefer. This is why I think you’re right about the software market tolerating low quality; it’s because market forces push everything toward low quality.
Once upon a time, the price of a product was often a good indicator of its quality. If you saw two products side by side on the shelf and one was more expensive, then you might assume that it was less likely to break or wear out soon.
Now it seems that the price has very little to do with quality. Cheaply made products might be priced higher just to give the appearance of quality. Even well known brands will cut corners to save a buck or two.
I have purchased things at bargain prices that did everything I wanted and more. I have also paid a lot for things that disappointed me greatly.
A big part of the drive towards lower prices is likely driven by companies exploiting that lack of information to deliver a low-quality product for a high price. Consumers rationally respond to this by just always picking the low-price product
Unless, of course, there's another factor (such as brand) that assures users they are receiving something worth spending extra on (and of course it's oh so easy for companies with such a reputation to temporarily juice returns if they are willing to make sacrifices)
Within the (wide!) price tier in which most people buy furniture, almost everything is worse than IKEA but a lot of it’s 2-3x the price. You have to go even higher to get consistently-better-than-ikea, but most people won’t even see that kind of furniture when they go shopping for a new couch or kitchen table.
By the way, inferior goods are not necessarily poor-quality products, though there is a meaningful correlation, and I based my original comment on it. Still, a OnePlus Android phone is considered an inferior good; an iPhone (or a Samsung Galaxy Android phone) is considered superior. Both are of excellent quality and better than one another in key areas. It's more about how wealth, brand perception, and overall market sentiment affect their demand. OnePlus phones will be in more demand during recessions, and demand for iPhones and Samsung Galaxys will decrease.
No objection to your use/non-use of the Market for Lemons label. Just wanted to clarify a possible misconception.
P.S. Apologies for editing this comment late. I thought the original version wasn't very concise.
> A OnePlus Android phone is considered an inferior good; an iPhone (or a Samsung Galaxy Android phone) is considered superior. Both are of excellent quality
No, the inferior good is a device with 2GB RAM, a poor quality battery, easy to crack screen, a poor camera. poor RF design and thus less stable connectivity, and poor mechanical assembly. But it has its market segment because it costs like 15% of the cost of an iPhone. Some people just cannot afford the expensive high-quality goods at all. Some people, slightly better-off, sometimes don't see the point to "overpay" because they are used to the bottom-tier functionality and can't imagine how much higher quality may be materially beneficial in comparison.
In other words, many people have low expectations, and low resources to match. It is a large market to address once a product-market fit was demonstrated in the high-end segment.
I mean "inferior good" as a macroeconomics term: https://www.investopedia.com/terms/i/inferior-good.asp. And the point of my comment is to show that product quality alone doesn't determine whether it's an inferior good.
I see your point. But the choice between an iPhone and a Galaxy is mostly the ecosystem. And the choice between OnePlus and a Galaxy S is mostly about the quality of the camera. And the choice between a Galaxy and a Xioami is mostly about trusting a Chinese brand (not for its technical merits; they make excellent devices). The real quality / price differentiation, to my mind, lies farther down the scale.
That is, the choice between a $10 organic grass-fed milk and $8 organic grass-fed milk is literally a matter of taste, not the $2 price difference. The real price/quality choice is between the $10 fancy organic milk, $4.99 okay milk, and $2.49 bottom-shelf milk. They attract materially different customer segments.
There are many behavioral economics ideas about smartphone choices. There are various psychological aspects, such as lifestyle, status, social and personal values, and political influences. That is all true.
The strongest decider for whether a good will show positive or negative elastic demand (and be considered superior or inferior) is probably how it's branded, pricing strategy included. For example, wealthy people shop in boutiques more than large retail centers, though the items sold are often sourced from the same suppliers. The difference? Branding, including pricing.
You're right about basic goods, such as groceries. Especially goods that are almost perfectly identical and freely substitutable, like milk. What's a superior or inferior good becomes hard to guess when there is a high degree of differentiation (as you say, ecosystems, cameras, security). It's easier to measure than predict.
Anyway, this is all a "fun fact." My original comment really does make the assumption that software, which is relatively substitutable, is like the milk example — the price and the inferiority/superiority are strongly correlated. And the entire expensive software market has collapsed like the expensive secondary market for used cars.
My wife has a perfume business. She makes really high quality extrait de parfums [1] with expensive materials and great formulations. But the market is flooded with eau de parfums -- which are far more diluted than a extrait -- using cheaper ingredients, selling for about the same price. We've had so many conversations about whether she should dilute everything like the other companies do, but you lose so much of the beauty of the fragrance when you do that. She really doesn't want to go the route of mediocrity, but that does seem to be what the market demands.
First, honest impression: At least on my phone (Android/Chromium) the typography and style of the website don't quite match that "high quality & expensive ingredients" vibe the parfums are supposed to convey. The banners (3 at once on the very first screen, one of them animated!), italic text, varying font sizes, and janky video header would be rather off-putting to me. Maybe it's also because I'm not a huge fan of flat designs, partially because I find they make it difficult to visually distinguish important and less important information, but also because I find them a bit… unrefined and inelegant. And, again, this is on mobile, so maybe on desktop it comes across differently.
Disclaimer: I'm not a designer (so please don't listen only to me and take everything with a grain of salt) but I did work as a frontend engineer for a luxury retailer for some time.
I am somewhat familiar with this market and would probably be turned off by this site mostly because it looks too slick and the ones I’ve seen that were this slick mostly weren’t for me (marketed to, and making perfume entirely or almost entirely for, women).
The ones for me usually look way shittier or just use Etsy.
[edit] the only exception I can come up with is Imaginary Authors, which is much slicker-looking than this, actually, but with a far darker palette—this one definitely says “this is feminine stuff” in the design. And actually I’d say IA leans far more feminine as far as overall vibe of their catalog than most others that’ve had at least one scent that worked out for me.
I'm hesitant to reply because it sounds pejorative and snarky, and I will be downvoted, but... you are not the target market for this. End of story.
This design is very 2025 and the rules you're judging by have long-since been thrown out the window. Most brands run on Shopify now, marketing is via myriad social channels in ways that feel insane and unintuitive, aesthetics are all over the map.
What's old is new is old is different is the same is good is bad, and what is garish to you (strangely, honestly) isn't to most; you'll see if you hang out with some young people lol, promise.
P.S. I am not young, I'm figuring this out by watching from afar HAHAHA
Yeah, her customer is gen z or millennial women and queer men. It doesn't look like where I shop, but I'm not the target demo. A lot of the beauty and fragrance world looks like this these days, particularly as you go down towards gen z.
this website looks like a scam website redirecter
the one where you have to click on 49 ads and wait for 3 days before you get to your link
the video playing immediately makes me think that's a Google ad unrelated to what the website is about
the different font styles reminds me of the middle school HTML projects we had to do with each line in a different size and font face to prove that we know how to use <font face> and <font size>. All its missing is a jokerman font
For sure. I suggested having an eau de parfum option, but it does make things smell totally different -- much weaker, doesn't last long on the body, and can get overpowered by the alcohol carrier. Plus as a small business it'd mean having a dozen new formulations, with the associated packaging changes, inventory, etc. which makes it harder as a totally bootstrapped business. It's definitely still something to think about though, as even fragrances like a Tom Ford or Le Labo selling for $300-400 are just eau de parfums.
> But the market is flooded with eau de parfums -- which are far more diluted than a extrait -- using cheaper ingredients, selling for about the same price.
Has she tried raising prices? To signal that her product is highly quality and thus more expensive than her competition?
She has, these prices are actually lower than they were before, as most customers don't seem to care about things like concentration. Likely it's just that most aren't that informed about the differences. They'll pay more because it's Chanel or because a European perfumer made it, not because the quality is higher.
That's actually been new for her, maybe the past two or so months after 10 years in business, and it seems to be working better than any other type of advertising she's done in the past.
I had the same realization but with car mechanics. If you drive a beater you want to spend the least possible on maintenance. On the other hand, if the car mechanic cares about cars and their craftsmanship they want to get everything to tip-top shape at high cost. Some other mechanics are trying to scam you and get the most amount of money for the least amount of work. And most people looking for car mechanics want to pay the least amount possible, and don't quite understand if a repair should be expensive or not. This creates a downward pressure on price at the expense of quality and penalizes the mechanics that care about quality.
Luckily for mechanics, the shortage of actual blue collar Hands-On labor is so small, that good mechanics actually can charge more.
The issue is that you have to be able to distinguish a good mechanic from a bad mechanic cuz they all get to charge a lot because of the shortage. Same thing for plumbing, electrical, HVAC, etc etc etc
Here in Atlanta Georgia, we have a ToyoTechs business. They perform maintenance on only Toyota-family automobiles. They have 2 locations, one for large trucks, one for cars, hybrids, and SUV-looking cars. Both are always filled up with customers. Some of whom drive hundreds of miles out of state to bring their vehicles exclusively there, whether the beater is a customized off-roader or a simple econobox with sentimental value.
Why? Because they are on a different incentive structure: non-comissioned payments for employees. They buy OEM parts, give a good warranty, charge fair prices, and they are always busy.
If this computer fad goes away, I'm going to open my own Toyota-only auto shop, trying to emulate them. They have 30 years of lead time on my hypothetical business, but the point stands: when people discover that high quality in this market, they stick to it closely.
With the introduction of insurance for covering the cost of a security breach, suddenly managers have an understanding of the value of at least the security aspect of software quality. As it impacts their premiums.
I really hope so. But I do not have much faith in insurance companies. I have seen what they have done to worker safety, made it a minefield for workers, a box ticking exercise for bosses, and done very little for worker safety.
What works for worker safety is regulation. I am afraid the same will be true for software.
Exactly. People on HN get angry and confused about low software quality, compute wastefulness, etc, but what's happening is not a moral crisis: the market has simply chosen the trade-off it wants, and industry has adapted to it
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
I actually disagree. I think that people will pay more for higher quality software, but only if they know the software is higher quality.
It's great to say your software is higher quality, but the question I have is whether or not is is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
That does not describe the current subscription-based software market, then, because we do try it, and we can always stop paying, transaction costs aside.
There are two costs to software: what you pay for it, and the time needed to learn how to use it. That's a big different to the original Lemon paper. You don't need to invest time in learning how to use a car, so the only cost to replacing it is the upfront cost of a new car. Worse "Time needed to learn it" understates it, because the cost replacing lemon software is often far more than just training. For example: replacing your accounting system, where you need to keep the data it has for 7 years as a tax record. Replacing a piece of software will typically cost many times the cost of the software itself.
If you look around, notice people still use Microsoft yet ransomware almost universally attacks Windows installations. This is despite everyone knowing Windows is a security nightmare courtesy of the Sony hack 2014: https://en.wikipedia.org/wiki/2014_Sony_Pictures_hack
Mind you, when I say "everyone", Microsoft's marketing is very good. A firm I worked lost $500k to a windows keyboard logger stealing banking credentials. They had virus scanners for firewalls installed of course, but they aren't a sure deference. As the technical lead for many years, I was asked about my opinion of what they could do. The answer is pretty simple: don't use Windows for banking. Buy an iPad of Android tablet, and do you safety critical stuff on there. The CEO didn't believe a tablet could be more secure than a several thousand dollar laptop when copy of Windows cost more than the tablet. Sigh.
So the answer to why don't people move away from poor quality subscription software is by the time they've figure out it's crap, the cost of moving isn't just the subscription. It's much larger than that.
> but only if they know the software is higher quality.
I assume all software is shit in some fashion because every single software license includes a clause that has "no fitness for any particular purpose" clause. Meaning, if your word processor doesn't process words, you can't sue them.
When we get consumer protection laws that require that software does what is says on the tin quality will start mattering.
I used to write signal processing software for land mobile radios. Those radios were used by emergency services. For the most part, our software was high quality in that it gave good quality audio and rarely had problems. If it did have a problem, it would recover quickly enough that the customer would not notice.
Our radios got a name for reliability: such as feedback from customers about skyscrapers in New York being on fire and the radios not skipping a beat during the emergency response. Word of mouth traveled in a relatively close knit community and the "quality" did win customers.
Oddly we didn't have explicit procedures to maintain that quality. The key in my mind was that we had enough time in the day to address the root cause of bugs, it was a small enough team that we knew what was going into the repository and its effect on the system, and we developed incrementally. A few years later, we got spread thinner onto more products and it didn't work so well.
Don't know. The customer ran a radio network which was used by fire brigade(s?) in NY, so we weren't on the "coal face". It was about 15 years ago.
It was an interesting job. Among other things, our gear ran stage management for a couple of Olympic opening ceremonies. Reliability was key given the size of the audience. We also did gear for the USGC, covering the entire US coastline. If you placed an emergency call at sea, it was our radios that were receiving that signal and passing it into the USCG's network.
I kind of see this in action when I'm comparing products on Amazon. When comparing two products on Amazon that are substantially the same, the cheaper one will have way more reviews. I guess this implies that it has captured the majority of the market.
I think this honestly has more to do with moslty Chinese sellers engaging in review fraud, which is a rampant problem. I'm not saying non-Chinese sellers don't engage in review fraud, but I have noticed a trend that around 98% of fake or fraudulently advertised products are of Chinese origin.
If it was just because it was cheap, we'd also see similar fraud from Mexican or Vietnamese sellers, but I don't really see that.
There are various ways to do the trick, sometimes they ship out rocks to create a paper trail, sometimes they take a cheap/light product and then replace the listing with something more expensive and carry over all the reviews (which is just stupid that Amazon allows but apparently they do)
If you think about it there is basically no scalable way for Amazon to ensure a seller is providing the same product over time - and to all customers.
Random sampling can make sure a product matching the description arrives. But someone familiar with it would have to carefully compare over time. And that process doesn’t scale.
One thing Walmart does right is having “buyers” in charge of each department in the store. For example fishing - and they know all the gear and try it out. And they can walk into any store and audit and know if something is wrong.
I’m sure Amazon has responsible parties on paper - but the size and rate at which the catalog changes makes this a lower level of accountability.
There's an analogy with evolution. In that case, what survives might be the fittest, but it's not the fittest possible. It's the least fit that can possibly win. Anything else represents an energy expenditure that something else can avoid, and thus outcompete.
I had the exact same experience trying to build a startup. The thing that always puzzled me was Apple: they've grown into one of the most profitable companies in the world on the basis of high-quality stuff. How did they pull it off?
They focused heavily on the quality of things you can see, i.e. slick visuals, high build quality, even fancy cardboard boxes.
Their software quality itself is about average for the tech industry. It's not bad, but not amazing either. It's sufficient for the task and better than their primary competitor (Windows). But, their UI quality is much higher, and that's what people can check quickly with their own eyes and fingers in a shop.
These economic forces exist in math too. Almost every mathematician publishes informal proofs. These contain just enough discussion in English (or other human language) to convince a few other mathematicians in the same field that they their idea is valid. But it is possible to make errors. There are other techniques: formal step-by-step proof presentations (e.g. by Leslie Lamport) or computer-checked proofs that would be more reliable. But almost no mathematician uses these.
I'm thinking out loud but it seems like there's some other factors at play. There's a lower threshold of quality that needs to happen (the thing needs to work) so there's at least two big factors, functionality and cost. In the extreme, all other things being equal, if two products were presented at the exact same cost but one was of superior quality, the expectation is that the better quality item would win.
There's always the "good, fast, cheap" triangle but with Moore's law (or Wright's law), cheap things get cheaper, things iterate faster and good things get better. Maybe there's an argument that when something provides an order of magnitude quality difference at nominal price difference, that's when disruption happens?
So, if the environment remains stable, then mediocrity wins as the price of superior quality can't justify the added expense. If the environment is growing (exponentially) then, at any given snapshot, mediocrity might win but will eventually be usurped by quality when the price to produce it drops below a critical threshold.
You're laying it out like it's universal, in my experience there are products where people will seek for the cheapest good enough but there are also other product that people know they want quality and are willing to pay more.
Take cars for instance, if all people wanted the cheapest one then Mercedes or even Volkswagen would be out of business.
Same for professional tools and products, you save more by buying quality product.
And then, even in computer and technology. Apple iPhone aren't cheap at all, MacBook come with soldered ram and storage, high price, yet a big part of people are willing to buy that instead of the usual windows bloated spyware laptop that run well enough and is cheap.
> the cheapest one then Mercedes or even Volkswagen would be out of business
I would argue this is a bad example - most luxury cars aren't really meaningfully "better", they just have status symbol value. A mid range Honda civic or Toyota corolla is not "worse" than a Mercedes for most objective measurements.
As someone who drove both, I vehemently disagree. Stripped of logos, one is delightful, the other just nominally gets the job done.
The Mercedes has superior suspension that feels plush and smooth. Wonderful materials in the cabin that feel pleasant to the touch. The buttons press with a deep, satisfying click. The seats hug you like a soft cloud.
All of that isn’t nothing. It is difficult to achieve, and it is valuable.
All of that make the Mercedes better than a Corolla, albeit at a higher cost.
Not everyone wants the cheapest, but lemons fail and collapse the expensive part of the market with superior goods.
To borrow your example, it's as if Mercedes started giving every 4th customer a Lada instead (after the papers are signed). The expensive Mercedes market would quickly no longer meet the luxury demand of wealthy buyers and collapse. Not the least because Mercedes would start showing super-normal profits, and all other luxury brands would get in on the same business model. It's a race to the bottom. When one seller decreases the quality, so must others. Otherwise, they'll soon be bought out, and that's the best-case scenario compared to being outcompeted.
There is some evidence that the expensive software market has collapsed. In the 00s and 90s, we used to have expensive and cheap video games, expensive and cheap video editing software, and expensive and cheap office suites. Now, we have homogeneous software in every niche — similar features and similar (relatively cheap) prices. AAA game companies attempting to raise their prices back to 90s levels (which would make a AAA game $170+ in today's money) simply cannot operate in the expensive software market. First, there was consumer distrust due to broken software, then there were no more consumers in that expensive-end market segment.
Hardware you mention (iPhones, Androids, Macs, PCs) still have superior and inferior hardware options. Both ends of the market exist. The same applies to most consumer goods - groceries, clothes, shoes, jewelry, cars, fuel, etc. However, for software, the top end of the market is now non-existent. It's gone the way of expensive secondary market (resale) cars, thanks to how those with hidden defects undercut their price and destroyed consumer trust.
for software, the top end of the market is now non-existent
The issue here isn't absence, but misrecognition: the top end absolutely does exist -- it just doesn't always look like what people wish it looked like.
If by "top end" you mean "built to spec, hardened, and close to bug-free", it's alive and well in heavy manufacturing, telecommunication, automotive, aerospace, military, and medical industries. The technologies used there are not sexy (ask anyone working at Siemens or Nokia), the code wouldn't delight you, the processes are likely glacial, but there you will find software that works because it absolutely has to.
If by "top end" you mean "serves the implied user need in the best way imaginable", then modern LLMs systems are a good example. Despite the absolute mess and slop that those systems are built of, very few people come to ChatGPT and leave unsatisfied with its results.
If by "top end" you mean "beautifully engineered and maintained", think SQLite, LLVM and some OS kernels, like seL4. Those are well-written, foundational pieces of software that are not end-products in themselves, but they're built to last, studied by developers, and trusted everywhere. This is the current forefront in our knowledge of how to write software.
If by "top end" you mean "maximising profit through code", then the software in the top trading firms match this description. All those "hacker-friendly" and "tech-driven" firms run on the same sloppy code as everyone else, but they are ruthlessly optimised to make money. That's performance too.
You can carry on. For each definition of "top end", there is a real-life example of software matching it.
One can moan about the market rewarding mediocrity, but we, as technologists, all have better things to do instead of endless hand-wringing, really.
> We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
If you’re trying to sell a product to the masses, you either need to make it cheap or a fad.
You cannot make a cheap product with high margins and get away with it. Motorola tried with the RAZR. They had about five or six good quarters from it and then within three years of initial launch were hemorrhaging over a billion dollars a year.
You have to make premium products if you want high margins. And premium means you’re going for 10% market share, not dominant market share. And if you guess wrong and a recession happens, you might be fucked.
Yes, I was in this place too when I had a consulting company. We bid on projects with quotes for high quality work and guaranteed delivery within the agreed timeframe. More often than not we got rejected in favor of some students who submitted a quote for 4x less. I sometimes asked those clients how the project went, and they'd say, well, those guys missed the deadline and asked for more money several times
There is an exception: luxury goods. Some are expensive, but people don't mind them being overpriced because e.g. they are social status symbols. Is there such a thing like "luxury software"? I think Apple sort of has this reputation.
> What I realized is that lower costs, and therefore lower quality,
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:
let usEmployees = from x in Employees where x.Country == "US";
func byFemale(Query<Employees> q) =>
from x in q where x.Sex == "Female";
let femaleUsEmployees = byFemale(usEmployees);
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
You can do this in Scala[0], and you'll get type inference and compile time type checking, informational messages (like the compiler prints an INFO message showing the SQL query that it generates), and optional schema checking against a database for the queries your app will run. e.g.
case class Person(name: String, age: Int)
inline def onlyJoes(p: Person) = p.name == "Joe"
// run a SQL query
run( query[Person].filter(p => onlyJoes(p)) )
// Use the same function with a Scala list
val people: List[Person] = ...
val joes = people.filter(p => onlyJoes(p))
// Or, after defining some typeclasses/extension methods
val joesFromDb = query[Person].onlyJoes.run
val joesFromList = people.onlyJoes
This integrates with a high-performance functional programming framework/library that has a bunch of other stuff like concurrent data structures, streams, an async runtime, and a webserver[1][2]. The tools already exist. People just need to use them.
Notice how you're still specifying List types? That's not what I'm describing.
You're also just describing a SQL mapping tool, which is also not really it either, though maybe that would be part of the runtime invisible to the user. Define a temporary table whose shape is inferred from another query, that's durable and garbage collected when it's no longer in use, and make it look like you're writing code against any other collection type, and declaratively specify the time complexity of insert, delete and lookup operations, then you're close to what I'm after.
The explicit annotation on people is there for illustration. In real code it can be inferred from whatever the expression is (as the other lines are).
I don't think it's reasonable to specify the time complexity of insert/delete/lookup. For one, joins quickly make you care about multi-column indices and the precise order things are in and the exact queries you want to perform. e.g. if you join A with B, are your results sorted such that you can do a streaming join with C in the same order? This could be different for different code paths. Simply adding indices also adds maintenance overhead to each operation, which doesn't affect (what people usually mean by) the time complexity (it scales with number of indices, not dataset size), but is nonetheless important for real-world performance. Adding and dropping indexes on the fly can also be quite expensive if your dataset size is large enough to care about performance.
That all said, you could probably get at what you mean by just specifying indices instead of complexity and treating an embedded sqlite table as a native mutable collection type with methods to create/drop indices and join with other tables. You could create the table in the constructor (maybe using Object.hash() for the name or otherwise anonymously naming it?) and drop it in the finalizer. Seems pretty doable in a clean way in Scala. In some sense, the query builders are almost doing this, but they tend to make you call `run` to go from statement to result instead of implicitly always using sqlite.
Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:
var set = new HashSet<Employee>();
Done. Don't need any fancy support for that. Or if you want to load from a database, using the repository pattern and Kotlin this time instead of Java:
That would turn into an efficient SQL query that does a WHERE ... AND ... clause. But you can also compose queries in a type safe way client side using something like jOOQ or Criteria API.
> Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:
But now you've hard-coded this selection, why can't the performance characteristics also be easily parameterized and combined, eg. insert is O(1), delete is O(log(n)), or by defining indexes in SQL which can be changed at any time at runtime? Or maybe the performance characteristics can be inferred from the types of queries run on a collection elsewhere in the code.
> That would turn into an efficient SQL query that does a WHERE ... AND ... clause.
For a database you have to manually construct, with a schema you have to manually and poorly to an object model match, using a library or framework you have to painstakingly select from how many options?
You're still stuck in this mentality that you have to assemble a set of distinct tools to get a viable development environment for most general purpose programming, which is not what I'm talking about. Imagine the relational model built-in to the language, where you could parametrically specify whether collections need certain efficient operations, whether collections need to be durable, or atomically updatable, etc.
There's a whole space of possible languages that have relational or other data models built-in that would eliminate a lot of problems we have with standard programming.
There are research papers that examine this question of whether runtime optimizing data structures is a win, and it's mostly not outside of some special cases like strings. Most collections are quite small. Really big collections tend to be either caches (which are often specialized anyway), or inside databases where you do have more flexibility.
A language fully integrated with the relational model exists, that's PL/SQL and it's got features like classes and packages along with 'natural' SQL integration. You can do all the things you ask for: specify what operations on a collection need to be efficient (indexes), whether they're durable (temporary tables), atomically updatable (LOCK TABLE IN EXCLUSIVE MODE) and so on. It even has a visual GUI builder (APEX). And people do build whole apps in it.
Obviously, this approach is not universal. There are downsides. One can imagine a next-gen attempt at such a language that combined the strengths of something like Java/.NET with the strengths of PL/SQL.
> There are research papers that examine this question of whether runtime optimizing data structures is a win
If you mean JIT and similar tech, that's not really what I'm describing either. I'm talking about lifting the time and space complexity of data structures to parameters so you don't have to think about specific details.
Again, think about how tables in a relational database work, where you can write queries against sets without regard for the underlying implementation, and you have external/higher level tools to tune a running program's data structures for better time or space behavior.
> A language fully integrated with the relational model exists, that's PL/SQL
Not a general purpose language suitable for most programming, and missing all of the expressive language features I described, like type/shape inference, higher order queries and query composition and so on. See my previous comments. The tool you mentioned leaves a lot to be desired.
Other functional languages, too, but Clojure. You get exactly this, minus all the <'s =>'s ;'s and other irregularities, and minus all the verbosity...
I consider functional thinking and ability to use list comprehensions/LINQ/lodash/etc. to be fundamental skills in today's software world. The what, not the how!
Agreed, but it doesn't go far enough IMO. Why not add language/runtime support for durable list comprehensions, and also atomically updatable ones so they can be concurrently shared, etc. Bring the database into the language in a way that's just as easily to use and query as any other value.
LINQ is on the right track but doesn't quite go far enough with query composition. For instance, you can't "unquote" a query within another query (although I believe there is a library that tries to add this).
EF code-first is also on the right track, but the fluent and attribute mapping are awkward, foreign key associations often have to be unpacked directly as value type keys, there's no smooth transition between in-memory native types and durable types, and schema migration could be smoother.
Lots of the bits and pieces of what I'm describing are around but they aren't holistically combined.
Many high-quality open-source designs suggest this is a false premise, and as a developer who writes high-quality and reliable software for much much lower rates than most, cost should not be seen as a reliable indicator of quality.
I see another dynamic "customer value" features get prioritized and eventually product reaches a point of crushing tech debt. It results in "customer value" features delivery velocity grinding to a halt. Obviously subject to other forces but it is not infrequent for someone to come in and disrupt the incumbents at this point.
But do you think you could have started with a bug laden mess? Or is it just the natural progression down the quality and price curve that comes with scale
> People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality).
Sure, but what about the people who consider quality as part of their product evaluation? All else being equal everyone wants it cheaper, but all else isn't equal. When I was looking at smart lighting, I spent 3x as much on Philips Hue as I could have on Ikea bulbs: bought one Ikea bulb, tried it on next to a Hue one, and instantly returned the Ikea one. It was just that much worse. I'd happily pay similar premiums for most consumer products.
But companies keep enshittifying their products. I'm not going to pay significantly more for a product which is going to break after 16 months instead of 12 months. I'm not going to pay extra for some crappy AI cloud blockchain "feature". I'm not going to pay extra to have a gaudy "luxury" brand logo stapled all over it.
Companies are only interested in short-term shareholder value these days, which means selling absolute crap at premium prices. I want to pay extra to get a decent product, but more and more it turns out that I can't.
>There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
It's the same concept as the age old "only an engineer can build a bridge that just barely doesn't fall down" circle jerk but for a more diverse set of goods than just bridges.
I’d argue this exists for public companies, but there are many smaller, private businesses where there’s no doctrine of maximising shareholder value
These companies often place a greater emphasis on reputation and legacy
Very few and far between, Robert McNeel & Associates (American) is one that comes to mind (Rhino3D), as his the Dutch company Victron (power hardware)
The former especially is not known for maximising their margins, they don’t even offer a subscription-model to their customers
Victron is an interesting case, where they deliberately offer few products, and instead of releasing more, they heavily optimise and update their existing models over many years in everything from documentation to firmware and even new features. They’re a hardware company mostly so very little revenue is from subscriptions
Maybe you could compete by developing new and better products? Ford isn't selling the same car with lower and lower costs every year.
It's really hard to reconcile your comment with Silicon Valley, which was built by often expensive innovation, not by cutting costs. Were Apple, Meta, Alphabet, Microsoft successful because they cut costs? The AI companies?
Apple's incredible innovation and attention to detail is what made them legendary and successful. Steve Jobs was legendary for both.
> Meta and Alphabet had zero cost products (to consumers) that they leveraged to become near monopolies.
What does zero cost have to do with it? The comment I responded to spoke of cutting the business's costs - quality inputs, labor, etc. - not their customers' costs. Google made a much better search engine than competitors and then better advertising engine; Facebook made the best social media network.
> Aren’t all the AI companies believed to be providing their products below cost for now to grab market share?
Again, what does that have to do with cutting costs rather than innovating to increase profit?
Capitalism? Marx's core belief was that capitalists would always lean towards paying the absolute lowest price they could for labor and raw materials that would allow them to stay in production. If there's more profit in manufacturing mediocrity at scale than quality at a smaller scale, mediocrity it is.
Not all commerce is capitalistic. If a commercial venture is dedicated to quality, or maximizing value for its customers, or the wellbeing of its employees, then it's not solely driven by the goal of maximizing capital. This is easier for a private than a public company, in part because of a misplaced belief that maximizing shareholder return is the only legally valid business objective. I think it's the corporate equivalent of diabetes.
In the 50s and 60s, capitalism used to refer to stakeholder capitalism. It was dedicated to maximize value for stakeholders, such as customers, employees, society, etc.
But that shifted later, with Milton Friedman, who pushed the idea of shareholder capitalism in the 70s. Where companies switched to thinking the only goal is to maximize shareholder value.
In his theory, government would provide regulation and policies to address stakeholder's needs, and companies therefore needed focus on shareholders.
In practice, lobbying, propaganda and corruption made it so governments dropped the ball and also sided to maximize shareholder value, along with companies.
The problem with your thesis is that software isn't a physical good, so quality isn't tangible. If software does the advertised thing, it's good software. That's it.
With physical items, quality prevents deterioration over time. Or at least slows it. Improves function. That sort of thing.
Software just works or doesn't work. So you want to make something that works and iterate as quickly as possible. And yes, cost to produce it matters so you can actually bring it to market.
I'm a layman, but in my opinion building quality software can't really be a differentiator because anyone can build quality software given enough time and resources. You could take two car mechanics and with enough training, time, assistance from professional dev consultants, testing, rework, so and so forth, make a quality piece of software. But you'd have spent $6 million to make a quality alarm clock app.
A differentiator would be having the ability to have a higher than average quality per cost. Then maybe you're onto something.
I'm proud of you, it often takes people multiple failures before they learn to accept their worldview that regulations aren't necessary and the tragedy of Commons is a myth are wrong.
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
Disagree, the main reason so many apps are using "slow" languages/frameworks is precisely that it allows them to develop way more features way quicker than more efficient and harder languages/frameworks.
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? -- to me not really, because I have it come up on boot and forget about it until my next system update forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? -- I use Word/Excel because I've run into annoying compatibility issues enough times with LO to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
> If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
That’s irrelevant here, the fully featured product can also be fast. The overwhelming majority of software is slow because the company simply doesn’t care about efficiency. Google actively penalized slow websites and many companies still didn’t make it a priority.
> That’s irrelevant here, the fully featured product can also be fast.
So why is it so rarely the case? If it's so simple, why hasn't anyone recognized that Teams, Zoom, etc are all bloated and slow and made a hyper-optimized, feature-complete competitor, dominating the market?
Software costs money to build, and performance optimization doesn't come for free.
> The overwhelming majority of software is slow because the company simply doesn’t care about efficiency.
Don't care about efficiency at all, or don't consider it as important as other features and functionality?
> Software costs money to build, and performance optimization doesn't come for free.
Neither do caching, operational/architectural overhead, slow builds and all the hoops we jump through in order to satisfy stylistic choices. All of this stuff introduces complexity and often demands specialized expertise on top.
And it's typically not about optimization, but about not doing things that you don't necessarily have to do. A little bit of frugality goes a long way. Often leading to simpler code and fewer dependencies.
The hardware people are (actually) optimizing, trying hard to make computers fast, to a degree that it introduces vulnerabilities (like the apple CPU cache prefetching memory from arrays of pointers, which opened it up for timing attacks, or the branch prediction vulnerability on intel chips). Meanwhile we software people are piling more and more stuff into programs that aren't needed, from software patterns/paradigms to unnecessary dependencies etc.
There's also the issue of programs feeling entitled to resources. When I'm running a video game or a data migration, I obviously want to give it as many resources as possible. But it shouldn't be necessary to provide gigabytes of memory for utility programs and operative applications.
Not being free upfront isn’t the same thing as expensive.
Zoom’s got 7,412 employees a small team of say 7 employees could make a noticeable difference here and the investment wouldn’t disappear, it would help drive further profits.
> Don't care about efficiency at all
Doesn’t care beyond basic functionality. Obviously they care if something takes an hour to load, but rarely do you see considerations for people running on lower hardware than the kind of machines you see at a major software company etc.
> Zoom’s got 7,412 employees a small team of say 7 employees could make a noticeable difference here
What would those 7 engineers specifically be working on? How did you pick 7? What part of the infrastructure would they be working on, and what kind of performance gains, in which part of the system, would be the result of their work?
What consumers care about is the customer facing aspects of the business. As such you’d benchmark Zoom on various clients/plugins (Windows, Max, Android, iOS) and create a never ending priority list of issues weighted by marketshare.
7 people was roughly chosen to be able to cover the relevant skills while also being a tiny fraction of the workforce. Such efforts run into diminishing returns, but the company is going to keep creating low hanging fruit.
If you're being honest, compare Slack and Teams not with weechat, but with Telegram. Its desktop client (along with other clients) is written by an actually competent team that cares about performance, and it shows. They have enough money to produce a native client written in C++ that has fantastic performance and is high quality overall, but these software behemoths with budgets higher than most countries' GDP somehow never do.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
There's likely some, although it depends on the environment. The more users of the system there are, the more there are going to be reviews and people will know that it's kind of buggy. Most people seem more interested in cost or features though, as long as they're not losing hours of work due to bugs.
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
I used to work at a large company that had a lousy internal system for doing performance evals and self-reviews. The UI was shitty, it was unreliable, it was hard to use, it had security problems, it would go down on the eve of reviews being due, etc. This all stressed me out until someone in management observed, rather pointedly, that the reason for existence of this system is that we are contractually required to have such a system because the rules for government contracts mandate it, and that there was a possibility (and he emphasized the word possibility knowingly) that the managers actully are considering their personal knowledge of your performance rather than this performative documentation when they consider your promotions and comp adjustments. It was like being hit with a zen lightning bolt: this software meets its requirements exactly, and I can stop worrying about it. From that day on I only did the most cursory self-evals and minimal accomplishents, and my career progressed just fine.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
> the market buys bug-filled, inefficient software about as well as it buys pristine software
In fact, the realization is that the market buy support.
And that includes google and other companies that lack much of human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There is people that help me ('look ma, this is how you use google')
* There is support for the thing I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel let me do any app there...')
* And finally, actual people (that is the #1 thing that keep alive even the worst ERP on earth). This also includes marketing, sales people, etc. This are signal of having support even if is not exactly the best. If I go to enterprise and only have engineers that will be a bad signal, because well, developers then to be terrible at other stuff and the other stuff is support that matters.
If you have a good product, but there is not support, is dead.
And if you wanna fight a worse product, is smart to reduce the need to support for ('bugs, performance issues, platforms, ...') for YOUR TEAM because you wanna reduce YOUR COSTS but you NEED to add support in other dimensions!
The easiest for a small team, is just add humans (that is the MOST scarce source of support). After that, it need to be creative.
(also, this means you need to communicate your advantages well, because there is people that value some kind of support more than others 'have the code vs propietary' is a good example. A lot prefer the proprietary with support more than the code, I mean)
So you're telling me that if companies want to optimize profitability, they’d release inefficient, bug-ridden software with bad UI—forcing customers to pay for support, extended help, and bug fixes?
Suddenly, everything in this crazy world is starting to make sense.
Afaik, SAS does exactly that (haven't any experience with them personally, just retelling gossips). Also Matlab. Not that they are BAD, it's just that 95% of matlab code could be python or even fortran with less effort. But matlab have really good support (aka telling the people in charge how they are tailored to solve this exact problem).
I worked in a previous job on a product with 'AI' in the name. It was a source of amusement to many of us working there that the product didn't, and still doesn't use any AI.
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications not a single one of them can be swapped out with another just because it were more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised in your list because it contains 3 apps that I’ve replaced specifically due to performance issues (docker, iterm and notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
Podman might have some limited API compatibility, but it's a completely different tool. Just off the bat it's not compatible with Skaffold, apparently.
That an alternate tool might perform better is compatible with the claim that performance alone is never the only difference between software.
Podman might be faster than Docker, but since it's a different tool, migrating to it would involve figuring out any number of breakage in my toolchain that doesn't feel worth it to me since performance isn't the only thing that matters.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
I swapped Terminal for iTerm2 because I wanted specific features, not because of performance. iTerm2 is probably slower for all I care.
Another example is that I use oh-my-zsh which is adds weirdly long startup time to a shell session, but it lets me use plugins that add things like git status and kubectl context to my prompt instead of fiddling with that myself.
> But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies
I'd take this one step further, 99% of the software written isn't being done with performance in mind. Even here in HN, you'll find people that advocate for poor performance because even considering performance has become a faux pas.
That means you L4/5 and beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware is possible.)
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
The used car market is market for lemons because it is difficult to distinguish between a car that has been well maintained and a car close to breaking down. However, the new car market is decidedly not a market for lemons because every car sold is tested by the state, and reviewed by magazines and such. You know exactly what you are buying.
Software is always sold new. Software can increase in quality the same way cars have generally increased in quality over the decades. Creating standards that software must meet before it can be sold. Recalling software that has serious bugs in it. Punishing companies that knowingly sell shoddy software. This is not some deep insight. This is how every other industry operates.
A hallmark of well-designed and well-written software is that it is easy to replace, where bug-ridden spaghetti-bowl monoliths stick around forever because nobody wants to touch them.
Just through pure Darwinism, bad software dominates the population :)
Right now, the market buys bug-filled, inefficient software because you can always count on being able to buy hardware that is good enough to run it. The software expands to fill the processing specs of the machine it is running on - "What Andy giveth, Bill taketh away" [1]. So there is no economic incentive to produce leaner, higher-quality software that does only the core functionality and does it well.
But imagine a world where you suddenly cannot get top-of-the-line chips anymore. Maybe China invaded Taiwan and blockaded the whole island, or WW3 broke out and all the modern fabs were bombed, or the POTUS instituted 500% tariffs on all electronics. Regardless of cause, you're now reduced to salvaging microchips from key fobs and toaster ovens and pregnancy tests [2] to fulfill your computing needs. In this world, there is quite a lot of economic value to being able to write tight, resource-constrained software, because the bloated stuff simply won't run anymore.
Carmack is saying that in this scenario, we would be fine (after an initial period of adjustment), because there is enough headroom in optimizing our existing software that we can make things work on orders-of-magnitude less powerful chips.
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
A big part of why I like shopping at Costco is that they generally don't sell garbage. Their filter doesn't always match mine, but they do have a meaningful filter.
These days I feel like I'd be willing to pay more for a product that explicitly disavowed AI. I mean, that's vulnerable to the same kind of marketing shenanigans, but still. :-)
You must be referring only to security bugs because you would quickly toss Excel or Photoshop if it were filled with performance and other bugs. Security bugs are a different story because users don't feel the consequences of the problem until they get hacked and even then, they don't know how they got hacked. There are no incentives for developers to actually care.
Developers do care about performance up to a point. If the software looks to be running fine on a majority of computers why continue to spend resources to optimize further? Principle of diminishing returns.
> This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
The user cannot but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
> The user cannot but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
In the end it all hinges on the users ability to assess the quality of the product. Otherwise, the user cannot judge whether an assistant recommends quality products and the assistant has an incentive to suggest poorly (e.g. sellout to product producers).
> In the end it all hinges on the users ability to assess the quality of the product
The AI can use tools to extract various key metrics from the product that is analysed. Even if we limit such metrics down to those that can be verified in various "dumb" ways we should be able to verify products much further than today.
> And one of them is the cheapest software you could make.
I actually disagree a bit. Sloppy software is cheap when you're a startup but it's quite expensive when you're big. You have all the costs of transmission and instances you need to account for. If airlines are going to cut an olive from the salad why wouldn't we pay programmers to optimize? This stuff compounds too.
We're currently operate in a world where new features are pushed that don't interest consumers. While they can't tell the difference between slop and not at purchase they sure can between updates. People constantly complain about stuff getting slower. But they also do get excited when things get faster.
Imo it's in part because we turned engineers into MBAs. Wherever I ask why can't we solve a problem some engineer always responds "well it's not that valuable". The bug fix is valuable to the user but they always clarify they mean money. Let's be honest, all those values are made up. It's not the job of the engineer to figure out how much profit a big fix will result in, it's their job to fix bugs.
Famously Coke doesn't advertise to make you aware of Coke. They advertise to associate good feelings. Similarly, car companies advertise to get their cars associated with class. Which is why sometimes they will advertise to people who have no chance of buying the car. What I'm saying is that brand matters. The problem right now is that all major brands have decided brand doesn't matter or brand decisions are always set in stone. Maybe they're right, how often do people switch? But maybe they're wrong, switching seems to just have the same features but a new UI that you got to learn from scratch (yes, even Apple devices aren't intuitive)
That's generally what I think as well. Yes, the world could run on older hardware, but you keep making faster and adding more CPU's so, why bother making the code more efficient?
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
The other factor here is that in the number-go-up world that many of the US tech firms operate in, your company has to always be growing in order to be considered successful, and as long as your company is growing, future engineer time will always be cheaper than current engineering time (and should you stop growing, you are done for anyway, and you won't need those future engineers).
In my experiences, companies can afford to care about good software if they have extreme demands (e.g. military, finance) or amortize over very long timeframes (e.g. privately owned). It's rare for consumer products to fall into either of these categories.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
the thing is - countries have set down legal rules preventing selling of food that actively harms the consumer(expired, known poisonous, addition of addictive substances(opiates) etc) to continue your food analogy.
in software the regulations can be boiled down to 'lol lmao' in pre-GDPR era. and even now i see GDPR violations daily.
My partner was diagnosed with Parkinson’s almost 5 years ago. His disease has progressed significantly in the past year, and he begun to have delusions. He also had side effects from carbidopa/levodopa, which we decided to stop, and our primary physician decided he should start on PD-5 formula 4 months ago from UINE HEALTH CENTER. He now sleeps soundly, works out frequently, and is now very active since we started him on the PD-5 formula. It doesn’t make the Parkinson’s disease go away, but it did give him a better quality of life. We got the treatment from www. uineheathcentre. com
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it’s going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB ram, and 1GN fiber.
On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I’m not talking about features that I prefer, but as an example if you load two links in Reddit in two different tabs my experience has been that it’s 50/50 if they’ll actually both load or if one gets stuck either way skeletons.
> Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch.
> On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional
I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.
I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.
I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
> heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
Ugh, I personally witnessed this. I would wait to take my break until I knew the unavoidable, unkillable AV scans had started and would peg my CPU at 100%. I wonder how many human and energy resources are wasted checking for non-existant viruses on corp hardware.
In a previous job, I was benchmarking compile times. I came in on a Monday and everything was 10-15% slower. IT had installed carbon black on my machine over the weekend, which was clearly the culprit. I sent WPA traces to IT but apparently the sales guys said there was no overhead so that was that.
I used to think that was the worst, but then my org introduced me to pegging HDD write at 100% for half an hour at a time. My dad likes to talk about how he used to turn on the computer, then go get coffee; in my case it was more like turn on machine, go for a run, shower, check back, coffee, and finally... maybe.
Every Wednesday my PC becomes so slow it is barely usable. It is the Windows Defender scans. I tried doing a hack to put it on a lower priority but my hands are tied by IT.
Same. I had nearly full administrative privs on the laptop, yet I get "Access denied" trying to deprioritize the scan. We got new hardware recently, so we should be good until the scanners catch up and consume even more resources...
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?
Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.
You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.
I disagree. Vs code uses plugins for all its heavy lifting. Even a minimal plugin setup is substantially slower to load than sublime is, which can also have an LSP plugin.
>Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Do any of those do the indexing that cause the slowness? If not it's comparing apples to oranges.
> You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
11 years ago I put in a ticket to slack asking them about their resource usage. Their desktop app was using more memory than my IDE and compilers and causing heap space issues with visual studio. 10 years ago things were exactly the same. 15 years ago, my coworkers were complaining that VS2010 was a resource hog compared to 10 years ago. My memory of loading photoshop in the early 2000’s was that it took absolutely forever and was slow as molasses on my home PC.
I don’t think it’s necessarily gotten worse, I think it’s always been pathetically bad.
"early 2000s" was at least 22 years ago, as well. Sorry if this ruins your night. 100mhz 1994 vs 1000mhz in 2000, that's the only parallel i was drawing. 10x faster yet somehow adobe...
Ah sorry - I’m in my mid 30s so my early pc experiences as a “power user” were win XP, by which point photoshop had already bolted on the kitchen sink and autodesk required a blood sacrifice to start up.
5 seconds is a lot for a machine with an M4 Pro, and tons of RAM and a very fast SSD.
There's native apps just as, if not more, complicated than VSCode that open faster.
The real problem is electron. There's still good, performant native software out there. We've just settled on shipping a web browser with every app instead.
There is snappy electron software out there too, to be fair. If you create a skeleton electron app it loads just fine. A perceptible delay but still quick.
The problem is when you load it and then react and all its friends, and design your software for everything to be asynchronous and develop it on a 0 latency connection over localhost with a team of 70 people where nobody is holistically considering “how long does it take from clicking the button to doing the thing I want it to do”
It's probably more so that any corporate Windows box has dozens of extra security and metrics agents interrupting and blocking every network request and file open and OS syscall installed by IT teams while the Macs have some very basic MDM profile applied.
This is exactly it. My Debian Install on older hardware than my work machine is relatively snappy. The real killer is the Windows Defender Scans once a week. 20-30% CPU usage for the entire morning because it is trying to scan some CDK.OUT directory (if I delete the directory, the scan doesn't take nearly as long).
This is my third high end workstation computer in the last 5 years and my experience has been roughly consistent with.
My corporate vpn app is a disaster on so many levels, it’s an internally developed app as opposed to Okta or anything like that.
I would likewise say that your experience is not universal, and that in many circumstances the situation is much worse. My wife is running an i5 laptop from 2020 and her work intranet is a 60 second load time. Outlook startup and sync are measured in minutes including mailbox fetching. You can say this is all not the app developers fault, but the crunch that’s installed on her machine is slowing things down by 5 or 10x and that slowdown wouldn’t be a big deal if the apps had reasonable load times in the first place.
> are all applications I use every day that take 20+ seconds to launch.
I suddenly remembered some old Corel Draw version circa year 2005, which had loading screen enumerating random things it loaded and was computing until a final message "Less than a minute now...". It most often indeed lasted less than a minute to show interface :).
IMO they just don't think of "initial launch speed" as a meaningful performance stat to base their entire tech stack upon. Most of these applications and even websites, once opened, are going to be used for several hours/days/weeks before being closed by most of their users
For all the people who are doubting that applications are slow and that it must just be me - here [0] is a debugger that someone has built from the ground up that compiles, launches, attaches a debugger and hits a breakpoint in the same length of time that visual studio displays the splash screen for.
Odd, I tested two news sides (tagesschau.de and bbc.com) and both load in 1 - 2 seconds. Airbnb in about 4 - 6 seconds though. My reddit never gets stuck, or if it does it's on all tabs because something goes wrong on their end.
That sounds like a corporate anti-virus slowing everything down to me. vscode takes a few seconds to launch for me from within WSL2, with extensions. IntelliJ on a large project takes a while I'll give you that, but just intelliJ takes only a few seconds to launch.
Even 4-5 seconds is long enough for me to honestly get distracted. That is just so much time even on a single core computer from a decade ago.
On my home PC, in 4 seconds I could download 500MB, load 12GB off an SSD, perform 12 billion cycles (before pipelining ) per core (and I have 24 of them) - and yet miro still manages to bring my computer to its knees for 15 seconds just to load an empty whiteboard.
HOW does Slack take 20s to load for you? My huge corporate Slack takes 2.5s to cold load.
I'm so dumbfounded. Maybe non-MacOS, non-Apple silicon stuff is complete crap at that point? Maybe the complete dominance of Apple performance is understated?
I have an i9 windows machine with 64GB ram and an M1 Mac. I’d say day to day responsiveness the Mac is heads and tails above the windows machine, although getting worse. I’m not sure if the problem is the arm electron apps are getting slower or if my machine is just aging
It's Windows. I'm on Linux 99% of the time and it's significantly more responsive on hardware from 2014 than Windows is on a high end desktop from 2023. I'm not being dramatic.
(Yes, I've tried all combinations of software to hardware and accounted for all known factors, it's not caused by viruses or antiviruses).
XP was the last really responsive Microsoft OS, it went downhill from then and never recovered.
My current machine I upgraded from win10 to win11 and I noticed an across the board overnight regression in everything. I did a clean install so if anything it should have been quicker but boot times, app launch times, compile times all took a nosedive on that update.
I still think there’s a lot of blame to go around for the “kitchen sink” approach to app development where we have entire OS’s that can boot faster than your app can get off a splash screen.
Unfortunately, my users are on windows and work has no Linux vpn client so a switch isn’t happening any time soon.
Most likely the engineers at many startups only use apple computers themselves and therefore only optimize performance for those systems. It's a shame but IMO result of their incompetence and not result of some magic apple performance gains.
Yes it is and the difference isn't understated, I think everyone knows by now that Apple has run away with laptop/desktop performance. They're just leagues ahead.
It's a mix of better CPUs, better OS design (e.g. much less need for aggressive virus scanners), a faster filesystem, less corporate meddling, high end SSDs by default... a lot of things.
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions of a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive that today's systems as long as you didn't need to hit the harddrives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't by far as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14 years old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
The art direction, modelling and animation work is mostly fine, the worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all.
Digital Foundry has an excellent video about the issues:
> …when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
This also happens to many other UE5 games like S.T.A.L.K.E.R. 2 where they try to push the graphics envelope with expensive techniques and most people without expensive hardware have to turn the settings way down (even use things like upscaling and framegen which further makes the experience a bit worse, at least when the starting point is very bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.
Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by the Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or rather, sillyness aside, is an attempt by studios to save resources by making the artists spend less time on faking and optimizing effects and detail that can just be brute forced by the engine.
In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.
I haven't really played it myself but it sounds like from the video you posted the remasters a bit of an outlier in terms of bad performance. Again it seems like a bad example to pull from.
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
It's strange, it visibly loading the buttons is indicative they use async technology that can use multithreaded CPUs effectively... but it's slower than the old synchronous UI stuff.
I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
Windows 3.1 wasn't checking WiFi, Bluetooth, energy saving profile, night light setting, audio devices, current power status and battery level, audio devices, and more when clicking the non-existent icon on the non-existent taskbar. Windows XP didn't have this quick setting area at all. But I do recall having the volume slider take a second to render on XP from time to time, and that was only rendering a slider.
And FWIW this stuff is then cached. I hadn't clicked that setting area in a while (maybe the first time this boot?) and did get a brief gray box that then a second later populated with all the buttons and settings. Now every time I click it again it appears instantly.
For a more balanced comparison, observe how long it takes for the new "Settings" app to open and how long interactions take, compared to Control Panel, and what's missing from the former that the latter has had for literally decades.
I'm far faster changing my default audio device with the new quick settings menu than going Start > Control Panel > Sound > Right click audio device > Set as Default. Now I just click the quick settings > the little sound device icon > chosoe a device.
I'm far faster changing my WiFi network with the new quick settings menu than going Start > Control Panel > Network and Sharing Center (if using Vista or newer) > Network Devices > right click network adapter > Connect / Disconnect > go through Wizard process to set up new network. Now I just click the quick settings, click the little arrow to list WiFi networks, choose the network, click connect. Way faster.
I'm also generally far faster finding whatever setting in the Settings menu over trying to figure out which tab on which little Control Panel widget some obscure setting is, because there's a useful search box that will pull up practically any setting these days. Sure, maybe if you had every setting in Control Panel memorized you could be faster, but I'm far faster just searching for the setting I'm looking for at the moment for anything I'm not regularly changing.
The new Settings area, now that it actually has most things, is generally a far better experience unless you had everything in Control Panel committed to muscle memory. I do acknowledge though there are still a few things that aren't as good, but I imagine they'll get better. For most things most users actually mess with on a regular basis, it seems to me the Settings app is better than Control Panel. The only thing that really frustrates me with Settings now on a regular basis is only being able to have one instance of the app open at a time, a dumb limitation.
Every time I'm needing to mess with something in ancient versions of Windows these days is now a pain despite me growing up with it. So many things nested in non-obvious areas, things hidden behind tab after tab of settings and menus. Right click that, go to properties, click that, go to properties on that, click that button, go to the Options tab, click Configure, and there you go that's where you set that value. Easy! Versus typing something like the setting you want to set into the search box in Settings and have it take you right to that setting.
But is this cache trustworthy or will it eventually lead you to click in the wrong place because the situation changed and now there's a new button making everything change place?
And even if every information takes a bit to figure out, it doesn't excuse taking a second to even draw the UI. If checking bluetooth takes a second, then draw the button immediately but disable interaction and show a loading icon, and when you get the blutooth information update the button, and so on for everything else.
As someone who routinely hops between WiFi networks, I've never seen a wrong value here.
And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?
And if we bothered keeping all that in memory, and kept using the CPU cycles to make sure it was actually accurate and up to date on the click six hours later, wouldn't people then complain about how obviously bloated it was? How is this not a constant battle of being unable to appease any critics until we're back at the Win 3.1 state of things with no Bluetooth devices, no WiFi networks, no dynamic changing or audio devices, etc?
And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
Rendering a volume slider or some icons shouldn't take half a second, regardless. e.g. speaking of Carmack, Wolfenstein: Enemy Territory hits a consistent 333 FPS (the max the limiter allows) on my 9 year old computer. That's 3 ms/full frame for a 3d shooter that's doing considerably more work than a vector wifi icon.
Also, you could keep the status accurate because it only needs to update on change events anyway, events that happen on "human time" (e.g. you plugged in headphones or moved to a new network location) last for a practical eternity in computer time, and your pre-loaded icon probably takes a couple kB of memory.
It seems absurd to me that almost any UI should fail to hit your monitor's refresh rate as its limiting factor in responsiveness. The only things that make sense for my computer to show its age are photo and video editing with 50 MB RAW photos and 120 MB/s (bytes, not bits) video off my camera.
It's not the drawing an icon to a screen that takes the half second, it's querying out to hardware on driver stacks designed for PCI WiFi adapters from the XP era along with all the other driver statuses.
It's like how Wi-Fi drivers would cause lag from querying their status, lots of poorly designed drivers and archaic frameworks for them to plug in.
And I doubt any hardware you had when Wolfenstein:ET came out rendered the game that fast. I remember it running at less than 60fps back in '03 on my computer. So slow, poorly optimized, I get better frame rates in Half Life. Why would anyone write something so buggy, unoptimized, and slow?!
You don't need to query the hardware to know the network interface is up. A higher level of the stack already knows that along with info like addresses, routes, DNS servers, etc.
IIRC it ran at 76 fps (higher than monitor refresh, one of the locally optimal frame rates for move speed/trick jumps) for me back then on something like an GeForce FX 5200? As long as you had a dedicated GPU it could hit 60 just fine. I think it could even hit 43 (another optimal rate) on an iGPU, which were terrible back then.
In any case, modern software can't even hit monitor refresh latency on modern hardware. That's the issue.
It's not just showing "is the interface up", it's showing current signal strength, showing current ssid, showing results from the recent poll of stations, etc.
And then doing the same for Bluetooth.
And then doing the same for screen rotation and rotation lock settings. And sound settings, And then another set of settings. And another set of settings. All from different places of the system configuration while still having the backwards compatibility of all those old systems.
It's not a slowness on painting it. It can do that at screen refresh rates no problem. It's a question of querying all these old systems which often result in actual driver queries to get the information.
43fps? Sure sounds slow to me. Why not 333fps on that hardware? So bloated, so slow.
You're just listing mechanisms for how it might be slow, but that doesn't really make it sensible. Why would the OS query hardware for something like screen rotation or volume? It knows these things. They don't randomly change. It also knows the SSID it's connected to and the results of the last poll (which it continuously does to see if it should move).
And yes it should cache that info. We're talking bytes. Less than 0.0001% of the available memory.
Things were different on old hardware because old hardware was over 1000x slower. On modern hardware, you should expect everything to be instantaneous.
And yet doing an ipconfig or netsh wlan show interfaces isn't always instantaneous depending on your hardware and the rest of your configuration. I can't tell you what all it's actually doing under the hood, but I've definitely seen variations of performance on different hardware.
Sometimes the devices and drivers just suck. Sometimes it's not the software's fault it's running at 43fps.
I'm hitting the little quick settings area on my exceptionally cheap and old personal laptop. I haven't experienced that slowness once. Once again I imagine the other stuff running interrupting all the OS calls and what not loading this information causes it to be slow.
I don't know what operating system you're talking about, but the bottleneck on my linux machine for asking for interfaces is the fact that stdout is write blocking.
I routinely have shy of 100 network interfaces active and `ip a` is able to query everything in nanoseconds.
Considering this whole conversation is about sometimes some people have a little bit of slowness drawing the quick settings area in Windows 11 and I gave commands like "netsh" it should be pretty dang obvious which OS we're talking about. But I guess some people have challenges with context clues.
And once again, on some Linux machines I've had over the years, doing an ip a command could hang or take a while if the device is in a bad state or being weird. It normally returns almost instantly, but sometimes has been slow to give me the information.
> And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?
Clearly better. Most of the buttons should also work instantly, most of the information should also be available instantly. The button layout is rendered instantly, so I can already figure out where I want to click without having to wait one second even if the button is not enabled yet, and by the time my mouse reaches it it will probably be enabled.
> And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
I've never seen the volume slider in Windows 98 take one second to render. Not even the start menu, which is much more complex, and which in Windows 11 often takes a second, and search results also show up after a random amount of time and shuffle the results around a few times, leading to many misclicks.
It doesn't even know if the devices are still attached (as it potentially hasn't tried interfacing them for hours) but should instantly be able to allow input to control them and fully understand their current status. Right. Makes sense.
And if you don't remember the volume slider taking several seconds to render on XP you must be much wealthier than me or have some extremely rose colored glasses. I play around with old hardware all the time and get frustrated with the unresponsiveness of old equipment with period accurate software, and had a lot of decent hardware (to me at least) in the 90s and 00s. I've definitely experienced lots of times of the start menu painting one entry after the other at launch, taking a second to roll out, seeking on disk for that third level menu in 98, etc.
Rose colored glasses, the lot of you. Go use an old 386 for a month. Tell me how much more productive you are after.
XP had gray boxes and laggy menus like you wouldn't believe. It didn't even do search in the start menu, and maybe that was for the best because even on an SSD its search functionality was dog slow.
A clean XP install in a VM for nostalgia's sake is fine, but XP as actually used by people for a while quickly ground to a halt because of all the third party software you needed.
The task bar was full of battery widgets, power management icons, tray icons for integrated drivers, and probably at least two WiFi icons, and maybe two Bluetooth ones as well. All of them used different menus that are slow in their own respect, despite being a 200KiB executable that looks like it was written in 1995.
And the random crashes, there were so many random crashes. Driver programmes for basic features crashed all the time. Keeping XP running for more than a day or two by using sleep mode was a surefire way to get an unusual OS.
Modern Windows has its issues but the olden days weren't all that great, we just tolerated more bullshit.
Honestly it behaves like the interface is some Electron app that has to load the visual elements from a little internal webserver. That would be a very silly way to build an OS UI though, so I don't know what Microsoft is doing.
Yep. I suspect GP has just gotten used to this and it is the new “snappy” to them.
I see this all the time with people who have old computers.
“My computer is really fast. I have no need to upgrade”
I press cmd+tab and watch it take 5 seconds to switch to the next window.
That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.
Yeah, I play around with retro computers all the time. Even with IO devices that are unthinkably performant compared to storage hardware actually common at the time these machines are often dog slow. Just rendering JPEGs can be really slow.
Maybe if you're in a purely text console doing purely text things 100% in memory it can feel snappy. But the moment you do anything graphical or start working on large datasets its so incredibly slow.
I still remember trying to do photo editing on a Pentium II with a massive 64MB of RAM. Or trying to get decent resolutions scans off a scanner with a Pentium III and 128MB of RAM.
64MB is about the size of (a big) L3 cache. Today's L3 caches have a latency of 3-12ns and throughput measured in hundreds of gigabytes per second. And yet we can't manage to get responsive UIs because of tons of crud.
My modern machine running a modern OS is still way snappier while actually loading the machine and doing stuff. Sure, if I'm directly on a tty and just running vim on a small file its super fast. The same on my modern machine. Try doing a few things at once or handle some large dataset and see how well it goes.
My older computers would completely lock up when given a large task to do, often for many seconds. Scanning an image would take over the whole machine for like a minute per page! Applying a filter to an image would lock up the machine for several seconds even for a much smaller image a much simpler filter. The computer cannot even play mp3's and have a responsive word processor, if you really want to listen to music while writing a paper you better have it pass through the audio from a CD, much less think about streaming it from some remote location and have a whole encrypted TCP stream and decompression.
These days I can have lots of large tasks running at the same time and still have more responsiveness.
I have fun playing around with retro hardware and old applications, but "fast" and "responsive" are not adjectives I'd use to describe them.
I struggle because everything you're saying is your subjective truth, and mine differs.
Aside from the seminal discussion about text input latency from Dan Luu[0] there's very little we can do to disprove anything right now.
Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.
Now it feels like there's no distinction between using a computer and asking it to do something heavy. The fact that I can't hear the harddisk screaming or the fan spin up (and have it be tied to something I asked the computer to do) might be related.
It becomes expectation management at some point, and nominally a "faster computer" in those days meant that those times I asked the computer to do something the computer would finish it's work quicker. Now it's much more about how responsive the machine will be... for a while, until it magically slows down over time again.
> Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.
This is exactly what I'm talking about. When I'm actually using my computer, its orders of magnitude faster. Things where I'd do one click and then practically have to walk away and come back to see if it worked happen in 100ms now. This is the machine being way faster and far more responsive.
Like, OK, some Apple IIe had 30ms latency on a key press compared to 50ms on a Haswell desktop with a decent refresh rate screen or 100ms on some Thinkpad from 2017, assuming these machines aren't doing anything.
But I'm not usually doing nothing when I want to press the key. I've got dozens of other things I want my computer to do. I want it listening for events on a few different chat clients. I want it to have several dozen web pages open. I want it to stream music. I want it to have several different code editors open with linters examining my code. I want it paying attention if I get new mail. I want it syncing directories from this machine to other machines and cloud storage. I want numerous background agents handling tons of different things. Any one of those tasks would cause that Apple IIe to crawl instantly and it doesn't even have the memory to render a tiny corner of my screen.
The computer is orders of magnitude "faster", in that it is doing many times as much work much faster even when it's seemingly just sitting there. Because that's what we expect from our computers these days.
Tell me how fast a button press is when you're on a video call on your Apple IIe while having a code linter run while driving a 4K panel and multiple virtual desktops. How's its Unicode support?
The newish windows photo viewer in Win 10 is painfully slow and it renders a lower res preview first, but then the photo seems to move when the full resolution is shown. The photo viewer in windows 7 would prerender the next photo so the transition to the next one would be instant. The is for 24 megapixel photos, maybe 4mb jpegs.
So the quality has gone backwards in the process of rewriting the app into the touch friendly style. A lot of core windows apps are like that.
Note that the windows file system is much slower than the linux etx4, I don't know about Mac filesystems.
There's a problem when people who aren't very sensitive to latency and try and track it, and that is that their perception of what "instant" actually means is wrong. For them, instant is like, one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.
If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from awhile ago that did a good job showing it off that's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960
Sorry if I'm too presumptuous, however; you might be completely correct and instant is instant in your case.
Sure, but there's not limit to what people can decide to care about. There will always be people who want more speed and less latency, but the question is: are they right to do so?
I'm with the person you're responding. I use the regular suite of applications and websites on my 2021 M1 Macbook. Things seem to load just fine.
> For someone who cares about latency, instant is less than 10 milliseconds
Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.
I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.
The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.
I think you're right, maybe the disconnect is UI slowness?
I am annoyed at the startup time of programs that I keep closed and only open infrequently (Discord is one of those, the update loop takes a buttload of time because I don't use it daily), but I'm not annoyed when something I keep open takes 1-10s to open.
But when I think of getting annoyed it's almost always because an action I'm doing takes too long. I grew up in an era with worse computers than we have today, but clicking a new list was perceptibly instant- it was like the computer was waiting for the screen to catch up.
Today, it feels like the computer chugs to show you what you've clicked on. This is especially true with universal software, like chat programs, that everyone in an org is using.
I think Casey Muratori's point about the watch window in visual studio is the right one. The watch window used to be instant, but someone added an artificial delay to start processing so that the CPU wouldn't work when stepping fast through the code. The result is that, well, you gotta wait for the watch window to update... Which "feels bad".
I fear that such comments are similar to the old 'a monster cable makes my digital audio sound more mellow!'
The eye percieves at about 10 hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
Not sure what this means; the eye doesn’t perceive anything. Maybe you’re thinking of saccades or round-trip response times or something else? Those are in the ~100ms range, but that’s different from whether the eye can see something.
Well if you believe that, start up a video game with a framerate limiter and set your game's framerate limit to 10 fps and tell me how much you enjoy the experience. By default your game will likely be running at either 60 fps or 120 fps if you're vertical synced (depends on your monitor's refresh rate). Make sure to switch back and forth between 10 and 60/120 to compare.
Even your average movie captures at 24 hz. Again, very likely you've never actually just compared these things for yourself back to back, as I mentioned originally.
>The eye percieves at about 10 hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
It takes effectively no effort to conduct such a study yourself. Just try re-encoding a video at different frame rates up to your monitor refresh rate. Or try looking at a monitor that has a higher refresh rate than the one you normally use.
Modern operating systems run at 120 or 144 hz screen refresh rates nowadays, I don't know if you're used to it yet but try and go back to 60, it should be pretty obivous when you move your mouse.
You say snappy, but what is snappy?
I right now have a toy project in progress in zig that uses users perception as a core concept.
Rarely one can react to 10ms jank. But when you get to bare metal development 10ms becomes 10 million of reasonably high level instructions that can be done. Now go to website, click. If you can sense a delay from JS this means that jank is approximately 100ms; does clicking that button, really should be 100 million instructions?
When you look close enough you will find that not only it’s 100 million instructions but your operating system along with processor made tens of thousands of tricks in the background to minimize the jank and yet you still can sense it.
Today even writing in non optimized, unpopular languages like Prolog is viable because hardware is mindblowing fast, and yet some things are slow, because we utilize that speed to decrease development costs.
I notice a pattern in the kinds of software that people are complaining about. They tend to be user-facing interactive software that is either corporate, proprietary, SaaS, “late-stage” or contains large amounts of telemetry. Since I tend to avoid such software, the vast majority of software I use I have no complaints about with respect to speed and responsiveness. The biggest piece of corporate bloatware I have is Chromium which (only) takes 1-2 seconds to launch and my system is not particularly powerful. In the corporate world bloat is a proxy for sophistication, for them it is a desirable feature so you should expect it. They would rather you use several JavaScript frameworks when the job could be done with plain HTML because it shows how rich/important/fashionable/relevant/high-tech they are.
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
> Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
Same thing happens with UI & Website design. When the designers and front-end devs all have top-spec MacBooks, with 4k+ displays, they design to look good in that environment.
Then you ship to the rest of the world which are still for the most part on 16:9 1920x1080 (or god forbid, 1366x768), low spec windows laptops and the UI looks like shit and is borderline unstable.
Now I don't necessarily think things should be designed for the lowest common denominator, but at the very least we should be taking into consideration that the majority of users probably don't have super high end machines or displays. Even today you can buy a brand new "budget" windows laptop that'll come with 8GB of RAM, and a tiny 1920x1080 display, with poor color reproduction and crazy low brightness - and that's what the majority of people are using, if they are using a computer at all and not a phone or tablet.
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
Oh yeah, I didn't even touch on devs being used to working on super faster internet.
If you're on Mac, go install Network Link Conditioner and crank that download an upload speed way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
When I bought my current laptop, it was the cheapest one Costco had with 8 gigs of memory, which was at the time plenty for all but specialized uses. I've since upgraded it to 16, which feels like the current standard for that.
But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.
Most cheap laptops these days ship with only one stick of RAM, and thus are only operating in single-channel mode. By adding another memory module, you can operate in dual-channel mode which can increase performance a lot. You can see the difference in performance by running a full memory test in single-channel mode vs multi-channel mode with a program like memtest86 or memtest86+ or others.
A mix of both. There are large number of websites that are inefficiently written using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency.
I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
To note, people will have wildly different tolerance to delays and lag.
On the extreme, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened the laptop the closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous when for other it's 500ms too slow.
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.
This is absolutely remarkable inefficiency considering the application's core functionality (media players) was perfected a quarter century ago.
You're a pretty bad sample, that machine you're talking about probably cost >$2,000 new; and if it's an M-series chip; well that was a multi-generational improvement.
I (very recently I might add) used a Razer Blade 18, with i9 13950HX and 64G of DDR5 memory, and it felt awfully slow, not sure how much of that is Windows 11's fault however.
My daily driver is an M2 Macbook Air (or a Threadripper 3970x running linux); but the workers in my office? Dell Latitudes with an i5, 4 real cores and 16G of RAM if they're lucky... and of course, Windows 11.
Don't even ask what my mum uses at home, it cost less than my monthly food bill; and that's pretty normal for people who don't love computers.
One example is Office. Microsoft is going back to preloading office during Windows Boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload office. But today, what is Office doing that it needs to offload its startup to running at boot?
How long did your computer take to start up, from power off (and no hibernation, although that presumably wasn't a thing yet), the first time you got to use a computer?
How long did it take the last time you had to use an HDD rather than SSD for your primary drive?
How long did it take the first time you got to use an SSD?
How long does it take today?
Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?
> Almost everything loads instantly on my 2021 MacBook
Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.
> 5 year old mobile CPUs load modern SPA web apps with no problems.
"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".
This should sound like clearing a very low bar, but it doesn't seem to.
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software because it really isn't that difficult. You just can't grab electron and the lastest javascript react framework craze.
These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to javascript or python because that's what they learned first.
> These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning assembly and sticking to C++ or PERL because that's what they learned first.
Why stop at C++? Is that what you happen to be comfortable with? Couldn't you create even faster software if you went down another level? Why don't you?
Couldn't you create even faster software if you went down another level? Why don't you?
No and if you understood what makes software fast you would know that. Most software is allocating memory inside hot loops and taking that out is extremely easy and can easily be a 7x speedup. Looping through contiguous memory instead of chasing pointers through heap allocated variables is another 25x - 100x speed improvement at least. This is all after switching from a scripting language, which is about a 100x in itself if the language is python.
It isn't about the instructions it is about memory allocation and prefetching.
Sorry but it is absolutely the case that there are optimizations available to someone working in assembly that are not available to someone working in C++.
You are probably a lazy or inexperienced engineer if you choose to work in C++.
In fact, there are optimizations available at the silicon level that are not available in assembly.
You are probably a lazy or inexperienced engineer if you choose to work in assembly.
I'm talking about speeding software up by 10x-100x by language choice, then 7x with extremely minimal adjustments (allocate memory outside of hot loops), then 25x - 100x with fairly minimal design changes (use vectors, loop through them straight).
I'm also not saying people are lazy, I'm saying they don't know that with something like modern C++ and a little bit of knowledge of how to write fast software MASSIVE speed gains are easy to get.
You are helping make my point here, most programmers don't realize that huge speed gains are low hanging fruit. They aren't difficult, they don't mean anything is contorted or less clear (just the opposite), they just have to stop rationalizing not understanding it.
I say this with knowledge of both sides of the story instead of guessing based on conventional wisdom.
So you agree there’s a trade off between developer productivity and optimization (coding in assembly isn’t worth it, but allocating memory outside of hot loops is)
Are you seriously replying and avoiding everything we both said? I'll simplify it for you:
Writing dramatically fast software that is 1,000x or even 10,000 times faster than a scripting language takes basically zero effort once you know how to do it and these assembly optimization are a myth that you would have already shown me if you could.
“Zero effort once you know how to do it” is another way of saying “time and effort.”
Congratulations you’ve discovered the value of abstractions!
I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write). Not my fault you stake out indefensible positions.
Your original comment was saying you have to give up features and development speed to have faster software. I've seen this claim before many times, but it's always from people rationalizing not learning anything beyond the scripting languages they learned when they got in to programming.
I explained to you exactly why this is true, and it's because writing fast software just means doing some things slightly differently with a basic awareness of what makes programs fast, not because it is difficult or time consuming. Most egregiously bad software is probably not even due to optimization basics but from recomputing huge amounts of unnecessary results over and over.
What you said back is claims but zero evidence or explanation of anything. You keep talking about assembly language, but it has nothing to do with getting huge improvements for no time investment, because things like instruction count are not where the vast majority of speed improvements come from.
I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write).
This is a hallucination that has nothing to do with your original point. The vast majority of software could be sped up 100x to 1000x easily if they were written slightly different. Asm optimizations are extremely niche with modern CPUs and compilers and the gains are minuscule compared to C++ that is already done right. This is an idea that permeates through inexperienced programmers, that asm is some sort of necessity for software that runs faster than scripting languages.
Go ahead and show me what specifically you are talking about with C++, assembly or any systems language or optimization.
Show me where writing slow software saves someone so much time, show me any actual evidence or explanation of this claim.
So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it. That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.
> that asm is some sort of necessity for software that runs faster than scripting languages.
It seems you're not tracking the flow of the conversation if you believe this is what I'm saying. I am saying there is always a way to make things faster by sacrificing other things developer productivity, feature sets, talent pool, or distribution methods. You agree with me, it turns out!
So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it.
Show me what it is I said that makes you think that.
That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.
Where did I say any of this? I could teach anyone to make faster software in an hour or two, but myths like the ones you are perpetuating make people think it's difficult or faster software is more complicated.
You originally said that making software faster 'decreases velocity and sacrifices features' but you can't explain or backup any of that.
You agree with me, it turns out!
I think what actually happened is that you made some claims that get repeated but they aren't from your actual experience and you're trying to avoid giving real evidence or explanations so you keep trying to shift what you're saying to something else.
The truth is that if someone just learns to program with types and a few basic techniques they can get away from writing slow software forever and it doesn't come at any development speed, just a little learning up front that used to be considered the basics.
Next time you reply show me actual evidence of the slow software you need to write to save development time. I think the reality is that this is just not something you know a lot about, but instead of learning about it you want to pretend there is any truth to what you originally said. Show me any actual evidence or explanation instead of just making the same claims over and over.
> I could teach anyone to make faster software in an hour or two,
Is one or two hours of two engineers' time more than zero hours, or no?
> just a little learning up front
Is a little learning more than zero learning, or no?
IMO your argument would hold a lot more weight if people felt like their software (as users) is slow, but many people do not. Save for a few applications, I would prefer they keep their same performance profile and improve their feature set than spend any time doing the reverse. And as you have said multiple times now: it does indeed take time!
If your original position was what it is now, which is "there's low hanging fruit," I wouldn't disagree. But what you said is there's no tradeoff. And of course now you are saying there is a tradeoff... so now we agree! Where any one person should land on that tradeoff is super project-specific, so not sure why you're being so assertive about this blanket statement lol.
Now learning something new for a few hours means we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity." ?
You made up stuff I didn't say, you won't back up your claims with any sort of evidence, you keep saying things that aren't relevant, what is the point of this?
This thread is john carmack saying the world could get by with cheaper computers if software wasn't so terrible and you are basically trying to argue with zero evidence that software needs to be terrible.
Why can't you give any evidence to back up your original claim? Why can't you show a single program fragment or give a single example?
For what it is worth, there is room for improvement in how people use scripting languages. I have seen Kodi extensions run remarkably slowly and upon looking at their source code to see why, I saw that everything was being done in a single thread with blocking on relatively slow network traffic. There was no concurrency being attempted at all, while all of the high performance projects I have touched in either C or C++ had concurrency. The plugin would have needed a major rewrite to speed things up, but it would have made things that took minutes take a few seconds if it were done. Unfortunately, doing the rewrite was on the wrong side of a simple “is it worth the time” curve, so I left it alone:
Just today, I was thinking about the slow load times of a bloated Drupal site that heard partially attributable to a YouTube embed. I then found this, which claims to give a 224x performance increase over YouTube’s stock embed (and shame on YouTube for not improving it):
In the past, I have written electron applications (I had tried Qt first, but had trouble figuring out how what I wanted after 20 hours of trying, and got what I needed from electron in 10). The electron applications are part of appliances that are based on the Raspberry Pi CM4. The electron application loads in a couple seconds on the CM4 (and less than 1 second on my desktop). Rather than using the tools web developers often use that produce absurd amounts of HTML and JS, I wrote nearly every line of HTML and JavaScript by hand (as I would have done 25 years ago) such that it was exactly what I needed and there was no waste. I also had client side JavaScript code running asynchronously after the page loaded. To be fair, I did use a few third party libraries like express and an on screen keyboard, but they were relatively light weight ones.
Out of curiosity, I did a proof of concept port of one application from electron to WebKitGTK with around 100 lines of C. The proof of concept kept nodejs running as a local express server that was accessed by the client side JavaScript running in the WebKitGTK front end via HTTP requests. This cut memory usage in half and seemed to launch slightly faster (although I did not measure it). I estimated that memory usage would be cut in half again if I rewrote the server side JavaScript in C. Memory usage would likely have dropped even more and load times would have become even quicker if I taught myself how to use a GUI toolkit to eliminate the need for client side HTML and JavaScript, but I had more important things to do than spend many more hours to incrementally improve what already worked (and I suspect many are in the same situation).
To give a final example, I had a POSIX shell script that did a few tasks, such as polling a server on its LAN for configuration updates to certain files and doing HA failover of another system were down, among other things. I realized the script iterated too slowly, so I rewrote it to launch a subshell as part of its main loop that does polling (with file locking to prevent multiple sub shells from doing polling at the same time). This allowed me to guarantee HA failover always happens within 5 seconds of another machine going down, and all it took were using concepts from C (threading and locking). They were not as elegant as actual C code (since subshells are not LWPs and thus need IPC mechanisms like file locks), but they worked. I know polling is inefficient, but it is fairly foolproof (no need to handle clients being offline when it is time for a push), robustness was paramount and development time was needed elsewhere.
In any case, using C (or if you must, C++) is definitely better than a scripting language, provided you use it intelligently. If you use techniques from high performance C code in scripting languages, code written in them often becomes many times faster. I only knew how to do things in other languages relatively efficiently because I was replicating what I would be doing in C (or if forced, C++). If I could use C for everything, I would, but I never taught myself how to do GUIs in C, so I am using my 90s era HTML skills as a crutch. However, reading this exchange (and writing this reply) has inspired me to make an effort to learn.
Javascript was made in a few weeks so that some sort of programmability could be built in to web pages. Python was made in the 90s as a more modern competition to perl for scripting.
Modern C++ and systems language didn't exist and neither was made with the intention that people would write general purpose interactive programs that leveraged computers 1,000x faster so that software could run 1,000x slower.
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote calls. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
Why not log them to a file and cron a script to upload the data? Even if the feature request is nonsensical, you can architect a solution that respect the platform's constraints. It's kinda like when people drag in React and Next.js just to have a static website.
You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.
Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.
I think it’s a little more nuanced than the broad takes make it seem.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
The proliferation of Electron apps is one of the main things. Discord, Teams, Slack, all dogshit slow. Uses over a gigabyte of RAM, and uses it poorly. There's a noticeable pause any time you do user input; type a character, click a button, whatever it is, it always takes just barely too long.
All of Microsoft's suite is garbage. Outlook, Visual Studio, OneNote.
Edge isn't slow, (shockingly) but you know what is? Every webpage. The average web page has 200 dependencies it needs to load--frameworks, ads, libraries, spyware--and each of those dependencies has a 99% latency of 2 seconds, which means on average, at least two of those dependencies takes 2 seconds to load, and the page won't load until they do.
Steam is slow as balls. It's 2025 and it's a 32 bit application for some reason.
At my day job, our users complain that our desktop application is slow. It is slow. We talk about performance a lot and how it will be a priority and it's important. Every release, we get tons of new features, and the software gets slower.
My shit? My shit's fast. My own tiny little fiefdom in this giant rat warrens is fast. It could be faster, but it's pretty fast. It's not embarrassing. When I look at a flamegraph of our code when my code is running, I really have to dig in to find where my code is taking up time. It's fine. I'm--I don't feel bad. It's fine.
I love this industry. We are so smart. We are so capable of so many amazing things. But this industry annoys me. We so rarely do any of them. We're given a problem, and the solution is some god forsaken abomination of an electron app running javascript code on the desktop and pumping bytes into and out of a fucking DOM. The most innovative shit we can come up with is inventing a virtual dumbass and putting it into everything. The most value we create is division, hate, fear, and loathing on social media.
Online Word (or Microsoft 365, or whatever it is called) regularly took me 2 minutes to load a 120 page document. I'm being very literal here. You could see it load in real time approximately 1 page a second. And it wasn't a network issue, mind you. It was just that slow.
Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.
Try forcefully closing VSCode and your browser, and see how long it takes to open them again. The same is true for most complex webpages/'webapps' (Slack, Discord, etc).
A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.
The Nintendo Switch on a chipset that was outdated a decade ago can run Tears of the Kingdom. It's not sensible that modern hardware is anything less than instant.
That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.
Nintendo comes up with a working game first and then adds the story - BotW/TotK are post-apocalyptic so they don't have to show you too many people on screen at once.
The other way you can tell this is that both games have the same story even though one is a sequel! Like Ganon takes over the castle/Hyrule and then Link defeats him, but then they go into the basement and somehow Ganon is there again and does the exact same thing again? Makes no sense.
The framing device for The Legend of Zelda games is that it's a mythological cycle in which Link, Ganon, and Zelda are periodically reborn and the plot begins anew with new characters. It lets them be flexible with the setting, side quests, and characters as the series progresses and it's been selling games for just shy of forty years.
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
It really depends on the software. I have the top-of-the-line M4 Max laptop with 128GB of memory. I recently switched from Zotero [1] to using papis [2] at the command line.
Zotero would take 30 seconds to a minute to start up. papis has no startup time as it's a cli app and searching is nearly instantaneous.
There is no reason for Zotero to be so slow. In fact, before switching I had to cut down on the number of papers it was managing because at one point it stopped loading altogether.
It's great you haven't run into poorly optimized software, but but not everyone is so lucky.
It vastly depends on what software you're forced to use.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack.
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on teams has the same delayed feeling as Slack. Every iteraction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen" but there's probably a real name out there for basically waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant, it should not take 300-500ms or so of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
Some of this is due to the adoption of React. GUI optimization techniques that used to be common are hard to pull off in the React paradigm. For instance, pre-rendering parts of the UI that are invisible doesn't mesh well with the React model in which the UI tree is actually being built or destroyed in response to user interactions and in which data gets loaded in response to that, etc. The "everything is functional" paradigm is popular for various legitimate reasons, although React isn't really functional. But what people often forget is that functional languages have a reputation for being slow...
I don't get any kind of spinner on Outlook opening emails. Especially emails which are pure text or only lightly stylized open instantly. Even emails with calendar invites load really fast, I don't see any kind of spinner graphic at all.
Running latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th gen i5, while also on a Teams call a ton of other things active on my machine.
This is also a machine with a lot of corporate security tools sapping a lot of cycles.
I don't doubt it's happening to you, but I've never experienced it. And I'm not exactly using bleeding edge hardware here. A several year old i5 and a Ryzen 3 3200U (a cheap 2019 processor in a cheap Walmart laptop).
Maybe your IT team has something scanning every email on open. I don't know what to tell you, but it's not the experience out of the box on any machine I've used.
You're probably right, I'm likely massively underestimating the time, it's long enough to be noticable, but not so long that it feels instantly frustrating the first time, it just contributes to an overall sluggishness.
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better people would actually use them. Its not like you can't get them to work.
I can never tell if all of these comments are exaggerations to make a point, or if some people really have computers so slow that everything takes 20 seconds to launch (like the other comment claims).
I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.
I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.
I have a 12 core Ryzen 9 with 64GB of RAM, and clicking the emoji reaction button in Signal takes long enough to render the fixed set of emojis that I've begun clicking the empty space where I know the correct emoji will appear.
For years I've been hitting the Windows key, typing the three or four unique characters for the app I want and hitting enter, because the start menu takes too long to appear. As a side note, that no longer works since Microsoft decided that predictability isn't a valuable feature, and the list doesn't filter the same way every time or I get different results depending on how fast I type and hit enter.
Lots of people literally outpace the fastest hardware on the market, and that is insane.
I have a 16 core Ryzen 9 with 128GB of RAM. I have not noticed any slowness in Signal. This might be caused by differences in our operating systems. It sounds like you run Windows. I run Gentoo Linux.
Apple unlike the other Silicon Valley giants has figured out that latency >>> throughput. Minimizing latency is much more important for making a program "feel" fast than maximizing latency. Some of the apps I interact with daily are Slack, Teams (ugh), Gmail, and YouTube and they are all slow as dogshit.
You are using a relatively high end computer and mobile device. Go and find a cheap laptop x86 and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects the performance).
What is frustrating though that until relatively recently these devices would work fine with JS heavy apps and work really well with anything that is using a native toolkit.
They're comparing these applications to older applications that loaded instantly on much slower computers.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
The major slowdown of modern applications is network calls. Spend 50-500ms a pop for a few kilos of data. Many modern applications will spin up a half dozen blocking network calls casually.
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast but apps take seconds to load because on the backend there's a lot of round-trips to the DB and back, and the query mix is unoptimized because there are no DBAs anymore.
It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
Even worse, our bottom most abstraction layers pretend that we are running on a single core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), it's not really a lack of the industry's engineering capability to speed things up that was the core problem, it is just an instance the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it and dog-slow is just one of many the dimensions in which they're terrible.
But each vendor only develop a few software and generally supports only three platforms -/+ one. It’s so damning when I see projects reaching out for electron, when they only support macOS and Windows. And software like Slack has no excuse for being this slow on anything other than latest gen cpu and 1gb internet connection.
Users only want 5% of the features of the few programs they use. However everyone has a different list of features and a different list of programs. And so to get a market you need all the features on all the programs.
Did people make this exchange or did __the market__? I feel like we're assigning a lot of intention to a self-accelerating process.
You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is a yet another abstraction.
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? Because while abstractions like that exist (i.e. a webserver frameworks, reactivity, SQL and ORMs etc), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. These are usually in the domain/business application and often not something that made anything quicker to develop or anything, but instead created by a developer that just couldn't help themselves
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
Yep. I’m a DBRE, and can confirm, it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather the fault of poor schema and query design.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
The fact that this was seen as an acceptable design decision both by the creators, and then taken up by the industry is in an of itself a sign of a serious issue.
I made a vendor run their buggy and slow software on a Sparc 20 against their strenuous complaints to just let them have an Ultra, but when they eventually did optimize their software to run efficiently (on the 20) it helped set the company up for success in the wider market. Optimization should be treated as competitive advantage, perhaps in some cases one of the most important.
> Optimization should be treated as competitive advantage
That's just so true!
The right optimizations at the right moment can have a huge boost for both the product and the company.
However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.
It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
> Your understanding of how bounds checking works in modern languages and compilers is not up to date.
One I am familiar with is Swift - which does exactly this because it’s a library feature of Array.
Which languages will always be able to determine through function calls, indirect addressing, etc whether it needs to bounds check or not?
And how will I know if it succeeded or whether something silently failed?
> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong
I agree. And note this is an example of a scenario you can encounter in other forms.
> Do you have any examples at all? Or is this just speculation?
Yes. Java and python are not competitive for graphics and audio processing.
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
There are inevitably those who don't know how to program but are responsible for hiring those that can. Language popularity is an obvious metric with good utility for that case.
Even so you haven't provided any compelling evidence that C or C++ made it's decisions to be more appealing or more popular.
I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".
It's a little bit ham-fisted, as the author was shirking decades of compile optimisations also, and it's not apples to apples as he's comparing desktop class hardware with what is essentially laptop hardware; but it's also interesting to see that a lot of the performance gains really weren't that great actually. he observes a doubling of performance in 15 years! Truth be told most people use laptops now, and truth be told 20 years ago most people used desktops, so it's not totally unfair.
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.
Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However these days the majority of the bad attacks are social and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well there is every reason to believe that - there is unlikely to evidence going back that far).
> However these days the majority of the bad attacks are social
You're going to have to cite a source for that.
Bounds checking is one mechanism that addresses memory safety vulnerabilities. According to MSFT and CISA[1], nearly 70% of CVEs are due to memory safety problems.
You're saying that we shouldn't solve one (very large) part of the (very large) problem because there are other parts of the problem that the solution wouldn't address?
While I do not have data comparing them, I have a few remarks:
1. Scammer Payback and others are documenting on-going attacks that involve social engineering that are not getting the attention that they deserve.
2. You did not provide any actual data on the degree to which bounds checks are “large”. You simply said they were because they are a subset of a large group. There are diseases that only affect less than 100 people in the world that do not get much attention. You could point out that the people affected are humans, which is a group that consists of all people in the world. Thus, you can say that one of these rare diseases affects a large number of people and thus should be a priority. At least, that is what you just did with bounds checks. I doubt that they are as rare as my analogy would suggest, but the point is that the percentage is somewhere between 0 and 70% and without any real data, your claim that it is large is unsubstantiated. That being said, most C software I have touched barely uses arrays for bound checks to be relevant, and when it does use arrays, it is for strings. There are safe string functions available for use like strlcpy() and strlcat() that largely solve the string issues by doing bounds checks. Unfortunately, people keep using the unsafe functions like strcpy() and strcat(). You would have better luck if you suggested people use safe string handling functions rather than suggest compilers insert bounds checks.
3. Your link mentions CHERI, which a hardware solution for this problem. It is a shame that AMD/Intel and ARM do not modify their ISAs to incorporate the extension. I do not mean the Morello processor, which is a proof of concept. I mean the ISA specifications used in all future processors. You might have more luck if you lobby for CHERI adoption by those companies.
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus risk of negative consequences by that same factor of X.
Don't forget the law of large numters. 5% performance hit on one system is one thing, 5% across almost all of the current computing landscape is still a pretty huge value.
That's a fairly worthless metric. What you want is "Cost of cyberattacks / Revenue from attacked systems."
> We're really bad at measuring the secondary effects of our short-sightedness.
We're really good at it. There's an entire industry that makes this it's core competency... insurance. Which is great because it means you can rationalize risk. Which is also scary because it means you can rationalize risk.
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
depends on whether the fact that software can be finished will ever be accepted. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't) then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does
Even the fact that value keeps increasing doesn't mean it's a good idea. It's a good idea if it keeps increasing more than other value. If a piece of software is more robust against attacks then the value in that also keeps increasing over time, possibly more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.
If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is everything down the stack, not in C.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
>Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when NodeJS came out that bridged client and server side coding. And apparently their repos can be a bit of a security nightmare nowadays- so the minimalist languages with limited codebase do have their pros.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
1000x referred to the hardware capability, and that's not a rarity that is here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
The sad thing is that even running DOS software in DOSBox (or in QEMU+FreeDOS), or Amiga software in UAE, is much faster than any native software I have run in many years on any modern systems. They also use more reasonable amounts of storage/RAM.
Animations is part of it of course. A lot of old software just updates the screen immediately, like in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster for instance, but it does not magically fix all apps unfortunately.
I think it'd be pretty funny if to book travel in 2035 you need to use a travel agent that's objectively dumber than a human. We'd be stuck in the eighties again, but this time without each other to rely on.
Of course, that would be suicide for the industry. But I'm not sure investors see that.
I don't think we are gonna go there. Talking is cumbersome. There's a reason, besides social anxiety that people prefer to use self-checkout and electronically order fastfood. There are easier ways to do a lot of things than with words.
I'd bet on maybe ad hoc ai designed ui-s you click but have a voice search when you are confused about something.
If you know what you want then not talking to a human is faster. However if you are not sure a human can figure out. I'm not sure I'd trust a voice assistant - the value in the human is an informed opinion which is hard to program, but it is easy to program a recommendation for whatever makes the most profit. Of course humans often don't have an informed opinion either, but at least sometimes they do, and they will also sometimes admit it when they don't.
> the value in the human is an informed opinion which is hard to program
I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.
20 years ago when I was at McDonalds there would be several customers per shift (so many 1 in 500?) who didn't know what they wanted and asked for a recommendation. Since I worked there I ate there often enough to know if the special was something I liked or not.
Bless your souls. I'm not saying it doesn't happen. I just personally had only bad experiences so I actively avoid human interactive input in my commercial activity.
> I actively avoid human interactive input in my commercial activity
Not to mention that the "human input" can be pre-scripted to urge you to purchase more, so it's not genuinely a human interaction, it's only a human delivering some bullshit "value add" marketing verbiage.
Search is being replaced by LLM chat. Agent workflows are going to get us to a place where people can rally software to their own purposes. At that point, they don't have to interact with the web front end, they can interact with their own personal front-end that is able to navigate your backend.
Today a website is easier. But just like there's a very large percentage of people doing a great many things from their phone instead of tying themselves to a full-blown personal computer, there will be an increasing number of people who send their agents off to get things done. In that scenario, the user interface is further up the stack than a browser, if there's a browser as typically understood in the stack at all.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that are constantly allocating memory to any small operation and pointer chasing ever variable because the type is dynamic is part of the problem, then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
That's about a 168x difference. That was from before Moores law started petering out.
For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.
It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.
As the other guy said, top of the line CPUs today are roughly ~100x faster than 20 years ago. A single core is ~10x faster (in terms of instructions per second) and we have ~10x the number of cores.
I think on year 2001 GHz CPU should be a performance benchmark that every piece of basic non-high performance software should execute acceptably on.
This is kind of been a disappointment to me of AI when I've tried it. This has kind of been a disappointment to me of AI when I've tried it. Llm should be able to Port things. It should be able to rewrite things with the same interface. It should be able to translate from inefficient languages to more efficient ones.
It should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.
Heck I remember powerbuilder in the mid 90s running pretty well on 200 mhz CPUs. It doesn't even really interpreted stuff. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?
My limited experience was trying to take the unix sed source code and have AI port it into a jvm language, and it could do the most basic operations, but utterly failed at even the intermediate sed capabilities. And then optimize? Nope
Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.
I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.
Shower thoughts include whether there are languages that have features, other than through their popularity and representation in training corpuses, help us get from natural language to efficient code?
I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.
It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.
So I've worked for Google (and Facebook) and it really drives the point home of just how cheap hardware is and how not worth it optimizing code is most of the time.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "mili-SWEs" meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people or hire fewer people but get more hardware within their current budgets.
I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands. So if you spend 1 SWE year working on optimization acrosss your project and you're not saving 5000 CPU cores, it's a net loss.
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old terminals from 30-40+ years ago that were text-based had some incredibly efficent interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
I don't know how to solve that problem or even if it will ever be "solved".
It will not be “solved” because it’s a non-problem.
You can run a thought experiment imagining an alternative universe where human resource were directed towards optimization, and that alternative universe would look nothing like ours. One extra engineer working on optimization means one less engineer working on features. For what exactly? To save some CPU cycles? Don’t make me laugh.
Except you’re self selecting for a company that has high engineering costs, big fat margins to accommodate expenses like additional hardware, and lots of projects for engineers to work on.
The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economical calculus behind, most people just do “what Google does”, which explains a lot of the disfunction.
I think the parent's point is that if Google with millions of servers can't make performance optimization worthwhile, then it is very unlikely that a smaller company can. If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
> The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
That's debatable. Performance optimization almost always lead to complexity increase. Doubled performance can easily cause quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
I think it's the reverse: a small company doesn't have the liquidity, buying power or ability to convert more resource into more money like Google.
And of course a lot of small companies will be paying Google with a fat margin to use their cloud.
Getting by with less resources, or even on-premise reduced hardware will be a way bigger win. That's why they'll pay a DBA full time to optimize their database needs to reduce costs 2 to 3x the salary. Or have full team of infra guys mostly dealing with SRE and performance.
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
The key part here is "machine utilization" and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects but even early on in my time at Google Borg had already adopted shared machine usage and therew was a whole system of resource quota implemented via cgroups.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization" because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That could be double or tripling effort.
> I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands.
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
Maybe not to the same extent, but an AWS EC2 m5.large VM with 2 cores and 8 GB RAM costs ~$500/year (1 year reserved). Even if your engineers are being paid $50k/year, that's the same as 100 VMs or 200 cores + 800 GB RAM.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
A subtext here may be his current AI work. In OP, Carmack is arguing, essentially, that 'software is slow because good smart devs are expensive and we don't want to pay for them to optimize code and systems end-to-end as there are bigger fish to fry'. So, an implication here is that if good smart devs suddenly got very cheap, then you might see a lot of software suddenly get very fast, as everyone might choose to purchase them and spend them on optimization. And why might good smart devs become suddenly available for cheap?
I think its a bad argument though. If we had to stop with the features for a little while and created some breathing room, the features would come roaring back. There'd be a downturn sure but not a continuous one.
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
We have self driving cars, amazing advancement in computer graphics, dead reckoning of camera position from visual input...
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
Single core performance has improved, but at a much slower rate than I experienced as a kid.
Over the last 10 years, we are something like 120% improvement in single core performance.
And, not for nothing, efficiency has become much more important. More CPU performance hasn't been a major driving factor vs having a laptop that runs for 12 hours. It's simply easier to add a bunch of cores and turn them all off (or slow them down) to gain power efficiency.
Not to say the performance story would be vastly different with more focus on performance over efficiency. But I'd say it does have an effect on design choices.
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomena is innovative, and all of these objectively are.
> * VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
I used VS2005 a little bit in the past few years, and I was surprised to see that it contains most of the features that I want from an IDE. Honestly, I wouldn't mind working on a C# project in VS2005 - both C# 2.0 and VS2005 were complete enough that they'd only be a mild annoyance compared to something more modern.
> partial autopilot features aside from 1970's cruise control
Radar cruise control was a fairly common option on mid-range to high-end cars by 2007. It's still not standard in all cars today (even though it _is_ standard on multiple economy brands). Lane departure warning was also available in several cars. I will hand it to you that L2 ADAS didn't really exist the way it does today though.
I worked for a 3rd party food delivery service in the summer of 2007. Ordering was generally done by phone, then the office would text us (the drivers) order details for pickup & delivery. They provided GPS navigation devices, but they were stand-alone units that were slower & less accurate than modern ones, plus they charged a small fee for using it that came out of our pay.
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
We watched a stream of the 1994 World Cup. There was a machine at MIT which forwarded the incoming video to an X display window
xhost +machine.mit.edu
and we could watch it from several states away. (The internet was so trusting in those days.)
To be sure, it was only a couple of frames per second, but it was video, and an audience collected to watch it.
> EV Cars (that anyone wanted to buy)
People wanted to buy the General Motors EV1 in the 1990s. Quoting Wikipedia, "Despite favorable customer reception, GM believed that electric cars occupied an unprofitable niche of the automobile market. The company ultimately crushed most of the cars, and in 2001 GM terminated the EV1 program, disregarding protests from customers."
I know someone who managed to buy one. It was one of the few which had been sold rather than leased.
>Sublime (Of course ed, vim, emacs, sam, acme already existed for decades by 2007)
>No they weren't TomTom already existed for years, GPS existed for years
>You're right that they already existed
>Again, already existed, glad we agree
>Tech was already there just putting it in a phone doesn't count as innovation
>NASA was driving electric cars on the moon while Elon Musk was in diapers
>I was doing that in the early 80s, but Skype is a fine pre 2007 example thanks again
>Your right we didn't have 4k displays in 2007, not exactly a software innovation. This is a good example of a hardware innovation used to sell essentially the same product
>? Are you sure you didn't have a bad printer there have been good color printers since the 90s let alone 2007. The price to performance arguably hasn't changed since 2007 you are just paying more in running costs than upfront.
>This is definitely hardware.
Scripting language 3.0 or FOTM framework isn't innovative in that there is no problem being solved and no economic gain, if they didn't exist people would use something else and that would be that. With AI the big story was that there WASN'T a software innovation and that what few innovation do exist will die to the Bitter lesson
There has been a lot of innovation - but it is focused to some niche and so if you are not in a niche you don't see it and wouldn't care if you did. Most of the major things you need have already invented - I recall word processors as a kid, so they for sure date back to the 1970s - we still need word processors and there is a lot of polish that can be added, but all innovation is in niche things that the majority of us wouldn't have a use for if we knew about it.
Of course innovation is always in bits and spurts.
I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its, "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
Point 1 is why growth/debt is not a good economic model in the long run. We should have a care & maintenance focused economy and center our macro scale efforts on the overall good of the human race, not perceived wealth of the few.
If we focused on upkeep of older vehicles, re-use of older computers, etc. our landfills would be smaller proportional to 'growth'.
I'm sure there's some game theory construction of the above that shows that it's objectively an inferior strategy to be a conservationist though.
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
HFT would love to do more complex calculations for some of their trades. They often make the compromise of using a faster algorithm that is known to be right only 60% of the time vs the better but slower algorithm that is right 90% of the time.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
In a perfect world, maximizing (EV/op) x (ops/sec) should be done for even user software. How many person-years of productivity are lost each year to people waiting for Windows or Office to start up, finish updating, etc?
I work in card payments transaction processing and IO dominates. You need to have big models and lots of data to authorize a transaction. And you need that data as fresh as possible and as close to your compute as possible... but you're always dominated by IO. Computing the authorization is super cheap.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack.
The Electron Application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into a MS Teams meeting without installing.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Wine doesn't even run Office, there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
Is there any realistic way to shift the payment of hard-to-trace costs like environmental clean-up, negative mental or physical health, and wasted time back to the companies and products/software that cause them?
It's the kind of economics that shifts the financial debt to accumulating waste, and technical debt, which is paid for by someone else. It's basically stealing. There are --of course-- many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
I feel like the argument is similar to that of all corporate externality pushes.
For example "polluting the air/water, requiring end-users to fill landfills with packaging and planned obscolescence" allows a company to more cheaply offer more products to you as a consumer.. but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills and environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "Fell off the back of a truck" is in a position to offer you lower costs and greater variety, as well. Aren't they?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined via polution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it is theft of a shared resource and tragedy of the commons.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
> has been externalized to customers in search of ill-gotten profits.
'Externality' does not mean 'thing I dislike'. If it is the customers running the software or waiting the extra couple of seconds, that's not an externality. By definition. (WP: "In economics, an externality is an indirect cost (external cost) or benefit (external benefit) to an uninvolved third party that arises as an effect of another party's (or parties') activity.") That is just the customers picking their preferred point on the tradeoff curves.
It's like ignoring backwards compatibility. That is really cheap since all the cost is pushed to end-users (that have to relearn the UI) or second/third-party developers (that have to rewrite their client code to work with a new API). But it's OK since everyone is doing it and also without all those pointless rewrites many of us would not have a job.
> without all those pointless rewrites many of us would not have a job.
I hear arguments like this fairly often. I don't believe it's true.
Instead of having a job writing a pointless rewrite, you might have a job optimizing software. You might have a different career altogether. Having a job won't go away: what you do for your job will simply change.
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
From what I’m seeing people do on their computers, it barely changed from what they’ve been doing on their pentium 4 one. But now, with Electron-based software and the generals state of Windows, you can’t recommend something older than 4 years. It’s hard to not see it as stealing when you have to buy a 1000+ laptop, when a 400 one could easily do the job if the software were a bit better.
It’s only a tradeoff for the user if the user find the added features useful.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, which for many users useful new features became far and few between some time ~15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would’ve almost certainly been happier had 80-90% of the feature work done in that time instead been bug fixes and optimization.
would you spend 100 years writing the perfect editor optimizing every single function and continueously optimizing and when will it ever be complete? No you wouldn't. Do you use python or java or C? Obviously, that can be optimized if you wrote in assembly. Practice what you preach, otherwise you'd be stealing.
Not really stealing. You could off course build software that is more optimized and with the same features but at a higher cost. Would most buyers pay twice the price for a webb app that loads in 1 sec instead of 2? Probably not.
> Do you have someone spend extra time optimising your software or do you have them produce more functionality
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
Efficiency is critical to my everyday life. For example, before I get up from my desk to grab a snack from the kitchen, I'll bring any trash/dishes with me to double the trip's benefits. I do this kind of thing often.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
Ultimately it's a demand problem. If consumer demands more performant software, they would pay a premium for it. However, the opposite is more true. They would prefer an even less performant version if it came with a cheaper price tag.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is because it's the most likely path to be able to _keep_ adding features to it as to not be the less feature rich offering. People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
Every time I offer alternatives to slow hardware, people find a missing feature that makes them stick to what they're currently using. Other times the features are there but the buttons for it are in another place and people don't want to learn something new. And that's for free software, with paid software things become even worse because suddenly the hours they spend on loading times is worthless compared to a one-time fee.
Complaining about slow software happens all the time, but when given the choice between features and performance, features win every time. Same with workflow familiarity; you can have the slowest, most broken, hacked together spreadsheet-as-a-software-replacement mess, but people will stick to it and complain how bad it is unless you force them to use a faster alternative that looks different.
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, in “features vs performance” decision is also a signal about where the developers’ priorities lay: adding new users or keeping old ones happy. So, it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
> You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case. I.e. businesses purchase software (or has bespoke software developed). Then they pay for fixes/features/improvements. There is often a direct communication between the buyer and the developer (whether it's off-the shelf, inhouse or made to spec). I'm in this business and the dialog is very short "great work adding feature A. We want feature B too now. And oh the users say the software is also a bit slow can you make it go faster? Me: do you want feature B or faster first? Them (always) oh feature B. That saves us man-weeks every month". Then that goes on for feature C, D, E, ...Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for open tabs that aren't currently open is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking content with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool but it also dated in a matter of a few years. One moment we were running doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II pro. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
> The hardware is dirt cheap. Programmers aren't cheap.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most of the user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide usage is maybe 5x (and less than 2x for desktop), while the difference between well optimized and poorly optimized software can be orders of magnitude easily (milliseconds vs seconds).
And if you are bottlenecked by network latency, then the CPU might not even matter.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with Pentium II with a 3dfx graphics card. And yet it demands so much more - not a lot by today standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
It is made in lua using love2d. That helped the developers and comes with a cost in minimal requirements (even if they aren't much for a game released in 2024).
One way to think about it is: If we were coding all our games in C with no engine, they would run faster, but we would have far fewer games. Fewer games means fewer hits. Odds are Balatro wouldn't have been made, because those developer hours would've been allocated to some other game which wasn't as good.
Balatro was started in vacation time and underwent a ton of tweaking: https://localthunk.com/blog/balatro-timeline-3aarh So if it had to be written in C, probably neither of those would have happened.
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. Today I am convinced that's true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can it even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux) then there's a solid chance that the amount of bloat and especially move to online-only spreadsheets would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asynchronous DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if internet goes down, then the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
Sorry, don't want to go back to a time where I could only edit ASCII in a single font.
Do I like bloat? No. Do I like more software rather than less? Yes! Unity and Unreal are less efficient than custom engines but there are 100x more titles because that tradeoff of efficiency of the CPU vs efficiency of creation.
The same is true for website based app (both online and off). Software ships 10x faster as a web page than as a native app for (windows/mac/linux/android/ios). For most, that's all I need. Even for native like apps, I use photopea.com over photoshop/gimp/krita/affinity etc because it's available everywhere no matter which machine I use or who's machine it is. Is it less efficient running in JS in the browser? Probaby. Do I care? No
VSCode, now the most popular editor in the worlds (IIRC) is web-tech. This has so many benefits. For one, it's been integrated into 100s of websites, so this editor I use is available in more places. It's using tech more people know so more extension that do more things. Also, probably arguably because of JS's speed issues, it encouraged the creation of the Language Server Protocol. Before this, every editor rolled their own language support. The LSP is arguably way more bloat than doing it directly in the editor. I don't care. It's a great idea, way more flexible. Any language can write one LSP and then all editors get support for that language.
I'd be willing to bet that even a brand new iPhone has a surprising number of reasonably old pieces of hardware for Bluetooth, wifi, gyroscope, accelerometer, etc. Not everything in your phone changes as fast as the CPU.
My daily drivers at home are an i3-540 and and Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
Can you watch H.265 videos? That's the one limitation I regularly hit on my computer (that I got for free from some company, is pretty old, but is otherwise good enough that I don't think I'll replace it until it breaks). I don't think I can play videos recorded on modern iPhones.
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
Ha! What's special about rad-hard chips is that they're old designs. You need big geometries to survive cosmic rays, and new chips all have tiny geometries.
So there are two solutions:
1. Find a warehouse full of 20-year old chips.
2. Build a fab to produce 20-year old designs.
Both approaches are used, and both approaches are expensive. (Approach 1 is expensive because as you eventually run out of chips they become very, very valuable and you end up having to build a fab anyway.)
There's more to it than just big geometries but that's a major part of the solution.
I'm not sure what artemis or orion are, but you can blame defense contractors for this. Nobody ever got fired for hiring IBM or Lockheed, even if they deliver unimpressive results at massive cost.
I don't disagree that the engineering can be justified. But you don't need custom hardware to achieve radiation hardening, much less hiring fucking IBM.
And to be clear, I love power chips. I remain very bullish about the architecture. But as a taxpayer reading this shit just pisses me off. Pork-fat designed to look pro-humanity.
Modern planes do not, and many older planes have been retrofitted, in whole or in part, with more modern computers.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.
If we're talking numbers, there are many, many more embedded systems than general purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores on our phones are supported by a myriad of ancilliary chips that are definitely not cutting edge.
We aren't talking numbers, though. Who cares about embedded? I mean that literally. This is computation invisible by design. If that were sufficient we wouldn't have smartphones.
.NET has made great strides in this front in recent years. Newer versions optimize cpu and ram usage of lots of fundamentals, and introduced new constructs to reduce allocations and cpu for new code. One might argue they were able because they were so bad, but it’s worth looking into if you haven’t in a while.
Unfortunately there is a distinct lack of any scientific investigation or rigorous analysis into the phenomenon allegedly occurring that is called “Wirth’s Law” unlike, say, Moore’s Law despite many anecdotal examples. Reasoning as if it were literally true leads to absurd conclusions that do not correspond with any reality that I can observe so I am tempted to say that as a broad phenomenon it is obviously false and that anyone who suggests otherwise is being disingenuous. For it to be true there would have to have been no progress whatever in computing-enabled technologies, yet the real manifestations of the increase in computing resources and the exploitation thereof permeates, alters and invades almost every aspect of society and of our personal day to day lives at a constantly increasing rate.
Call it the X-Windows factor --- software gets more capable/more complex and there's a constant leap-frogging (new hardware release, stuff runs faster, software is written to take advantage of new hardware, things run more slowly, software is optimized, things run more quickly).
The most striking example of this was Mac OS X Public Beta --- which made my 400MHz PowerPC G3 run at about the same speed as my 25 MHz NeXT Cube running the quite similar OPENSTEP 4.2 (just OS X added Java and Carbon and so forth) --- but each iteration got quicker until by 10.6.8, it was about perfect.
Is there or could we make an iPhone-like that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
We have much hardware on the secondary market (resale) that's only 2-3x slower than pristine new primary market devices. It is cheap, it is reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it doesn't run bloated software anymore. And I don't think we can make non-bloated software for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
> Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Producing games doesn't cost anything on a per-unit basis. That's not at all the reason for low quality.
Games could cost $1000 per copy and big game studios (who have investors to worry about) would still release buggy slow games, because they are still going to be under pressure to get the game done by Christmas.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
You are right, but you picked a game by a studio known for its technical expertise, with plenty of points to prove about quality game development. I'd like them to be the future of this industry.
But right now, 8-9/10 game developers and publishers are deeply concerned with cash and rather unconcerned by technical excellence or games as a form of interactive art (where, once again, Guerrilla and many other Sony studios are).
> Or could we make a phone that runs 100x slower but is much cheaper? I
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU vs a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If intel fired all their R&D engineers who were not focused on reducing manufacturing costs they could greatly reduce the price of their CPUs - until AMD released a next generation that is much better. (this is more or less what Henry Ford did with the model-T - he reduced costs every year until his competition adding features were enough better that he couldn't sell his cars.
Yes, it's possible and very simple. Lower the frequency (dramatically lowers power usage), fewer cores, few threads, etc. The problem is, we don't know what we need. What if a great new apps comes out (think LLM); you'll be complaining your phone is too slow to run it.
Exactly. Yes I understand the meaning behind it, but the line gets drummed into developers everywhere, the subtleties and real meaning are lost, and every optimisation- or efficiency-related question on Stack Overflow is met with cries of "You're doing it wrong! Don't ever think about optimising unless you're certain you have a problem!" This habit of pushing it to extremes, inevitably leads to devs not even thinking about making their software efficient. Especially when they develop on high-end hardware and don't test on anything slower.
Perhaps a classic case where a guideline, intended to help, ends up causing ill effects by being religiously stuck to at all times, instead of fully understanding its meaning and when to use it.
A simple example comes to mind, of a time I was talking to a junior developer who thought nothing of putting his SQL query inside a loop. He argued it didn't matter because he couldn't see how it would make any difference in that (admittedly simple) case, to run many queries instead of one. To me, it betrays a manner of thinking. It would never have occurred to me to write it the slower way, because the faster way is no more difficult or time-consuming to write. But no, they'll just point to the mantra of "premature optimisation" and keep doing it the slow way, including all the cases where it unequivocally does make a difference.
We have customers with thousands of machines that are still using spinning, mechanical 54000 RPM drives. The machines are unbelievably slow and only get slower with every single update, its nuts.
Often, this is presented as a tradeoff between the cost of development and the cost of hardware. However, there is a third leg of that stool: the cost of end-user experience.
When you have a system which is sluggish to use because your skimped on development, it is often the case that you cannot make it much faster no matter how expensive is the hardware you throw at it. Either there is a single-threaded critical path, so you hit the limit of what one CPU can do (and adding more does not help), or you hit the laws of physics, such as with network latency which is ultimately bound by the speed of light.
And even when the situation could be improved by throwing more hardware at it, this is often done only to the extent to make the user experience "acceptable", but not "great".
In either case, the user experience suffers and each individual user is less productive. And since there are (usually) orders of magnitude more users than developers, the total damage done can be much greater than the increased cost of performance-focused development. But the cost of development is "concentrated" while the cost of user experience is "distributed", so it's more difficult to measure or incentivize for.
The cost of poor user experience is a real cost, is larger than most people seem to think and is non-linear. This was observed in the experiments done by IBM, Google, Amazon and others decades ago. For example, take a look at:
He and Richard P. Kelisky, Director of Computing Systems for IBM's Research Division, wrote about their observations in 1979, "...each second of system response degradation leads to a similar degradation added to the user's time for the following [command]. This phenomenon seems to be related to an individual's attention span. The traditional model of a person thinking after each system response appears to be inaccurate. Instead, people seem to have a sequence of actions in mind, contained in a short-term mental memory buffer. Increases in SRT [system response time] seem to disrupt the thought processes, and this may result in having to rethink the sequence of actions to be continued."
I generally believe that markets are somewhat efficient.
But somehow, we've ended up with the current state of Windows as the OS that most people use to do their job.
Something went terribly wrong. Maybe the market is just too dumb, maybe it's all the market distortions that have to do with IP, maybe it's the monopolístic practices of Microsoft. I don't know, but in my head, no sane civilization would think that Windows 10/11 is a good OS that everyone should use to optimize our economy.
I'm not talking only about performance, but about the general crappiness of the experience of using it.
The idea of a hand me down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach and then the 1nm process is basically the end of the journey. I expect 'audiophile grade' PCs in the 2030s and then PCs become works of art, furniture, investments, etc. because they have nowhere to go.
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's a M1 Macbook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
Click the link and contemplate while X loads. First, the black background. Next it spends a while and you're ready to celebrate! Nope, it was loading the loading spinner. Then the pieces of the page start to appear. A few more seconds pass while the page is redrawn with the right fonts; only then can you actually scroll the page.
Having had some time to question your sanity for clicking, you're grateful to finally see what you came to see. So you dwell 10x as long, staring at a loaded page and contemplating the tweet. You dwell longer to scroll and look at the replies.
How long were you willing to wait for data you REALLY care about? 10-30 seconds; if it's important enough you'll wait even longer.
Software is as fast as it needs to be to be useful to humans. Computer speed doesn't matter.
If the computer goes too fast it may even be suspected of trickery.
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
If the tooling had kept up. We went from RADs that built you fully native GUIs to abandoning ship and letting Electron take over. Anyone else have 40 web browsers installed and they are each some Chromium hack?
The priority should be safety, not speed. I prefer an e.g. slower browser or OS that isn't ridden with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
Carmack is right to some extent, although I think it’s also worth mentioning that people replace their computers for reasons other than performance, especially smartphones. Improvements in other components, damage, marketing, and status are other reasons.
It’s not that uncommon for people to replace their phone after two years, and as someone who’s typically bought phones that are good but not top-of-the-line, I’m skeptical all of those people’s phones are getting bogged down by slow software.
You always optimise FOR something at the expense of something.
And that can, and frequently should, be lean resource consumption, but it can come at a price.
Which might be one or more of:
Accessibility.
Full internationalisation.
Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties).
Readability/maintainability.
Displays that can actually represent text correctly at any size without relying on font hinting hacks.
All sorts of subtle points around UX.
Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.)
Etc.
The goal isn't optimized code, it is utility/value prop. The question then is how do we get the best utility/value given the resources we have. This question often leads to people believing optimization is the right path since it would use fewer resources and therefore the value prop would be higher. I believe they are both right and wrong. For me, almost universally, good optimization ends up simplifying things as it speeds things up. This 'secondary' benefit, to me, is actually the primary benefit. So when considering optimizations I'd argue that performance gains are a potential proxy for simplicity gains in many cases so putting a little more effort into that is almost always worth it. Just make sure you actually are simplifying though.
You're just replacing one favorite solution with another. Would users want simplicity at the cost of performance? Would they pay more for it? I don't think so.
You're right that the crux of it is that the only thing that matters is pure user value and that it comes in many forms. We're here because development cost and feature set provide the most obvious value.
This is Carmack's favorite observation over the last decade+. It stems from what made him successful at id. The world's changed since then. Home computers are rarely compute-bound, the code we write is orders of magnitude more complex, and compilers have gotten better. Any wins would come at the cost of a massive investment in engineering time or degraded user experience.
I work on a laptop from 2014. An i7 4xxx with 32 GB RAM and 3 TB SSD. It's OK for Rails and for Django, Vue, Slack, Firefox and Chrome. Browsers and interpreters got faster. Luckily there was pressure to optimize especially in browsers.
Perfect parallel to the madness that is AI. With even modest sustainability incentives, the industry wouldn't have pulverized a trillion dollar on training models nobody uses to dominate the weekly attention fight and fundraising game.
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.
Software development has been commoditized and is directed by MBA's and others who don't see it as a craft. The need for fast project execution is above the craft of programming, hence, the code is bug-riddled and slow.
There are some niche areas (vintage,pico-8, arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev movie, the artist's struggle.
He mentions the rate of innovation would slow down which I agree with. But I think that even 5% slower innovation rate would delay the optimizations we can do or even figure out what we need to optimize through centuries of computer usage and in the end we'd be less efficient because we'd be slower at finding efficiencies. Low adoption rate of new efficiencies is worse than high adoption rate of old efficiencies is I guess how to phrase it.
If Cadence for example releases every feature 5 years later because they spend more time optimizing them, it's software after all, how much will that delay semiconductor innovations?
Minimalism is excellent. As others have mentioned, using languages that are more memory safe (by assumption the language is wrote in such a way) may be worth the additional complexity cost.
But surely with burgeoning AI use efficiency savings are being gobbled up by the brute force nature of it.
Maybe model training and the likes of hugging face can avoid different groups trying to reinvent the same AI wheel using more resources than a cursory search of a resource.
Feels like half of this thread didn't read or ignored his last line: "Innovative new products would get much rarer without super cheap and scalable compute, of course."
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in Slack chat, vscode editor in Electron, Next.js stack, 1-2 docker containers, one browser and you need top notch hardware to run it fluid (Apple Silicon is amazing though). I'm doing no fancy stuff.
Chat, editor in a browser and docker don't seem the most efficient thing if put all together.
I think optimizations only occur when the users need them. That is why there are so many tricks for game engine optimization and compiling speed optimization. And that is why MSFT could optimize the hell out of VSCode.
People simply do not care about the rest. So there will be as little money spent on optimization as possible.
Sadly software optimization doesn't offer enough cost savings for most companies to address consumer frustration. However, for large AI workloads, even small CPU improvements yield significant financial benefits, making optimization highly worthwhile.
I already run on older hardware and most people can if they chose to - haven't bought a new computer since 2005. Perhaps the OS can adopt a "serverless" model where high computational tasks are offloaded as long as there is sufficient bandwidth.
This is the story of life in a nutshell. It's extremely far from optimized, and that is the natural way of all that it spawns. It almost seems inelegant to attempt to "correct" it.
I'm already moving in this direction in my personal life. It's partly nostalgia but it's partly practical. It's just that work requires working with people who only use what hr and it hoists on them, then I need a separate machine for that.
It could also run on much less current hardware if efficiency was a priority. Then comes the AI bandwagon and everyone is buying loads of new equipment to keep up with the Jones.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
True for large corporations. But for individuals the ability to put what was previously an entire stacks in a script that doesn't call out to the internet will be a big win.
How many people are going to write and maintain shell scripts with 10+ curls? If we are being honest this is the main reason people use python.
As long as sufficient amounts of wealthy people are able to wield their money as a force to shape society, this is will always be the outcome.
Unfortunately,in our current society a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction and a paranoid delusion that kept normal human love and compassion beyond their grasp, were able to shape society to their dreadful imagination.
Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.
Carmack is a very smart guy and I agree with the sentiment behind his post, but he's a software guy. Unfortunately for all of us hardware has bugs, sometimes bugs so bad that you need to drop 30-40% of your performance to mitigate them - see Spectre, Meltdown and friends.
I don't want the crap Intel has been producing for the last 20 years, I want the ARM, RiscV and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates, I want the open source GPU that someone is bound to make in the next decade. I'm happy 10gb switches are becoming a thing in the home, I don't want the 100 mb hubs from the early 2000s.
This is a double edge sword problem, but I think what people are glazing over with the compute power topic is power efficiency. One thing I struggle with home labing old gaming equipment is the consideration to the power efficiency of new hardware. Hardly a valid comparison, but I can choose to recycle my Ryzen 1700x with a 2080ti for a media server that will probably consume a few hundred watts, or I can get a M1 that sips power. The double edge sword part is that Ryzen system becomes considerably more power efficient running proxmox or ubuntu server vs a windows client. We as a society choose our niche we want to leverage and it swings with and like economics, strapped for cash, choose to build more efficient code; no limits, buy the horsepower to meet the needs.
I'm going to be pretty blunt. Carmack gets worshiped when he shouldn't be. He has several bad takes in terms of software. Further, he's frankly behind the times when it comes to the current state of the software ecosystem.
I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
He seems to have a perpetual view on the state of software. Interpreted stuff is slow, networks are slow, databases are slow. Everyone is working with Pentium 1s and 2MB of ram.
None of these are what he thinks they are. CPUs are wicked fast. Interpreted languages are now within a single digit multiple of natively compiled languages. Ram is cheap and plentiful. Databases and networks are insanely fast.
Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.
I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs do. He's overestimating what the costs of things are because his mental model of computing is dated.
Let me specify that what I'm calling interpreted (and I'm sure carmack agrees) is languages with a VM and JIT.
The JVM and Javascript both fall into this category.
The proof is in the pudding. [1]
The JS version that ran in 8.54 seconds [2] did not use any sort of fancy escape hatches to get there. It's effectively the naive solution.
But if you look at the winning C version, you'll note that it went all out pulling every single SIMD trick in the book to win [3]. And with all that, the JS version is still only ~4x slower (single digit multiplier).
And if you look at the C++ version which is a near direct translation [4] which isn't using all the SIMD tricks in the books to win, it ran in 5.15. Bringing the multiple down to 1.7x.
Perhaps you weren't thinking of these JIT languages as being interpreted. That's fair. But if you did, you need to adjust your mental model of what's slow. JITs have come a VERY long way in the last 20 years.
I will say that languages like python remain slow. That wasn't what I was thinking of when I said "interpreted". It's definitely more than fair to call it an interpreted language.
fwiw There are a few naive un-optimised single-thread #8 n-body programs transliterated line-by-line literal style into different programming languages from the same original. [1]
> a single digit multiple
By which you mean < 10× ?
So not those Java -Xint, PHP, Ruby, Python 3 programs?
> interpreted
Roberto Ierusalimschy said "the distinguishing feature of interpreted languages is not that they are not compiled, but that any eventual compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly." [2]
A simple do-nothing for loop in JavaScript via my browser's web console will run at hundreds of MHz. Single-threaded, implicitly working in floating-point (JavaScript being what it is) and on 2014 hardware (3GHz CPU).
> But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
Why isn't it?
> Interpreted stuff is slow
Well, it is. You can immediately tell the difference between most C/C++/Rust/... programs and Python/Ruby/... Either because they're implicitly faster (nature) or they foster an environment where performance matters (nurture), it doesn't matter, the end result (adult) is what matters.
> networks are slow
Networks are fast(er), but they're still slow for most stuff. Gmail is super nice, but it's slower than almost desktop email program that doesn't have legacy baggage stretching back 2-3 decades.
hes a webdev hurt by the simple observation interpreted languages will always be slower than native ones. the static analysis comment being particularly odd
My partner was diagnosed with Parkinson’s almost 5 years ago. His disease has progressed significantly in the past year, and he begun to have delusions. He also had side effects from carbidopa/levodopa, which we decided to stop, and our primary physician decided he should start on PD-5 formula 4 months ago from UINE HEALTH CENTER. He now sleeps soundly, works out frequently, and is now very active since we started him on the PD-5 formula. It doesn’t make the Parkinson’s disease go away, but it did give him a better quality of life. We got the treatment from www. uineheathcentre. com
Imagine software engineering was like real engineering, where the engineers had licensing and faced fines or even prison for negligence. How much of the modern worlds software would be tolerated?
Very, very little.
If engineers handled the Citicorp center the same way software engineers did, the fix would have been to update the documentation in Confluence to not expose the building to winds and then later on shrug when it collapsed.
Developers over 50ish (like me) grew up at a time when CPU performance and memory constraints affected every application. So you had to always be smart about doing things efficiently with both CPU and memory.
Younger developers have machines that are so fast they can be lazy with all algorithms and do everything 'brute force'. Like searching thru an array every time when a hashmap would've been 10x faster. Or using all kinds of "list.find().filter().any().every()" chaining nonsense, when it's often smarter to do ONE loop, and inside that loop do a bunch of different things.
So younger devs only optimize once they NOTICE the code running slow. That means they're ALWAYS right on the edge of overloading the CPU, just thru bad coding. In other words, their inefficiencies will always expand to fit available memory, and available clock cycles.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
The world will seek out software optimization only after hardware reaches its physical limits.
We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap and your first-to-market competitor needs to make some horrendous screwups in order to overtake them.
The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.
I mean, if you put win 95 on a period appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era
I do have Windows 2000 installed with IIS (and some office stuff) in a ESXi for fun and nostalgia. It serves some static html pages within my local network. The host machine is some kind of i7 machine that is about 7-10 years old.
That machine is SOOOOOO FAST. I love it. To be honest, that tasks that I was doing back in the day are identical to today
Well, it is a point. But also remember the horrors of the monoliths he made. Like in Quake (123?) where you have hacks like if level name contains XYZ then do this magic. I think the conclusion might be wrong.
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure cpu cycles of their code so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because all of your jobs are inefficiently using local disk and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and stop making money.
And older hardware consumes more power. That's the main driving factor between server hardware upgrades because you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that makes my old hardware crawl. Or the software I use only supports 2 old version of macOS. Or Microsoft decides that your brand new cpu is no good for win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized electron app on your 5 year old computer.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
I/O _can_ be optimized. I know someone who had this as their fulltime job at Meta. Outside of that nobody is investing in it though.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
This is also the exact reason why all the bright-eyed pieces that some technology would increase worker's productivity and therefore allow more leisure time for the worker (20 hour workweek etc) are either hopelessly naive or pure propaganda.
Increased productivity means that the company has a new option to either reduce costs or increase output at no additional cost, one of which it has to do to stay ahead in the rat-race of competitors. Investing the added productivity into employee leisure time would be in the best case foolish and in the worst case suicidal.
Which is why government regulations that set the boundaries for what companies can and can't get away with (such as but not limited to labor laws) are so important. In absence of guardrails, companies will do anything to get ahead of the competition. And once one company breaks a norm or does something underhanded, all their competitors must do the same thing or they risk ceding a competitive advantage. It becomes a race to the bottom.
Of course we learned this all before a century ago, it's why we have things like the FDA in the first place. But this new generation of techno-libertarians and DOGE folks who grew up in a "move fast and break things" era, who grew up in the cleanest and safest times the world has ever seen, have no understanding or care of the dangers here and are willing to throw it all away because of imagined inefficiencies. Regulations are written in blood, and those that remove them will have new blood on their hands.
Some regulations are written in blood, a huge chunk are not. Shower head flow rate regulations were not written in blood.
Your post started out talking about labor laws but then switched to the FDA, which is very different. This is one of the reasons that people like the DOGE employees are tearing things apart. There are so many false equivalences on the importance of literally everything the government does that they look at things that are clearly useless and start to pull apart things they think might be useless.
The good will has been burned on the “trust me, the government knows best”, so now we’re in an era of cuts that will absolutely go too far and cause damage.
Your post mentioning “imagined inefficiencies” is a shining example of the issue of why they are there. Thinking the government doesn’t have inefficiencies is as dumb as thinking it’s pointless. Politicians are about as corrupt of a group as you can get and budget bills are filled with so much excess waste it’s literally called “pork”.
Efficiency related regulation like the energy star is THE reason why companies started caring.
Same with low flush toilets. I vaguely remember the initial ones had issues, but tbh less than the older use a ton of water toilets my family had before that were also super clog prone. Nowadays I can’t even remember the last time a low flush toilet clogged. Massive water saving that took regulation.
Efficiency regulations may not be directly written in blood, instead they are built on costly mountains of unaddressed waste.
I literally had a new toilet put in a couple of years ago. It clogs pretty easily. So you just end up flushing it more, so you don't actually save any water.
BTW the same thing happened with vacuum cleaners, you need to hover more to get the same amount of dust out because they capped the power in the EU. My old Vacuum Cleaner I managed to find, literally sticks to the carpet when hoovering.
My Philips Silentio vacuum cleaner is both quiet and powerful and is also within the EU limits on input power. It will stick to the floor if I turn up the power too high.
And the Norwegian made and designed low flow toilets in my house flush perfectly every time. Have the flush volumes reduced further in the last fifteen years?
And so we see the real outcome, on this axis, of these kinds of regulations, is to increase the quality gradient. A crappy old barebones water-hungry dishwasher with a phosphate-containing detergent worked just fine for me in an old apartment. Its comparably priced brand-new lower-water equivalent in a new house with phosphate-free detergent works awfully. Now you need a Bosch washer and premium detergent and so on. These exist and by all accounts are great. So we can say that the regulations didn't cause the quality problem, they just shifted the market.
Compliance with the regulations can be done both by the capable and the incapable, but caveat emptor rears its ugly head, and that assumes the end user is the buyer (right now, I'm renting). There's often quite a price gap between good enough and terrible too. A lot of people end up stuck with the crap and little recourse.
The government cares that your dishwasher uses less water and the detergent doesn't put phosphate into the water. It doesn't care that your dishwasher actually works well. We can layer more regulations to fix that problem too, but they will make things cost more, and they will require more expensive and competent civil servants to enforce, and so on. And I don't see any offer in that arrangement to replace my existing dishwasher, which is now just a sunk cost piece of future e-waste that neither the government nor the manufacturer have been made responsible for.
Nap, parent just bought a crappy toilet.
Which is the same as every other toilet.
> My Philips Silentio vacuum cleaner is both quiet and powerful and is also within the EU limits on input power. It will stick to the floor if I turn up the power too high.
I don't believe you and it besides the point because I suspect that it is an expensive vacuum cleaner. I don't want to put any thought into a vacuum cleaner. I just want to buy the most powerful (bonus points if it is really loud), I don't care about it being quiet or efficient. I want the choice to buy something that makes a dent in my electricity bill if I so choose to.
> And the Norwegian made and designed low flow toilets in my house flush perfectly every time. Have the flush volumes reduced further in the last fifteen years?
This reads as "I have some fancy bathroom that costs a lot, if you had this fancy bathroom you wouldn't have issues". I don't want to have to care whether my low flush toilet is some fancy Norwegian brand or not. I just want something to flush the shit down the hole. The old toilets never had the problems the newer ones have. I would rather buy the old design, but I can't. I am denied the choice because someone else I have never met thinks they know better than I.
> I want the choice to buy something that makes a dent in my electricity bill if I so choose to.
Have you considered that the market for such a thing is effectively zero? Why would anyone make this?
Dysons are fine, even if the founder is a total tool.
I was being hyperbolic throughout the entire post.
Every-time you have a conversation around older stuff being better than newer stuff (some of this is due to regulation), you will have someone say their boutique item that costs hundreds of pounds (or maybe 1000s) works perfectly well. Ignoring the fact that most people don't wish to buy these boutique items (the dude literally talked about some Norwegian toilet design). I buy whatever is typically on offer than is from a brand that I recognise. I don't care about the power consumption of my vacuum cleaner. I am not using it for the entire day. It is maybe 30 minutes to an hour twice a week. I just want to do this task (which I find tedious) as quickly as possible.
BTW Dysons count in this regard as boutique, they are expensive and kinda rubbish. They are rendered useless by cat fur (my mother had three cats and it constantly got clogged with it). Bagless vacuum cleaners are generally garbage anyway (this is a separate complaint) because when you try to empty them, you have to empty it into a bag typically.
Sorry to hear you got a bum toilet, luckily for you, there’s the other huge benefit of low flush toilets that I didn’t mention.
Even with a total clog, there’s a 1-2 flush bowl capacity before it over flows.
Who remembers the abject terror of watching the water rise in a clogged high flush toilet and just praying it didn’t overflow.
Also unless every usage is a big poop requiring extra flushes, it’s far fetched that more flushes occasionally are adding up to the same water usage. If the toilet clogs for #1, something is very wrong - likely installed wrong, plumbing issues, or user error. Your toilet might not have been seated right so the wax seal ring is partially blocking the sewer line.
I don't think regulations are enough. They're just a band-aid on the gaping wound that is a capitalist, market based economy. No matter what regulations you make, some companies and individuals become winners and over time will grow rich enough to influence the government and the regulations. We need a better economic system, one that does not have these problems built in.
> We need a better economic system
none has been found. The command economy is inefficient, and prone to corruption.
informal/barter systems are too small in scale and does not produce sufficient amounts to make the type of abundant lifestyle we enjoy today possible.
As the saying goes - free market capitalism is the worst economic system, except for all the others.
We haven't really been trying to find such a system. The technological progress that we've had since the last attempts at a different kind of a system has been huge, so what was once impossible might now be possible if we put some effort into it.
There is no system that fulfills your requirements.
It is even easy to explain why: Humans are part of all the moving pieces in such a system and they will always subvert it to their own agenda, no matter what rules you put into place. The more complex your rule set, the easier it is to break.
Look at games, can be a card game, a board game, some computer game. There is a fixed set of rules, and still humans try to cheat. We are not even talking adults here, you see this with kids already. Now with games there is either other players calling that out or you have a computer not allowing to cheat (maybe). Now imagine everyone could call someone else a cheater and stop them from doing something. This in itself is going to be misused. Humans will subvert systems.
So the only working system will be one with a non-human incorruptible game master, so to speak. Not going to happen.
With that out of the way, we certainly can ask the question: What is the next best thing to that? I have no answer to that, though.
Cheating happens in competition based systems. No one cheats in games where the point is to co-operate to achieve some common goal. We should aim to have a system based on recognizing those common goals and enabling large scale co-operation to achieve them.
> What is the next best thing to that? I have no answer to that, though.
i argue that what we have today is the so called next best thing - free market capitalism, with a good dose of democracy and strong gov't regulations (but not overbearing).
> we’ve tried three whole things and are all out of ideas!
Guess it’ll just have to be this way forever and ever.
Free market capitalism does not exist anywhere.
In fact, free market and capitalism are opposites.
Lol no they aren't, they're orthogonal, almost entirely unrelated.
I assume they are saying that in practice, if wealth gives one influence (if one lives in capitalism), one will use that influence to make one's market less free to one's benefit.
Gaping wound that lifted billions out of powerty and produced the greatest standard of living in human history.
Sure, but you can't ignore the negative sides like environmental destruction and wealth and power concentration. Just because we haven't yet invented a system that produces a good standard of living without these negative side effects doesn't mean it can't be done. But we aren't even trying, because the ones benefiting from this system the most, and have the most power, have no incentive to do so.
Capitalism is a good economic engine. Now put that engine in a car without steering wheel nor brakes and feed the engine with the thickest and ever-thickening pipe from the gas tank you can imagine, and you get something like USA.
But most of the world doesn't work like that. Countries like China and Russia have dictators that steer the car. Mexico have gangs and mafia. European countries have parliamentary democracies and "commie journalists" that do their job and reign political and corporate corruption--sometimes over-eagerly--and unions. In many of those places, wealth equals material well-being but not overt political power. In fact, wealth often employs stealth to avoid becoming a target.
USA is not trying to change things because people are numbed down[^1]. Legally speaking, there is nothing preventing that country from having a socialist party win control of the government with popular support and enact sweeping legislation to overcome economic inequality somewhat. Not socialist, but that degree of unthinkable was done by Roosevelt before and with the bare minimum of popular support.
[^1]: And, I'm not saying that's a small problem. It is not, and the capitalism of instant gratification entertainment is entirely responsible for this outcome. But the culprit is not capitalism at large. IMO, the peculiarities of American culture are, to a large extent, a historic accident.
You can't really separate wealth and power, they're pretty much the same thing. The process that is going on in the US is also happening in Europe, just at a slower pace. Media is consolidating in the hands of the wealthy, unions are being attacked and are slowly losing their power, etc. You can temporarily reverse the process by having someone steer the car into some other direction for a while, but wealth/power concentration is an unavoidable part of free market capitalism, so the problem will never go away completely. Eventually capital accumulates again, and will corrupt the institutions meant to control it.
A smart dictator is probably harder to corrupt, but they die and then if you get unlucky with the next dictator the car will crash and burn.
Those are all results of political corruption, not capitalism. It is the government's job to set the ground rules for the economy.
Political corruption is a consequence of capitalism. Taking over the political system provides a huge competitive advantage, so any entity rich enough to influence it has an incentive to do so in an competition based economy that incentivizes growth.
When did Political corruption not exist? In what system in history did the people in power have so few rotten apples that corruption was an anomally? Blaming corruption on capitalism is silly. As long has worldhas resources, people want control of reasources, and bad actors will do bad actors thingies.
You're right, political corruption is a problem in other systems as well, not just capitalism. I guess it would be more accurate to say that power concentration causes political corruption. We should try to figure out if it's possible to manage the economy in a way that limits the amount of power any individual can have to such an extent that corruption would be impossible.
I don't think there is exists a magical political system that we set up and it magically protects us from corruption. Forever. Just like any system (like surviving in an otherwise hostile nature) it needs maintenance. Maintenance in a political or any social structure is getting off your bottom and imposing some "reward" signal on the system.
Corruption mainly exists because people have low standards for enforcing eradication of it. This is observable in the smallest levels. In countries where corruption is deeply engraved, even university student groups will be corrupted. Elected officials of societies of any size will be prone to put their personal interests in front of the groups' and will appoint or employ friends instead of randomers based on some quality metrics. The question is what are the other people willing to do? Is anyone willing to call them out? Is anyone willing to instead put on the job themselves and do it right (which can be demanding)?
The real question is how far are the individuals willing to go and how much discomfort are they willing to embrace to impose their requirements, needs, moral expectations on the political leader? The outcomes of many situations you face in society (should that be a salary negotiation or someone trying to rip you off in a shop) depend on how much sacrifice (e.g. discomfort) you are willing to take on to get out as a "winner" (or at least non-loser) of the situation? Are you willing to quit your job if you cannot get what you want? Are you going to argue with the person trying to rip you off? Are you willing to go to a lawyer and sue them and take a long legal battle? If people keep choosing the easier way, there will always be people taking advantage of that. Sure, we have laws but laws also need maintenance and anyone wielding power needs active check! It doesn't just magically happen but the force that can keep it in check is every individual in the system. Technological advances and societal changes always lead to new ideas how to rip others off. What we would need is to truly punish the people trying to take advantage of such situations: no longer do business with them, ask others to boycott such behaviour (and don't vote for dickheads!, etc.) -- even in the smallest friends group such an issue could arise.
The question is: how much are people willing to sacrifice on a daily basis to put pressure on corrupt people? There is no magic here, just the same bare evolutionary forces in place for the past 100,000 years of humankind.
(Just think about it: even in rule of law, the ultimate way of enforcing someone to obey the rules is by pure physical force. If someone doesn't listen, ever, he will be picked up by other people and forced into a physical box and won't be allowed to leave. And I don't expect that to ever change, regardless of the political system. Similarly, we need to keep up an army at all times. If you simply go hard pacifist, someone will take advantage of that... Evolution. )
Democracy is an active game to be played and not just every 4 years. In society, people's everyday choices and standards are the "natural forces of evolution".
Actually, the system that produced the greatest standard of living increase in human history is whatever Communist China's been doing for the last century.
Not century.
Mao and communism brought famine and death to millions.
The move from that to "capitilism with Chinese characteristics" is what has brought about the greatest standard of living increase in human history.
What they're doing now is a mix of socialism, capitilism and CPP dominance. I'm not an American, but I understand FDR wielded socialism too, and that really catapulted the US towards its golden era.
Chinese do capitalism better than anyone else. Chinese companies ruthlessly compete within China to destroy their competition. Their firms barely have profits because everyone is competing so hard against others. Whereas US/EU is full of rent seeking monopolies that used regulatory capture to destroy competition.
Almost like they made a great leap forward during that century.
Capitalism.
...and they use money so it's capitalism.
> 20 hour workweek etc
We have that already. It's called part-time jobs. Usually they don't pay as much as full-time jobs, provide no health insurance or other benefits, etc.
> provide no health insurance
I am so glad to live in Germany...
… where your full time job pays less than the GP's part time job
It's a bad deal as a developer. I receive 50% of the money but still provide 70-80% value to the company.
As someone who straddles two fields (CS and Healthcare) and has careers/degrees in both -- the grass isn't always greener on the other side.
This could be said about most jobs in the 21st century these days in any career field given. That's a culture shift and business management/organization practice change that isn't likely to happen anytime soon.
Oh I'm not saying we have it worse. But there are jobs where time spent is more proportional to productive output, so working half the time for half the money is a fair deal.
Indeed, and I don't know why people keep saying that we ever thought the 20 hour workweek was feasible, because there is always more work to be done. Work expands to fill the constraints available, similar to Parkinson's Law.
Probably because the 40-hour workweek was feasible.
It became feasible because back when the workweek was "whenever you're not asleep", a lot of people set a lot of things on fire until it wasn't.
You're on the right track, but missing an important aspect.
In most cases the company making the inferior product didn't spend less. But they did spend differently. As in, they spent a lot on marketing.
You were focused on quality, and hoped for viral word of mouth marketing. Your competitors spent the same as you, but half their budget went to marketing. Since people buy what they know, they won.
Back in the day MS made Windows 95. IBM made OS/2. MS spend a billion $ on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. If does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
Quality can lead to sales - this was the premise behind the original Google (they never spent a dime on advertising their own product until the Parisian Love commercial [1] came out in 2009, a decade after founding), and a few other tech-heavy startups like Netscape or Stripe. Microsoft certainly didn't spend a billion $ marketing Altair Basic.
The key point to understand is the only effort that matters is that which makes the sale. Business is a series of transactions, and each individual transaction is binary: it either happens or it doesn't. Sometimes, you can make the sale by having a product which is so much better than alternatives that it's a complete no-brainer to use it, and then makes people so excited that they tell all their friends. Sometimes you make the sale by reaching out seven times to a prospect that's initially cold but warms up in the face of your persistence. Sometimes, you make the sale by associating your product with other experiences that your customers want to have, like showing a pretty woman drinking your beer on a beach. Sometimes, you make the sale by offering your product 80% off to people who will switch from competitors and then jacking up the price once they've become dependent on it.
You should know which category your product fits into, and how and why customers will buy it, because that's the only way you can make smart decisions about how to allocate your resources. Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that." But if you are sitting on one of those gold mines, capitalizing on it effectively is orders of magnitude more efficient than trying to market a product that doesn't really work.
[1] https://www.youtube.com/watch?v=nnsSUqgkDwU
> Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that."
This. Per your example, this is exactly what it was like when most of us first used Google after having used AltaVista for a few years. Or Google Maps after having used MapQuest for a few years. Google invested their resources correctly in building a product that was head and shoulders above the competition.
And yes, if you are planning to sell beer, you are going to need the help of scantily clad women on the beach much more than anything else.
>> Or Google Maps after having used MapQuest for a few years. Google invested their resources correctly in building a product that was head and shoulders above the competition.
Except that they didn't: they bought a company that had been building a product that was head and shoulders above the competition (Where 2 Technologies), then they also bought Keyhole which became Google Earth.
Incidentally they also bought, not built, Youtube .. and Android.
So, yes, they had a good nose for "experiences that will make a customer say "Wow, I need to have that.""
They arguably did do a good job investing their resources but it was mostly in buying, not building.
.. and they are good at marketing :)
Google Maps as it launched was the integration of 3 pre-existing products: KeyHole (John Hanke, provided the satellite imagery), Where 2 (Lars & Jens Rasmussen, was a desktop-based mapping system), and Google Local (internal, PM was Bret Taylor, provided the local business data). Note that both KeyHole and Where 2 were C++ desktop apps; it was rewritten as browser-based Javascript internally. Soon after launch they integrated functionality from ZipDash (traffic data) and Waze (roadside events).
People read that YouTube or Android were acquisitions and don't realize just how much development happened internally, though. Android was a 6-person startup; basically all the code was written post-acquisition. YouTube was a pure-Python application at time of acquisition; they rewrote everything on the Google stack soon afterwards, and that was necessary for it to scale. They were also facing a company-ending lawsuit from Viacom that they needed Google's legal team to fight; the settlement to it hinged on ContentID, which was developed in-house at Google.
> They arguably did do a good job investing their resources but it was mostly in buying, not building.
They did build a large part of those products, Keyhole is just a part of Google earth google maps in general has many more features than that.
For example driving around cars in every country that allowed it to take street photos is really awesome and nobody else does that even today. Google did that, not some company they aquired, they built it.
Android was nothing like the Android today when it was bought. The real purchase was the talent that came with Android and not the product at the time.
YouTube now, well only someone with deep pockets could have made it what it is today(unlimited video uploads and the engineering to support it). It was nothing special.
After all they sell to marketing people...
Pure marketing doesn’t always win. There are counter examples.
Famously Toyota beat many companies that were basing their strategy on marketing rather than quality.
They were able to use quality as part of their marketing.
My father in law worked in a car showroom and talks about when they first installed carpet there.
No one did that previously. The subtle point to customers being that Toyotas didn’t leak oil.
It's not just software -- My wife owns a restaurant. Operating a restaurant you quickly learn the sad fact that quality is just not that important to your success.
We're still trying to figure out the marketing. I'm convinced the high failure rate of restaurants is due largely to founders who know how to make good food and think their culinary skills plus word-of-mouth will get them sales.
My wife ran a restaurant that was relatively successful due to the quality of its food and service. She was able to establish it as an upper-tier experience, by both some word of mouth, but also by catering to right events, taking part in shows, and otherwise influencing the influencers of the town, without any massive ad campaigns. As a result, there were many praises in the restaurant's visitor book, left by people from many countries visiting the city.
It was not a huge commercial success though, even though it wasn't a failure either; it generated just enough money to stay afloat.
If it paid for people's lives and sustained itself, that sounds like a huge success to me. There's a part of me that thinks, maybe we'd all be better off if we set the bar for success of a business at "sustains the lives of the people who work there and itself is sustainable."
> you quickly learn the sad fact that quality is just not that important to your success.
Doesn't that depend on your audience? Also, what do you mean by quality?
Where I live, the best food can lead to big success. New tiny restaurants open, they have great food, eventually they open their big successor (or their second restaurant, third restaurant, etc.).
In my experience, the landlord catches onto the restaurant’s success and starts increasing rents and usually that means cuts in quality.
Might well be why McDonald’s is more of a real estate company than it is a food company: https://www.wallstreetsurvivor.com/mcdonalds-beyond-the-burg...
I believe this is called something like the 'Michelin Curse' but my google is not returning hits for that phrase, though the sentiment seems roughly correct [0]
[0] https://www.wsj.com/style/michelin-star-removal-giglio-resta...
Interesting; thanks.
In the restaurant business, the keys are value and market fit.
There is a market for quality, but it's a niche. Several niches actually.
But you need to attract that customer. And the food needs to be interesting. And the drinks need to match. Because foodies care about quality but also want a certain experience.
Average Joe Blow who dines at McDonald's doesn't give a flying fuck about quality, that's true. Market quality to him and he'll probably think it tastes worse.
If you want to make quality food, everything else needs to match. And if you want to do it profitably, your business model needs to be very focused.
It can't just be the same as a chain restaurant but 20% more expensive...
IIRC, Microsoft was also charging Dell for a copy of Windows even if they didn't install it on the PC! And yeah OS/2 was ahead by miles.
How was that legal? They were charging dell for something they weren't using?
It wasn't; see U.S. vs Microsoft.
> I don’t think this leads to market collapse
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the superior relevant software previously available, even if they create demand for it. Therefore, we would say a market fault has developed as the market could not organize resources to meet this demand. Then, the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
The Wikipedia page for Market for Lemons more or less summarizes it as a condition of defective products caused by information asymmetry, which can lead to adverse selection, which can lead to market collapse.
https://en.m.wikipedia.org/wiki/The_Market_for_Lemons
The Market for Lemons idea seems like it has merit in general but is too strong and too binary to apply broadly, that’s where I was headed with the suggestion for another name. It’s not that people want low quality. Nobody actually wants defective products. People are just price sensitive, and often don’t know what high quality is or how to find it (or how to price it), so obviously market forces will find a balance somewhere. And that balance is extremely likely to be lower on the quality scale than what people who care about high quality prefer. This is why I think you’re right about the software market tolerating low quality; it’s because market forces push everything toward low quality.
Once upon a time, the price of a product was often a good indicator of its quality. If you saw two products side by side on the shelf and one was more expensive, then you might assume that it was less likely to break or wear out soon.
Now it seems that the price has very little to do with quality. Cheaply made products might be priced higher just to give the appearance of quality. Even well known brands will cut corners to save a buck or two.
I have purchased things at bargain prices that did everything I wanted and more. I have also paid a lot for things that disappointed me greatly.
This is a good point.
A big part of the drive towards lower prices is likely driven by companies exploiting that lack of information to deliver a low-quality product for a high price. Consumers rationally respond to this by just always picking the low-price product
Unless, of course, there's another factor (such as brand) that assures users they are receiving something worth spending extra on (and of course it's oh so easy for companies with such a reputation to temporarily juice returns if they are willing to make sacrifices)
What about furniture? From my childhood until now, it seems like furniture has really held out. Price is a pretty good indication of quality.
Within the (wide!) price tier in which most people buy furniture, almost everything is worse than IKEA but a lot of it’s 2-3x the price. You have to go even higher to get consistently-better-than-ikea, but most people won’t even see that kind of furniture when they go shopping for a new couch or kitchen table.
By the way, inferior goods are not necessarily poor-quality products, though there is a meaningful correlation, and I based my original comment on it. Still, a OnePlus Android phone is considered an inferior good; an iPhone (or a Samsung Galaxy Android phone) is considered superior. Both are of excellent quality and better than one another in key areas. It's more about how wealth, brand perception, and overall market sentiment affect their demand. OnePlus phones will be in more demand during recessions, and demand for iPhones and Samsung Galaxys will decrease.
No objection to your use/non-use of the Market for Lemons label. Just wanted to clarify a possible misconception.
P.S. Apologies for editing this comment late. I thought the original version wasn't very concise.
> A OnePlus Android phone is considered an inferior good; an iPhone (or a Samsung Galaxy Android phone) is considered superior. Both are of excellent quality
No, the inferior good is a device with 2GB RAM, a poor quality battery, easy to crack screen, a poor camera. poor RF design and thus less stable connectivity, and poor mechanical assembly. But it has its market segment because it costs like 15% of the cost of an iPhone. Some people just cannot afford the expensive high-quality goods at all. Some people, slightly better-off, sometimes don't see the point to "overpay" because they are used to the bottom-tier functionality and can't imagine how much higher quality may be materially beneficial in comparison.
In other words, many people have low expectations, and low resources to match. It is a large market to address once a product-market fit was demonstrated in the high-end segment.
I mean "inferior good" as a macroeconomics term: https://www.investopedia.com/terms/i/inferior-good.asp. And the point of my comment is to show that product quality alone doesn't determine whether it's an inferior good.
I see your point. But the choice between an iPhone and a Galaxy is mostly the ecosystem. And the choice between OnePlus and a Galaxy S is mostly about the quality of the camera. And the choice between a Galaxy and a Xioami is mostly about trusting a Chinese brand (not for its technical merits; they make excellent devices). The real quality / price differentiation, to my mind, lies farther down the scale.
That is, the choice between a $10 organic grass-fed milk and $8 organic grass-fed milk is literally a matter of taste, not the $2 price difference. The real price/quality choice is between the $10 fancy organic milk, $4.99 okay milk, and $2.49 bottom-shelf milk. They attract materially different customer segments.
There are many behavioral economics ideas about smartphone choices. There are various psychological aspects, such as lifestyle, status, social and personal values, and political influences. That is all true.
The strongest decider for whether a good will show positive or negative elastic demand (and be considered superior or inferior) is probably how it's branded, pricing strategy included. For example, wealthy people shop in boutiques more than large retail centers, though the items sold are often sourced from the same suppliers. The difference? Branding, including pricing.
You're right about basic goods, such as groceries. Especially goods that are almost perfectly identical and freely substitutable, like milk. What's a superior or inferior good becomes hard to guess when there is a high degree of differentiation (as you say, ecosystems, cameras, security). It's easier to measure than predict.
Anyway, this is all a "fun fact." My original comment really does make the assumption that software, which is relatively substitutable, is like the milk example — the price and the inferiority/superiority are strongly correlated. And the entire expensive software market has collapsed like the expensive secondary market for used cars.
My wife has a perfume business. She makes really high quality extrait de parfums [1] with expensive materials and great formulations. But the market is flooded with eau de parfums -- which are far more diluted than a extrait -- using cheaper ingredients, selling for about the same price. We've had so many conversations about whether she should dilute everything like the other companies do, but you lose so much of the beauty of the fragrance when you do that. She really doesn't want to go the route of mediocrity, but that does seem to be what the market demands.
[1] https://studiotanais.com/
> [1] https://studiotanais.com/
First, honest impression: At least on my phone (Android/Chromium) the typography and style of the website don't quite match that "high quality & expensive ingredients" vibe the parfums are supposed to convey. The banners (3 at once on the very first screen, one of them animated!), italic text, varying font sizes, and janky video header would be rather off-putting to me. Maybe it's also because I'm not a huge fan of flat designs, partially because I find they make it difficult to visually distinguish important and less important information, but also because I find them a bit… unrefined and inelegant. And, again, this is on mobile, so maybe on desktop it comes across differently.
Disclaimer: I'm not a designer (so please don't listen only to me and take everything with a grain of salt) but I did work as a frontend engineer for a luxury retailer for some time.
I am somewhat familiar with this market and would probably be turned off by this site mostly because it looks too slick and the ones I’ve seen that were this slick mostly weren’t for me (marketed to, and making perfume entirely or almost entirely for, women).
The ones for me usually look way shittier or just use Etsy.
[edit] the only exception I can come up with is Imaginary Authors, which is much slicker-looking than this, actually, but with a far darker palette—this one definitely says “this is feminine stuff” in the design. And actually I’d say IA leans far more feminine as far as overall vibe of their catalog than most others that’ve had at least one scent that worked out for me.
I'm hesitant to reply because it sounds pejorative and snarky, and I will be downvoted, but... you are not the target market for this. End of story.
This design is very 2025 and the rules you're judging by have long-since been thrown out the window. Most brands run on Shopify now, marketing is via myriad social channels in ways that feel insane and unintuitive, aesthetics are all over the map.
What's old is new is old is different is the same is good is bad, and what is garish to you (strangely, honestly) isn't to most; you'll see if you hang out with some young people lol, promise.
P.S. I am not young, I'm figuring this out by watching from afar HAHAHA
Yeah, her customer is gen z or millennial women and queer men. It doesn't look like where I shop, but I'm not the target demo. A lot of the beauty and fragrance world looks like this these days, particularly as you go down towards gen z.
> Most brands run on Shopify now
That site does run on Shopify.
To he blunt
this website looks like a scam website redirecter the one where you have to click on 49 ads and wait for 3 days before you get to your link the video playing immediately makes me think that's a Google ad unrelated to what the website is about the different font styles reminds me of the middle school HTML projects we had to do with each line in a different size and font face to prove that we know how to use <font face> and <font size>. All its missing is a jokerman font
She should double the price so customers wonder why hers costs so much more. Then have a sales pitch explaining the difference.
Some customers WANT to pay a premium just so they know they’re getting the best product.
Is that what the market demands, or is the market unable to differentiate?
From the site there's a huge assumption that potential customers are aware of what extrait de parfum is vs eau de parfum (or even eau de toilette!).
Might be worth a call out that these fragrances are in fact a standard above the norm.
"The highest quality fragrance money can buy" kind of thing.
Offer an eau de parfum line for price anchoring, and market segmentation. Win win.
For sure. I suggested having an eau de parfum option, but it does make things smell totally different -- much weaker, doesn't last long on the body, and can get overpowered by the alcohol carrier. Plus as a small business it'd mean having a dozen new formulations, with the associated packaging changes, inventory, etc. which makes it harder as a totally bootstrapped business. It's definitely still something to think about though, as even fragrances like a Tom Ford or Le Labo selling for $300-400 are just eau de parfums.
> But the market is flooded with eau de parfums -- which are far more diluted than a extrait -- using cheaper ingredients, selling for about the same price.
Has she tried raising prices? To signal that her product is highly quality and thus more expensive than her competition?
She has, these prices are actually lower than they were before, as most customers don't seem to care about things like concentration. Likely it's just that most aren't that informed about the differences. They'll pay more because it's Chanel or because a European perfumer made it, not because the quality is higher.
The market can’t tell high quality vs not it’s all signaling. Wine has the same problem
Funny, I was about to say the same about wine.
I’m a big coffee fan and the market has no ability toto price that either. Bad coffee can be expensive and good coffee cheap.
looks like they are trying native advertising first
That's actually been new for her, maybe the past two or so months after 10 years in business, and it seems to be working better than any other type of advertising she's done in the past.
I had the same realization but with car mechanics. If you drive a beater you want to spend the least possible on maintenance. On the other hand, if the car mechanic cares about cars and their craftsmanship they want to get everything to tip-top shape at high cost. Some other mechanics are trying to scam you and get the most amount of money for the least amount of work. And most people looking for car mechanics want to pay the least amount possible, and don't quite understand if a repair should be expensive or not. This creates a downward pressure on price at the expense of quality and penalizes the mechanics that care about quality.
Luckily for mechanics, the shortage of actual blue collar Hands-On labor is so small, that good mechanics actually can charge more.
The issue is that you have to be able to distinguish a good mechanic from a bad mechanic cuz they all get to charge a lot because of the shortage. Same thing for plumbing, electrical, HVAC, etc etc etc
But I understand your point.
Here in Atlanta Georgia, we have a ToyoTechs business. They perform maintenance on only Toyota-family automobiles. They have 2 locations, one for large trucks, one for cars, hybrids, and SUV-looking cars. Both are always filled up with customers. Some of whom drive hundreds of miles out of state to bring their vehicles exclusively there, whether the beater is a customized off-roader or a simple econobox with sentimental value.
Why? Because they are on a different incentive structure: non-comissioned payments for employees. They buy OEM parts, give a good warranty, charge fair prices, and they are always busy.
If this computer fad goes away, I'm going to open my own Toyota-only auto shop, trying to emulate them. They have 30 years of lead time on my hypothetical business, but the point stands: when people discover that high quality in this market, they stick to it closely.
People understand cars. Abstract data structures, not so much.
There are laws about what goes into a car, strict regulation. Software, not so much.
Until my boss can be prosecuted for selling untested bug ridden bad software that is what I am instructed to produce
With the introduction of insurance for covering the cost of a security breach, suddenly managers have an understanding of the value of at least the security aspect of software quality. As it impacts their premiums.
I really hope so. But I do not have much faith in insurance companies. I have seen what they have done to worker safety, made it a minefield for workers, a box ticking exercise for bosses, and done very little for worker safety.
What works for worker safety is regulation. I am afraid the same will be true for software.
The regulations are the reason the insurance policies exist. Otherwise, corporations would just ignore or cover up any breaches.
That's a particularly good strategy with Toyota, a company with both a good reputation and a huge market share.
Currently trading at a price to earnings ratio of about seven, compared to 150-800 for Tesla (depending on how you judge their book cooking)
Exactly. People on HN get angry and confused about low software quality, compute wastefulness, etc, but what's happening is not a moral crisis: the market has simply chosen the trade-off it wants, and industry has adapted to it
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
I actually disagree. I think that people will pay more for higher quality software, but only if they know the software is higher quality.
It's great to say your software is higher quality, but the question I have is whether or not is is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
The Lemon Market exists specifically when customers cannot tell, prior to receipt and usage, whether they are buying high quality or low quality.
Wow, that's actually a good argument for some kind of trial or freemium setup. Interesting.
That must be why WinRAR became so popular. :-)
That does not describe the current subscription-based software market, then, because we do try it, and we can always stop paying, transaction costs aside.
There are two costs to software: what you pay for it, and the time needed to learn how to use it. That's a big different to the original Lemon paper. You don't need to invest time in learning how to use a car, so the only cost to replacing it is the upfront cost of a new car. Worse "Time needed to learn it" understates it, because the cost replacing lemon software is often far more than just training. For example: replacing your accounting system, where you need to keep the data it has for 7 years as a tax record. Replacing a piece of software will typically cost many times the cost of the software itself.
If you look around, notice people still use Microsoft yet ransomware almost universally attacks Windows installations. This is despite everyone knowing Windows is a security nightmare courtesy of the Sony hack 2014: https://en.wikipedia.org/wiki/2014_Sony_Pictures_hack
Mind you, when I say "everyone", Microsoft's marketing is very good. A firm I worked lost $500k to a windows keyboard logger stealing banking credentials. They had virus scanners for firewalls installed of course, but they aren't a sure deference. As the technical lead for many years, I was asked about my opinion of what they could do. The answer is pretty simple: don't use Windows for banking. Buy an iPad of Android tablet, and do you safety critical stuff on there. The CEO didn't believe a tablet could be more secure than a several thousand dollar laptop when copy of Windows cost more than the tablet. Sigh.
So the answer to why don't people move away from poor quality subscription software is by the time they've figure out it's crap, the cost of moving isn't just the subscription. It's much larger than that.
The transaction costs are generally significant.
> but only if they know the software is higher quality.
I assume all software is shit in some fashion because every single software license includes a clause that has "no fitness for any particular purpose" clause. Meaning, if your word processor doesn't process words, you can't sue them.
When we get consumer protection laws that require that software does what is says on the tin quality will start mattering.
It can depend on the application/niche.
I used to write signal processing software for land mobile radios. Those radios were used by emergency services. For the most part, our software was high quality in that it gave good quality audio and rarely had problems. If it did have a problem, it would recover quickly enough that the customer would not notice.
Our radios got a name for reliability: such as feedback from customers about skyscrapers in New York being on fire and the radios not skipping a beat during the emergency response. Word of mouth traveled in a relatively close knit community and the "quality" did win customers.
Oddly we didn't have explicit procedures to maintain that quality. The key in my mind was that we had enough time in the day to address the root cause of bugs, it was a small enough team that we knew what was going into the repository and its effect on the system, and we developed incrementally. A few years later, we got spread thinner onto more products and it didn't work so well.
Don't know. The customer ran a radio network which was used by fire brigade(s?) in NY, so we weren't on the "coal face". It was about 15 years ago.
It was an interesting job. Among other things, our gear ran stage management for a couple of Olympic opening ceremonies. Reliability was key given the size of the audience. We also did gear for the USGC, covering the entire US coastline. If you placed an emergency call at sea, it was our radios that were receiving that signal and passing it into the USCG's network.
I kind of see this in action when I'm comparing products on Amazon. When comparing two products on Amazon that are substantially the same, the cheaper one will have way more reviews. I guess this implies that it has captured the majority of the market.
I think this honestly has more to do with moslty Chinese sellers engaging in review fraud, which is a rampant problem. I'm not saying non-Chinese sellers don't engage in review fraud, but I have noticed a trend that around 98% of fake or fraudulently advertised products are of Chinese origin.
If it was just because it was cheap, we'd also see similar fraud from Mexican or Vietnamese sellers, but I don't really see that.
You have to have bought the item om Amazon to review right? So these reviewers buy and return, or how does it work?
There are various ways to do the trick, sometimes they ship out rocks to create a paper trail, sometimes they take a cheap/light product and then replace the listing with something more expensive and carry over all the reviews (which is just stupid that Amazon allows but apparently they do)
If you think about it there is basically no scalable way for Amazon to ensure a seller is providing the same product over time - and to all customers.
Random sampling can make sure a product matching the description arrives. But someone familiar with it would have to carefully compare over time. And that process doesn’t scale.
One thing Walmart does right is having “buyers” in charge of each department in the store. For example fishing - and they know all the gear and try it out. And they can walk into any store and audit and know if something is wrong.
I’m sure Amazon has responsible parties on paper - but the size and rate at which the catalog changes makes this a lower level of accountability.
Luxury items however seem to buck this trend, but this is all about conspicuous consumption.
There's an analogy with evolution. In that case, what survives might be the fittest, but it's not the fittest possible. It's the least fit that can possibly win. Anything else represents an energy expenditure that something else can avoid, and thus outcompete.
I had the exact same experience trying to build a startup. The thing that always puzzled me was Apple: they've grown into one of the most profitable companies in the world on the basis of high-quality stuff. How did they pull it off?
They focused heavily on the quality of things you can see, i.e. slick visuals, high build quality, even fancy cardboard boxes.
Their software quality itself is about average for the tech industry. It's not bad, but not amazing either. It's sufficient for the task and better than their primary competitor (Windows). But, their UI quality is much higher, and that's what people can check quickly with their own eyes and fingers in a shop.
This is one of the best descriptions I have seen of Apple, very well put
"Market comes first, marketing second, aesthetic third, and functionality a distant fourth" ― Rob Walling in "Start Small, Stay Small"
Apple's aesthetic is more important than the quality (which has been deteriorating lately)
Not on Macintosh. On iPod, iPhone and iPad.
All of those were marketed as just-barely-affordable consumer luxury goods. The physical design and the marketing were more important than the specs.
By being a luxury consumer company. There is no luxury (quality) enterprise software. There is lock-in-extortion enterprise software.
These economic forces exist in math too. Almost every mathematician publishes informal proofs. These contain just enough discussion in English (or other human language) to convince a few other mathematicians in the same field that they their idea is valid. But it is possible to make errors. There are other techniques: formal step-by-step proof presentations (e.g. by Leslie Lamport) or computer-checked proofs that would be more reliable. But almost no mathematician uses these.
This is a really succinct analysis, thanks.
I'm thinking out loud but it seems like there's some other factors at play. There's a lower threshold of quality that needs to happen (the thing needs to work) so there's at least two big factors, functionality and cost. In the extreme, all other things being equal, if two products were presented at the exact same cost but one was of superior quality, the expectation is that the better quality item would win.
There's always the "good, fast, cheap" triangle but with Moore's law (or Wright's law), cheap things get cheaper, things iterate faster and good things get better. Maybe there's an argument that when something provides an order of magnitude quality difference at nominal price difference, that's when disruption happens?
So, if the environment remains stable, then mediocrity wins as the price of superior quality can't justify the added expense. If the environment is growing (exponentially) then, at any given snapshot, mediocrity might win but will eventually be usurped by quality when the price to produce it drops below a critical threshold.
You're laying it out like it's universal, in my experience there are products where people will seek for the cheapest good enough but there are also other product that people know they want quality and are willing to pay more.
Take cars for instance, if all people wanted the cheapest one then Mercedes or even Volkswagen would be out of business.
Same for professional tools and products, you save more by buying quality product.
And then, even in computer and technology. Apple iPhone aren't cheap at all, MacBook come with soldered ram and storage, high price, yet a big part of people are willing to buy that instead of the usual windows bloated spyware laptop that run well enough and is cheap.
> the cheapest one then Mercedes or even Volkswagen would be out of business
I would argue this is a bad example - most luxury cars aren't really meaningfully "better", they just have status symbol value. A mid range Honda civic or Toyota corolla is not "worse" than a Mercedes for most objective measurements.
As someone who drove both, I vehemently disagree. Stripped of logos, one is delightful, the other just nominally gets the job done.
The Mercedes has superior suspension that feels plush and smooth. Wonderful materials in the cabin that feel pleasant to the touch. The buttons press with a deep, satisfying click. The seats hug you like a soft cloud.
All of that isn’t nothing. It is difficult to achieve, and it is valuable.
All of that make the Mercedes better than a Corolla, albeit at a higher cost.
Not everyone wants the cheapest, but lemons fail and collapse the expensive part of the market with superior goods.
To borrow your example, it's as if Mercedes started giving every 4th customer a Lada instead (after the papers are signed). The expensive Mercedes market would quickly no longer meet the luxury demand of wealthy buyers and collapse. Not the least because Mercedes would start showing super-normal profits, and all other luxury brands would get in on the same business model. It's a race to the bottom. When one seller decreases the quality, so must others. Otherwise, they'll soon be bought out, and that's the best-case scenario compared to being outcompeted.
There is some evidence that the expensive software market has collapsed. In the 00s and 90s, we used to have expensive and cheap video games, expensive and cheap video editing software, and expensive and cheap office suites. Now, we have homogeneous software in every niche — similar features and similar (relatively cheap) prices. AAA game companies attempting to raise their prices back to 90s levels (which would make a AAA game $170+ in today's money) simply cannot operate in the expensive software market. First, there was consumer distrust due to broken software, then there were no more consumers in that expensive-end market segment.
Hardware you mention (iPhones, Androids, Macs, PCs) still have superior and inferior hardware options. Both ends of the market exist. The same applies to most consumer goods - groceries, clothes, shoes, jewelry, cars, fuel, etc. However, for software, the top end of the market is now non-existent. It's gone the way of expensive secondary market (resale) cars, thanks to how those with hidden defects undercut their price and destroyed consumer trust.
If by "top end" you mean "built to spec, hardened, and close to bug-free", it's alive and well in heavy manufacturing, telecommunication, automotive, aerospace, military, and medical industries. The technologies used there are not sexy (ask anyone working at Siemens or Nokia), the code wouldn't delight you, the processes are likely glacial, but there you will find software that works because it absolutely has to.
If by "top end" you mean "serves the implied user need in the best way imaginable", then modern LLMs systems are a good example. Despite the absolute mess and slop that those systems are built of, very few people come to ChatGPT and leave unsatisfied with its results.
If by "top end" you mean "beautifully engineered and maintained", think SQLite, LLVM and some OS kernels, like seL4. Those are well-written, foundational pieces of software that are not end-products in themselves, but they're built to last, studied by developers, and trusted everywhere. This is the current forefront in our knowledge of how to write software.
If by "top end" you mean "maximising profit through code", then the software in the top trading firms match this description. All those "hacker-friendly" and "tech-driven" firms run on the same sloppy code as everyone else, but they are ruthlessly optimised to make money. That's performance too.
You can carry on. For each definition of "top end", there is a real-life example of software matching it.
One can moan about the market rewarding mediocrity, but we, as technologists, all have better things to do instead of endless hand-wringing, really.
I feel your realization and still hope my startup will have an competitive edge through quality.
In this case quality also means code quality which in my coding believe should lead to faster feature development
> We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
Relevant apocrypha: https://www.youtube.com/watch?v=UFcb-XF1RPQ
If you’re trying to sell a product to the masses, you either need to make it cheap or a fad.
You cannot make a cheap product with high margins and get away with it. Motorola tried with the RAZR. They had about five or six good quarters from it and then within three years of initial launch were hemorrhaging over a billion dollars a year.
You have to make premium products if you want high margins. And premium means you’re going for 10% market share, not dominant market share. And if you guess wrong and a recession happens, you might be fucked.
Yes, I was in this place too when I had a consulting company. We bid on projects with quotes for high quality work and guaranteed delivery within the agreed timeframe. More often than not we got rejected in favor of some students who submitted a quote for 4x less. I sometimes asked those clients how the project went, and they'd say, well, those guys missed the deadline and asked for more money several times
> People want cheap
There is an exception: luxury goods. Some are expensive, but people don't mind them being overpriced because e.g. they are social status symbols. Is there such a thing like "luxury software"? I think Apple sort of has this reputation.
It depends on who is paying versus using the product. If the buyer is the user, they tend value quality more so than otherwise.
Do you drive the cheapest car, eat the cheapest food, wear the cheapest clothes, etc.?
> What I realized is that lower costs, and therefore lower quality,
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.You can do this in Scala[0], and you'll get type inference and compile time type checking, informational messages (like the compiler prints an INFO message showing the SQL query that it generates), and optional schema checking against a database for the queries your app will run. e.g.
This integrates with a high-performance functional programming framework/library that has a bunch of other stuff like concurrent data structures, streams, an async runtime, and a webserver[1][2]. The tools already exist. People just need to use them.[0] https://github.com/zio/zio-protoquill?tab=readme-ov-file#sha...
[1] https://github.com/zio
[2] https://github.com/zio/zio-http
Notice how you're still specifying List types? That's not what I'm describing.
You're also just describing a SQL mapping tool, which is also not really it either, though maybe that would be part of the runtime invisible to the user. Define a temporary table whose shape is inferred from another query, that's durable and garbage collected when it's no longer in use, and make it look like you're writing code against any other collection type, and declaratively specify the time complexity of insert, delete and lookup operations, then you're close to what I'm after.
The explicit annotation on people is there for illustration. In real code it can be inferred from whatever the expression is (as the other lines are).
I don't think it's reasonable to specify the time complexity of insert/delete/lookup. For one, joins quickly make you care about multi-column indices and the precise order things are in and the exact queries you want to perform. e.g. if you join A with B, are your results sorted such that you can do a streaming join with C in the same order? This could be different for different code paths. Simply adding indices also adds maintenance overhead to each operation, which doesn't affect (what people usually mean by) the time complexity (it scales with number of indices, not dataset size), but is nonetheless important for real-world performance. Adding and dropping indexes on the fly can also be quite expensive if your dataset size is large enough to care about performance.
That all said, you could probably get at what you mean by just specifying indices instead of complexity and treating an embedded sqlite table as a native mutable collection type with methods to create/drop indices and join with other tables. You could create the table in the constructor (maybe using Object.hash() for the name or otherwise anonymously naming it?) and drop it in the finalizer. Seems pretty doable in a clean way in Scala. In some sense, the query builders are almost doing this, but they tend to make you call `run` to go from statement to result instead of implicitly always using sqlite.
Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:
Done. Don't need any fancy support for that. Or if you want to load from a database, using the repository pattern and Kotlin this time instead of Java: That would turn into an efficient SQL query that does a WHERE ... AND ... clause. But you can also compose queries in a type safe way client side using something like jOOQ or Criteria API.> Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:
But now you've hard-coded this selection, why can't the performance characteristics also be easily parameterized and combined, eg. insert is O(1), delete is O(log(n)), or by defining indexes in SQL which can be changed at any time at runtime? Or maybe the performance characteristics can be inferred from the types of queries run on a collection elsewhere in the code.
> That would turn into an efficient SQL query that does a WHERE ... AND ... clause.
For a database you have to manually construct, with a schema you have to manually and poorly to an object model match, using a library or framework you have to painstakingly select from how many options?
You're still stuck in this mentality that you have to assemble a set of distinct tools to get a viable development environment for most general purpose programming, which is not what I'm talking about. Imagine the relational model built-in to the language, where you could parametrically specify whether collections need certain efficient operations, whether collections need to be durable, or atomically updatable, etc.
There's a whole space of possible languages that have relational or other data models built-in that would eliminate a lot of problems we have with standard programming.
There are research papers that examine this question of whether runtime optimizing data structures is a win, and it's mostly not outside of some special cases like strings. Most collections are quite small. Really big collections tend to be either caches (which are often specialized anyway), or inside databases where you do have more flexibility.
A language fully integrated with the relational model exists, that's PL/SQL and it's got features like classes and packages along with 'natural' SQL integration. You can do all the things you ask for: specify what operations on a collection need to be efficient (indexes), whether they're durable (temporary tables), atomically updatable (LOCK TABLE IN EXCLUSIVE MODE) and so on. It even has a visual GUI builder (APEX). And people do build whole apps in it.
Obviously, this approach is not universal. There are downsides. One can imagine a next-gen attempt at such a language that combined the strengths of something like Java/.NET with the strengths of PL/SQL.
Funnily enough, the combination of .NET and PL/SQL already exists today, albeit in a literal sense:
https://pldotnet.brickabode.com/cms/uploads/pldotnet_v0_99_b...
> There are research papers that examine this question of whether runtime optimizing data structures is a win
If you mean JIT and similar tech, that's not really what I'm describing either. I'm talking about lifting the time and space complexity of data structures to parameters so you don't have to think about specific details.
Again, think about how tables in a relational database work, where you can write queries against sets without regard for the underlying implementation, and you have external/higher level tools to tune a running program's data structures for better time or space behavior.
> A language fully integrated with the relational model exists, that's PL/SQL
Not a general purpose language suitable for most programming, and missing all of the expressive language features I described, like type/shape inference, higher order queries and query composition and so on. See my previous comments. The tool you mentioned leaves a lot to be desired.
Why aren’t you building these languages?
Clojure, friend. Clojure.
Other functional languages, too, but Clojure. You get exactly this, minus all the <'s =>'s ;'s and other irregularities, and minus all the verbosity...
Isn't this comprehension in Python https://www.w3schools.com/python/python_lists_comprehension.... ?
Your argument makes sense. I guess now it's your time to shine and to be the change you want to see in the world.
I wish I had the time... always "some day"...
Thus the answer to your question of why those languages don’t exist.
That would be an explanation if new object/functional/procedural languages weren't coming out every year.
I consider functional thinking and ability to use list comprehensions/LINQ/lodash/etc. to be fundamental skills in today's software world. The what, not the how!
Agreed, but it doesn't go far enough IMO. Why not add language/runtime support for durable list comprehensions, and also atomically updatable ones so they can be concurrently shared, etc. Bring the database into the language in a way that's just as easily to use and query as any other value.
Well, you can do that with LINQ + EF and embedded databases like SQL Lite or similar.
LINQ is on the right track but doesn't quite go far enough with query composition. For instance, you can't "unquote" a query within another query (although I believe there is a library that tries to add this).
EF code-first is also on the right track, but the fluent and attribute mapping are awkward, foreign key associations often have to be unpacked directly as value type keys, there's no smooth transition between in-memory native types and durable types, and schema migration could be smoother.
Lots of the bits and pieces of what I'm describing are around but they aren't holistically combined.
>lower costs, and therefore lower quality,
Many high-quality open-source designs suggest this is a false premise, and as a developer who writes high-quality and reliable software for much much lower rates than most, cost should not be seen as a reliable indicator of quality.
I see another dynamic "customer value" features get prioritized and eventually product reaches a point of crushing tech debt. It results in "customer value" features delivery velocity grinding to a halt. Obviously subject to other forces but it is not infrequent for someone to come in and disrupt the incumbents at this point.
"Quality is free"[1], luxury isn't.
Also, one should not confuse the quality of the final product and the quality of the process.
[1] https://archive.org/details/qualityisfree00cros
But do you think you could have started with a bug laden mess? Or is it just the natural progression down the quality and price curve that comes with scale
> People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality).
Sure, but what about the people who consider quality as part of their product evaluation? All else being equal everyone wants it cheaper, but all else isn't equal. When I was looking at smart lighting, I spent 3x as much on Philips Hue as I could have on Ikea bulbs: bought one Ikea bulb, tried it on next to a Hue one, and instantly returned the Ikea one. It was just that much worse. I'd happily pay similar premiums for most consumer products.
But companies keep enshittifying their products. I'm not going to pay significantly more for a product which is going to break after 16 months instead of 12 months. I'm not going to pay extra for some crappy AI cloud blockchain "feature". I'm not going to pay extra to have a gaudy "luxury" brand logo stapled all over it.
Companies are only interested in short-term shareholder value these days, which means selling absolute crap at premium prices. I want to pay extra to get a decent product, but more and more it turns out that I can't.
>There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
It's the same concept as the age old "only an engineer can build a bridge that just barely doesn't fall down" circle jerk but for a more diverse set of goods than just bridges.
I’d argue this exists for public companies, but there are many smaller, private businesses where there’s no doctrine of maximising shareholder value
These companies often place a greater emphasis on reputation and legacy Very few and far between, Robert McNeel & Associates (American) is one that comes to mind (Rhino3D), as his the Dutch company Victron (power hardware)
The former especially is not known for maximising their margins, they don’t even offer a subscription-model to their customers
Victron is an interesting case, where they deliberately offer few products, and instead of releasing more, they heavily optimise and update their existing models over many years in everything from documentation to firmware and even new features. They’re a hardware company mostly so very little revenue is from subscriptions
Maybe you could compete by developing new and better products? Ford isn't selling the same car with lower and lower costs every year.
It's really hard to reconcile your comment with Silicon Valley, which was built by often expensive innovation, not by cutting costs. Were Apple, Meta, Alphabet, Microsoft successful because they cut costs? The AI companies?
Microsoft yes, the PC market made it very hard for Apple to compete on price.
Meta and Alphabet had zero cost products (to consumers) that they leveraged to become near monopolies.
Aren’t all the AI companies believed to be providing their products below cost for now to grab market share?
> Apple
Apple's incredible innovation and attention to detail is what made them legendary and successful. Steve Jobs was legendary for both.
> Meta and Alphabet had zero cost products (to consumers) that they leveraged to become near monopolies.
What does zero cost have to do with it? The comment I responded to spoke of cutting the business's costs - quality inputs, labor, etc. - not their customers' costs. Google made a much better search engine than competitors and then better advertising engine; Facebook made the best social media network.
> Aren’t all the AI companies believed to be providing their products below cost for now to grab market share?
Again, what does that have to do with cutting costs rather than innovating to increase profit?
There’s probably another name for this
Capitalism? Marx's core belief was that capitalists would always lean towards paying the absolute lowest price they could for labor and raw materials that would allow them to stay in production. If there's more profit in manufacturing mediocrity at scale than quality at a smaller scale, mediocrity it is.
Not all commerce is capitalistic. If a commercial venture is dedicated to quality, or maximizing value for its customers, or the wellbeing of its employees, then it's not solely driven by the goal of maximizing capital. This is easier for a private than a public company, in part because of a misplaced belief that maximizing shareholder return is the only legally valid business objective. I think it's the corporate equivalent of diabetes.
In the 50s and 60s, capitalism used to refer to stakeholder capitalism. It was dedicated to maximize value for stakeholders, such as customers, employees, society, etc.
But that shifted later, with Milton Friedman, who pushed the idea of shareholder capitalism in the 70s. Where companies switched to thinking the only goal is to maximize shareholder value.
In his theory, government would provide regulation and policies to address stakeholder's needs, and companies therefore needed focus on shareholders.
In practice, lobbying, propaganda and corruption made it so governments dropped the ball and also sided to maximize shareholder value, along with companies.
The problem with your thesis is that software isn't a physical good, so quality isn't tangible. If software does the advertised thing, it's good software. That's it.
With physical items, quality prevents deterioration over time. Or at least slows it. Improves function. That sort of thing.
Software just works or doesn't work. So you want to make something that works and iterate as quickly as possible. And yes, cost to produce it matters so you can actually bring it to market.
> There’s probably another name for this
Race to the bottom
I'm a layman, but in my opinion building quality software can't really be a differentiator because anyone can build quality software given enough time and resources. You could take two car mechanics and with enough training, time, assistance from professional dev consultants, testing, rework, so and so forth, make a quality piece of software. But you'd have spent $6 million to make a quality alarm clock app.
A differentiator would be having the ability to have a higher than average quality per cost. Then maybe you're onto something.
I'm proud of you, it often takes people multiple failures before they learn to accept their worldview that regulations aren't necessary and the tragedy of Commons is a myth are wrong.
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
Disagree, the main reason so many apps are using "slow" languages/frameworks is precisely that it allows them to develop way more features way quicker than more efficient and harder languages/frameworks.
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? -- to me not really, because I have it come up on boot and forget about it until my next system update forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? -- I use Word/Excel because I've run into annoying compatibility issues enough times with LO to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
> If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
That’s irrelevant here, the fully featured product can also be fast. The overwhelming majority of software is slow because the company simply doesn’t care about efficiency. Google actively penalized slow websites and many companies still didn’t make it a priority.
> That’s irrelevant here, the fully featured product can also be fast.
So why is it so rarely the case? If it's so simple, why hasn't anyone recognized that Teams, Zoom, etc are all bloated and slow and made a hyper-optimized, feature-complete competitor, dominating the market?
Software costs money to build, and performance optimization doesn't come for free.
> The overwhelming majority of software is slow because the company simply doesn’t care about efficiency.
Don't care about efficiency at all, or don't consider it as important as other features and functionality?
> Software costs money to build, and performance optimization doesn't come for free.
Neither do caching, operational/architectural overhead, slow builds and all the hoops we jump through in order to satisfy stylistic choices. All of this stuff introduces complexity and often demands specialized expertise on top.
And it's typically not about optimization, but about not doing things that you don't necessarily have to do. A little bit of frugality goes a long way. Often leading to simpler code and fewer dependencies.
The hardware people are (actually) optimizing, trying hard to make computers fast, to a degree that it introduces vulnerabilities (like the apple CPU cache prefetching memory from arrays of pointers, which opened it up for timing attacks, or the branch prediction vulnerability on intel chips). Meanwhile we software people are piling more and more stuff into programs that aren't needed, from software patterns/paradigms to unnecessary dependencies etc.
There's also the issue of programs feeling entitled to resources. When I'm running a video game or a data migration, I obviously want to give it as many resources as possible. But it shouldn't be necessary to provide gigabytes of memory for utility programs and operative applications.
Not being free upfront isn’t the same thing as expensive.
Zoom’s got 7,412 employees a small team of say 7 employees could make a noticeable difference here and the investment wouldn’t disappear, it would help drive further profits.
> Don't care about efficiency at all
Doesn’t care beyond basic functionality. Obviously they care if something takes an hour to load, but rarely do you see considerations for people running on lower hardware than the kind of machines you see at a major software company etc.
> Zoom’s got 7,412 employees a small team of say 7 employees could make a noticeable difference here
What would those 7 engineers specifically be working on? How did you pick 7? What part of the infrastructure would they be working on, and what kind of performance gains, in which part of the system, would be the result of their work?
What consumers care about is the customer facing aspects of the business. As such you’d benchmark Zoom on various clients/plugins (Windows, Max, Android, iOS) and create a never ending priority list of issues weighted by marketshare.
7 people was roughly chosen to be able to cover the relevant skills while also being a tiny fraction of the workforce. Such efforts run into diminishing returns, but the company is going to keep creating low hanging fruit.
If you're being honest, compare Slack and Teams not with weechat, but with Telegram. Its desktop client (along with other clients) is written by an actually competent team that cares about performance, and it shows. They have enough money to produce a native client written in C++ that has fantastic performance and is high quality overall, but these software behemoths with budgets higher than most countries' GDP somehow never do.
This; "quality" is such an unclear term here.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
That's true. I meant it in a broader sense. Quality = {speed, function, lack of bugs, ergonomics, ... }.
[dead]
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
I do think there is an information problem in many cases.
It is easy to get information of features. It is hard to get information on reliability or security.
The result is worsened because vendors compete on features, therefore they all make the same trade off of more features for lower quality.
There's likely some, although it depends on the environment. The more users of the system there are, the more there are going to be reviews and people will know that it's kind of buggy. Most people seem more interested in cost or features though, as long as they're not losing hours of work due to bugs.
Some vendors even make it impossible to get information. See Oracle and Microsoft forbidding publishing benchmarks for their SQL databases.
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
If they think it is unimportant talk as if it is. It could be more polished. Do we want to impress them or just satisfy their needs?
The job it’s paid to do is satisfy regulation requirements.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
I used to work at a large company that had a lousy internal system for doing performance evals and self-reviews. The UI was shitty, it was unreliable, it was hard to use, it had security problems, it would go down on the eve of reviews being due, etc. This all stressed me out until someone in management observed, rather pointedly, that the reason for existence of this system is that we are contractually required to have such a system because the rules for government contracts mandate it, and that there was a possibility (and he emphasized the word possibility knowingly) that the managers actully are considering their personal knowledge of your performance rather than this performative documentation when they consider your promotions and comp adjustments. It was like being hit with a zen lightning bolt: this software meets its requirements exactly, and I can stop worrying about it. From that day on I only did the most cursory self-evals and minimal accomplishents, and my career progressed just fine.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
Across three jobs, I have now seen three different HR systems from the same supplier which were all differently terrible.
> the market buys bug-filled, inefficient software about as well as it buys pristine software
In fact, the realization is that the market buy support.
And that includes google and other companies that lack much of human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There is people that help me ('look ma, this is how you use google')
* There is support for the thing I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel let me do any app there...')
* And finally, actual people (that is the #1 thing that keep alive even the worst ERP on earth). This also includes marketing, sales people, etc. This are signal of having support even if is not exactly the best. If I go to enterprise and only have engineers that will be a bad signal, because well, developers then to be terrible at other stuff and the other stuff is support that matters.
If you have a good product, but there is not support, is dead.
And if you wanna fight a worse product, is smart to reduce the need to support for ('bugs, performance issues, platforms, ...') for YOUR TEAM because you wanna reduce YOUR COSTS but you NEED to add support in other dimensions!
The easiest for a small team, is just add humans (that is the MOST scarce source of support). After that, it need to be creative.
(also, this means you need to communicate your advantages well, because there is people that value some kind of support more than others 'have the code vs propietary' is a good example. A lot prefer the proprietary with support more than the code, I mean)
So you're telling me that if companies want to optimize profitability, they’d release inefficient, bug-ridden software with bad UI—forcing customers to pay for support, extended help, and bug fixes?
Suddenly, everything in this crazy world is starting to make sense.
Afaik, SAS does exactly that (haven't any experience with them personally, just retelling gossips). Also Matlab. Not that they are BAD, it's just that 95% of matlab code could be python or even fortran with less effort. But matlab have really good support (aka telling the people in charge how they are tailored to solve this exact problem).
Suddenly, Microsoft makes perfect sense!
This really focuses on the single metric that can be used try ought lifetime of a product … a really good point that keeps unfolding.
Starting an OSS product - write good docs. Got a few enterprise people interested - “customer success person” is most important marketing you can do …
I worked in a previous job on a product with 'AI' in the name. It was a source of amusement to many of us working there that the product didn't, and still doesn't use any AI.
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications not a single one of them can be swapped out with another just because it were more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised in your list because it contains 3 apps that I’ve replaced specifically due to performance issues (docker, iterm and notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
What did you replace Docker with?
Podman
Podman might have some limited API compatibility, but it's a completely different tool. Just off the bat it's not compatible with Skaffold, apparently.
That an alternate tool might perform better is compatible with the claim that performance alone is never the only difference between software.
Podman might be faster than Docker, but since it's a different tool, migrating to it would involve figuring out any number of breakage in my toolchain that doesn't feel worth it to me since performance isn't the only thing that matters.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
I swapped Terminal for iTerm2 because I wanted specific features, not because of performance. iTerm2 is probably slower for all I care.
Another example is that I use oh-my-zsh which is adds weirdly long startup time to a shell session, but it lets me use plugins that add things like git status and kubectl context to my prompt instead of fiddling with that myself.
> But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies
I'd take this one step further, 99% of the software written isn't being done with performance in mind. Even here in HN, you'll find people that advocate for poor performance because even considering performance has become a faux pas.
That means you L4/5 and beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware is possible.)
The user tolerance has changed as well because the web 2.0 "perpetual beta" and SaaS replacing other distribution models.
Also Microsoft has educated now several generations to accept that software fails and crashes.
Because "all software is the same", customers may not appreciate good software when they're used to live with bad software.
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
The used car market is market for lemons because it is difficult to distinguish between a car that has been well maintained and a car close to breaking down. However, the new car market is decidedly not a market for lemons because every car sold is tested by the state, and reviewed by magazines and such. You know exactly what you are buying.
Software is always sold new. Software can increase in quality the same way cars have generally increased in quality over the decades. Creating standards that software must meet before it can be sold. Recalling software that has serious bugs in it. Punishing companies that knowingly sell shoddy software. This is not some deep insight. This is how every other industry operates.
A hallmark of well-designed and well-written software is that it is easy to replace, where bug-ridden spaghetti-bowl monoliths stick around forever because nobody wants to touch them.
Just through pure Darwinism, bad software dominates the population :)
That's sorta the premise of the tweet, though.
Right now, the market buys bug-filled, inefficient software because you can always count on being able to buy hardware that is good enough to run it. The software expands to fill the processing specs of the machine it is running on - "What Andy giveth, Bill taketh away" [1]. So there is no economic incentive to produce leaner, higher-quality software that does only the core functionality and does it well.
But imagine a world where you suddenly cannot get top-of-the-line chips anymore. Maybe China invaded Taiwan and blockaded the whole island, or WW3 broke out and all the modern fabs were bombed, or the POTUS instituted 500% tariffs on all electronics. Regardless of cause, you're now reduced to salvaging microchips from key fobs and toaster ovens and pregnancy tests [2] to fulfill your computing needs. In this world, there is quite a lot of economic value to being able to write tight, resource-constrained software, because the bloated stuff simply won't run anymore.
Carmack is saying that in this scenario, we would be fine (after an initial period of adjustment), because there is enough headroom in optimizing our existing software that we can make things work on orders-of-magnitude less powerful chips.
[1] https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
[2] https://www.popularmechanics.com/science/a33957256/this-prog...
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
In my case I bought it because LG makes appliances that fit under the counter if you don't have much space.
It bothered me the AI BS, but the price was good and the machine works fine.
A big part of why I like shopping at Costco is that they generally don't sell garbage. Their filter doesn't always match mine, but they do have a meaningful filter.
> The AI label itself commands a price premium.
These days I feel like I'd be willing to pay more for a product that explicitly disavowed AI. I mean, that's vulnerable to the same kind of marketing shenanigans, but still. :-)
Ha! You're totally right.
You must be referring only to security bugs because you would quickly toss Excel or Photoshop if it were filled with performance and other bugs. Security bugs are a different story because users don't feel the consequences of the problem until they get hacked and even then, they don't know how they got hacked. There are no incentives for developers to actually care.
Developers do care about performance up to a point. If the software looks to be running fine on a majority of computers why continue to spend resources to optimize further? Principle of diminishing returns.
I wouldn't be so sure. People will rename genes to work around Excel bugs.
> This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
The user cannot but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
> The user cannot but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
In the end it all hinges on the users ability to assess the quality of the product. Otherwise, the user cannot judge whether an assistant recommends quality products and the assistant has an incentive to suggest poorly (e.g. sellout to product producers).
> In the end it all hinges on the users ability to assess the quality of the product
The AI can use tools to extract various key metrics from the product that is analysed. Even if we limit such metrics down to those that can be verified in various "dumb" ways we should be able to verify products much further than today.
We're currently operate in a world where new features are pushed that don't interest consumers. While they can't tell the difference between slop and not at purchase they sure can between updates. People constantly complain about stuff getting slower. But they also do get excited when things get faster.
Imo it's in part because we turned engineers into MBAs. Wherever I ask why can't we solve a problem some engineer always responds "well it's not that valuable". The bug fix is valuable to the user but they always clarify they mean money. Let's be honest, all those values are made up. It's not the job of the engineer to figure out how much profit a big fix will result in, it's their job to fix bugs.
Famously Coke doesn't advertise to make you aware of Coke. They advertise to associate good feelings. Similarly, car companies advertise to get their cars associated with class. Which is why sometimes they will advertise to people who have no chance of buying the car. What I'm saying is that brand matters. The problem right now is that all major brands have decided brand doesn't matter or brand decisions are always set in stone. Maybe they're right, how often do people switch? But maybe they're wrong, switching seems to just have the same features but a new UI that you got to learn from scratch (yes, even Apple devices aren't intuitive)
That's generally what I think as well. Yes, the world could run on older hardware, but you keep making faster and adding more CPU's so, why bother making the code more efficient?
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
IME, the problem is that FOSS consumer facing software is just about the worst in UX and design.
Technically correct, since you know it's bad because it's FOSS.
At least when talking about software that has any real world use case, and not development for developments sake.
Bad software is not cheaper to make (or maintain) in the long-term.
There are many exceptions.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
The other factor here is that in the number-go-up world that many of the US tech firms operate in, your company has to always be growing in order to be considered successful, and as long as your company is growing, future engineer time will always be cheaper than current engineering time (and should you stop growing, you are done for anyway, and you won't need those future engineers).
Thanks for the insightful counter-argument.
"In the long run, we are all dead." -- Keynes
In my experiences, companies can afford to care about good software if they have extreme demands (e.g. military, finance) or amortize over very long timeframes (e.g. privately owned). It's rare for consumer products to fall into either of these categories.
That’s true - but finding good engineers who know how to do it is more expensive, at least in expenditures.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
Maybe not, but that still leaves the question of who ends up bearing the actual costs of the bad software.
Therefore brands as guardians of quality .
the thing is - countries have set down legal rules preventing selling of food that actively harms the consumer(expired, known poisonous, addition of addictive substances(opiates) etc) to continue your food analogy.
in software the regulations can be boiled down to 'lol lmao' in pre-GDPR era. and even now i see GDPR violations daily.
My partner was diagnosed with Parkinson’s almost 5 years ago. His disease has progressed significantly in the past year, and he begun to have delusions. He also had side effects from carbidopa/levodopa, which we decided to stop, and our primary physician decided he should start on PD-5 formula 4 months ago from UINE HEALTH CENTER. He now sleeps soundly, works out frequently, and is now very active since we started him on the PD-5 formula. It doesn’t make the Parkinson’s disease go away, but it did give him a better quality of life. We got the treatment from www. uineheathcentre. com
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it’s going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB ram, and 1GN fiber.
On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I’m not talking about features that I prefer, but as an example if you load two links in Reddit in two different tabs my experience has been that it’s 50/50 if they’ll actually both load or if one gets stuck either way skeletons.
> Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch.
> On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional
I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.
I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.
I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
> heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
Ugh, I personally witnessed this. I would wait to take my break until I knew the unavoidable, unkillable AV scans had started and would peg my CPU at 100%. I wonder how many human and energy resources are wasted checking for non-existant viruses on corp hardware.
In a previous job, I was benchmarking compile times. I came in on a Monday and everything was 10-15% slower. IT had installed carbon black on my machine over the weekend, which was clearly the culprit. I sent WPA traces to IT but apparently the sales guys said there was no overhead so that was that.
I used to think that was the worst, but then my org introduced me to pegging HDD write at 100% for half an hour at a time. My dad likes to talk about how he used to turn on the computer, then go get coffee; in my case it was more like turn on machine, go for a run, shower, check back, coffee, and finally... maybe.
Every Wednesday my PC becomes so slow it is barely usable. It is the Windows Defender scans. I tried doing a hack to put it on a lower priority but my hands are tied by IT.
Same. I had nearly full administrative privs on the laptop, yet I get "Access denied" trying to deprioritize the scan. We got new hardware recently, so we should be good until the scanners catch up and consume even more resources...
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?
Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.
You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.
Sublime Text isn't an IDE though so comparing it to VS Code is comparing grapes and apples. VS Code is doing a lot more.
I disagree. Vs code uses plugins for all its heavy lifting. Even a minimal plugin setup is substantially slower to load than sublime is, which can also have an LSP plugin.
VScode isn't an IDE either, visual studio is one. After that it all depends what plugins you loaded in both of them.
>Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Do any of those do the indexing that cause the slowness? If not it's comparing apples to oranges.
Riders startup time isn’t including indexing. Indexing my entire project takes minutes but it does it in the background.
> You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
11 years ago I put in a ticket to slack asking them about their resource usage. Their desktop app was using more memory than my IDE and compilers and causing heap space issues with visual studio. 10 years ago things were exactly the same. 15 years ago, my coworkers were complaining that VS2010 was a resource hog compared to 10 years ago. My memory of loading photoshop in the early 2000’s was that it took absolutely forever and was slow as molasses on my home PC.
I don’t think it’s necessarily gotten worse, I think it’s always been pathetically bad.
Photoshop for windows 3.11 loads in a couple seconds on a 100mhz pentium. Checked two days ago.
That was 30 years ago, not 10.
"early 2000s" was at least 22 years ago, as well. Sorry if this ruins your night. 100mhz 1994 vs 1000mhz in 2000, that's the only parallel i was drawing. 10x faster yet somehow adobe...
Ah sorry - I’m in my mid 30s so my early pc experiences as a “power user” were win XP, by which point photoshop had already bolted on the kitchen sink and autodesk required a blood sacrifice to start up.
5 seconds is a lot for a machine with an M4 Pro, and tons of RAM and a very fast SSD.
There's native apps just as, if not more, complicated than VSCode that open faster.
The real problem is electron. There's still good, performant native software out there. We've just settled on shipping a web browser with every app instead.
There is snappy electron software out there too, to be fair. If you create a skeleton electron app it loads just fine. A perceptible delay but still quick.
The problem is when you load it and then react and all its friends, and design your software for everything to be asynchronous and develop it on a 0 latency connection over localhost with a team of 70 people where nobody is holistically considering “how long does it take from clicking the button to doing the thing I want it to do”
It's probably more so that any corporate Windows box has dozens of extra security and metrics agents interrupting and blocking every network request and file open and OS syscall installed by IT teams while the Macs have some very basic MDM profile applied.
This is exactly it. My Debian Install on older hardware than my work machine is relatively snappy. The real killer is the Windows Defender Scans once a week. 20-30% CPU usage for the entire morning because it is trying to scan some CDK.OUT directory (if I delete the directory, the scan doesn't take nearly as long).
This is my third high end workstation computer in the last 5 years and my experience has been roughly consistent with.
My corporate vpn app is a disaster on so many levels, it’s an internally developed app as opposed to Okta or anything like that.
I would likewise say that your experience is not universal, and that in many circumstances the situation is much worse. My wife is running an i5 laptop from 2020 and her work intranet is a 60 second load time. Outlook startup and sync are measured in minutes including mailbox fetching. You can say this is all not the app developers fault, but the crunch that’s installed on her machine is slowing things down by 5 or 10x and that slowdown wouldn’t be a big deal if the apps had reasonable load times in the first place.
> are all applications I use every day that take 20+ seconds to launch.
I suddenly remembered some old Corel Draw version circa year 2005, which had loading screen enumerating random things it loaded and was computing until a final message "Less than a minute now...". It most often indeed lasted less than a minute to show interface :).
IMO they just don't think of "initial launch speed" as a meaningful performance stat to base their entire tech stack upon. Most of these applications and even websites, once opened, are going to be used for several hours/days/weeks before being closed by most of their users
For all the people who are doubting that applications are slow and that it must just be me - here [0] is a debugger that someone has built from the ground up that compiles, launches, attaches a debugger and hits a breakpoint in the same length of time that visual studio displays the splash screen for.
[0] https://x.com/ryanjfleury/status/1747756219404779845
Odd, I tested two news sides (tagesschau.de and bbc.com) and both load in 1 - 2 seconds. Airbnb in about 4 - 6 seconds though. My reddit never gets stuck, or if it does it's on all tabs because something goes wrong on their end.
That sounds like a corporate anti-virus slowing everything down to me. vscode takes a few seconds to launch for me from within WSL2, with extensions. IntelliJ on a large project takes a while I'll give you that, but just intelliJ takes only a few seconds to launch.
Vscode is actually 10 seconds, you’re right.
I have no corp antivirus or MDM on this machine, just windows 11 and windows defender.
> This is on an i9
On which OS?
How does your vscode take 20+ seconds to launch? Mine launches in 2 seconds.
All those things takes 4 seconds to launch or load on my M1. Not great, not bad.
Even 4-5 seconds is long enough for me to honestly get distracted. That is just so much time even on a single core computer from a decade ago.
On my home PC, in 4 seconds I could download 500MB, load 12GB off an SSD, perform 12 billion cycles (before pipelining ) per core (and I have 24 of them) - and yet miro still manages to bring my computer to its knees for 15 seconds just to load an empty whiteboard.
HOW does Slack take 20s to load for you? My huge corporate Slack takes 2.5s to cold load.
I'm so dumbfounded. Maybe non-MacOS, non-Apple silicon stuff is complete crap at that point? Maybe the complete dominance of Apple performance is understated?
I use Windows alongside my Mac Mini, and I would say they perform pretty similarly (but M-chip is definitely more power efficient).
I don't use Slack, but I don't think anything takes 20 seconds for me. Maybe XCode, but I don't use it often enough to be annoyed.
I have an i9 windows machine with 64GB ram and an M1 Mac. I’d say day to day responsiveness the Mac is heads and tails above the windows machine, although getting worse. I’m not sure if the problem is the arm electron apps are getting slower or if my machine is just aging
It's Windows. I'm on Linux 99% of the time and it's significantly more responsive on hardware from 2014 than Windows is on a high end desktop from 2023. I'm not being dramatic.
(Yes, I've tried all combinations of software to hardware and accounted for all known factors, it's not caused by viruses or antiviruses).
XP was the last really responsive Microsoft OS, it went downhill from then and never recovered.
My current machine I upgraded from win10 to win11 and I noticed an across the board overnight regression in everything. I did a clean install so if anything it should have been quicker but boot times, app launch times, compile times all took a nosedive on that update.
I still think there’s a lot of blame to go around for the “kitchen sink” approach to app development where we have entire OS’s that can boot faster than your app can get off a splash screen.
Unfortunately, my users are on windows and work has no Linux vpn client so a switch isn’t happening any time soon.
Most likely the engineers at many startups only use apple computers themselves and therefore only optimize performance for those systems. It's a shame but IMO result of their incompetence and not result of some magic apple performance gains.
Yes it is and the difference isn't understated, I think everyone knows by now that Apple has run away with laptop/desktop performance. They're just leagues ahead.
It's a mix of better CPUs, better OS design (e.g. much less need for aggressive virus scanners), a faster filesystem, less corporate meddling, high end SSDs by default... a lot of things.
Qualcomm CPUs outperform Apple now, Apple was just early and had exclusivity for manufacturing 3nm at TSMC.
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions of a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive that today's systems as long as you didn't need to hit the harddrives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
I guess you don't need to wrestle with Xcode?
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't by far as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14 years old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
> You're comparing the art(!)
The art direction, modelling and animation work is mostly fine, the worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all.
Digital Foundry has an excellent video about the issues:
https://www.youtube.com/watch?v=p0rCA1vpgSw
TL;DR: the 'ideal hardware' for the Oblivion Remaster doesn't exist, even if you get the best gaming rig money can buy.
> …when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
This also happens to many other UE5 games like S.T.A.L.K.E.R. 2 where they try to push the graphics envelope with expensive techniques and most people without expensive hardware have to turn the settings way down (even use things like upscaling and framegen which further makes the experience a bit worse, at least when the starting point is very bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.
Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by the Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or rather, sillyness aside, is an attempt by studios to save resources by making the artists spend less time on faking and optimizing effects and detail that can just be brute forced by the engine.
In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.
It's like (usable) realtime global illumination is the fusion power of rendering, always just 10 years away ;)
I remember that UE4 also hyped a realtime GI solution which then was hardly used in realworld games because it had a too big performance hit.
I haven't really played it myself but it sounds like from the video you posted the remasters a bit of an outlier in terms of bad performance. Again it seems like a bad example to pull from.
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
It's strange, it visibly loading the buttons is indicative they use async technology that can use multithreaded CPUs effectively... but it's slower than the old synchronous UI stuff.
I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
Windows 3.1 wasn't checking WiFi, Bluetooth, energy saving profile, night light setting, audio devices, current power status and battery level, audio devices, and more when clicking the non-existent icon on the non-existent taskbar. Windows XP didn't have this quick setting area at all. But I do recall having the volume slider take a second to render on XP from time to time, and that was only rendering a slider.
And FWIW this stuff is then cached. I hadn't clicked that setting area in a while (maybe the first time this boot?) and did get a brief gray box that then a second later populated with all the buttons and settings. Now every time I click it again it appears instantly.
For a more balanced comparison, observe how long it takes for the new "Settings" app to open and how long interactions take, compared to Control Panel, and what's missing from the former that the latter has had for literally decades.
I'm far faster changing my default audio device with the new quick settings menu than going Start > Control Panel > Sound > Right click audio device > Set as Default. Now I just click the quick settings > the little sound device icon > chosoe a device.
I'm far faster changing my WiFi network with the new quick settings menu than going Start > Control Panel > Network and Sharing Center (if using Vista or newer) > Network Devices > right click network adapter > Connect / Disconnect > go through Wizard process to set up new network. Now I just click the quick settings, click the little arrow to list WiFi networks, choose the network, click connect. Way faster.
I'm also generally far faster finding whatever setting in the Settings menu over trying to figure out which tab on which little Control Panel widget some obscure setting is, because there's a useful search box that will pull up practically any setting these days. Sure, maybe if you had every setting in Control Panel memorized you could be faster, but I'm far faster just searching for the setting I'm looking for at the moment for anything I'm not regularly changing.
The new Settings area, now that it actually has most things, is generally a far better experience unless you had everything in Control Panel committed to muscle memory. I do acknowledge though there are still a few things that aren't as good, but I imagine they'll get better. For most things most users actually mess with on a regular basis, it seems to me the Settings app is better than Control Panel. The only thing that really frustrates me with Settings now on a regular basis is only being able to have one instance of the app open at a time, a dumb limitation.
Every time I'm needing to mess with something in ancient versions of Windows these days is now a pain despite me growing up with it. So many things nested in non-obvious areas, things hidden behind tab after tab of settings and menus. Right click that, go to properties, click that, go to properties on that, click that button, go to the Options tab, click Configure, and there you go that's where you set that value. Easy! Versus typing something like the setting you want to set into the search box in Settings and have it take you right to that setting.
But is this cache trustworthy or will it eventually lead you to click in the wrong place because the situation changed and now there's a new button making everything change place?
And even if every information takes a bit to figure out, it doesn't excuse taking a second to even draw the UI. If checking bluetooth takes a second, then draw the button immediately but disable interaction and show a loading icon, and when you get the blutooth information update the button, and so on for everything else.
As someone who routinely hops between WiFi networks, I've never seen a wrong value here.
And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?
And if we bothered keeping all that in memory, and kept using the CPU cycles to make sure it was actually accurate and up to date on the click six hours later, wouldn't people then complain about how obviously bloated it was? How is this not a constant battle of being unable to appease any critics until we're back at the Win 3.1 state of things with no Bluetooth devices, no WiFi networks, no dynamic changing or audio devices, etc?
And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
Rendering a volume slider or some icons shouldn't take half a second, regardless. e.g. speaking of Carmack, Wolfenstein: Enemy Territory hits a consistent 333 FPS (the max the limiter allows) on my 9 year old computer. That's 3 ms/full frame for a 3d shooter that's doing considerably more work than a vector wifi icon.
Also, you could keep the status accurate because it only needs to update on change events anyway, events that happen on "human time" (e.g. you plugged in headphones or moved to a new network location) last for a practical eternity in computer time, and your pre-loaded icon probably takes a couple kB of memory.
It seems absurd to me that almost any UI should fail to hit your monitor's refresh rate as its limiting factor in responsiveness. The only things that make sense for my computer to show its age are photo and video editing with 50 MB RAW photos and 120 MB/s (bytes, not bits) video off my camera.
It's not the drawing an icon to a screen that takes the half second, it's querying out to hardware on driver stacks designed for PCI WiFi adapters from the XP era along with all the other driver statuses.
It's like how Wi-Fi drivers would cause lag from querying their status, lots of poorly designed drivers and archaic frameworks for them to plug in.
And I doubt any hardware you had when Wolfenstein:ET came out rendered the game that fast. I remember it running at less than 60fps back in '03 on my computer. So slow, poorly optimized, I get better frame rates in Half Life. Why would anyone write something so buggy, unoptimized, and slow?!
You don't need to query the hardware to know the network interface is up. A higher level of the stack already knows that along with info like addresses, routes, DNS servers, etc.
IIRC it ran at 76 fps (higher than monitor refresh, one of the locally optimal frame rates for move speed/trick jumps) for me back then on something like an GeForce FX 5200? As long as you had a dedicated GPU it could hit 60 just fine. I think it could even hit 43 (another optimal rate) on an iGPU, which were terrible back then.
In any case, modern software can't even hit monitor refresh latency on modern hardware. That's the issue.
It's not just showing "is the interface up", it's showing current signal strength, showing current ssid, showing results from the recent poll of stations, etc.
And then doing the same for Bluetooth.
And then doing the same for screen rotation and rotation lock settings. And sound settings, And then another set of settings. And another set of settings. All from different places of the system configuration while still having the backwards compatibility of all those old systems.
It's not a slowness on painting it. It can do that at screen refresh rates no problem. It's a question of querying all these old systems which often result in actual driver queries to get the information.
43fps? Sure sounds slow to me. Why not 333fps on that hardware? So bloated, so slow.
You're just listing mechanisms for how it might be slow, but that doesn't really make it sensible. Why would the OS query hardware for something like screen rotation or volume? It knows these things. They don't randomly change. It also knows the SSID it's connected to and the results of the last poll (which it continuously does to see if it should move).
And yes it should cache that info. We're talking bytes. Less than 0.0001% of the available memory.
Things were different on old hardware because old hardware was over 1000x slower. On modern hardware, you should expect everything to be instantaneous.
And yet doing an ipconfig or netsh wlan show interfaces isn't always instantaneous depending on your hardware and the rest of your configuration. I can't tell you what all it's actually doing under the hood, but I've definitely seen variations of performance on different hardware.
Sometimes the devices and drivers just suck. Sometimes it's not the software's fault it's running at 43fps.
I'm hitting the little quick settings area on my exceptionally cheap and old personal laptop. I haven't experienced that slowness once. Once again I imagine the other stuff running interrupting all the OS calls and what not loading this information causes it to be slow.
I don't know what operating system you're talking about, but the bottleneck on my linux machine for asking for interfaces is the fact that stdout is write blocking.
I routinely have shy of 100 network interfaces active and `ip a` is able to query everything in nanoseconds.
Considering this whole conversation is about sometimes some people have a little bit of slowness drawing the quick settings area in Windows 11 and I gave commands like "netsh" it should be pretty dang obvious which OS we're talking about. But I guess some people have challenges with context clues.
And once again, on some Linux machines I've had over the years, doing an ip a command could hang or take a while if the device is in a bad state or being weird. It normally returns almost instantly, but sometimes has been slow to give me the information.
> And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?
Clearly better. Most of the buttons should also work instantly, most of the information should also be available instantly. The button layout is rendered instantly, so I can already figure out where I want to click without having to wait one second even if the button is not enabled yet, and by the time my mouse reaches it it will probably be enabled.
> And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
I've never seen the volume slider in Windows 98 take one second to render. Not even the start menu, which is much more complex, and which in Windows 11 often takes a second, and search results also show up after a random amount of time and shuffle the results around a few times, leading to many misclicks.
It doesn't even know if the devices are still attached (as it potentially hasn't tried interfacing them for hours) but should instantly be able to allow input to control them and fully understand their current status. Right. Makes sense.
And if you don't remember the volume slider taking several seconds to render on XP you must be much wealthier than me or have some extremely rose colored glasses. I play around with old hardware all the time and get frustrated with the unresponsiveness of old equipment with period accurate software, and had a lot of decent hardware (to me at least) in the 90s and 00s. I've definitely experienced lots of times of the start menu painting one entry after the other at launch, taking a second to roll out, seeking on disk for that third level menu in 98, etc.
Rose colored glasses, the lot of you. Go use an old 386 for a month. Tell me how much more productive you are after.
You hit on something there, I could type faster than my 2400 baud connection but barring a bad connection those connections were pretty reliable.
XP had gray boxes and laggy menus like you wouldn't believe. It didn't even do search in the start menu, and maybe that was for the best because even on an SSD its search functionality was dog slow.
A clean XP install in a VM for nostalgia's sake is fine, but XP as actually used by people for a while quickly ground to a halt because of all the third party software you needed.
The task bar was full of battery widgets, power management icons, tray icons for integrated drivers, and probably at least two WiFi icons, and maybe two Bluetooth ones as well. All of them used different menus that are slow in their own respect, despite being a 200KiB executable that looks like it was written in 1995.
And the random crashes, there were so many random crashes. Driver programmes for basic features crashed all the time. Keeping XP running for more than a day or two by using sleep mode was a surefire way to get an unusual OS.
Modern Windows has its issues but the olden days weren't all that great, we just tolerated more bullshit.
Honestly it behaves like the interface is some Electron app that has to load the visual elements from a little internal webserver. That would be a very silly way to build an OS UI though, so I don't know what Microsoft is doing.
This one drives me nuts.
I have to stay connected to VPN to work, and if I see VPN is not connected I click to reconnect.
If the VPN button hasn't loaded you end up turning on Airplane mode. Ouch.
Windows 11 shell partly uses React Native in the start button flyout. It's not a heavily optimized codebase.
That's the point. It's so bloated that an entirely local operation that should be instantaneous takes over 1 second.
No, it's a heavily pessimized codebase.
Yep. I suspect GP has just gotten used to this and it is the new “snappy” to them.
I see this all the time with people who have old computers.
“My computer is really fast. I have no need to upgrade”
I press cmd+tab and watch it take 5 seconds to switch to the next window.
That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.
Yeah, I play around with retro computers all the time. Even with IO devices that are unthinkably performant compared to storage hardware actually common at the time these machines are often dog slow. Just rendering JPEGs can be really slow.
Maybe if you're in a purely text console doing purely text things 100% in memory it can feel snappy. But the moment you do anything graphical or start working on large datasets its so incredibly slow.
I still remember trying to do photo editing on a Pentium II with a massive 64MB of RAM. Or trying to get decent resolutions scans off a scanner with a Pentium III and 128MB of RAM.
64MB is about the size of (a big) L3 cache. Today's L3 caches have a latency of 3-12ns and throughput measured in hundreds of gigabytes per second. And yet we can't manage to get responsive UIs because of tons of crud.
My modern machine running a modern OS is still way snappier while actually loading the machine and doing stuff. Sure, if I'm directly on a tty and just running vim on a small file its super fast. The same on my modern machine. Try doing a few things at once or handle some large dataset and see how well it goes.
My older computers would completely lock up when given a large task to do, often for many seconds. Scanning an image would take over the whole machine for like a minute per page! Applying a filter to an image would lock up the machine for several seconds even for a much smaller image a much simpler filter. The computer cannot even play mp3's and have a responsive word processor, if you really want to listen to music while writing a paper you better have it pass through the audio from a CD, much less think about streaming it from some remote location and have a whole encrypted TCP stream and decompression.
These days I can have lots of large tasks running at the same time and still have more responsiveness.
I have fun playing around with retro hardware and old applications, but "fast" and "responsive" are not adjectives I'd use to describe them.
I struggle because everything you're saying is your subjective truth, and mine differs.
Aside from the seminal discussion about text input latency from Dan Luu[0] there's very little we can do to disprove anything right now.
Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.
Now it feels like there's no distinction between using a computer and asking it to do something heavy. The fact that I can't hear the harddisk screaming or the fan spin up (and have it be tied to something I asked the computer to do) might be related.
It becomes expectation management at some point, and nominally a "faster computer" in those days meant that those times I asked the computer to do something the computer would finish it's work quicker. Now it's much more about how responsive the machine will be... for a while, until it magically slows down over time again.
[0]: https://danluu.com/input-lag/
> Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.
This is exactly what I'm talking about. When I'm actually using my computer, its orders of magnitude faster. Things where I'd do one click and then practically have to walk away and come back to see if it worked happen in 100ms now. This is the machine being way faster and far more responsive.
Like, OK, some Apple IIe had 30ms latency on a key press compared to 50ms on a Haswell desktop with a decent refresh rate screen or 100ms on some Thinkpad from 2017, assuming these machines aren't doing anything.
But I'm not usually doing nothing when I want to press the key. I've got dozens of other things I want my computer to do. I want it listening for events on a few different chat clients. I want it to have several dozen web pages open. I want it to stream music. I want it to have several different code editors open with linters examining my code. I want it paying attention if I get new mail. I want it syncing directories from this machine to other machines and cloud storage. I want numerous background agents handling tons of different things. Any one of those tasks would cause that Apple IIe to crawl instantly and it doesn't even have the memory to render a tiny corner of my screen.
The computer is orders of magnitude "faster", in that it is doing many times as much work much faster even when it's seemingly just sitting there. Because that's what we expect from our computers these days.
Tell me how fast a button press is when you're on a video call on your Apple IIe while having a code linter run while driving a 4K panel and multiple virtual desktops. How's its Unicode support?
The newish windows photo viewer in Win 10 is painfully slow and it renders a lower res preview first, but then the photo seems to move when the full resolution is shown. The photo viewer in windows 7 would prerender the next photo so the transition to the next one would be instant. The is for 24 megapixel photos, maybe 4mb jpegs.
So the quality has gone backwards in the process of rewriting the app into the touch friendly style. A lot of core windows apps are like that.
Note that the windows file system is much slower than the linux etx4, I don't know about Mac filesystems.
There's a problem when people who aren't very sensitive to latency and try and track it, and that is that their perception of what "instant" actually means is wrong. For them, instant is like, one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.
If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from awhile ago that did a good job showing it off that's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960
Sorry if I'm too presumptuous, however; you might be completely correct and instant is instant in your case.
Sure, but there's not limit to what people can decide to care about. There will always be people who want more speed and less latency, but the question is: are they right to do so?
I'm with the person you're responding. I use the regular suite of applications and websites on my 2021 M1 Macbook. Things seem to load just fine.
> For someone who cares about latency, instant is less than 10 milliseconds
Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.
I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.
The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.
I think you're right, maybe the disconnect is UI slowness?
I am annoyed at the startup time of programs that I keep closed and only open infrequently (Discord is one of those, the update loop takes a buttload of time because I don't use it daily), but I'm not annoyed when something I keep open takes 1-10s to open.
But when I think of getting annoyed it's almost always because an action I'm doing takes too long. I grew up in an era with worse computers than we have today, but clicking a new list was perceptibly instant- it was like the computer was waiting for the screen to catch up.
Today, it feels like the computer chugs to show you what you've clicked on. This is especially true with universal software, like chat programs, that everyone in an org is using.
I think Casey Muratori's point about the watch window in visual studio is the right one. The watch window used to be instant, but someone added an artificial delay to start processing so that the CPU wouldn't work when stepping fast through the code. The result is that, well, you gotta wait for the watch window to update... Which "feels bad".
https://www.youtube.com/watch?v=GC-0tCy4P1U
I fear that such comments are similar to the old 'a monster cable makes my digital audio sound more mellow!'
The eye percieves at about 10 hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
> The eye perceives at about 10 hz.
Not sure what this means; the eye doesn’t perceive anything. Maybe you’re thinking of saccades or round-trip response times or something else? Those are in the ~100ms range, but that’s different from whether the eye can see something.
This paper shows pictures can be recognized at 13ms, which is faster than 60hz, and that’s for full scenes, not even motion tracking or small localized changes. https://link.springer.com/article/10.3758/s13414-013-0605-z
Well if you believe that, start up a video game with a framerate limiter and set your game's framerate limit to 10 fps and tell me how much you enjoy the experience. By default your game will likely be running at either 60 fps or 120 fps if you're vertical synced (depends on your monitor's refresh rate). Make sure to switch back and forth between 10 and 60/120 to compare.
Even your average movie captures at 24 hz. Again, very likely you've never actually just compared these things for yourself back to back, as I mentioned originally.
>The eye percieves at about 10 hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
It takes effectively no effort to conduct such a study yourself. Just try re-encoding a video at different frame rates up to your monitor refresh rate. Or try looking at a monitor that has a higher refresh rate than the one you normally use.
Modern operating systems run at 120 or 144 hz screen refresh rates nowadays, I don't know if you're used to it yet but try and go back to 60, it should be pretty obivous when you move your mouse.
It really depends at what you look.
You say snappy, but what is snappy? I right now have a toy project in progress in zig that uses users perception as a core concept.
Rarely one can react to 10ms jank. But when you get to bare metal development 10ms becomes 10 million of reasonably high level instructions that can be done. Now go to website, click. If you can sense a delay from JS this means that jank is approximately 100ms; does clicking that button, really should be 100 million instructions?
When you look close enough you will find that not only it’s 100 million instructions but your operating system along with processor made tens of thousands of tricks in the background to minimize the jank and yet you still can sense it.
Today even writing in non optimized, unpopular languages like Prolog is viable because hardware is mindblowing fast, and yet some things are slow, because we utilize that speed to decrease development costs.
I notice a pattern in the kinds of software that people are complaining about. They tend to be user-facing interactive software that is either corporate, proprietary, SaaS, “late-stage” or contains large amounts of telemetry. Since I tend to avoid such software, the vast majority of software I use I have no complaints about with respect to speed and responsiveness. The biggest piece of corporate bloatware I have is Chromium which (only) takes 1-2 seconds to launch and my system is not particularly powerful. In the corporate world bloat is a proxy for sophistication, for them it is a desirable feature so you should expect it. They would rather you use several JavaScript frameworks when the job could be done with plain HTML because it shows how rich/important/fashionable/relevant/high-tech they are.
I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by majority of people in the world.
Life on an entry or even mid level windows laptop is a very different world.
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
> Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
Same thing happens with UI & Website design. When the designers and front-end devs all have top-spec MacBooks, with 4k+ displays, they design to look good in that environment.
Then you ship to the rest of the world which are still for the most part on 16:9 1920x1080 (or god forbid, 1366x768), low spec windows laptops and the UI looks like shit and is borderline unstable.
Now I don't necessarily think things should be designed for the lowest common denominator, but at the very least we should be taking into consideration that the majority of users probably don't have super high end machines or displays. Even today you can buy a brand new "budget" windows laptop that'll come with 8GB of RAM, and a tiny 1920x1080 display, with poor color reproduction and crazy low brightness - and that's what the majority of people are using, if they are using a computer at all and not a phone or tablet.
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
Oh yeah, I didn't even touch on devs being used to working on super faster internet.
If you're on Mac, go install Network Link Conditioner and crank that download an upload speed way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
When I bought my current laptop, it was the cheapest one Costco had with 8 gigs of memory, which was at the time plenty for all but specialized uses. I've since upgraded it to 16, which feels like the current standard for that.
But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.
Most cheap laptops these days ship with only one stick of RAM, and thus are only operating in single-channel mode. By adding another memory module, you can operate in dual-channel mode which can increase performance a lot. You can see the difference in performance by running a full memory test in single-channel mode vs multi-channel mode with a program like memtest86 or memtest86+ or others.
A mix of both. There are large number of websites that are inefficiently written using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
To note, people will have wildly different tolerance to delays and lag.
On the extreme, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened the laptop the closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous when for other it's 500ms too slow.
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.
This is absolutely remarkable inefficiency considering the application's core functionality (media players) was perfected a quarter century ago.
And on RhythmBox, on a 2017 laptop it works instantaneously. These big monetized apps were a huge mistake.
> These big monetized apps were a huge mistake.
It's electron. Electron was a mistake.
You're a pretty bad sample, that machine you're talking about probably cost >$2,000 new; and if it's an M-series chip; well that was a multi-generational improvement.
I (very recently I might add) used a Razer Blade 18, with i9 13950HX and 64G of DDR5 memory, and it felt awfully slow, not sure how much of that is Windows 11's fault however.
My daily driver is an M2 Macbook Air (or a Threadripper 3970x running linux); but the workers in my office? Dell Latitudes with an i5, 4 real cores and 16G of RAM if they're lucky... and of course, Windows 11.
Don't even ask what my mum uses at home, it cost less than my monthly food bill; and that's pretty normal for people who don't love computers.
One example is Office. Microsoft is going back to preloading office during Windows Boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload office. But today, what is Office doing that it needs to offload its startup to running at boot?
How long did your computer take to start up, from power off (and no hibernation, although that presumably wasn't a thing yet), the first time you got to use a computer?
How long did it take the last time you had to use an HDD rather than SSD for your primary drive?
How long did it take the first time you got to use an SSD?
How long does it take today?
Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?
> Almost everything loads instantly on my 2021 MacBook
Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.
> 5 year old mobile CPUs load modern SPA web apps with no problems.
"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".
This should sound like clearing a very low bar, but it doesn't seem to.
It depends. Can Windows 3.11 be faster than Windows 11? Sure, maybe even in most cases: https://jmmv.dev/2023/06/fast-machines-slow-machines.html
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
> All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.
People make higher-order abstractions for funzies?
we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software because it really isn't that difficult. You just can't grab electron and the lastest javascript react framework craze.
These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to javascript or python because that's what they learned first.
> These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning assembly and sticking to C++ or PERL because that's what they learned first.
Why stop at C++? Is that what you happen to be comfortable with? Couldn't you create even faster software if you went down another level? Why don't you?
Couldn't you create even faster software if you went down another level? Why don't you?
No and if you understood what makes software fast you would know that. Most software is allocating memory inside hot loops and taking that out is extremely easy and can easily be a 7x speedup. Looping through contiguous memory instead of chasing pointers through heap allocated variables is another 25x - 100x speed improvement at least. This is all after switching from a scripting language, which is about a 100x in itself if the language is python.
It isn't about the instructions it is about memory allocation and prefetching.
Sorry but it is absolutely the case that there are optimizations available to someone working in assembly that are not available to someone working in C++.
You are probably a lazy or inexperienced engineer if you choose to work in C++.
In fact, there are optimizations available at the silicon level that are not available in assembly.
You are probably a lazy or inexperienced engineer if you choose to work in assembly.
Go ahead and give me examples of what you mean.
I'm talking about speeding software up by 10x-100x by language choice, then 7x with extremely minimal adjustments (allocate memory outside of hot loops), then 25x - 100x with fairly minimal design changes (use vectors, loop through them straight).
I'm also not saying people are lazy, I'm saying they don't know that with something like modern C++ and a little bit of knowledge of how to write fast software MASSIVE speed gains are easy to get.
You are helping make my point here, most programmers don't realize that huge speed gains are low hanging fruit. They aren't difficult, they don't mean anything is contorted or less clear (just the opposite), they just have to stop rationalizing not understanding it.
I say this with knowledge of both sides of the story instead of guessing based on conventional wisdom.
So you agree there’s a trade off between developer productivity and optimization (coding in assembly isn’t worth it, but allocating memory outside of hot loops is)
You agree with my original point then?
Are you seriously replying and avoiding everything we both said? I'll simplify it for you:
Writing dramatically fast software that is 1,000x or even 10,000 times faster than a scripting language takes basically zero effort once you know how to do it and these assembly optimization are a myth that you would have already shown me if you could.
“Zero effort once you know how to do it” is another way of saying “time and effort.”
Congratulations you’ve discovered the value of abstractions!
I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write). Not my fault you stake out indefensible positions.
Your original comment was saying you have to give up features and development speed to have faster software. I've seen this claim before many times, but it's always from people rationalizing not learning anything beyond the scripting languages they learned when they got in to programming.
I explained to you exactly why this is true, and it's because writing fast software just means doing some things slightly differently with a basic awareness of what makes programs fast, not because it is difficult or time consuming. Most egregiously bad software is probably not even due to optimization basics but from recomputing huge amounts of unnecessary results over and over.
What you said back is claims but zero evidence or explanation of anything. You keep talking about assembly language, but it has nothing to do with getting huge improvements for no time investment, because things like instruction count are not where the vast majority of speed improvements come from.
I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write).
This is a hallucination that has nothing to do with your original point. The vast majority of software could be sped up 100x to 1000x easily if they were written slightly different. Asm optimizations are extremely niche with modern CPUs and compilers and the gains are minuscule compared to C++ that is already done right. This is an idea that permeates through inexperienced programmers, that asm is some sort of necessity for software that runs faster than scripting languages.
Go ahead and show me what specifically you are talking about with C++, assembly or any systems language or optimization.
Show me where writing slow software saves someone so much time, show me any actual evidence or explanation of this claim.
So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it. That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.
> that asm is some sort of necessity for software that runs faster than scripting languages.
It seems you're not tracking the flow of the conversation if you believe this is what I'm saying. I am saying there is always a way to make things faster by sacrificing other things developer productivity, feature sets, talent pool, or distribution methods. You agree with me, it turns out!
So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it.
Show me what it is I said that makes you think that.
That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.
Where did I say any of this? I could teach anyone to make faster software in an hour or two, but myths like the ones you are perpetuating make people think it's difficult or faster software is more complicated.
You originally said that making software faster 'decreases velocity and sacrifices features' but you can't explain or backup any of that.
You agree with me, it turns out!
I think what actually happened is that you made some claims that get repeated but they aren't from your actual experience and you're trying to avoid giving real evidence or explanations so you keep trying to shift what you're saying to something else.
The truth is that if someone just learns to program with types and a few basic techniques they can get away from writing slow software forever and it doesn't come at any development speed, just a little learning up front that used to be considered the basics.
Next time you reply show me actual evidence of the slow software you need to write to save development time. I think the reality is that this is just not something you know a lot about, but instead of learning about it you want to pretend there is any truth to what you originally said. Show me any actual evidence or explanation instead of just making the same claims over and over.
> I could teach anyone to make faster software in an hour or two,
Is one or two hours of two engineers' time more than zero hours, or no?
> just a little learning up front
Is a little learning more than zero learning, or no?
IMO your argument would hold a lot more weight if people felt like their software (as users) is slow, but many people do not. Save for a few applications, I would prefer they keep their same performance profile and improve their feature set than spend any time doing the reverse. And as you have said multiple times now: it does indeed take time!
If your original position was what it is now, which is "there's low hanging fruit," I wouldn't disagree. But what you said is there's no tradeoff. And of course now you are saying there is a tradeoff... so now we agree! Where any one person should land on that tradeoff is super project-specific, so not sure why you're being so assertive about this blanket statement lol.
Now learning something new for a few hours means we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity." ?
You made up stuff I didn't say, you won't back up your claims with any sort of evidence, you keep saying things that aren't relevant, what is the point of this?
This thread is john carmack saying the world could get by with cheaper computers if software wasn't so terrible and you are basically trying to argue with zero evidence that software needs to be terrible.
Why can't you give any evidence to back up your original claim? Why can't you show a single program fragment or give a single example?
Okay let's do it this way.
It's obviously true the world could get by with cheaper computers if software was more performant.
So why don't we?
For what it is worth, there is room for improvement in how people use scripting languages. I have seen Kodi extensions run remarkably slowly and upon looking at their source code to see why, I saw that everything was being done in a single thread with blocking on relatively slow network traffic. There was no concurrency being attempted at all, while all of the high performance projects I have touched in either C or C++ had concurrency. The plugin would have needed a major rewrite to speed things up, but it would have made things that took minutes take a few seconds if it were done. Unfortunately, doing the rewrite was on the wrong side of a simple “is it worth the time” curve, so I left it alone:
https://xkcd.com/1205/
Just today, I was thinking about the slow load times of a bloated Drupal site that heard partially attributable to a YouTube embed. I then found this, which claims to give a 224x performance increase over YouTube’s stock embed (and shame on YouTube for not improving it):
https://github.com/paulirish/lite-youtube-embed
In the past, I have written electron applications (I had tried Qt first, but had trouble figuring out how what I wanted after 20 hours of trying, and got what I needed from electron in 10). The electron applications are part of appliances that are based on the Raspberry Pi CM4. The electron application loads in a couple seconds on the CM4 (and less than 1 second on my desktop). Rather than using the tools web developers often use that produce absurd amounts of HTML and JS, I wrote nearly every line of HTML and JavaScript by hand (as I would have done 25 years ago) such that it was exactly what I needed and there was no waste. I also had client side JavaScript code running asynchronously after the page loaded. To be fair, I did use a few third party libraries like express and an on screen keyboard, but they were relatively light weight ones.
Out of curiosity, I did a proof of concept port of one application from electron to WebKitGTK with around 100 lines of C. The proof of concept kept nodejs running as a local express server that was accessed by the client side JavaScript running in the WebKitGTK front end via HTTP requests. This cut memory usage in half and seemed to launch slightly faster (although I did not measure it). I estimated that memory usage would be cut in half again if I rewrote the server side JavaScript in C. Memory usage would likely have dropped even more and load times would have become even quicker if I taught myself how to use a GUI toolkit to eliminate the need for client side HTML and JavaScript, but I had more important things to do than spend many more hours to incrementally improve what already worked (and I suspect many are in the same situation).
To give a final example, I had a POSIX shell script that did a few tasks, such as polling a server on its LAN for configuration updates to certain files and doing HA failover of another system were down, among other things. I realized the script iterated too slowly, so I rewrote it to launch a subshell as part of its main loop that does polling (with file locking to prevent multiple sub shells from doing polling at the same time). This allowed me to guarantee HA failover always happens within 5 seconds of another machine going down, and all it took were using concepts from C (threading and locking). They were not as elegant as actual C code (since subshells are not LWPs and thus need IPC mechanisms like file locks), but they worked. I know polling is inefficient, but it is fairly foolproof (no need to handle clients being offline when it is time for a push), robustness was paramount and development time was needed elsewhere.
In any case, using C (or if you must, C++) is definitely better than a scripting language, provided you use it intelligently. If you use techniques from high performance C code in scripting languages, code written in them often becomes many times faster. I only knew how to do things in other languages relatively efficiently because I was replicating what I would be doing in C (or if forced, C++). If I could use C for everything, I would, but I never taught myself how to do GUIs in C, so I am using my 90s era HTML skills as a crutch. However, reading this exchange (and writing this reply) has inspired me to make an effort to learn.
I mean do you think JavaScript and Python aren't easier than C++? Then why do they exist?
Javascript was made in a few weeks so that some sort of programmability could be built in to web pages. Python was made in the 90s as a more modern competition to perl for scripting.
Modern C++ and systems language didn't exist and neither was made with the intention that people would write general purpose interactive programs that leveraged computers 1,000x faster so that software could run 1,000x slower.
People conflat the insanity of running a network cable through every application with the poor performance of their computers.
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote calls. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
Presumably they wanted the telemetry. It's not clear that this was a dev-initiated switch.
Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- i'm not entirely sold on developers.
What, after all, is the difference between an `/etc/hosts` set of loop'd records vs. an ISP's dns -- as far as the software goes?
> Presumably they wanted the telemetry
Why not log them to a file and cron a script to upload the data? Even if the feature request is nonsensical, you can architect a solution that respect the platform's constraints. It's kinda like when people drag in React and Next.js just to have a static website.
someone out there now has a cool resume line item about doing real time cloud microservices on the edge
You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.
Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.
We (probably) can guess the why - tracking and data opportunities which companies can eventually sell or utilize for profit is some way.
I think it’s a little more nuanced than the broad takes make it seem.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
The proliferation of Electron apps is one of the main things. Discord, Teams, Slack, all dogshit slow. Uses over a gigabyte of RAM, and uses it poorly. There's a noticeable pause any time you do user input; type a character, click a button, whatever it is, it always takes just barely too long.
All of Microsoft's suite is garbage. Outlook, Visual Studio, OneNote.
Edge isn't slow, (shockingly) but you know what is? Every webpage. The average web page has 200 dependencies it needs to load--frameworks, ads, libraries, spyware--and each of those dependencies has a 99% latency of 2 seconds, which means on average, at least two of those dependencies takes 2 seconds to load, and the page won't load until they do.
Steam is slow as balls. It's 2025 and it's a 32 bit application for some reason.
At my day job, our users complain that our desktop application is slow. It is slow. We talk about performance a lot and how it will be a priority and it's important. Every release, we get tons of new features, and the software gets slower.
My shit? My shit's fast. My own tiny little fiefdom in this giant rat warrens is fast. It could be faster, but it's pretty fast. It's not embarrassing. When I look at a flamegraph of our code when my code is running, I really have to dig in to find where my code is taking up time. It's fine. I'm--I don't feel bad. It's fine.
I love this industry. We are so smart. We are so capable of so many amazing things. But this industry annoys me. We so rarely do any of them. We're given a problem, and the solution is some god forsaken abomination of an electron app running javascript code on the desktop and pumping bytes into and out of a fucking DOM. The most innovative shit we can come up with is inventing a virtual dumbass and putting it into everything. The most value we create is division, hate, fear, and loathing on social media.
I'm not mad. I'm just disappointed.
Your 2021 MacBook and 2020 iPhone are top of the line devices. They'll be fine.
Buy something for half that price or less, like most people would be able to, and see if you can still get the same results.
This is also why I'd recommend people with lower budgets to buy high-end second hand rather than recent mid/low tier hardware.
Online Word (or Microsoft 365, or whatever it is called) regularly took me 2 minutes to load a 120 page document. I'm being very literal here. You could see it load in real time approximately 1 page a second. And it wasn't a network issue, mind you. It was just that slow.
Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.
Try forcefully closing VSCode and your browser, and see how long it takes to open them again. The same is true for most complex webpages/'webapps' (Slack, Discord, etc).
A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.
The Nintendo Switch on a chipset that was outdated a decade ago can run Tears of the Kingdom. It's not sensible that modern hardware is anything less than instant.
That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.
Nintendo comes up with a working game first and then adds the story - BotW/TotK are post-apocalyptic so they don't have to show you too many people on screen at once.
The other way you can tell this is that both games have the same story even though one is a sequel! Like Ganon takes over the castle/Hyrule and then Link defeats him, but then they go into the basement and somehow Ganon is there again and does the exact same thing again? Makes no sense.
The framing device for The Legend of Zelda games is that it's a mythological cycle in which Link, Ganon, and Zelda are periodically reborn and the plot begins anew with new characters. It lets them be flexible with the setting, side quests, and characters as the series progresses and it's been selling games for just shy of forty years.
2021 MacBook and 2020 iPhone are not "old". Still using 2018 iPhone. Used a 2021 Macbook until a month ago.
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
It really depends on the software. I have the top-of-the-line M4 Max laptop with 128GB of memory. I recently switched from Zotero [1] to using papis [2] at the command line.
Zotero would take 30 seconds to a minute to start up. papis has no startup time as it's a cli app and searching is nearly instantaneous.
There is no reason for Zotero to be so slow. In fact, before switching I had to cut down on the number of papers it was managing because at one point it stopped loading altogether.
It's great you haven't run into poorly optimized software, but but not everyone is so lucky.
[1]: https://www.zotero.org/ [2]: https://github.com/papis/papis
You live in the UNIX world, where this insanity is far less prevalent. Here is an example of what you are missing:
https://www.pcworld.com/article/2651749/office-is-too-slow-s...
It vastly depends on what software you're forced to use.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack.
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on teams has the same delayed feeling as Slack. Every iteraction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen" but there's probably a real name out there for basically waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant, it should not take 300-500ms or so of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
Some of this is due to the adoption of React. GUI optimization techniques that used to be common are hard to pull off in the React paradigm. For instance, pre-rendering parts of the UI that are invisible doesn't mesh well with the React model in which the UI tree is actually being built or destroyed in response to user interactions and in which data gets loaded in response to that, etc. The "everything is functional" paradigm is popular for various legitimate reasons, although React isn't really functional. But what people often forget is that functional languages have a reputation for being slow...
I don't get any kind of spinner on Outlook opening emails. Especially emails which are pure text or only lightly stylized open instantly. Even emails with calendar invites load really fast, I don't see any kind of spinner graphic at all.
Running latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th gen i5, while also on a Teams call a ton of other things active on my machine.
This is also a machine with a lot of corporate security tools sapping a lot of cycles.
I guess I shall screen record it, this is new-ish windows 11 laptop.
( This might also be a "new outlook" vs "out outlook" thing? )
I am using New Outlook.
I don't doubt it's happening to you, but I've never experienced it. And I'm not exactly using bleeding edge hardware here. A several year old i5 and a Ryzen 3 3200U (a cheap 2019 processor in a cheap Walmart laptop).
Maybe your IT team has something scanning every email on open. I don't know what to tell you, but it's not the experience out of the box on any machine I've used.
I’d take 50ms but in my experience it’s more like 250.
You're probably right, I'm likely massively underestimating the time, it's long enough to be noticable, but not so long that it feels instantly frustrating the first time, it just contributes to an overall sluggishness.
Watch this https://www.youtube.com/watch?v=GC-0tCy4P1U
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
[0]: https://youtu.be/MR4i3Ho9zZY
Compare it to qutecom, or any other xmpp client.
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better people would actually use them. Its not like you can't get them to work.
As a regular user of vim, tmux and cscope for programming in C, may I say that not only do I prefer the old tools, but I use them regularly.
I can never tell if all of these comments are exaggerations to make a point, or if some people really have computers so slow that everything takes 20 seconds to launch (like the other comment claims).
I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.
I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.
It's not an exaggeration.
I have a 12 core Ryzen 9 with 64GB of RAM, and clicking the emoji reaction button in Signal takes long enough to render the fixed set of emojis that I've begun clicking the empty space where I know the correct emoji will appear.
For years I've been hitting the Windows key, typing the three or four unique characters for the app I want and hitting enter, because the start menu takes too long to appear. As a side note, that no longer works since Microsoft decided that predictability isn't a valuable feature, and the list doesn't filter the same way every time or I get different results depending on how fast I type and hit enter.
Lots of people literally outpace the fastest hardware on the market, and that is insane.
I have a 16 core Ryzen 9 with 128GB of RAM. I have not noticed any slowness in Signal. This might be caused by differences in our operating systems. It sounds like you run Windows. I run Gentoo Linux.
Mine open instantly, as long as I only have one open at a time. The power users on HN likely encounter a lot of slow loading apps, like I do.
Apple unlike the other Silicon Valley giants has figured out that latency >>> throughput. Minimizing latency is much more important for making a program "feel" fast than maximizing latency. Some of the apps I interact with daily are Slack, Teams (ugh), Gmail, and YouTube and they are all slow as dogshit.
I have a 2019 Intel MacBook and Outlook takes about five seconds to load and constantly sputters
Lightroom non-user detected
You are using a relatively high end computer and mobile device. Go and find a cheap laptop x86 and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects the performance).
What is frustrating though that until relatively recently these devices would work fine with JS heavy apps and work really well with anything that is using a native toolkit.
They're comparing these applications to older applications that loaded instantly on much slower computers.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
... and so on.
Then you install Linux and get all that without the mess that is Win11. Inefficient software is inefficient software.
[dead]
Yup, people run software on shitty computers and blame all the software.
The only slow (local) software I know is llvm and cpp compilers
Other are pretty fast
You have stories of people running 2021 MacBooks and complaining about performance. Those are not shitty computers.
The major slowdown of modern applications is network calls. Spend 50-500ms a pop for a few kilos of data. Many modern applications will spin up a half dozen blocking network calls casually.
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast but apps take seconds to load because on the backend there's a lot of round-trips to the DB and back, and the query mix is unoptimized because there are no DBAs anymore.
It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
> on countless layers of abstractions
Even worse, our bottom most abstraction layers pretend that we are running on a single core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), it's not really a lack of the industry's engineering capability to speed things up that was the core problem, it is just an instance the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it and dog-slow is just one of many the dimensions in which they're terrible.
I don’t think abundance vs speed is the right lens.
No user actually wants abundance. They use few programs and would benwfit if those programs were optimized.
Established apps could be optimized to the hilt.
But they seldom are.
> They use few programs
Yes but it's a different 'few programs' than 99% of all other users, so we're back to square one.
>No user actually wants abundance.
No, all users just want the few programs which they themselves need. The market is not one user, though. It's all of them.
But each vendor only develop a few software and generally supports only three platforms -/+ one. It’s so damning when I see projects reaching out for electron, when they only support macOS and Windows. And software like Slack has no excuse for being this slow on anything other than latest gen cpu and 1gb internet connection.
slack is shit along all sorts of dimensions (not just speed and bloat) because you're not the customer.
Users only want 5% of the features of the few programs they use. However everyone has a different list of features and a different list of programs. And so to get a market you need all the features on all the programs.
Did people make this exchange or did __the market__? I feel like we're assigning a lot of intention to a self-accelerating process.
You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is a yet another abstraction.
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? Because while abstractions like that exist (i.e. a webserver frameworks, reactivity, SQL and ORMs etc), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. These are usually in the domain/business application and often not something that made anything quicker to develop or anything, but instead created by a developer that just couldn't help themselves
I think they’re referring to Electron.
Edit: and probably writing backends in Python or Ruby or JavaScript.
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
Yep. I’m a DBRE, and can confirm, it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather the fault of poor schema and query design.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
> ORMs
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
The fact that this was seen as an acceptable design decision both by the creators, and then taken up by the industry is in an of itself a sign of a serious issue.
And text that is not a pixely or blurry mess. And Unicode.
Unicode worked since Plan9. And antialiasing it's from the early 90's.
I made a vendor run their buggy and slow software on a Sparc 20 against their strenuous complaints to just let them have an Ultra, but when they eventually did optimize their software to run efficiently (on the 20) it helped set the company up for success in the wider market. Optimization should be treated as competitive advantage, perhaps in some cases one of the most important.
> Optimization should be treated as competitive advantage
That's just so true!
The right optimizations at the right moment can have a huge boost for both the product and the company.
However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.
It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
You’re describing a situation where I - or a very smart compiler can choose when to bounds check or not to make that intelligent realization.
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
> Your understanding of how bounds checking works in modern languages and compilers is not up to date.
One I am familiar with is Swift - which does exactly this because it’s a library feature of Array.
Which languages will always be able to determine through function calls, indirect addressing, etc whether it needs to bounds check or not?
And how will I know if it succeeded or whether something silently failed?
> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong
I agree. And note this is an example of a scenario you can encounter in other forms.
> Do you have any examples at all? Or is this just speculation?
Yes. Java and python are not competitive for graphics and audio processing.
> Java and python
Java and Python are not on the same body of water, let alone the same boat.
You can see some comparisons here:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
> The bounds only need to be checked once for a for loop, not on each iteration.
This is a theoretical argument. It depends on the compiler being able to see that’s what you’re doing and prove that there is no other mutation.
> abominations of C and C++
Sounds like you don’t understand the design choices that made this languages successful.
I understand the design choices and they're crap. Choosing a programming language shouldn't be a popularity contest.
There are inevitably those who don't know how to program but are responsible for hiring those that can. Language popularity is an obvious metric with good utility for that case.
Even so you haven't provided any compelling evidence that C or C++ made it's decisions to be more appealing or more popular.
Maybe since 1980.
I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".
https://www.youtube.com/watch?v=m7PVZixO35c
It's a little bit ham-fisted, as the author was shirking decades of compile optimisations also, and it's not apples to apples as he's comparing desktop class hardware with what is essentially laptop hardware; but it's also interesting to see that a lot of the performance gains really weren't that great actually. he observes a doubling of performance in 15 years! Truth be told most people use laptops now, and truth be told 20 years ago most people used desktops, so it's not totally unfair.
Maybe we've bought a lot into marketing.
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.
Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However these days the majority of the bad attacks are social and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well there is every reason to believe that - there is unlikely to evidence going back that far).
> However these days the majority of the bad attacks are social
You're going to have to cite a source for that.
Bounds checking is one mechanism that addresses memory safety vulnerabilities. According to MSFT and CISA[1], nearly 70% of CVEs are due to memory safety problems.
You're saying that we shouldn't solve one (very large) part of the (very large) problem because there are other parts of the problem that the solution wouldn't address?
[1] https://www.cisa.gov/news-events/news/urgent-need-memory-saf...
While I do not have data comparing them, I have a few remarks:
1. Scammer Payback and others are documenting on-going attacks that involve social engineering that are not getting the attention that they deserve.
2. You did not provide any actual data on the degree to which bounds checks are “large”. You simply said they were because they are a subset of a large group. There are diseases that only affect less than 100 people in the world that do not get much attention. You could point out that the people affected are humans, which is a group that consists of all people in the world. Thus, you can say that one of these rare diseases affects a large number of people and thus should be a priority. At least, that is what you just did with bounds checks. I doubt that they are as rare as my analogy would suggest, but the point is that the percentage is somewhere between 0 and 70% and without any real data, your claim that it is large is unsubstantiated. That being said, most C software I have touched barely uses arrays for bound checks to be relevant, and when it does use arrays, it is for strings. There are safe string functions available for use like strlcpy() and strlcat() that largely solve the string issues by doing bounds checks. Unfortunately, people keep using the unsafe functions like strcpy() and strcat(). You would have better luck if you suggested people use safe string handling functions rather than suggest compilers insert bounds checks.
3. Your link mentions CHERI, which a hardware solution for this problem. It is a shame that AMD/Intel and ARM do not modify their ISAs to incorporate the extension. I do not mean the Morello processor, which is a proof of concept. I mean the ISA specifications used in all future processors. You might have more luck if you lobby for CHERI adoption by those companies.
CVEs are never written for social attacks. Which is fair what they are trying to do. However attacking the right humans and not software is easier.
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus risk of negative consequences by that same factor of X.
Don't forget the law of large numters. 5% performance hit on one system is one thing, 5% across almost all of the current computing landscape is still a pretty huge value.
It's about 5%.
Cost of cyberattacks globally[1]: O($trillions)
Cost of average data breach[2][3]: ~$4 million
Cost of lost developer productivity: unknown
We're really bad at measuring the secondary effects of our short-sightedness.
[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...
[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...
[3] https://www.ibm.com/reports/data-breach
> Cost of cyberattacks globally[1]: O($trillions)
That's a fairly worthless metric. What you want is "Cost of cyberattacks / Revenue from attacked systems."
> We're really bad at measuring the secondary effects of our short-sightedness.
We're really good at it. There's an entire industry that makes this it's core competency... insurance. Which is great because it means you can rationalize risk. Which is also scary because it means you can rationalize risk.
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
depends on whether the fact that software can be finished will ever be accepted. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't) then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does
Even the fact that value keeps increasing doesn't mean it's a good idea. It's a good idea if it keeps increasing more than other value. If a piece of software is more robust against attacks then the value in that also keeps increasing over time, possibly more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.
If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is everything down the stack, not in C.
Most programming languages have array bounds checking now.
Most programming languages are written in C, which doesn't.
Fairly sure that was OP's point.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
You only need to bounds check once before a for loop starts, not every iteration.
If they're all being inserted contiguously.
Anyway that's a form of saying "I know by reasoning that none of these will be outside the bounds, so let's not check".
>Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when NodeJS came out that bridged client and server side coding. And apparently their repos can be a bit of a security nightmare nowadays- so the minimalist languages with limited codebase do have their pros.
It's more like 100,000X.
Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.
But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.
The problem is 1000xers are a rarity.
The software desktop users have to put up with is slow.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
1000x referred to the hardware capability, and that's not a rarity that is here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
The sad thing is that even running DOS software in DOSBox (or in QEMU+FreeDOS), or Amiga software in UAE, is much faster than any native software I have run in many years on any modern systems. They also use more reasonable amounts of storage/RAM.
Animations is part of it of course. A lot of old software just updates the screen immediately, like in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster for instance, but it does not magically fix all apps unfortunately.
The realization that a string could hold a gigabyte of text might have killed off null terminated strings, and saved us all a f@ckton of grief
I don't think it's that deep. We are just stuck with browsers now, for better and worse. Everything else trails.
We're stuck with browsers now until the primary touch with the internet is assistants / agent UIs / chat consoles.
That could end up being Electron (VS Code), though that would be a bit sad.
I think it'd be pretty funny if to book travel in 2035 you need to use a travel agent that's objectively dumber than a human. We'd be stuck in the eighties again, but this time without each other to rely on.
Of course, that would be suicide for the industry. But I'm not sure investors see that.
I don't think we are gonna go there. Talking is cumbersome. There's a reason, besides social anxiety that people prefer to use self-checkout and electronically order fastfood. There are easier ways to do a lot of things than with words.
I'd bet on maybe ad hoc ai designed ui-s you click but have a voice search when you are confused about something.
If you know what you want then not talking to a human is faster. However if you are not sure a human can figure out. I'm not sure I'd trust a voice assistant - the value in the human is an informed opinion which is hard to program, but it is easy to program a recommendation for whatever makes the most profit. Of course humans often don't have an informed opinion either, but at least sometimes they do, and they will also sometimes admit it when they don't.
> the value in the human is an informed opinion which is hard to program
I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.
20 years ago when I was at McDonalds there would be several customers per shift (so many 1 in 500?) who didn't know what they wanted and asked for a recommendation. Since I worked there I ate there often enough to know if the special was something I liked or not.
Bless your souls. I'm not saying it doesn't happen. I just personally had only bad experiences so I actively avoid human interactive input in my commercial activity.
> I actively avoid human interactive input in my commercial activity
Not to mention that the "human input" can be pre-scripted to urge you to purchase more, so it's not genuinely a human interaction, it's only a human delivering some bullshit "value add" marketing verbiage.
Search is being replaced by LLM chat. Agent workflows are going to get us to a place where people can rally software to their own purposes. At that point, they don't have to interact with the web front end, they can interact with their own personal front-end that is able to navigate your backend.
Today a website is easier. But just like there's a very large percentage of people doing a great many things from their phone instead of tying themselves to a full-blown personal computer, there will be an increasing number of people who send their agents off to get things done. In that scenario, the user interface is further up the stack than a browser, if there's a browser as typically understood in the stack at all.
Clock speeds are 2000x higher than the 80s.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that are constantly allocating memory to any small operation and pointer chasing ever variable because the type is dynamic is part of the problem, then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
Since 1980 maybe. But since 2005 it increased maybe 5x and even that's generous. And that's half of the time that passed and two decades.
https://youtu.be/m7PVZixO35c?si=px2QKP9-80hDV8Ui
2005 was Pentium 4 era.
For comparison: https://www.cpubenchmark.net/compare/1075vs5852/Intel-Pentiu...
That's about a 168x difference. That was from before Moores law started petering out.
For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.
It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.
As the other guy said, top of the line CPUs today are roughly ~100x faster than 20 years ago. A single core is ~10x faster (in terms of instructions per second) and we have ~10x the number of cores.
And the memory quantity, memory speed, disk speed are also vastly higher now than 20 years ago.
> the 1000Xers kinda ruined things for the rest of us
Robert Barton (of Burroughs 5000 fame) once referred to these people as “high priests of a low cult.”
I think on year 2001 GHz CPU should be a performance benchmark that every piece of basic non-high performance software should execute acceptably on.
This is kind of been a disappointment to me of AI when I've tried it. This has kind of been a disappointment to me of AI when I've tried it. Llm should be able to Port things. It should be able to rewrite things with the same interface. It should be able to translate from inefficient languages to more efficient ones.
It should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.
Heck I remember powerbuilder in the mid 90s running pretty well on 200 mhz CPUs. It doesn't even really interpreted stuff. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?
My limited experience was trying to take the unix sed source code and have AI port it into a jvm language, and it could do the most basic operations, but utterly failed at even the intermediate sed capabilities. And then optimize? Nope
Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.
I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.
Shower thoughts include whether there are languages that have features, other than through their popularity and representation in training corpuses, help us get from natural language to efficient code?
I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.
It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.
And this is JavaScript. And you. are. going. to. LOVE IT!
So I've worked for Google (and Facebook) and it really drives the point home of just how cheap hardware is and how not worth it optimizing code is most of the time.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "mili-SWEs" meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people or hire fewer people but get more hardware within their current budgets.
I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands. So if you spend 1 SWE year working on optimization acrosss your project and you're not saving 5000 CPU cores, it's a net loss.
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old terminals from 30-40+ years ago that were text-based had some incredibly efficent interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
You can run a thought experiment imagining an alternative universe where human resource were directed towards optimization, and that alternative universe would look nothing like ours. One extra engineer working on optimization means one less engineer working on features. For what exactly? To save some CPU cycles? Don’t make me laugh.
Except you’re self selecting for a company that has high engineering costs, big fat margins to accommodate expenses like additional hardware, and lots of projects for engineers to work on.
The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economical calculus behind, most people just do “what Google does”, which explains a lot of the disfunction.
I think the parent's point is that if Google with millions of servers can't make performance optimization worthwhile, then it is very unlikely that a smaller company can. If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
> The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
That's debatable. Performance optimization almost always lead to complexity increase. Doubled performance can easily cause quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
> it is very unlikely that a smaller company can.
I think it's the reverse: a small company doesn't have the liquidity, buying power or ability to convert more resource into more money like Google.
And of course a lot of small companies will be paying Google with a fat margin to use their cloud.
Getting by with less resources, or even on-premise reduced hardware will be a way bigger win. That's why they'll pay a DBA full time to optimize their database needs to reduce costs 2 to 3x the salary. Or have full team of infra guys mostly dealing with SRE and performance.
> If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
And with client side software, compute costs approach 0 (as the company isn’t paying for it).
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
The key part here is "machine utilization" and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects but even early on in my time at Google Borg had already adopted shared machine usage and therew was a whole system of resource quota implemented via cgroups.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization" because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That could be double or tripling effort.
> I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands.
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
Maybe not to the same extent, but an AWS EC2 m5.large VM with 2 cores and 8 GB RAM costs ~$500/year (1 year reserved). Even if your engineers are being paid $50k/year, that's the same as 100 VMs or 200 cores + 800 GB RAM.
Google doesn't come up with better compression and binary serialization formats just for fun--it improves their bottom line.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
A subtext here may be his current AI work. In OP, Carmack is arguing, essentially, that 'software is slow because good smart devs are expensive and we don't want to pay for them to optimize code and systems end-to-end as there are bigger fish to fry'. So, an implication here is that if good smart devs suddenly got very cheap, then you might see a lot of software suddenly get very fast, as everyone might choose to purchase them and spend them on optimization. And why might good smart devs become suddenly available for cheap?
It's related to a thread from yesterday, I'm guessing you haven't seen it:
https://news.ycombinator.com/item?id=43967208 https://threadreaderapp.com/thread/1922015999118680495.html
I think its a bad argument though. If we had to stop with the features for a little while and created some breathing room, the features would come roaring back. There'd be a downturn sure but not a continuous one.
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
We have self driving cars, amazing advancement in computer graphics, dead reckoning of camera position from visual input...
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
> single core performance has not improved.
Single core performance has improved, but at a much slower rate than I experienced as a kid.
Over the last 10 years, we are something like 120% improvement in single core performance.
And, not for nothing, efficiency has become much more important. More CPU performance hasn't been a major driving factor vs having a laptop that runs for 12 hours. It's simply easier to add a bunch of cores and turn them all off (or slow them down) to gain power efficiency.
Not to say the performance story would be vastly different with more focus on performance over efficiency. But I'd say it does have an effect on design choices.
300% single core performance from Haswell (2015) to today.
Single core performance has improved about 10x in 20 years
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomena is innovative, and all of these objectively are.
> * Video streaming services
Netflix video streaming launched in 2007.
> * VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
I used VS2005 a little bit in the past few years, and I was surprised to see that it contains most of the features that I want from an IDE. Honestly, I wouldn't mind working on a C# project in VS2005 - both C# 2.0 and VS2005 were complete enough that they'd only be a mild annoyance compared to something more modern.
> partial autopilot features aside from 1970's cruise control
Radar cruise control was a fairly common option on mid-range to high-end cars by 2007. It's still not standard in all cars today (even though it _is_ standard on multiple economy brands). Lane departure warning was also available in several cars. I will hand it to you that L2 ADAS didn't really exist the way it does today though.
...TomTom just getting off the ground
TomTom was founded in 1991 and released their first GPS device in 2004. By 2007 they were pretty well established.
I worked for a 3rd party food delivery service in the summer of 2007. Ordering was generally done by phone, then the office would text us (the drivers) order details for pickup & delivery. They provided GPS navigation devices, but they were stand-alone units that were slower & less accurate than modern ones, plus they charged a small fee for using it that came out of our pay.
Your post seems entirely anachronistic.
2007 is the year we did get video streaming services: https://en.wikipedia.org/wiki/BBC_iPlayer
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
Sublime Text was out by 2008. Its spiritual predecessor, TextMate, was out a few years before that.
And of course, Vim and Emacs were out long before that.
most of that list is iteration, not innovation. like going from "crappy colour printer" to "not-so-crappy colour printer"
The future is unevenly distributed.
> Video streaming services
We watched a stream of the 1994 World Cup. There was a machine at MIT which forwarded the incoming video to an X display window
and we could watch it from several states away. (The internet was so trusting in those days.)To be sure, it was only a couple of frames per second, but it was video, and an audience collected to watch it.
> EV Cars (that anyone wanted to buy)
People wanted to buy the General Motors EV1 in the 1990s. Quoting Wikipedia, "Despite favorable customer reception, GM believed that electric cars occupied an unprofitable niche of the automobile market. The company ultimately crushed most of the cars, and in 2001 GM terminated the EV1 program, disregarding protests from customers."
I know someone who managed to buy one. It was one of the few which had been sold rather than leased.
>netflix
>steam
>Sublime (Of course ed, vim, emacs, sam, acme already existed for decades by 2007)
>No they weren't TomTom already existed for years, GPS existed for years
>You're right that they already existed
>Again, already existed, glad we agree
>Tech was already there just putting it in a phone doesn't count as innovation
>NASA was driving electric cars on the moon while Elon Musk was in diapers
>I was doing that in the early 80s, but Skype is a fine pre 2007 example thanks again >Your right we didn't have 4k displays in 2007, not exactly a software innovation. This is a good example of a hardware innovation used to sell essentially the same product >? Are you sure you didn't have a bad printer there have been good color printers since the 90s let alone 2007. The price to performance arguably hasn't changed since 2007 you are just paying more in running costs than upfront. >This is definitely hardware. Scripting language 3.0 or FOTM framework isn't innovative in that there is no problem being solved and no economic gain, if they didn't exist people would use something else and that would be that. With AI the big story was that there WASN'T a software innovation and that what few innovation do exist will die to the Bitter lesson
There has been a lot of innovation - but it is focused to some niche and so if you are not in a niche you don't see it and wouldn't care if you did. Most of the major things you need have already invented - I recall word processors as a kid, so they for sure date back to the 1970s - we still need word processors and there is a lot of polish that can be added, but all innovation is in niche things that the majority of us wouldn't have a use for if we knew about it.
Of course innovation is always in bits and spurts.
This is exactly the point. People ignore that "bloat" is not (just) "waste", it is developer productivity increase motivated by economics.
The ability to hire and have people be productive in a less complicated language expands the market for workers and lowers cost.
I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its, "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
Point 1 is why growth/debt is not a good economic model in the long run. We should have a care & maintenance focused economy and center our macro scale efforts on the overall good of the human race, not perceived wealth of the few.
If we focused on upkeep of older vehicles, re-use of older computers, etc. our landfills would be smaller proportional to 'growth'.
I'm sure there's some game theory construction of the above that shows that it's objectively an inferior strategy to be a conservationist though.
I sometimes wonder how the game theorist would argue with physics.
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
It's an inherently serial problem and regulations require it to be that way. Users who submit first want their orders to be the one that crosses.
I think it is the temporal aspect of order matching - for exchanges it is an inherently serial process.
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
The order matching engine is mostly about updating an in-memory order book representation.
It is rarely the case that high volume transaction processing facilities also need to deal with deeply complex transactions.
I can't think of many domains of business wherein each transaction is so compute intensive that waiting for I/O doesn't typically dominate.
HFT would love to do more complex calculations for some of their trades. They often make the compromise of using a faster algorithm that is known to be right only 60% of the time vs the better but slower algorithm that is right 90% of the time.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
In a perfect world, maximizing (EV/op) x (ops/sec) should be done for even user software. How many person-years of productivity are lost each year to people waiting for Windows or Office to start up, finish updating, etc?
I work in card payments transaction processing and IO dominates. You need to have big models and lots of data to authorize a transaction. And you need that data as fresh as possible and as close to your compute as possible... but you're always dominated by IO. Computing the authorization is super cheap.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack.
The Electron Application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into a MS Teams meeting without installing.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Wine doesn't even run Office, there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.
Of course Wine can run office! At least Word and Excel I have used under Wine. They're probably some of the primary targets of compatibility work.
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
I think you’re right in that it’s an economics problem, but you’re wrong about which one.
For me this is a clear case of negative externalities inflicted by software companies against the population at large.
Most software companies don’t care about optimization because they’re not paying the real costs on that energy, lost time or additional e-waste.
Is there any realistic way to shift the payment of hard-to-trace costs like environmental clean-up, negative mental or physical health, and wasted time back to the companies and products/software that cause them?
It's the kind of economics that shifts the financial debt to accumulating waste, and technical debt, which is paid for by someone else. It's basically stealing. There are --of course-- many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
I feel like the argument is similar to that of all corporate externality pushes.
For example "polluting the air/water, requiring end-users to fill landfills with packaging and planned obscolescence" allows a company to more cheaply offer more products to you as a consumer.. but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills and environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "Fell off the back of a truck" is in a position to offer you lower costs and greater variety, as well. Aren't they?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined via polution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it is theft of a shared resource and tragedy of the commons.
> It's basically stealing.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
> has been externalized to customers in search of ill-gotten profits.
'Externality' does not mean 'thing I dislike'. If it is the customers running the software or waiting the extra couple of seconds, that's not an externality. By definition. (WP: "In economics, an externality is an indirect cost (external cost) or benefit (external benefit) to an uninvolved third party that arises as an effect of another party's (or parties') activity.") That is just the customers picking their preferred point on the tradeoff curves.
It's like ignoring backwards compatibility. That is really cheap since all the cost is pushed to end-users (that have to relearn the UI) or second/third-party developers (that have to rewrite their client code to work with a new API). But it's OK since everyone is doing it and also without all those pointless rewrites many of us would not have a job.
> without all those pointless rewrites many of us would not have a job.
I hear arguments like this fairly often. I don't believe it's true.
Instead of having a job writing a pointless rewrite, you might have a job optimizing software. You might have a different career altogether. Having a job won't go away: what you do for your job will simply change.
Also offloaded to the miserable devs maintaining the system.
> It's basically stealing
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
From what I’m seeing people do on their computers, it barely changed from what they’ve been doing on their pentium 4 one. But now, with Electron-based software and the generals state of Windows, you can’t recommend something older than 4 years. It’s hard to not see it as stealing when you have to buy a 1000+ laptop, when a 400 one could easily do the job if the software were a bit better.
Most people today could be using excel '98 and be no less productive.
In my SO's job (HR), it's basically Word, Excel and email. And nothing more than what was available around 2005 other than some convenient utilities.
It’s only a tradeoff for the user if the user find the added features useful.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, which for many users useful new features became far and few between some time ~15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would’ve almost certainly been happier had 80-90% of the feature work done in that time instead been bug fixes and optimization.
would you spend 100 years writing the perfect editor optimizing every single function and continueously optimizing and when will it ever be complete? No you wouldn't. Do you use python or java or C? Obviously, that can be optimized if you wrote in assembly. Practice what you preach, otherwise you'd be stealing.
Not really stealing. You could off course build software that is more optimized and with the same features but at a higher cost. Would most buyers pay twice the price for a webb app that loads in 1 sec instead of 2? Probably not.
Try loading slack and youtube on a 4 year old laptop. It’s more in the 10s, and good luck if you only have 8GB of ram.
> Do you have someone spend extra time optimising your software or do you have them produce more functionality
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
Efficiency is critical to my everyday life. For example, before I get up from my desk to grab a snack from the kitchen, I'll bring any trash/dishes with me to double the trip's benefits. I do this kind of thing often.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
Ultimately it's a demand problem. If consumer demands more performant software, they would pay a premium for it. However, the opposite is more true. They would prefer an even less performant version if it came with a cheaper price tag.
You have just explained how enshitification works.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less. Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is because it's the most likely path to be able to _keep_ adding features to it as to not be the less feature rich offering. People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
Every time I offer alternatives to slow hardware, people find a missing feature that makes them stick to what they're currently using. Other times the features are there but the buttons for it are in another place and people don't want to learn something new. And that's for free software, with paid software things become even worse because suddenly the hours they spend on loading times is worthless compared to a one-time fee.
Complaining about slow software happens all the time, but when given the choice between features and performance, features win every time. Same with workflow familiarity; you can have the slowest, most broken, hacked together spreadsheet-as-a-software-replacement mess, but people will stick to it and complain how bad it is unless you force them to use a faster alternative that looks different.
Every software you use has more bloat than useful features? Probably not. And what's useless to one user might be useful to another.
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, in “features vs performance” decision is also a signal about where the developers’ priorities lay: adding new users or keeping old ones happy. So, it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
No way.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
> You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case. I.e. businesses purchase software (or has bespoke software developed). Then they pay for fixes/features/improvements. There is often a direct communication between the buyer and the developer (whether it's off-the shelf, inhouse or made to spec). I'm in this business and the dialog is very short "great work adding feature A. We want feature B too now. And oh the users say the software is also a bit slow can you make it go faster? Me: do you want feature B or faster first? Them (always) oh feature B. That saves us man-weeks every month". Then that goes on for feature C, D, E, ...Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
Unfortunately, bloated software passes the costs to the customer and it's hard to evaluate the loss.
Except your browser taking 180% of available ram maybe.
By the way, the world could also have some bug free software, if anyone could afford to pay for it.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for open tabs that aren't currently open is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking content with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool but it also dated in a matter of a few years. One moment we were running doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II pro. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
> Time is the one thing that isn't cheap here.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
> I just wish developers cared a little bit more than they do.
Ask the nice product owner to stop crushing me with their deadlines and I'll happily oblige.
> The hardware is dirt cheap. Programmers aren't cheap.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most of the user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide usage is maybe 5x (and less than 2x for desktop), while the difference between well optimized and poorly optimized software can be orders of magnitude easily (milliseconds vs seconds).
And if you are bottlenecked by network latency, then the CPU might not even matter.
> The hardware is dirt cheap.
Maybe to you.
Meanwhile plenty of people are living paycheck-to-paycheck and literally cannot afford a phone, let alone a new phone and computer every few years.
> The hardware is dirt cheap.
It's not, because you multiply that 100% extra CPU time by all of an application's users and only then you come to the real extra cost.
And if you want to pick on "application", think of the widely used libraries and how much any non optimization costs when they get into everything...
Your whole reply is focused at business level but not everybody can afford 32GB of RAM just to have a smooth experience on a web browser.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with Pentium II with a 3dfx graphics card. And yet it demands so much more - not a lot by today standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
The game is ported to switch, and it does run slow when you do big combos. You can feel it visually to the point that it's a bit annoying.
It is made in lua using love2d. That helped the developers and comes with a cost in minimal requirements (even if they aren't much for a game released in 2024).
One way to think about it is: If we were coding all our games in C with no engine, they would run faster, but we would have far fewer games. Fewer games means fewer hits. Odds are Balatro wouldn't have been made, because those developer hours would've been allocated to some other game which wasn't as good.
Balatro was started in vacation time and underwent a ton of tweaking: https://localthunk.com/blog/balatro-timeline-3aarh So if it had to be written in C, probably neither of those would have happened.
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. Today I am convinced that's true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can it even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux) then there's a solid chance that the amount of bloat and especially move to online-only spreadsheets would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asynchronous DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if internet goes down, then the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
[0]: https://news.ycombinator.com/item?id=43971960
Sorry, don't want to go back to a time where I could only edit ASCII in a single font.
Do I like bloat? No. Do I like more software rather than less? Yes! Unity and Unreal are less efficient than custom engines but there are 100x more titles because that tradeoff of efficiency of the CPU vs efficiency of creation.
The same is true for website based app (both online and off). Software ships 10x faster as a web page than as a native app for (windows/mac/linux/android/ios). For most, that's all I need. Even for native like apps, I use photopea.com over photoshop/gimp/krita/affinity etc because it's available everywhere no matter which machine I use or who's machine it is. Is it less efficient running in JS in the browser? Probaby. Do I care? No
VSCode, now the most popular editor in the worlds (IIRC) is web-tech. This has so many benefits. For one, it's been integrated into 100s of websites, so this editor I use is available in more places. It's using tech more people know so more extension that do more things. Also, probably arguably because of JS's speed issues, it encouraged the creation of the Language Server Protocol. Before this, every editor rolled their own language support. The LSP is arguably way more bloat than doing it directly in the editor. I don't care. It's a great idea, way more flexible. Any language can write one LSP and then all editors get support for that language.
The world DOES run on older hardware.
How new do you think the CPU in your bank ATM or car's ECU is?
Some of it does.
The chips in everyones pockets do a lot of compute and are relatively new though.
I'd be willing to bet that even a brand new iPhone has a surprising number of reasonably old pieces of hardware for Bluetooth, wifi, gyroscope, accelerometer, etc. Not everything in your phone changes as fast as the CPU.
Well I know the CPU in my laptop is already over 10 years old and still works good enough for everything I do.
My daily drivers at home are an i3-540 and and Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
> I can watch videos
Can you watch H.265 videos? That's the one limitation I regularly hit on my computer (that I got for free from some company, is pretty old, but is otherwise good enough that I don't think I'll replace it until it breaks). I don't think I can play videos recorded on modern iPhones.
Yes, they play just fine with Gnome Videos or VLC. Both machines have a GeForce GT 710 on them.
There's a decent chance something in the room you're in right now is running an 8051 core.
Doom can run on Apple's Lightning to HDMI adapter.
A USB charger is more powerful than the Apollo Guidance Computer: https://web.archive.org/web/20240101203337/https://forresthe...
Related: I wonder what cpu Artemis/Orion is using
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
https://www.eetimes.com/comparing-tech-used-for-apollo-artem...
> fancy radiation-hardened versions
Ha! What's special about rad-hard chips is that they're old designs. You need big geometries to survive cosmic rays, and new chips all have tiny geometries.
So there are two solutions:
1. Find a warehouse full of 20-year old chips.
2. Build a fab to produce 20-year old designs.
Both approaches are used, and both approaches are expensive. (Approach 1 is expensive because as you eventually run out of chips they become very, very valuable and you end up having to build a fab anyway.)
There's more to it than just big geometries but that's a major part of the solution.
I'm not sure what artemis or orion are, but you can blame defense contractors for this. Nobody ever got fired for hiring IBM or Lockheed, even if they deliver unimpressive results at massive cost.
Put a 4 nm CPU into something that goes to space and see how long it would take to fail.
One of the tradeoffs of radiation hardening is increased transistor size.
Cost-wise it also makes sense - it’s a specialized, certified and low-volume part.
I don't disagree that the engineering can be justified. But you don't need custom hardware to achieve radiation hardening, much less hiring fucking IBM.
And to be clear, I love power chips. I remain very bullish about the architecture. But as a taxpayer reading this shit just pisses me off. Pork-fat designed to look pro-humanity.
> But you don't need custom hardware to achieve radiation hardening
Citation needed
> much less hiring fucking IBM
It's an IBM designed processor, what are you talking about?!
Powerplants and planes still run on 80s hardware.
Modern planes do not, and many older planes have been retrofitted, in whole or in part, with more modern computers.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.
Sure, if you think the world consists of cash transactions and whatever a car needs to think about.
If we're talking numbers, there are many, many more embedded systems than general purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores on our phones are supported by a myriad of ancilliary chips that are definitely not cutting edge.
We aren't talking numbers, though. Who cares about embedded? I mean that literally. This is computation invisible by design. If that were sufficient we wouldn't have smartphones.
In a way, and for a long time, smartphones were/are defined as embedded devices.
I still don't see how one can classify a smartphone as a general-purpose computing device, even though they have enough computing power as a laptop.
HN: Yeah! We should be go back to writing optimized code that fully uses the hardware capabilities!
Also HN: Check this new AI tool that consumes 1000x more energy to do the exact same thing we could already do, but worse and with no reproducibility
Google the goomba fallacy
.NET has made great strides in this front in recent years. Newer versions optimize cpu and ram usage of lots of fundamentals, and introduced new constructs to reduce allocations and cpu for new code. One might argue they were able because they were so bad, but it’s worth looking into if you haven’t in a while.
Wirth's Law:
>software is getting slower more rapidly than hardware is becoming faster.
https://en.wikipedia.org/wiki/Wirth%27s_law
Unfortunately there is a distinct lack of any scientific investigation or rigorous analysis into the phenomenon allegedly occurring that is called “Wirth’s Law” unlike, say, Moore’s Law despite many anecdotal examples. Reasoning as if it were literally true leads to absurd conclusions that do not correspond with any reality that I can observe so I am tempted to say that as a broad phenomenon it is obviously false and that anyone who suggests otherwise is being disingenuous. For it to be true there would have to have been no progress whatever in computing-enabled technologies, yet the real manifestations of the increase in computing resources and the exploitation thereof permeates, alters and invades almost every aspect of society and of our personal day to day lives at a constantly increasing rate.
Call it the X-Windows factor --- software gets more capable/more complex and there's a constant leap-frogging (new hardware release, stuff runs faster, software is written to take advantage of new hardware, things run more slowly, software is optimized, things run more quickly).
The most striking example of this was Mac OS X Public Beta --- which made my 400MHz PowerPC G3 run at about the same speed as my 25 MHz NeXT Cube running the quite similar OPENSTEP 4.2 (just OS X added Java and Carbon and so forth) --- but each iteration got quicker until by 10.6.8, it was about perfect.
Is there or could we make an iPhone-like that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
We have much hardware on the secondary market (resale) that's only 2-3x slower than pristine new primary market devices. It is cheap, it is reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it doesn't run bloated software anymore. And I don't think we can make non-bloated software for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
> Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Producing games doesn't cost anything on a per-unit basis. That's not at all the reason for low quality.
Games could cost $1000 per copy and big game studios (who have investors to worry about) would still release buggy slow games, because they are still going to be under pressure to get the game done by Christmas.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
You are right, but you picked a game by a studio known for its technical expertise, with plenty of points to prove about quality game development. I'd like them to be the future of this industry.
But right now, 8-9/10 game developers and publishers are deeply concerned with cash and rather unconcerned by technical excellence or games as a form of interactive art (where, once again, Guerrilla and many other Sony studios are).
> Or could we make a phone that runs 100x slower but is much cheaper? I
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU vs a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If intel fired all their R&D engineers who were not focused on reducing manufacturing costs they could greatly reduce the price of their CPUs - until AMD released a next generation that is much better. (this is more or less what Henry Ford did with the model-T - he reduced costs every year until his competition adding features were enough better that he couldn't sell his cars.
[dead]
I absolutely agree with what you are saying.
Yes, it's possible and very simple. Lower the frequency (dramatically lowers power usage), fewer cores, few threads, etc. The problem is, we don't know what we need. What if a great new apps comes out (think LLM); you'll be complaining your phone is too slow to run it.
Meanwhile on every programmer's 101 forum: "Space is cheap! Premature optimization is the root of all evil! Dev time > runtime!"
Exactly. Yes I understand the meaning behind it, but the line gets drummed into developers everywhere, the subtleties and real meaning are lost, and every optimisation- or efficiency-related question on Stack Overflow is met with cries of "You're doing it wrong! Don't ever think about optimising unless you're certain you have a problem!" This habit of pushing it to extremes, inevitably leads to devs not even thinking about making their software efficient. Especially when they develop on high-end hardware and don't test on anything slower.
Perhaps a classic case where a guideline, intended to help, ends up causing ill effects by being religiously stuck to at all times, instead of fully understanding its meaning and when to use it.
A simple example comes to mind, of a time I was talking to a junior developer who thought nothing of putting his SQL query inside a loop. He argued it didn't matter because he couldn't see how it would make any difference in that (admittedly simple) case, to run many queries instead of one. To me, it betrays a manner of thinking. It would never have occurred to me to write it the slower way, because the faster way is no more difficult or time-consuming to write. But no, they'll just point to the mantra of "premature optimisation" and keep doing it the slow way, including all the cases where it unequivocally does make a difference.
Related: https://duskos.org/
Oh man, that's lovely. Awesome project!
We have customers with thousands of machines that are still using spinning, mechanical 54000 RPM drives. The machines are unbelievably slow and only get slower with every single update, its nuts.
Often, this is presented as a tradeoff between the cost of development and the cost of hardware. However, there is a third leg of that stool: the cost of end-user experience.
When you have a system which is sluggish to use because your skimped on development, it is often the case that you cannot make it much faster no matter how expensive is the hardware you throw at it. Either there is a single-threaded critical path, so you hit the limit of what one CPU can do (and adding more does not help), or you hit the laws of physics, such as with network latency which is ultimately bound by the speed of light.
And even when the situation could be improved by throwing more hardware at it, this is often done only to the extent to make the user experience "acceptable", but not "great".
In either case, the user experience suffers and each individual user is less productive. And since there are (usually) orders of magnitude more users than developers, the total damage done can be much greater than the increased cost of performance-focused development. But the cost of development is "concentrated" while the cost of user experience is "distributed", so it's more difficult to measure or incentivize for.
The cost of poor user experience is a real cost, is larger than most people seem to think and is non-linear. This was observed in the experiments done by IBM, Google, Amazon and others decades ago. For example, take a look at:
The Economic Value of Rapid Response Time https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...
He and Richard P. Kelisky, Director of Computing Systems for IBM's Research Division, wrote about their observations in 1979, "...each second of system response degradation leads to a similar degradation added to the user's time for the following [command]. This phenomenon seems to be related to an individual's attention span. The traditional model of a person thinking after each system response appears to be inaccurate. Instead, people seem to have a sequence of actions in mind, contained in a short-term mental memory buffer. Increases in SRT [system response time] seem to disrupt the thought processes, and this may result in having to rethink the sequence of actions to be continued."
I generally believe that markets are somewhat efficient.
But somehow, we've ended up with the current state of Windows as the OS that most people use to do their job.
Something went terribly wrong. Maybe the market is just too dumb, maybe it's all the market distortions that have to do with IP, maybe it's the monopolístic practices of Microsoft. I don't know, but in my head, no sane civilization would think that Windows 10/11 is a good OS that everyone should use to optimize our economy.
I'm not talking only about performance, but about the general crappiness of the experience of using it.
This always saddens me. We could have things instant, simple, and compute & storage would be 100x more abundant in practical terms than it is today.
It's not even a trade off a lot of the time, simpler architectures perform better but are also vastly easier and cheaper to maintain.
We just lack expertise I think, and pass on cargo cult "best practices" much of the time.
I'm not much into retro computing. But it amazes me what people are pulling out of a dated hardware.
Doom on the Amiga for example (many consider it the main factor for the Amiga demise). Optimization and 30 years and it finally arrived
My phone isn't getting slower, but rather the OS running on it becomes less efficient with every update. Shameful.
I wonder if anyone has calculated the additional planet heating generated by crappy e.g. JS apps or useless animations
Z+6 months: Start porting everything to Collapse OS
https://collapseos.org/
The idea of a hand me down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach and then the 1nm process is basically the end of the journey. I expect 'audiophile grade' PCs in the 2030s and then PCs become works of art, furniture, investments, etc. because they have nowhere to go.
https://en.wikipedia.org/wiki/2_nm_process
https://en.wikipedia.org/wiki/International_Roadmap_for_Devi...
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's a M1 Macbook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
Consider UX:
Click the link and contemplate while X loads. First, the black background. Next it spends a while and you're ready to celebrate! Nope, it was loading the loading spinner. Then the pieces of the page start to appear. A few more seconds pass while the page is redrawn with the right fonts; only then can you actually scroll the page.
Having had some time to question your sanity for clicking, you're grateful to finally see what you came to see. So you dwell 10x as long, staring at a loaded page and contemplating the tweet. You dwell longer to scroll and look at the replies.
How long were you willing to wait for data you REALLY care about? 10-30 seconds; if it's important enough you'll wait even longer.
Software is as fast as it needs to be to be useful to humans. Computer speed doesn't matter.
If the computer goes too fast it may even be suspected of trickery.
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
If the tooling had kept up. We went from RADs that built you fully native GUIs to abandoning ship and letting Electron take over. Anyone else have 40 web browsers installed and they are each some Chromium hack?
isn't this Bane's rule?:
https://news.ycombinator.com/item?id=8902739
Wow. That's a great comment. Been on a similar path myself lately. Thanks for posting.
Well, yes, I mean, the world could run on less of all sorts of things, if efficient use of those things were a priority. It's not, though.
The priority should be safety, not speed. I prefer an e.g. slower browser or OS that isn't ridden with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
Unfortunately currently the priority is neither.
[flagged]
Carmack is right to some extent, although I think it’s also worth mentioning that people replace their computers for reasons other than performance, especially smartphones. Improvements in other components, damage, marketing, and status are other reasons.
It’s not that uncommon for people to replace their phone after two years, and as someone who’s typically bought phones that are good but not top-of-the-line, I’m skeptical all of those people’s phones are getting bogged down by slow software.
Optimise is never a neutral word.
You always optimise FOR something at the expense of something.
And that can, and frequently should, be lean resource consumption, but it can come at a price.
Which might be one or more of: Accessibility. Full internationalisation. Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties). Readability/maintainability. Displays that can actually represent text correctly at any size without relying on font hinting hacks. All sorts of subtle points around UX. Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.) Etc.
The goal isn't optimized code, it is utility/value prop. The question then is how do we get the best utility/value given the resources we have. This question often leads to people believing optimization is the right path since it would use fewer resources and therefore the value prop would be higher. I believe they are both right and wrong. For me, almost universally, good optimization ends up simplifying things as it speeds things up. This 'secondary' benefit, to me, is actually the primary benefit. So when considering optimizations I'd argue that performance gains are a potential proxy for simplicity gains in many cases so putting a little more effort into that is almost always worth it. Just make sure you actually are simplifying though.
You're just replacing one favorite solution with another. Would users want simplicity at the cost of performance? Would they pay more for it? I don't think so.
You're right that the crux of it is that the only thing that matters is pure user value and that it comes in many forms. We're here because development cost and feature set provide the most obvious value.
This is Carmack's favorite observation over the last decade+. It stems from what made him successful at id. The world's changed since then. Home computers are rarely compute-bound, the code we write is orders of magnitude more complex, and compilers have gotten better. Any wins would come at the cost of a massive investment in engineering time or degraded user experience.
I work on a laptop from 2014. An i7 4xxx with 32 GB RAM and 3 TB SSD. It's OK for Rails and for Django, Vue, Slack, Firefox and Chrome. Browsers and interpreters got faster. Luckily there was pressure to optimize especially in browsers.
Perfect parallel to the madness that is AI. With even modest sustainability incentives, the industry wouldn't have pulverized a trillion dollar on training models nobody uses to dominate the weekly attention fight and fundraising game.
Evidence: DeepSeek
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.
Software development has been commoditized and is directed by MBA's and others who don't see it as a craft. The need for fast project execution is above the craft of programming, hence, the code is bug-riddled and slow. There are some niche areas (vintage,pico-8, arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev movie, the artist's struggle.
In the last Def Con 32 the badge can run full Doom on puny Pico 2 microcontroller [1].
[1] Running Doom on the Raspberry Pi Pico 2: A Def Con 32 Badge Hack:
https://shop.sb-components.co.uk/blogs/posts/running-doom-
8bit/16bit demo scene can do it, but that's true dedication.
He mentions the rate of innovation would slow down which I agree with. But I think that even 5% slower innovation rate would delay the optimizations we can do or even figure out what we need to optimize through centuries of computer usage and in the end we'd be less efficient because we'd be slower at finding efficiencies. Low adoption rate of new efficiencies is worse than high adoption rate of old efficiencies is I guess how to phrase it.
If Cadence for example releases every feature 5 years later because they spend more time optimizing them, it's software after all, how much will that delay semiconductor innovations?
Minimalism is excellent. As others have mentioned, using languages that are more memory safe (by assumption the language is wrote in such a way) may be worth the additional complexity cost.
But surely with burgeoning AI use efficiency savings are being gobbled up by the brute force nature of it.
Maybe model training and the likes of hugging face can avoid different groups trying to reinvent the same AI wheel using more resources than a cursory search of a resource.
Reminded me of this interesting thought experiment
https://x.com/lauriewired/status/1922015999118680495
Feels like half of this thread didn't read or ignored his last line: "Innovative new products would get much rarer without super cheap and scalable compute, of course."
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in Slack chat, vscode editor in Electron, Next.js stack, 1-2 docker containers, one browser and you need top notch hardware to run it fluid (Apple Silicon is amazing though). I'm doing no fancy stuff.
Chat, editor in a browser and docker don't seem the most efficient thing if put all together.
Well obviously. And there would be no wars if everybody made peace a priority.
It's obvious for both cases where the real priorities of humanity lie.
When it's free, it doesn't need to be performant unless the free competition is performant.
I think optimizations only occur when the users need them. That is why there are so many tricks for game engine optimization and compiling speed optimization. And that is why MSFT could optimize the hell out of VSCode.
People simply do not care about the rest. So there will be as little money spent on optimization as possible.
Sadly software optimization doesn't offer enough cost savings for most companies to address consumer frustration. However, for large AI workloads, even small CPU improvements yield significant financial benefits, making optimization highly worthwhile.
I already run on older hardware and most people can if they chose to - haven't bought a new computer since 2005. Perhaps the OS can adopt a "serverless" model where high computational tasks are offloaded as long as there is sufficient bandwidth.
This is the story of life in a nutshell. It's extremely far from optimized, and that is the natural way of all that it spawns. It almost seems inelegant to attempt to "correct" it.
Works without Javascript:
https://nitter.poast.org/ID_AA_Carmack/status/19221007713925...
Yes, but it is not a priority. GTM is the priority. Make money machine go brrr.
I'm already moving in this direction in my personal life. It's partly nostalgia but it's partly practical. It's just that work requires working with people who only use what hr and it hoists on them, then I need a separate machine for that.
We are squandering bandwidth similarly and that hasn’t increased as much as processing power.
I've installed OSX Sequoia on 2015 iMacs with 8 gigs of ram and it runs great. More than great actually.
Linux on 10-15 year old laptops and it runs good. if you beef up RAM and SSD then actually really good.
So for everyday stuff we can and do run on older hardware.
1. Consumers are attracted to pretty UIs and lots of features, which pretty much drives inefficiency.
2. The consumers that have the money to buy software/pay for subscriptions have the newer hardware.
The world runs on the maximization of ( - entropy / $) and that's definitely not the same thing as minimizing compute power or bug count.
It could also run on much less current hardware if efficiency was a priority. Then comes the AI bandwagon and everyone is buying loads of new equipment to keep up with the Jones.
Where lack of performance costs money, optimization is quite invested in. See PyTorch (Inductor CUDA graphs), Triton, FlashAttention, Jax, etc.
How much of the extra power has gone to graphics?
Most of it?
The world could run on older hardware if rapid development did not also make money.
Rapid development is creating a race towards faster hardware.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
True for large corporations. But for individuals the ability to put what was previously an entire stacks in a script that doesn't call out to the internet will be a big win.
How many people are going to write and maintain shell scripts with 10+ curls? If we are being honest this is the main reason people use python.
As long as sufficient amounts of wealthy people are able to wield their money as a force to shape society, this is will always be the outcome.
Unfortunately,in our current society a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction and a paranoid delusion that kept normal human love and compassion beyond their grasp, were able to shape society to their dreadful imagination.
Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.
don't major cloud companies do this and then sell the gains as a commodity?
Carmack is a very smart guy and I agree with the sentiment behind his post, but he's a software guy. Unfortunately for all of us hardware has bugs, sometimes bugs so bad that you need to drop 30-40% of your performance to mitigate them - see Spectre, Meltdown and friends.
I don't want the crap Intel has been producing for the last 20 years, I want the ARM, RiscV and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates, I want the open source GPU that someone is bound to make in the next decade. I'm happy 10gb switches are becoming a thing in the home, I don't want the 100 mb hubs from the early 2000s.
This is a double edge sword problem, but I think what people are glazing over with the compute power topic is power efficiency. One thing I struggle with home labing old gaming equipment is the consideration to the power efficiency of new hardware. Hardly a valid comparison, but I can choose to recycle my Ryzen 1700x with a 2080ti for a media server that will probably consume a few hundred watts, or I can get a M1 that sips power. The double edge sword part is that Ryzen system becomes considerably more power efficient running proxmox or ubuntu server vs a windows client. We as a society choose our niche we want to leverage and it swings with and like economics, strapped for cash, choose to build more efficient code; no limits, buy the horsepower to meet the needs.
I'm going to be pretty blunt. Carmack gets worshiped when he shouldn't be. He has several bad takes in terms of software. Further, he's frankly behind the times when it comes to the current state of the software ecosystem.
I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
He seems to have a perpetual view on the state of software. Interpreted stuff is slow, networks are slow, databases are slow. Everyone is working with Pentium 1s and 2MB of ram.
None of these are what he thinks they are. CPUs are wicked fast. Interpreted languages are now within a single digit multiple of natively compiled languages. Ram is cheap and plentiful. Databases and networks are insanely fast.
Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.
I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs do. He's overestimating what the costs of things are because his mental model of computing is dated.
> Interpreted languages are now within a single digit multiple of natively compiled languages.
You have to be either clueless or delusional if you really believe that.
Let me specify that what I'm calling interpreted (and I'm sure carmack agrees) is languages with a VM and JIT.
The JVM and Javascript both fall into this category.
The proof is in the pudding. [1]
The JS version that ran in 8.54 seconds [2] did not use any sort of fancy escape hatches to get there. It's effectively the naive solution.
But if you look at the winning C version, you'll note that it went all out pulling every single SIMD trick in the book to win [3]. And with all that, the JS version is still only ~4x slower (single digit multiplier).
And if you look at the C++ version which is a near direct translation [4] which isn't using all the SIMD tricks in the books to win, it ran in 5.15. Bringing the multiple down to 1.7x.
Perhaps you weren't thinking of these JIT languages as being interpreted. That's fair. But if you did, you need to adjust your mental model of what's slow. JITs have come a VERY long way in the last 20 years.
I will say that languages like python remain slow. That wasn't what I was thinking of when I said "interpreted". It's definitely more than fair to call it an interpreted language.
[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[2] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[3] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[4] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
fwiw There are a few naive un-optimised single-thread #8 n-body programs transliterated line-by-line literal style into different programming languages from the same original. [1]
> a single digit multiple
By which you mean < 10× ?
So not those Java -Xint, PHP, Ruby, Python 3 programs?
> interpreted
Roberto Ierusalimschy said "the distinguishing feature of interpreted languages is not that they are not compiled, but that any eventual compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly." [2]
[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[2] "Programming in Lua" 1st ed p57
A simple do-nothing for loop in JavaScript via my browser's web console will run at hundreds of MHz. Single-threaded, implicitly working in floating-point (JavaScript being what it is) and on 2014 hardware (3GHz CPU).
> But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
Why isn't it?
> Interpreted stuff is slow
Well, it is. You can immediately tell the difference between most C/C++/Rust/... programs and Python/Ruby/... Either because they're implicitly faster (nature) or they foster an environment where performance matters (nurture), it doesn't matter, the end result (adult) is what matters.
> networks are slow
Networks are fast(er), but they're still slow for most stuff. Gmail is super nice, but it's slower than almost desktop email program that doesn't have legacy baggage stretching back 2-3 decades.
hes a webdev hurt by the simple observation interpreted languages will always be slower than native ones. the static analysis comment being particularly odd
My professor back in the day told me that "software is eating hardware". No matter how hardware gets advanced, software will utilize that advancement.
My partner was diagnosed with Parkinson’s almost 5 years ago. His disease has progressed significantly in the past year, and he begun to have delusions. He also had side effects from carbidopa/levodopa, which we decided to stop, and our primary physician decided he should start on PD-5 formula 4 months ago from UINE HEALTH CENTER. He now sleeps soundly, works out frequently, and is now very active since we started him on the PD-5 formula. It doesn’t make the Parkinson’s disease go away, but it did give him a better quality of life. We got the treatment from www. uineheathcentre. com
Imagine software engineering was like real engineering, where the engineers had licensing and faced fines or even prison for negligence. How much of the modern worlds software would be tolerated?
Very, very little.
If engineers handled the Citicorp center the same way software engineers did, the fix would have been to update the documentation in Confluence to not expose the building to winds and then later on shrug when it collapsed.
"If this country built bridges they way it builds [information] systems, we'd be a nation run by ferryboats." --Tim Bryce
Developers over 50ish (like me) grew up at a time when CPU performance and memory constraints affected every application. So you had to always be smart about doing things efficiently with both CPU and memory.
Younger developers have machines that are so fast they can be lazy with all algorithms and do everything 'brute force'. Like searching thru an array every time when a hashmap would've been 10x faster. Or using all kinds of "list.find().filter().any().every()" chaining nonsense, when it's often smarter to do ONE loop, and inside that loop do a bunch of different things.
So younger devs only optimize once they NOTICE the code running slow. That means they're ALWAYS right on the edge of overloading the CPU, just thru bad coding. In other words, their inefficiencies will always expand to fit available memory, and available clock cycles.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
Edit: 1. Assuming open source doesn't persevere
The world will seek out software optimization only after hardware reaches its physical limits.
We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap and your first-to-market competitor needs to make some horrendous screwups in order to overtake them.
The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.
I mean, if you put win 95 on a period appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era
I do have Windows 2000 installed with IIS (and some office stuff) in a ESXi for fun and nostalgia. It serves some static html pages within my local network. The host machine is some kind of i7 machine that is about 7-10 years old.
That machine is SOOOOOO FAST. I love it. To be honest, that tasks that I was doing back in the day are identical to today
Well, it is a point. But also remember the horrors of the monoliths he made. Like in Quake (123?) where you have hacks like if level name contains XYZ then do this magic. I think the conclusion might be wrong.
Code bloat: https://en.wikipedia.org/wiki/Code_bloat
Software bloat > Causes: https://en.wikipedia.org/wiki/Software_bloat#Causes
Program optimization > Automated and manual optimization: https://en.wikipedia.org/wiki/Program_optimization#Automated...
Probably, but we'd be in a pretty terrible security place without modern hardware based cryptographic operations.
[dead]
[dead]
[flagged]
Forcing the world to run on older hardware because all the newer stuff is used for running AI.
Very much so!
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure cpu cycles of their code so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because all of your jobs are inefficiently using local disk and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and stop making money.
And older hardware consumes more power. That's the main driving factor between server hardware upgrades because you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that makes my old hardware crawl. Or the software I use only supports 2 old version of macOS. Or Microsoft decides that your brand new cpu is no good for win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized electron app on your 5 year old computer.
But Carmack is clearly talking about servers here. That is my problem -- the main audience is going to read this and think about personal compute.
All those situations you describe are also a choice made so that companies can make sales.
It shows up in different ways, and I agree that some of my examples are planned obsolescence.
I'm not so sure they're that different though. I do think that in the end most boil down to the same problem: no emphasis or care about performance.
Picking a programming paradigm that all but incentivizes N+1 selects is stupid. An N+1 select is not an I/O problem, it's a design problem.
> I/O is almost always the main bottleneck.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
I/O _can_ be optimized. I know someone who had this as their fulltime job at Meta. Outside of that nobody is investing in it though.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
I'm looking at our Datadog stats right now. It is 64% cpu 36% IO.
That's great for you. How much infrastructure do you run?
Small Python backend deployment 30 machines 1 DB.
based
I'd much prefer Carmack to think about optimizing for energy consumption.
These two metrics often scale linearly.