High quality software solves users' problems better than the competition. It functions reliably, performantly, and securely while being cheap to extend, operate, and configure. High quality software is hard to create, and it is a competitive advantage for businesses whose product is software.
Software quality doesn't happen by accident, but it's difficult to measure with numbers, so business leaders don't usually have much insight into how high quality their software really is. Quality is something you feel. Performance numbers, lead time for change, and defect rates can be indicators, but those measurements are often too noisy to be useful as a measure of quality, especially over short time periods. Their biggest limitation is that they can only be measured once your software is out in the wild. Adding quality to a codebase after it has lots of users is 10-100x (these are made-up numbers) more effort than doing it right from the start, which presents a bit of a paradox: measuring quality is difficult without users, but once you have users it's too late to add quality in.
I've come to the conclusion that the reason high quality software is difficult to make is the combinatorial explosion of states that even a relatively simple program can have. Ensuring every state is handled correctly is impossible; you would need more unit tests than there are atoms in the universe to check that a CRUD app handles every possible interleaving of requests users could send it. But luckily, many states are not different enough to be worth testing separately. If I write a program that tells you whether a number is positive or negative, I don't need to test it against every number; I just need a positive number, a negative number, and zero. Oh, and overflow, and underflow, and invalid characters, and a request that is too large for my server, and a million requests per second, and so on. Even the simplest service I can think of has more edge cases than you realize. Testers call these equivalence classes: some states of a program are importantly different and should be verified separately, while others are only superficially different, where it's trivial to prove that if the program works for one it works for all.
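To make the idea concrete, here is a minimal sketch of the sign-of-a-number example above, with tests that pick one representative per equivalence class plus boundary values. The `sign` function and the specific boundary constants are my own illustrative choices, not anything from a real codebase.

```python
def sign(n: int) -> str:
    """Classify an integer as 'positive', 'negative', or 'zero'."""
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"

# One representative per equivalence class, plus boundaries.
# If sign(5) works, sign(6) trivially works too, so we test
# classes of inputs rather than individual numbers.
cases = {
    1: "positive",           # any positive number
    -1: "negative",          # any negative number
    0: "zero",               # the boundary between the classes
    2**63 - 1: "positive",   # near a common 64-bit overflow boundary
    -(2**63): "negative",    # near the matching negative boundary
}

for value, expected in cases.items():
    assert sign(value) == expected
```

Five assertions cover the interesting states of this function; the trick, as the rest of this post argues, is that real services have far more classes than you expect (encoding errors, oversized requests, concurrency), and each one is a separate state to verify.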
As programmers, we can improve the verifiability of software by limiting the number of states that are importantly different, and by using engineering practices like automated testing to lock those traits in place. As product designers, we should be careful about designing features that interact with each other in complex ways, because the combinatorial explosion grows quickly.
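One common way to limit the number of importantly different states is to make invalid states unrepresentable in the data model. A small sketch, using a hypothetical request-tracking example of my own invention: two independent booleans admit four combinations, one of which is meaningless, while a single enum admits exactly the three states that matter.

```python
from dataclasses import dataclass
from enum import Enum

# With two booleans, is_loading and has_error, there are four
# combinations, but "loading AND errored" is meaningless. Every
# piece of code touching the pair must guard against it.
#
# Modeling the status as one enum deletes the invalid combination
# from the state space entirely: 4 possible states become 3, and
# all 3 are meaningful.
class RequestStatus(Enum):
    IDLE = "idle"
    LOADING = "loading"
    ERRORED = "errored"

@dataclass
class Request:
    status: RequestStatus

r = Request(status=RequestStatus.LOADING)
assert r.status is RequestStatus.LOADING
assert len(RequestStatus) == 3  # the invalid 4th state cannot exist
```

The saving looks small here, but it compounds: every boolean pair you collapse removes states from the product of all features, which is exactly where the combinatorial explosion lives.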
The rubber really meets the road when real people start using the software, though. When thousands of people start using a piece of low quality software, they will collectively put it through millions of untested states, many of which were probably never envisioned by its product and software designers. It's common for brand new software to go through a period of instability in the beginning. This can be a make-or-break moment, where engineering and product either clean up the product and tame it into a high quality state, or drown in an endless sea of bugs while a competitor learns the same product lessons and creates a higher quality version of your product.
But if you can make it past this phase, you will have a huge advantage over any newcomer, because no competitor can test their software as thoroughly as yours has already been tested. I will always trust Postgres over a database engine that was written this year, because there just hasn't been enough usage for the new engine to expose all of its flaws. One of the biggest mistakes you can make is to take a battle-tested piece of code and replace it with something new just for the sake of its being newer. In the physical world materials degrade, so newer = better can be a useful heuristic. But software doesn't rot or rust, and software that's been used has been exposed to millions of hours of testing that has likely exposed hundreds or thousands of bugs that were then fixed. This is real value that is difficult for new competitors to replicate. As an industry we often talk about the cost of legacy, but there are serious advantages to keeping battle-tested software around. To be explicit, I'm not advocating for never changing legacy code. I'm advocating for iterating on legacy code by applying lessons learned from real world usage, so you keep the good parts that are battle-tested and fix the parts that break in the real world.
So how do you make high quality software? Limit the combinatorial explosion through software engineering practices and product design, iterate on the software over time with lessons learned from real world usage, and don't throw away hard-won, battle-tested code.