2 posts tagged with "Observability"

Posts about Observability

How Do We Build World-Class Tech Products?

· 8 min read
Aditya Kumar
Founder, OLogNlabs

In my previous blog post, I discussed how to objectively measure website and app quality from day one using a well-designed observability setup with OpenTelemetry at its base.
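
As a rough illustration of what "OpenTelemetry at its base" can look like in practice, the sketch below bootstraps tracing with the OpenTelemetry Python SDK. The service name, route name, and collector endpoint are placeholders I picked for the example, not details from that post.

```python
# Minimal tracing bootstrap. Assumes opentelemetry-sdk and the OTLP exporter
# package are installed, and that an OTLP-compatible collector listens on localhost:4317.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# "storefront-api" is a placeholder service name.
provider = TracerProvider(resource=Resource.create({"service.name": "storefront-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("storefront-api")

# Wrapping each request handler (or page load) in a span means latency and
# errors are measured from day one instead of reconstructed after the fact.
with tracer.start_as_current_span("GET /products"):
    pass  # handle the request here
```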

Toward the end of that post, I covered our definition of a good tech product. This is how we define it -

    1. App launch time (or website loading time) should be as fast as possible. Less than a second is the benchmark here.
    2. API response time should be as low as possible, specifically P99 and P95 latency. Less than 150 milliseconds is a good starting target (see the latency sketch after this list).
    3. Database query and write times should be as low as possible. Read latency for a single query should be under ten milliseconds.
    4. The crash-free rate should be as high as possible. The last three platforms I built had a 100% crash-free rate (yes, it's possible) at a sufficiently high scale, but anything above 98% is a good number here.
    5. The number of bugs users encounter per month should stay in the single digits (fewer than 10). Bugs are inevitable, especially if you are developing rapidly, but the count can be kept in single digits if you are careful.
    6. The app/website/platform should deliver the same performance at any scale, for any feature.
    7. The cost of the entire tech product should be as low as possible. If your infrastructure is over-provisioned, you may get a short-term gain in performance, but you are setting yourself up for failure when real scale hits, and you are hurting your bottom line.
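
To make the latency targets in points 2 and 3 concrete, here is a small, self-contained sketch of how P95 and P99 can be computed from raw response-time samples. The sample values, the function name, and the 150 ms check are illustrative choices of mine, not numbers from a real system.

```python
import math

def percentile(samples_ms: list[float], pct: float) -> float:
    """Nearest-rank percentile: the value below which pct% of the samples fall."""
    ranked = sorted(samples_ms)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

# Hypothetical response times (in milliseconds) collected from one API endpoint.
samples = [42, 55, 61, 48, 73, 90, 110, 130, 95, 84, 77, 66, 59, 120, 145, 38, 51, 69, 88, 102]
p95 = percentile(samples, 95)  # 130 ms for this sample set
p99 = percentile(samples, 99)  # 145 ms for this sample set
print(f"P95 = {p95} ms, P99 = {p99} ms")
assert p99 < 150, "P99 is above the 150 ms target from point 2"
```

In production you would let your observability backend compute these percentiles from a latency histogram rather than from raw samples, but the number being tracked is defined exactly like this.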

Some potential clients read my post and asked me the real question - How do we do this? How do we make such bold promises?

How to Measure Website and App Quality Objectively

· 10 min read
Aditya Kumar
Founder, OLogNlabs

In the last few months, I have had several chats with early-stage and Series A companies looking to build web and mobile applications. After the first few calls, a pattern started emerging: most of them had already signed up with a service company and were unhappy with the results. The main reason, apart from the massive cost, was product quality.

Both the mobile and web applications were full of bugs, suffered from performance jank, and crashed frequently, especially while paid marketing campaigns were running, which meant a lot of marketing dollars were wasted as well.

The bigger problem was the lack of accountability. Whenever the founders reported bugs, the service companies could not replicate or diagnose the problem, which led to even more frustration. I had the opportunity to conduct quick code reviews on several of these codebases and observed some familiar issues: poor system architecture, low code quality, and a lack of documentation. The most surprising finding, however, was the complete absence of any observability setup.

Please note - tech product quality is objectively measurable!

Many of these founders did not know how to establish accountability because they did not realize they could build observability into the product from day one.
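
To make that concrete, here is a rough sketch (again with the OpenTelemetry Python SDK, and with metric and attribute names I chose for illustration) of the kind of instrumentation that makes quality measurable from the first deploy: a latency histogram and an error counter per route.

```python
# Minimal metrics setup. The console exporter is used so the sketch runs
# without any backend; in a real setup you would export to a collector instead.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=10_000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("storefront-api")  # placeholder name
request_duration = meter.create_histogram("http.server.duration", unit="ms")
request_errors = meter.create_counter("http.server.errors")

# Inside a request handler: record how long the request took and whether it failed.
request_duration.record(123.4, attributes={"http.route": "/products"})
request_errors.add(1, attributes={"http.route": "/products", "http.status_code": 500})
```

With numbers like these flowing from day one, "the app is buggy" turns into a dashboard that both the founder and the development team can be held to.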