Your quest for a super-efficient big O solution in interviews is less important than you may think, as the real-world examples below demonstrate.
Big O Notation in Interviews
Traditionally, big O notation has been a staple in coding interviews to assess a candidate’s ability to understand and optimize algorithms. However, in my personal opinion, the market increasingly demonstrates that understanding big O is just one part of being an effective software engineer. Practical problem-solving, system design, and understanding of actual performance in real-world systems should carry more weight.
Company Examples
Facebook (Meta):
Mark Zuckerberg himself gave a lecture where he described how he worked around the algorithmic complexity of search in a graph algorithm. Simply put, the search was executed inside a box, over only a limited amount of data at a time. This way, the execution time stays small and predictable.
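Here is a minimal sketch of that idea in Python. This is not Meta’s actual code; it is a generic breadth-first search with a hard cap on how many nodes it may visit, which is one common way to "put a box" around a graph search. The graph structure and names are illustrative.

```python
from collections import deque

def bounded_bfs(graph, start, target, max_nodes=1000):
    """Breadth-first search that stops after touching max_nodes nodes.

    By capping the amount of data the search may visit, the worst-case
    running time stays small no matter how large the full graph is.
    """
    visited = {start}
    queue = deque([start])
    while queue and len(visited) <= max_nodes:
        node = queue.popleft()
        if node == target:
            return True
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False  # not found within the budget

# Tiny illustrative friend graph
graph = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": [], "dave": []}
print(bounded_bfs(graph, "alice", "dave", max_nodes=100))  # True
```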
Different Storage Systems
In the same lecture he also mentions scaling traditional SQL databases. From my experience, there are several options beyond the old-school RDBMS:
- NoSQL Databases: Companies often use NoSQL databases for scalability and flexibility, where the traditional big O might not directly apply due to the varied query patterns and data models.
- Lucene for Searching: Lucene and similar search engines optimize for search efficiency beyond just big O, focusing on relevance, speed, and the ability to handle complex queries over large datasets.
- File Caches
- Keeping stuff in RAM to speed up specific application logic (see the caching sketch after this list).
- Custom solutions mixing all of the above.
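As a concrete example of the in-RAM option, here is a minimal sketch using Python’s standard-library `functools.lru_cache`. The lookup function is a hypothetical stand-in for an expensive database query; the point is that a repeated call is served from memory in O(1), regardless of the complexity of the underlying query.

```python
import time
from functools import lru_cache

def slow_database_lookup(user_id: int) -> dict:
    # Stand-in for a real query; assume this is the expensive part.
    time.sleep(0.5)
    return {"id": user_id, "name": f"user-{user_id}"}

@lru_cache(maxsize=10_000)
def get_user_profile(user_id: int) -> dict:
    # The first call pays the full cost; repeats come from RAM.
    return slow_database_lookup(user_id)

get_user_profile(42)  # ~0.5 s, hits the "database"
get_user_profile(42)  # effectively instant, served from the cache
```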
Deep Learning
The recent success of DeepSeek has demonstrated that a better-performing LLM can be achieved by
- reducing floating-point precision
- limiting the training data set
In machine learning, the focus can shift away from traditional big O. Right now it is more about
- model accuracy,
- training time,
- inference speed,
- and the amount of data that can be processed.
Techniques here include optimizing for GPU usage, quantization, pruning, or using different precision levels for training and inference to balance performance against accuracy. A minimal quantization sketch follows.
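To make quantization concrete, here is a generic symmetric int8 quantization sketch in NumPy. This is not DeepSeek’s actual method, just the basic idea: store one byte per weight instead of four, cutting memory 4x and enabling faster integer arithmetic, at the cost of a small round-trip error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()
print(f"max round-trip error: {error:.5f}")  # small relative to the weights
```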
Specific Use Cases
Even before LLMs, software was lifting the weight of repetitive tasks through specialized tools that could run on almost any hardware. A little reminder: most servers run Linux. If you have a very clear and specific idea of how to optimize some process, you will not need super-expensive GPUs.
This article is a continuation of Algorithmic Complexity (https://programtom.com/dev/2023/06/25/complexities-beyond-algorithmic-time-on/) with very recent examples.