Modern software involves complexities that go beyond the algorithmic time complexity taught at university and asked about in interviews.
Time Complexity
It is about choosing better algorithms and data structures to minimize the number of operations. The right choice becomes critical as the amount of data grows. If an algorithm iterates over all the data instead of going directly to the item it needs, it will consume the server's CPU and make the software impractical to use.
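A minimal sketch of the point above: testing membership in a Python list scans every element, while a set goes straight to the item via a hash lookup. The collection sizes and names here are purely illustrative.

```python
import timeit

# The same data in two structures: a list (linear scan on lookup)
# and a set (hash lookup). Sizes are illustrative.
items_list = list(range(100_000))
items_set = set(items_list)
target = 99_999  # worst case for the linear scan

scan_time = timeit.timeit(lambda: target in items_list, number=100)
hash_time = timeit.timeit(lambda: target in items_set, number=100)

print(scan_time > hash_time)  # the scan is dramatically slower
```

At small sizes the difference is invisible; it is exactly when the data grows that the wrong structure starts consuming the CPU.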
But there are several other factors, besides the algorithms for the plain business logic, that influence the speed of applications.
Call stack increase
When software becomes layered and complex, the execution of a simple task turns into multiple nested function calls, which can lead to memory limitations and stack overflow errors in highly recursive or deeply nested code. This is especially true for runtimes that interpret at execution time, such as PHP, Python, and JavaScript, although there are workarounds.
Interpreted vs Compiled code execution
Interpreted code execution has a built-in weakness that increases the work done on the server: the program must be parsed every time it runs. This results in slower performance compared to compiled code, where the entire program is translated into machine language beforehand for direct execution. Somewhere in the middle, but closer to compiled, are platforms built on the LLVM approach: they carry some overhead, but they minimize just-in-time code translation at every execution.
Because of this architectural difference, iOS apps (initially written in Objective-C, compiled to native code) have historically run faster on less powerful hardware than Android apps executed in a modified Java Virtual Machine. To close the gap, Google introduced ahead-of-time compilation and native-oriented frameworks like Flutter.
Data Storage (RAM, File System, Key-Value, Database)
Caching data at different layers (e.g., application level, database level) can significantly reduce response times and enhance scalability.
One approach is to implement caching at the data storage level. You can choose between RAM, file systems (HDD vs SSD), key-value storage systems, and databases, each with its own complexities.
- RAM provides the fastest access, but its capacity is limited. It is also volatile storage: it depends on electrical power, so its contents are lost on shutdown.
- File systems, meanwhile, offer persistent storage but require disk I/O operations. SSDs offer greater speed than HDDs because of their hardware design: there are no mechanical seeks over spinning platters.
- Key-value stores and databases involve additional complexities such as data modeling, querying, indexing, and ensuring data integrity.
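The trade-offs in the list above can be sketched with a toy two-tier store: an in-memory dict (fast, volatile RAM) backed by a JSON file (slower, persistent). All names and the file layout are illustrative, not a real caching library.

```python
import json
import os
import tempfile

ram_cache = {}  # tier 1: volatile, immediate access

def store(key, value, path):
    ram_cache[key] = value          # RAM write: fast
    with open(path, "w") as f:
        json.dump(ram_cache, f)     # file write: persistent, disk I/O

def load(key, path):
    if key in ram_cache:            # RAM hit: no disk access at all
        return ram_cache[key]
    with open(path) as f:           # RAM miss: fall back to the file system
        data = json.load(f)
    ram_cache.update(data)          # repopulate the fast tier
    return data[key]

path = os.path.join(tempfile.mkdtemp(), "kv.json")
store("user:1", "Alice", path)
ram_cache.clear()                   # simulate a restart: RAM is volatile
print(load("user:1", path))        # recovered from persistent storage
```

A real key-value store or database adds exactly the complexities the last bullet mentions: data modeling, querying, indexing, and integrity guarantees.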
Non-Blocking I/O
There is a limit on open files/sockets in a Linux system. Non-blocking I/O allows an application to continue processing other tasks while waiting for I/O operations (such as reading from or writing to a file or a network socket) to complete. This is crucial for applications that need to handle many concurrent connections without being blocked by slow I/O operations.
Using a non-blocking approach enables:
- Asynchronous Programming
- Event-Driven Architectures
The above creates the need to carefully choose:
- Frameworks and Libraries – that support non-blocking operations, such as reactive programming libraries or event-driven frameworks.
- Web Servers and Application Servers – Non-blocking web servers and application servers, like those based on technologies such as Node.js, can efficiently handle a large number of concurrent connections due to their non-blocking nature.
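As a small illustration of the non-blocking model, the sketch below uses Python's asyncio to run three simulated slow reads concurrently; the delays and names are made up. Instead of 0.6 s of sequential waiting, the waits overlap.

```python
import asyncio
import time

async def slow_read(name, delay):
    # asyncio.sleep stands in for a slow socket or file read;
    # "await" yields control to the event loop instead of blocking.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        slow_read("socket-a", 0.2),
        slow_read("socket-b", 0.2),
        slow_read("file-c", 0.2),
    )
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)
print(elapsed < 0.5)  # ~0.2s total rather than 0.6s: the waits overlap
```

This is the same principle that lets an event-driven server such as Node.js hold many concurrent connections on one thread.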
Database speed – transactions/batching, queries, indexes
Modeling the database and its operations correctly can make the difference between slow and fast data access.
- Efficient database design and query optimization can significantly impact performance.
- Transactions allow efficient and concurrent data modifications, ensuring data consistency and integrity.
- Grouping operations together in a transaction and Batching operations can improve performance by reducing the number of round trips to the database, enabling efficient bulk data processing and minimizing network and write overhead.
- Well-optimized queries and indexes enhance search and retrieval performance, enabling faster data access and reducing query response times.
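The batching and indexing points above can be sketched with the stdlib sqlite3 module; the table and column names are illustrative. The inserts run inside a single transaction (one commit instead of ten thousand), and the index turns the lookup into a B-tree search instead of a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = [(i, f"event-{i}") for i in range(10_000)]
with conn:  # one transaction: a single commit for the whole batch
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# An index on the queried column avoids scanning all 10,000 rows.
conn.execute("CREATE INDEX idx_events_id ON events (id)")

(payload,) = conn.execute(
    "SELECT payload FROM events WHERE id = ?", (9_999,)
).fetchone()
print(payload)  # event-9999
```

Against a networked database the batching effect is far larger, since each per-row commit would also pay a network round trip.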
But transactions, batching, queries, and indexes can also hurt performance if not implemented properly:
- Overly frequent or long-running transactions can lead to contention and lock conflicts, reducing concurrency and potentially impacting overall system performance.
- Large batch operations may consume significant system resources, such as memory and disk space, which can affect the responsiveness of the database and other concurrent operations.
- Poorly designed or outdated indexes can result in increased query execution time and slower performance, especially when dealing with large datasets or complex queries.
- Without smart filtering mechanisms, queries retrieve more data than necessary, increasing the workload on the system.
For the above reasons, and to minimize database load, caching layers (RAM, file system) and messaging systems are used.
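A minimal sketch of RAM caching in front of a database, using functools.lru_cache; get_user is a hypothetical stand-in for a real query. Repeated requests for the same key never reach the database at all.

```python
from functools import lru_cache

calls = 0  # counts how many times the "database" is actually hit

@lru_cache(maxsize=256)
def get_user(user_id):
    # Stand-in for an expensive database query.
    global calls
    calls += 1
    return f"user-{user_id}"

for _ in range(100):
    get_user(42)   # 99 of these 100 calls are served from RAM

print(calls)  # 1
```

The usual caveats apply: cached data can go stale, so real systems pair this with an eviction or invalidation policy.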
Too many micro-services
Too many micro-services can introduce complexity due to increased network communication, coordination, and management overhead. I believe Twitter did some rework in this area after Musk bought it.
It requires careful orchestration, load balancing, and monitoring to ensure proper functioning and avoid issues like service dependencies, scalability challenges, and performance bottlenecks.
HTTP requests / client-side caching – live sockets
HTTP requests and client-side caching can impact performance and user experience. Efficient use of live sockets (WebSockets) can provide real-time communication, reducing the need for repeated HTTP requests. Client-side caching can improve response times by storing and reusing previously fetched data, minimizing network traffic.
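A toy version of client-side caching, keyed by URL with a max-age policy; fetch_remote and the URLs are stand-ins for real network calls, and the timestamps are passed in explicitly so the behavior is easy to follow.

```python
import time

cache = {}
MAX_AGE = 60.0  # seconds a cached response stays fresh (illustrative)

def fetch_remote(url):
    # Stand-in for a real HTTP request over the network.
    return f"body of {url}"

def get(url, now=None):
    now = time.time() if now is None else now
    entry = cache.get(url)
    if entry and now - entry["fetched_at"] < MAX_AGE:
        return entry["body"], "cache"     # reused: zero network traffic
    body = fetch_remote(url)
    cache[url] = {"body": body, "fetched_at": now}
    return body, "network"

print(get("https://example.com/data", now=0.0)[1])    # network
print(get("https://example.com/data", now=30.0)[1])   # cache (fresh)
print(get("https://example.com/data", now=120.0)[1])  # network (expired)
```

Real browsers implement the same idea via Cache-Control max-age and conditional requests (ETag / If-None-Match).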
Physical distance between the client and the server
The physical distance between the client and server affects network latency and can introduce delays in data transmission. Longer distances increase the time taken for data to travel back and forth, potentially impacting response times and user experience. Content delivery networks (CDNs) are often used to mitigate this by caching and serving content from servers closer to the user’s location.
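A back-of-the-envelope calculation makes the distance effect concrete: light in optical fiber travels at roughly two thirds of c, so distance alone sets a hard floor on round-trip time regardless of how fast the software is. The distances below are illustrative.

```python
# Signal speed in optical fiber: roughly 200,000 km/s (~2/3 of c).
C_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km):
    # Round trip = there and back, converted to milliseconds.
    return 2 * distance_km / C_FIBER_KM_S * 1000

print(min_rtt_ms(9_000))  # ~transatlantic+ distance: 90.0 ms floor
print(min_rtt_ms(50))     # nearby CDN edge node: 0.5 ms floor
```

This is exactly why a CDN helps: serving from an edge node 50 km away instead of a data center 9,000 km away removes nearly all of that unavoidable latency.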
Use of torrents for content streaming
The use of torrents for content streaming introduces complexities related to peer-to-peer (P2P) file sharing, including managing connections, error handling, data distribution, and ensuring data integrity across multiple sources. But at the same time, torrents can provide efficient and decentralized distribution.
- Linux distributions offer P2P image mirroring
- Game publishers with large downloads, such as Blizzard, minimize their server load by integrating a torrent client inside their installers
- Here are some more https://www.makeuseof.com/tag/8-legal-uses-for-bittorrent-youd-be-surprised/
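The integrity problem mentioned above has a simple core: BitTorrent-style systems split a file into fixed-size pieces and verify each piece against a known hash, so data fetched from untrusted peers can be validated piece by piece. The sketch below uses a tiny piece size for illustration (real torrents use 256 KiB or more).

```python
import hashlib

PIECE_SIZE = 4  # tiny, for illustration only

def piece_hashes(data):
    # One SHA-1 digest per fixed-size piece, like a .torrent's piece list.
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)]

original = b"hello world from a swarm"
expected = piece_hashes(original)      # shipped in the torrent metadata

# A peer sends a corrupted copy: flip one byte in piece 1.
corrupted = bytearray(original)
corrupted[5] ^= 0xFF
received = piece_hashes(bytes(corrupted))

bad = [i for i, (a, b) in enumerate(zip(expected, received)) if a != b]
print(bad)  # [1]
```

Only the corrupted piece fails verification, so the client re-requests just that piece, possibly from a different peer.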