Large language models (LLMs) are changing human behaviour. Instead of search engines autocompleting our half-typed words, LLMs now need specific details from us before they can produce the desired result.
GPT-4, Llama, DeepSeek, and similar models are changing how humans interact with technology, particularly in how we formulate queries and requests. This shift is notable compared to traditional search engines, which rely on autocomplete or keyword-based searches.
1. Precision in Queries
- Search Engines: Traditional search engines like Google are designed to work with partial or vague queries. Autocomplete and keyword matching help users find relevant information even if their input is incomplete or imprecise.
- LLMs: In contrast, LLMs often require more specific and detailed inputs to generate accurate and relevant responses. For example, instead of typing “weather,” a user might need to ask, “What is the weather in New York City today?” to get a precise answer.
2. Contextual Understanding
- Search Engines: Search engines primarily match keywords to indexed web pages. They don’t inherently understand context or intent in the same way LLMs do.
- LLMs: These models are designed to understand context and intent, which means users can ask more complex, conversational questions. However, this also means that users need to provide enough context for the model to generate a useful response. For instance, asking “Explain quantum computing” might yield a general overview, but adding “Explain quantum computing to a 10-year-old” will tailor the response to a specific audience.
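The effect of added context can be sketched as a simple prompt template. This is illustrative only: the function and its fields are assumptions, not any particular provider's API; a real application would send the resulting string to the chat endpoint of its chosen model.

```python
def build_prompt(topic, audience=None):
    """Compose an LLM prompt, optionally tailoring it to an audience.

    Hypothetical sketch: the prompt shape is an assumption, not a
    required format for any specific model.
    """
    prompt = f"Explain {topic}."
    if audience:
        # Extra context steers the model toward a specific register.
        prompt += f" Explain it to {audience}."
    return prompt

# A general query versus one carrying audience context:
general = build_prompt("quantum computing")
tailored = build_prompt("quantum computing", audience="a 10-year-old")
```

The same topic yields very different answers depending on whether that one extra clause of context is present, which is exactly the burden that shifts onto the user.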
3. Shift from Keyword-Based to Natural Language Queries
- Search Engines: Users are accustomed to typing short, keyword-based queries (e.g., “best restaurants NYC”).
- LLMs: With LLMs, users are increasingly using full sentences or even paragraphs to articulate their questions or requests. This shift reflects a move toward more natural, conversational interactions with technology.
4. Expectation of Personalization
- Search Engines: While search engines can personalize results based on user history, they generally provide the same set of results for a given query.
- LLMs: Users often expect LLMs to provide personalized responses based on the specific details they provide. For example, asking “What should I watch tonight?” might lead to a follow-up question from the model about your preferences or past viewing history to give a tailored recommendation.
5. Learning Curve for Users
- Search Engines: Most people are familiar with how to use search engines effectively, often relying on trial and error to refine their queries.
- LLMs: There’s a learning curve associated with using LLMs effectively. Users need to learn how to phrase their questions in a way that elicits the desired response, which can involve trial and error, especially when dealing with complex or nuanced topics.
6. Impact on Information Retrieval
- Search Engines: Search engines provide a list of links to external sources, leaving it to the user to sift through and find the information they need.
- LLMs: LLMs synthesize information and provide direct answers, which can save time but also requires users to trust the model’s ability to accurately summarize or interpret information.
7. Ethical and Behavioral Considerations
- Search Engines: The ethical concerns around search engines often revolve around privacy, data collection, and the potential for echo chambers.
- LLMs: With LLMs, there are additional concerns about misinformation, bias in generated content, and the potential for over-reliance on AI for decision-making. Users may also become less critical of information if they perceive the AI as authoritative.
The rise of LLMs is reshaping how humans interact with technology, pushing us toward more precise, contextual, and conversational interactions. While this shift offers many benefits, such as more personalized and immediate responses, it also requires users to adapt their behavior and develop new skills in query formulation. As LLMs continue to evolve, so too will the ways in which we engage with them, potentially leading to even more profound changes in human-computer interaction. I’ve learned this:
- through my own implementation of GPT and large language model wrappers: they need special context, and sometimes additional training, to be useful;
- by delivering my app – Talk to AI – https://programtom.com/dev/product/talk-to-ai-flutter-front-end-and-spring-boot-back-end/ – to non-tech users. They expect certain behaviour:
  - results with minimal interactions;
  - precise, recent data that is not part of the large language model, which requires operators (or front-end software above the GPT) to fetch the information;
  - app behaviour similar to search, with minimal words and interactions.
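That last expectation – recent data the model was never trained on – is what the wrapper layer exists for. A minimal sketch of the idea, assuming a hypothetical live-data fetch and prompt shape (this is not the actual Talk to AI implementation):

```python
def fetch_recent_data(query):
    """Stand-in for an operator or front-end call to a live source
    (weather API, news feed, database). Hypothetical stub that
    returns fixed example values."""
    return {"city": "New York", "temp_c": 21, "conditions": "sunny"}

def augment_prompt(user_query):
    """Inject freshly fetched facts into the prompt so the LLM can
    answer with data that is not part of its training set."""
    data = fetch_recent_data(user_query)
    context = ", ".join(f"{k}={v}" for k, v in data.items())
    return f"Using this live data ({context}), answer: {user_query}"

prompt = augment_prompt("What is the weather in New York City today?")
# The wrapper would then send `prompt` to the LLM back end,
# letting the user keep their search-like, minimal-words habit.
```

The user still types a short, search-style query; the software above the model does the work of gathering context and phrasing the detailed prompt the LLM actually needs.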