Kaspersky has highlighted a shift in AI usage during the holiday season. Besides serving as a reliable shopping or planning assistant, AI has emerged as a source of emotional support, particularly among the Gen Z and Millennial cohorts. However, Kaspersky experts warn that placing too much trust in AI can threaten data security.
In the run-up to the Christmas holidays, Kaspersky conducted a survey* to find out how people leverage AI-powered tools to make the most of their free time and streamline holiday preparations, and to highlight the potential cyberthreats involved.
It turns out that AI's popularity in the 2025/2026 holiday season is high, with 74% of survey participants indicating they plan to incorporate AI into their holiday activities. Enthusiasm is strongest among younger respondents: 86% of those aged 18-34 intend to turn to AI during the holiday period.
According to the survey, more than half of AI users planned to use the tools during the holidays to search for recipes (56%) or restaurants and accommodation (54%), underscoring AI's ongoing significance in simplifying research processes and reducing search-related time commitments.
AI as an idea generator also proved popular. The survey found that 50% of users rely on AI to brainstorm gift ideas, ways to celebrate, or tips on Christmas decoration. The same share of respondents plan to ask AI for ideas on how to spend their free time.
During the holidays, half of respondents regard AI as a shopping assistant that can help them create shopping lists, find the best deals or analyse reviews. Younger respondents showed strong interest in AI as a budget planner (50%), while older people (aged 55+) are less keen to let AI manage their expenses (31%), preferring to use it to search for recipes (59%) and generate gift ideas (41%).
Modern AI tools enable holiday shoppers to find offers that match individual preferences and budget constraints with just a few clicks. However, the reliability of chatbot-generated information remains a significant concern. Kaspersky recommends that shoppers check all links provided by AI before clicking on them, as they may lead to malicious or phishing content. To mitigate this risk, cybersecurity experts recommend a security solution with AI-based phishing detection.
Beyond its capacity to tackle diverse challenges and generate new ideas, AI has assumed a new role: serving as a virtual companion capable of offering emotional support. Nearly three in ten (29%) of those who use AI during the holidays would consider talking to it when they feel unhappy. Zoomers (Gen Z) and Millennials show the greatest interest in AI-powered support of all age groups, with 35% of respondents choosing this option. The older generation is more reserved: only 19% of respondents aged 55 and over would consider talking to AI when they're upset.
Kaspersky noted that while communication with AI services may feel personal and private, most chatbots are owned by commercial companies with their own data collection and processing policies. To enhance data privacy, the company suggests:
- Before starting any conversation, review the privacy policy of the AI tool you’re using. Some AI providers may use your emotional conversations to infer information about you, which can then be used for targeted advertising or even sold to third-party marketing firms. Check whether you can opt out of having your chats used for purposes such as model training or marketing, to minimise the amount of data collected.
- Try to avoid sharing deeply personal, identifying, or financial information with AI chatbots. Treat your messages as you would a public social post – never assume absolute confidentiality.
- Stick to AI services from reputable companies with strong privacy and security track records. Avoid anonymous or unknown bots that could be designed to harvest data. Malicious or fake AI bots may attempt to extract personal information for fraud, phishing, or blackmail. To protect your data, use a security solution that prevents clicking on unreliable links.
“As large language models rapidly evolve, so does their potential for engaging in meaningful dialogue with users. However, it's important to bear in mind that they learn to answer from data, most of which is sourced from the internet, meaning they are prone to regurgitating the errors and biases in the texts used for training. It’s highly recommended to approach AI suggestions with a healthy dose of scepticism and to avoid oversharing,” commented Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Center.
*The study was conducted by Kaspersky’s market research centre in November 2025. Three thousand respondents from 15 countries (Argentina, Chile, China, Germany, India, Indonesia, Italy, KSA, Malaysia, Mexico, South Africa, Spain, Turkey, UK, and the UAE) took part in the survey.