Trend Micro has unveiled its plans for protecting consumers as they increasingly adopt generative AI applications alongside the expected growth in adoption of AI PCs. Designed to address the unique opportunities and risks posed by AI, Trend’s strategy helps consumers embrace the use of generative AI applications safely to lower their risk of being harmed by the use and abuse of AI. In addition, Trend Micro has, and will continue, to innovate using AI in its own cybersecurity solutions and services for greater security effectiveness and efficiencies.
While AI presents great promises to consumers, it has also afforded cybercriminals the opportunity to more effectively perpetrate scams and fraud, in the form of tactics such as harder to detect phishing emails or deepfake videos. Additionally, there are potential vulnerabilities to AI applications themselves that, without protection, may result in consumers unknowingly being misdirected or unintentionally exposing their private data.
Eva Chen, Co-Founder and CEO at Trend said, “Generative AI represents an infrastructure shift that requires us to use our long history of understanding digital threats to develop innovative solutions and meet the moment. Our goal is to ensure every user, from individual consumers to the largest organisations in the world, can harness the full potential of AI and AI PCs securely and confidently.”
The arrival of AI PCs also represents another shift in how consumers will use AI applications in the future, thanks to the neural processing units (NPUs) that power them. Consumers will now be able to use AI applications locally on their devices, promising better speed and privacy than having to access or send data to be processed in the cloud. This additional computing power also presents the opportunity for Trend to deliver some of its cloud AI capabilities on device and introduce new capabilities that take advantage of the efficiency and data privacy benefits that AI PCs offer.
Trend Micro will be delivering solutions for consumers in the second half of 2024 that are designed to protect AI, protect users from AI abuses, and leverage AI in both existing and new products and services including:
- AI Application Protection: Protection capabilities against tampering with AI applications, contaminating the AI application with malicious input, and preventing AI applications from inadvertently or maliciously accessing sensitive data.
- Malicious AI Content Protection: Solutions that protect consumers from malicious AI-generated or AI-altered content.
- NPU-powered Cybersecurity Capabilities: Solutions designed for AI PCs that leverage the additional compute power.
As a sign of the significance of AI threats, MITRE has introduced its ATLAS matrix which assesses the risks associated with AI assets. The MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems. Trend is mapping its capabilities to this matrix where applicable for consumer use cases.