This week, the AI world has been abuzz with news about DeepSeek, a Chinese-developed AI platform that has rapidly gained popularity. That meteoric rise, however, has been accompanied by significant cybersecurity concerns that underscore the critical importance of proactive, transparent security measures, especially in the AI industry.
The DeepSeek Controversy
DeepSeek, an AI platform that quickly surpassed ChatGPT as the most downloaded AI app on Apple's App Store, recently became the target of a large-scale cyberattack. The incident has raised serious questions about the safety and security of emerging AI technologies.
Protecting Data Security
The platform's security issues are multifaceted. A primary concern is data security: DeepSeek collects user data, including email addresses, device information, IP addresses, and behavioral data, and stores it in China, raising alarms about potential state surveillance and national security risks. The company claims to use reasonable security measures, but the high-risk cybercrime environment in China amplifies concerns about data breaches and unauthorized access.

Beyond data handling, it became clear that DeepSeek's model could be jailbroken to generate harmful content, including ransomware development instructions and fabricated sensitive information. Unlike Western AI platforms with stricter safety measures, DeepSeek has fewer guardrails preventing the generation of illegal content.
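To make the idea of guardrails concrete, here is a minimal Python sketch of the kind of layered prompt-and-output screening that safety-focused platforms apply. The category patterns and the `model_generate` callable are hypothetical placeholders, and real platforms use trained safety classifiers rather than keyword matching; this illustrates the technique, not any vendor's actual implementation.

```python
import re

# Minimal illustration of a generation guardrail: screen both the prompt and
# the model's raw completion against policy categories before returning it.
# The patterns below are hypothetical placeholders; production systems use
# trained safety classifiers, not keyword lists.
BLOCKED_PATTERNS = {
    "malware": re.compile(r"\b(ransomware|keylogger|botnet)\b", re.IGNORECASE),
    "credential_theft": re.compile(r"\b(phishing kit|credential dump)\b", re.IGNORECASE),
}

def moderate(text: str) -> tuple[bool, str | None]:
    """Return (allowed, violated_category) for a prompt or completion."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(text):
            return False, category
    return True, None

def guarded_reply(model_generate, prompt: str) -> str:
    # First gate: refuse disallowed requests outright.
    allowed, category = moderate(prompt)
    if not allowed:
        return f"Request declined (policy category: {category})."
    # Second gate: withhold completions that violate policy anyway.
    completion = model_generate(prompt)
    allowed, category = moderate(completion)
    if not allowed:
        return f"Response withheld (policy category: {category})."
    return completion

if __name__ == "__main__":
    fake_model = lambda p: "Step 1: deploy the ransomware payload..."  # stand-in model
    print(guarded_reply(fake_model, "Summarize our incident response plan"))
    # -> Response withheld (policy category: malware).
```

Screening both sides matters: a prompt filter alone misses cases where an innocuous-looking request still elicits harmful output.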
Exploitation by Malicious Actors
The surge in DeepSeek's popularity has not gone unnoticed by cybercriminals. The platform's name has already been misused to launch fake cryptocurrency tokens, prompting the company to issue warnings about potential scams. Additionally, there have been reports of cloned websites impersonating DeepSeek, malware disguised as DeepSeek apps and browser extensions, and phishing campaigns attempting to trick users into sharing sensitive information.
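One baseline mitigation against cloned sites pushing trojanized installers is verifying every download against a checksum published on the vendor’s official domain. The sketch below, with a placeholder file name and digest, shows the general technique; it is standard security hygiene rather than a DeepSeek-specific process.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_sha256: str) -> bool:
    """Return True only if the file matches the vendor-published checksum."""
    return sha256_of(path) == expected_sha256.lower()

if __name__ == "__main__":
    # Placeholder file name and digest for illustration only.
    if verify_download("deepseek-installer.dmg", "0" * 64):
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch: do not install.")
```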
The Role of Trust Centers in Mitigating AI Risks in the B2B Arena
In today’s ever-evolving trust landscape, prioritizing security and transparency remains paramount, especially in the B2B space. To help build confidence in AI technologies, leading companies like OpenAI are leveraging Trust Centers to provide an externally facing, secure portal that proactively shares the company’s security and trust posture with customers. Their Trust Center showcases data handling practices, compliance certifications, controls, and more. Built-in notification capabilities also proactively alert customers to vulnerabilities and next steps.
In the wake of incidents like the DeepSeek cyberattack, a Trust Center is a vital tool, especially for enterprise security teams, for maintaining customer trust.
Implementing a Trust Center: A Proactive Approach to Customer Protection
Don’t just react: take a proactive stance with customers and embrace the transparency they need to trust working with your company. A Trust Center is an important testament to your commitment to their security, especially when it follows best practices.
The benefits of an enterprise-grade Trust Center include:
- Centralized Security Information: A Trust Center provides a single source of truth (SSOT) for all security, privacy, and AI policy-related information. This SSOT offers customers self-serve access to your company’s security and trust posture.
- Rapid Response to Incidents: In the event of a security incident, a Trust Center allows for quick communication of updates and mitigation strategies to all stakeholders (see the sketch after this list).
- Compliance Demonstration: Companies can easily showcase their compliance with various security standards and regulations, building customer confidence and trust.
- Automated Security Reviews: By streamlining the security review process, a Trust Center eliminates the back-and-forth burden on both the company and its customers, leading to faster deal closures.
- Continuous Improvement: Analytics provided by platforms like SafeBase allow companies to continuously refine their security practices based on real data and customer interactions.
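As a minimal sketch of the rapid-response flow above, the following Python example broadcasts a single vetted incident update to every Trust Center subscriber. The `IncidentUpdate` model and the `notify` transport are hypothetical stand-ins, not SafeBase’s actual API; in practice the portal handles delivery via email or webhooks.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentUpdate:
    """One vetted update, written once and fanned out to all stakeholders."""
    title: str
    severity: str        # e.g. "low", "medium", "high", "critical"
    summary: str
    mitigation: str
    published_at: str

def notify(subscriber_email: str, update: IncidentUpdate) -> None:
    # Stand-in for an email or webhook integration.
    print(f"-> {subscriber_email}: [{update.severity}] {update.title}")

def broadcast(subscribers: list[str], update: IncidentUpdate) -> None:
    """Deliver the same update to every subscriber at once."""
    for email in subscribers:
        notify(email, update)

if __name__ == "__main__":
    update = IncidentUpdate(
        title="Third-party API credential exposure",
        severity="high",
        summary="Affected keys rotated; no customer data accessed.",
        mitigation="Rotate integration tokens issued before the incident window.",
        published_at=datetime.now(timezone.utc).isoformat(),
    )
    broadcast(["security@customer-a.example", "ciso@customer-b.example"], update)
```

The design point is that the update is authored and vetted once, then delivered everywhere, rather than answered ad hoc in dozens of one-off customer threads.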
Be Prepared for New AI Security Questionnaire Trends
The DeepSeek incident serves as a stark reminder of the risks that accompany rapidly advancing AI technologies. ISO 42001, an international standard introduced in December 2023, provides a comprehensive framework for organizations to develop, deploy, and operate AI systems responsibly, but many organizations are still working to understand and implement its requirements.
In the meantime, this can leave gaps in how your company’s AI practices, and their impact on your customers’ security, are communicated while you gather information internally. So how can privacy and security teams make sure they have answers to the AI questions customers need answered most? Where should they prioritize their efforts?
SafeBase recently analyzed Trust Center activity from over 800 customers to identify the AI questions asked most frequently on inbound questionnaires. Since achieving full compliance with ISO 42001 is a complex, time-consuming, multi-phase process, teams can use this list in the interim to prioritize their answers.
By proactively addressing security concerns, providing real-time updates, and offering a centralized platform for all security-related information, companies can protect their customers and build lasting trust in an increasingly complex digital landscape.
In the age of AI, where data is the new currency, trust is the ultimate differentiator.